
Polytomous Logistic Regression in R


Jose Bustos Melo
Hi everyone,
 
I'm trying to fit a polytomous (multinomial) logistic regression in R. My response variables are binary (0/1) outcomes for 3 bacterial genes. Does anybody know an easy way to do this in R?
 
Thanks so much,
 
José Bustos
Facultad de Ciencias
Universidad Catolica Ssma.
Concepcion
Chile
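
[Editor's note] None of the digest messages below answer this question, so here is a minimal, hedged sketch. If "politomic" means a polytomous (multinomial) response, `nnet::multinom` fits it; if instead there are three separate 0/1 gene outcomes, three `glm(..., family = binomial)` fits may be what is wanted. All data and variable names here (gene1..gene3, outcome) are invented, not from the post:

```r
## Sketch only: multinomial logistic regression on made-up data.
library(nnet)

set.seed(1)
d <- data.frame(
  gene1 = rbinom(100, 1, 0.5),   # hypothetical 0/1 gene indicators
  gene2 = rbinom(100, 1, 0.5),
  gene3 = rbinom(100, 1, 0.5),
  outcome = factor(sample(c("A", "B", "C"), 100, replace = TRUE))
)

## Polytomous response: one model, coefficients per non-reference level
fit <- multinom(outcome ~ gene1 + gene2 + gene3, data = d)
summary(fit)
predict(fit, d[1:5, ], type = "probs")

## Alternative reading of the question: one binary logistic model per gene
fit1 <- glm(gene1 ~ gene2 + gene3, data = d, family = binomial)
```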


--- On Mon, 31/8/09, [hidden email] <r-help-request@r-project.org> wrote:


From: [hidden email] <[hidden email]>
Subject: R-help Digest, Vol 78, Issue 34
To: [hidden email]
Date: Monday, 31 August 2009, 7:00


Send R-help mailing list submissions to
    [hidden email]

To subscribe or unsubscribe via the World Wide Web, visit
    https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
    [hidden email]

You can reach the person managing the list at
    [hidden email]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."


Today's Topics:

   1. Re: Best R text editors? (Uli Kleinwechter)
   2. Re: Best R text editors? (Liviu Andronic)
   3. Re: need technique for speeding up R dataframe individual
      element insertion (no deletion though) (Ishwor)
   4. Re: POSIX time conversion doesn't display digit
      (Gabor Grothendieck)
   5. Re: correlation between two 2D point patterns? (Barry Rowlingson)
   6. Re: Google's R Style Guide (Esmail)
   7. Meaning of data set variables (Caio Azevedo)
   8. Re: Meaning of data set variables (Uwe Ligges)
   9. Bootstrap inference for the sample median? (Ajay Shah)
  10. Re: Best R text editors? (Grzes)
  11.  about isoMDS method (Grzes)
  12.  about isoMDS method (Grzes)
  13. Re: Best R text editors? (jaropis)
  14. Re: lrm in Design (loch1)
  15. version 1.5-1 of Rcmdr package released (John Fox)
  16. Re: Best R text editors? (Liviu Andronic)
  17. Change the Order of a Cluster Topology (Rodrigo Aluizio)
  18.  Best R text editors? (Rodrigo Aluizio)
  19. RConsole processing crashes Rgui.exe (Jack Tanner)
  20. Re: correlation between two 2D point patterns? (Charles C. Berry)
  21. Re: RFE: vectorize URLdecode (Jack Tanner)
  22. Re: about isoMDS method (stephen sefick)
  23. abline does not show on plot if slope differs from 0
      (Gábor Pozsgai)
  24. Re: lrm in Design (Gabor Grothendieck)
  25. Re: abline does not show on plot if slope differs from 0
      (jim holtman)
  26. Re: abline does not show on plot if slope differs from 0
      (Gabor Grothendieck)
  27. Re: RConsole processing crashes Rgui.exe (Duncan Murdoch)
  28. Re: standard error associated with correlation coefficient
      (Sunil Suchindran)
  29. Re: Best R text editors? (Wensui Liu)
  30. Re moving the numbers of the X/Y axis (njhuang86)
  31. Re: Re moving the numbers of the X/Y axis (Jorge Ivan Velez)
  32. Complexity parameter in rpart (Andrew Aldersley)
  33. Re: Bootstrap inference for the sample median?
      (Emmanuel Charpentier)
  34. Re: about isoMDS method (Grzes)
  35. Computer Modern Fonts in R graphic (Friedericksen Hope)
  36. Re: Computer Modern Fonts in R graphic (Paul Murrell)
  37. error with summary(vector)?? (Oliver Kullmann)
  38.  test for bimodality&In-Reply-To= (John Sansom)
  39. Re: error with summary(vector)?? (Duncan Murdoch)
  40. Re: RConsole processing crashes Rgui.exe (Duncan Murdoch)
  41. Sapply (Noah Silverman)
  42. SVM coefficients (Noah Silverman)
  43. Re: test for bimodality (Rolf Turner)
  44. Re: about isoMDS method (Gavin Simpson)
  45. Combining: R + Condor in 2009 ? (+foreach maybe?) (Tal Galili)
  46. Re: Sapply (Gabor Grothendieck)
  47. Re: Sapply (Duncan Murdoch)
  48. Trying to rename spatial pts data frame slot that isn't a
      slot() (Tim Clark)
  49. Re: test for bimodality (Christian Hennig)
  50. CRAN (and crantastic) updates this week (Crantastic)
  51. Re: Sapply (Rolf Turner)
  52. Re: Sapply (hadley wickham)
  53. Aggregating irregular time series (R_help Help)
  54. Re: SVM coefficients (Steve Lianoglou)
  55. Re: [R-SIG-Finance] Aggregating irregular time series
      (Gabor Grothendieck)
  56.  Quadratcount image plot colors (phil smart)
  57. RE xcel - two problems - Running RExcel & Working Directory
      error (Raoul)
  58. Re : Re:  Best R text editors? (Wolfgang RAFFELSBERGER)
  59. clarificatin on validate.ols method='cross' (Dylan Beaudette)
  60. Re: RE xcel - two problems - Running RExcel & Working
      Directory error (Richard M. Heiberger)
  61. Re: Computer Modern Fonts in R graphic (Charlie Sharpsteen)
  62. Re: error with summary(vector)?? (Oliver Kullmann)
  63. online classes or online education in statistics? esp. time
      series analysis and cointegration? (Luna Laurent)
  64. How to extract the theta values from coxph frailty models
      ([hidden email])
  65. Re: online classes or online education in statistics? esp.
      time series analysis and cointegration? (Robert A LaBudde)
  66. where to find jobs in Datamining and Machine Learning? (Luna Moon)
  67. Re: online classes or online education in statistics? esp.
      time series analysis and cointegration? (Luna Laurent)
  68. Two way joining vs heatmap (Schalk Heunis)
  69. Re: Two way joining vs heatmap (Oscar Bayona)
  70. Re: SVM coefficients (Noah Silverman)
  71. Re: Two way joining vs heatmap (Peter Dalgaard)
  72. Re: Sequence generation (Petr PIKAL)
  73. Re: Trying to rename spatial pts data frame slot that isn't a
      slot() (Paul Hiemstra)
  74. Re: SVM coefficients (Achim Zeileis)
  75. Re: Computer Modern Fonts in R graphic (Liviu Andronic)
  76. Re: Best R text editors? (Martin Maechler)
  77. Re: test for bimodality&In-Reply-To= (Mark Difford)
  78. permutation test - query (Yonatan Nissenbaum)
  79. Re: Google's R Style Guide (David Scott)


----------------------------------------------------------------------

Message: 1
Date: Sun, 30 Aug 2009 12:28:05 +0200
From: Uli Kleinwechter <[hidden email]>
Subject: Re: [R] Best R text editors?
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii; format=flowed

Hi Jonathan,

contributing to your poll: Also Emacs+ESS on Linux.

Uli



------------------------------

Message: 2
Date: Sun, 30 Aug 2009 11:59:52 +0100
From: Liviu Andronic <[hidden email]>
Subject: Re: [R] Best R text editors?
To: Uli Kleinwechter <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=UTF-8

Hello,

On 8/30/09, Uli Kleinwechter <[hidden email]> wrote:
>  contributing to your poll: Also Emacs+ESS on Linux.
>
Could someone give a brief and subjective overview of ESS? I notice
that many people use it, and many describe it as the tool for the
power user. As far as I'm concerned, I've once again looked at Emacs
(after a couple of years) and I still don't feel like using it for my
editing purposes.
Thank you
Liviu



------------------------------

Message: 3
Date: Sun, 30 Aug 2009 22:04:18 +1000
From: Ishwor <[hidden email]>
Subject: Re: [R] need technique for speeding up R dataframe individual
    element insertion (no deletion though)
To: [hidden email]
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Bill, Jim

One word. Thanks tons! :-)

2009/8/13  <[hidden email]>:

> Why do you need an explicit loop at all?
>
> (Also, your loop goes over i in 1:length(cam$end_date) but your code refers to cam$end_date[i+1] -->||<--!!)
>
>
> Here is a suggestion.  You want to identify places where the date increases but the volume does not change.  OK, where?
>
> ind <- with(cam, {
>            dx <- as.numeric(diff(strptime(end_date, "%d/%m/%Y")))
>            dt <- diff(vol)
>            which(dx > 0 & dt == 0)
> })
>
> Now adjust the new data frame
>
> cap <- within(cam, {
>              levels[ind] <- 1
>              levels[ind+1] <- 1
> })
>
[[elided Yahoo spam]]

>
> Bill Venables.
>
> ________________________________________
> From: [hidden email] [[hidden email]] On Behalf Of Ishwor [[hidden email]]
> Sent: 13 August 2009 22:07
> To: [hidden email]
> Subject: [R] need technique for speeding up R dataframe individual element insertion (no deletion though)
>
> Hi fellas,
>
> I am working on a dataframe cam and it involves comparison within the
> 2 columns - t1 and t2 on about 20K rows and 14 columns.
>
> ###
> cap = cam; # this doesn't take long. ~1 secs.
>
>
> for( i in 1:length(cam$end_date))
>  {
>    x1=strptime(cam$end_date[i], "%d/%m/%Y");
>    x2=strptime(cam$end_date[i+1], "%d/%m/%Y");
>
>    t1= cam$vol[i];
>    t2= cam$vol[i+1];
>
>    if(!is.na(x2) && !is.na(x1) && !is.na(t1) && !is.na(t2))
>    {
>      if( (x2>=x1) && (t1==t2) ) # date and vol
>      {
>        cap$levels[i]=1; #make change to specific dataframe cell
>        cap$levels[i+1]=1;
>      }
>    }
>  }
> ###
>
> Having coded that, i ran a timing profile on this section and each
> 1000'th row comparison is taking ~1.1 minutes on a 2.8Ghz dual-core
> box (which is a test box we use).
> This obviously computes to ~21 minutes for 20k which is definitely not
> where we want it headed. I believe, optimisation(or even different way
> to address indexing inside dataframe) can be had inside the innermost
> `if' and specifically in `cap$levels[i]=1;' but I am a bit at a loss
> having scoured the documentation failing to find anything of value.
> So, my question remains are there any general/specific changes I can
> do to speed up the code execution dramatically?
>
> Thanks folks.
>
> --
> Regards,
> Ishwor Gurung
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


--
Regards,
Ishwor Gurung
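
[Editor's note] Bill Venables' vectorized replacement for the loop, quoted above, can be run end-to-end. The tiny data frame below is invented for illustration; only the column names (end_date, vol, levels) follow the thread:

```r
## Self-contained rerun of the vectorized approach on made-up data.
cam <- data.frame(
  end_date = c("01/01/2009", "02/01/2009", "03/01/2009", "04/01/2009"),
  vol      = c(10, 10, 12, 12),
  levels   = 0
)

## Rows where the date increases but the volume stays the same
ind <- with(cam, {
  dx <- as.numeric(diff(strptime(end_date, "%d/%m/%Y")))
  dt <- diff(vol)
  which(dx > 0 & dt == 0)
})

## Flag both rows of each such pair, without an explicit loop
cap <- within(cam, {
  levels[ind] <- 1
  levels[ind + 1] <- 1
})
cap
```

This replaces the O(n) per-element assignments of the original loop with a handful of whole-column operations, which is where the speed-up comes from.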



------------------------------

Message: 4
Date: Sun, 30 Aug 2009 08:42:09 -0400
From: Gabor Grothendieck <[hidden email]>
Subject: Re: [R] POSIX time conversion doesn't display digit
To: R_help Help <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Try this:

> options(digits = 20)
> as.double(as.POSIXct(a.p))
[1] 1251122400.213


On Sat, Aug 29, 2009 at 11:37 PM, R_help Help<[hidden email]> wrote:

> Hi,
>
> I have the following string that I converted to a POSIXct:
>
>> a <- "2009-08-24 10:00:00.213"
>> a.p <- strptime(a,"%Y-%m-%d %H:%M:%OS")
>> as.double(as.POSIXct(a.p))
> [1] 1251122400
>
> I can't seem to get the decimal fraction of .213 out by casting to
> double or numeric. Is there anyway to make sure that POSIXct spits
> that out? Thank you.
>
> adschai
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
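
[Editor's note] Besides raising `digits` as Gabor shows, base R's `digits.secs` option and the `%OS` format conversion control how sub-second precision is displayed; a small sketch:

```r
## Sub-second precision in base R date-time handling.
a <- "2009-08-24 10:00:00.213"
a.p <- strptime(a, "%Y-%m-%d %H:%M:%OS")

options(digits.secs = 3)       # show fractional seconds when printing
a.p                            # now prints with milliseconds
format(a.p, "%OS3")            # seconds to 3 decimal places

options(digits = 15)
as.numeric(as.POSIXct(a.p))    # fractional part visible in the number
```

Note the stored value is a double, so the printed fraction can differ from .213 in the last digit due to floating-point representation.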


------------------------------

Message: 5
Date: Sun, 30 Aug 2009 14:12:55 +0100
From: Barry Rowlingson <[hidden email]>
Subject: Re: [R] correlation between two 2D point patterns?
To: Bernardo Rangel Tura <[hidden email]>
Cc: r-help <[hidden email]>
Message-ID:
    <[hidden email]..com>
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Aug 30, 2009 at 10:46 AM, Bernardo Rangel
Tura<[hidden email]> wrote:

> On Sun, 2009-08-30 at 07:51 +0100, William Simpson wrote:
>> Suppose I have two sets of (x,y) points like this:
>>
>> x1<-runif(n=10)
>> y1<-runif(n=10)
>> A<-cbind(x1,y1)
>>
>> x2<-runif(n=10)
>> y2<-runif(n=10)
>> B<-cbind(x2,y2)
>>
>> I would like to measure how similar the two sets of points are.
>> Something like a correlation coefficient, where 0 means the two
>> patterns are unrelated, and 1 means they are identical. And in
>> addition I'd like to be able to assign a p-value to the calculated
>> statistic.
>>
>> cor(x1,x2)
>> cor(y1,y2)
>> gives two numbers instead of one.
>>
>> cor(A,B)
>> gives a correlation matrix
>>
>> I have looked a little at spatial statistics. I have seen methods
>> that, for each point, search in some neighbourhood around it and then
>> compute the correlation as a function of search radius. That is not
>> what I am looking for. I would like a single number that summarises
>> the strength of the relationship between the two patterns.
>>
>> I will do procrustes on the two point sets first, so that if A is just
>> a rotated, translated, scaled, reflected version of B the two patterns
>> will superimpose and the statistic I'm looking for will say there is
>> perfect correspondence.
>>
>> Thanks very much for any help in finding such a statistic and
>> calculating it using R.
>>
>> Bill
>
> Hi Bill,
>
> If your 2 points set is similar I expect your Euclidean distance is 0,
> so I suggest this script:
>
> dist<-sqrt((x1-x2)^2+(y1-y2)^2) # Euclidean distance
>
> t.test(dist) # test for mean equal 0
I'm not sure what you'd do for two sets of points - compute the sum of
all the distances from points in set 1 to the nearest point in set 2?
That'll be zero for identical patterns, but also zero if you have
superimposed points in set1 but not in set 2.

If rotation and scale and translation are of interest to you then
what you are doing is shape analysis. There's a package for that[1].

http://www.maths.nott.ac.uk/personal/ild/shapes/

Barry

[1] Not quite as catchy as 'there's an app for that'
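
[Editor's note] Barry's nearest-neighbour idea can be sketched in base R as a single-number summary; averaging both directions partly addresses his caveat about superimposed points in one set only. This is a sketch of the idea, not a statistic proposed in the thread:

```r
## Symmetrised average nearest-neighbour distance between two point sets.
set.seed(1)
A <- cbind(runif(10), runif(10))
B <- cbind(runif(10), runif(10))

nn <- function(P, Q) {
  # for each row of P, Euclidean distance to its nearest row of Q
  apply(P, 1, function(p) min(sqrt(colSums((t(Q) - p)^2))))
}

mean(c(nn(A, B), nn(B, A)))   # 0 only when the patterns coincide exactly
```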



------------------------------

Message: 6
Date: Sun, 30 Aug 2009 09:23:55 -0400
From: Esmail <[hidden email]>
Subject: Re: [R] Google's R Style Guide
To: r-help <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

hadley wickham wrote:
>> Perhaps most of you have already seen this?
>>
>>  http://google-styleguide.googlecode.com/svn/trunk/google-r-style.html
>>
>> Comments/Critiques?
>
> I made my own version that reflects my personal biases:
> http://had.co.nz/stat405/resources/r-style-guide.html
>

thanks for sharing, I'll check it out.

Cheers,
Esmail



------------------------------

Message: 7
Date: Sun, 30 Aug 2009 11:25:14 -0300
From: Caio Azevedo <[hidden email]>
Subject: [R] Meaning of data set variables
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hi all,

Does anybody know the meaning of the values 0 - 1, for each variable from
data "sex2" avaible from the package "logistf"?

Thanks in advance.

Best,

Caio

    [[alternative HTML version deleted]]



------------------------------

Message: 8
Date: Sun, 30 Aug 2009 16:30:41 +0200
From: Uwe Ligges <[hidden email]>
Subject: Re: [R] Meaning of data set variables
To: Caio Azevedo <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

No need to repost the question. We read it.
Have you looked at ?sex2 ?
If that is not sufficient, have you looked into the cited reference?
If that is also not sufficient, please ask the package maintainer.


Uwe Ligges


Caio Azevedo wrote:

> Hi all,
>
> Does anybody know the meaning of the values 0 - 1, for each variable from
> data "sex2" avaible from the package "logistf"?
>
> Thanks in advance.
>
> Best,
>
> Caio
>
>     [[alternative HTML version deleted]]
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


------------------------------

Message: 9
Date: Sun, 30 Aug 2009 18:43:09 +0530
From: Ajay Shah <[hidden email]>
Subject: [R] Bootstrap inference for the sample median?
To: r-help <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

Folks,

I have this code fragment:

  library(boot)
  set.seed(1001)
  x <- c(0.79211363702017, 0.940536712079832, 0.859757602692931, 0.82529998629531,
         0.973451006822, 0.92378802164835, 0.996679563355802,
         0.943347739494445, 0.992873542980045, 0.870624707845108, 0.935917364493788)
  range(x)
  # from 0.79 to 0.996

  e <- function(x,d) {
    median(x[d])
  }

  b <- boot(x, e, R=1000)
  boot.ci(b)

The 95% confidence interval, as seen with `Normal' and `Basic'
calculations, has an upper bound of 1.0028 and 1.0121.

How is this possible? If I sample with replacement from values which
are all lower than 1, then any sample median of these bootstrap
samples should be lower than 1. The upper cutoff of the 95% confidence
interval should also be below 1.

Is this a bug in the boot package? Or should I be using `Percentile'
or `BCa' in this situation?

--
Ajay Shah                                      http://www.mayin.org/ajayshah 
ajayshah@mayin.org                             http://ajayshahblog.blogspot.com
<*(:-? - wizard who doesn't know the answer.
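
[Editor's note] To the question above: the "normal" and "basic" intervals come from a normal approximation to the bootstrap distribution and are not constrained to the range of the data, so bounds above 1 are possible and not a bug. The percentile and BCa intervals are built from the resampled medians themselves and stay inside the data range. A sketch (with stand-in data, not Ajay's values):

```r
## Comparing boot.ci interval types for the sample median.
library(boot)

set.seed(1001)
x <- runif(11, 0.79, 1.0)   # stand-in for the 11 values in the post
b <- boot(x, function(x, d) median(x[d]), R = 1000)

## perc/bca bounds cannot exceed max(x); norm/basic bounds can
boot.ci(b, type = c("norm", "basic", "perc", "bca"))
```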



------------------------------

Message: 10
Date: Sun, 30 Aug 2009 04:35:26 -0700 (PDT)
From: Grzes <[hidden email]>
Subject: Re: [R] Best R text editors?
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hello,

I'm using PLD Linux and in my opinion, if you're looking for a very easy editor -
Geany!!! It'll be fantastic ;)

--
View this message in context: http://www.nabble.com/Best-R-text-editors--tp25178744p25210782.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 11
Date: Sun, 30 Aug 2009 05:11:01 -0700 (PDT)
From: Grzes <[hidden email]>
Subject: [R]  about isoMDS method
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hi,
For example:
I built a half matrix "w" using a daisy(x, metric = c("euclidean"))
http://www.nabble.com/file/p25211016/1.jpg 

And next I transformed this matrix "w" using isoMDS function, for example
isoMDS(w, k=2) and as result I got:
http://www.nabble.com/file/p25211016/2.jpg 
And now I have a question:

1. Does the number in matrix w[2, 1] (= 0.41538462) match the two points (below) in the
matrix created by isoMDS?
http://www.nabble.com/file/p25211016/3.jpg 

Is this the same point?
--
View this message in context: http://www.nabble.com/about-isoMDS-method-tp25211016p25211016.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 12
Date: Sun, 30 Aug 2009 05:33:23 -0700 (PDT)
From: Grzes <[hidden email]>
Subject: [R]  about isoMDS method
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hi,
For example:
I built a half matrix "w" using a daisy(x, metric = c("euclidean"))

http://www.nabble.com/file/p25211016/1.jpg 

And next I transformed this matrix "w" using isoMDS function, for example
isoMDS(w, k=2) and as result I got:
http://www.nabble.com/file/p25211016/2.jpg 
And now I have two questions:

1. Does the number in matrix w[2, 1] (= 0.41538462) match the two points (below) in the
matrix created by isoMDS, [1,1:2] and [2,1:2]?
http://www.nabble.com/file/p25211016/3.jpg 

Is this the same point?

2. If so, why can't I check my Euclidean distance using this
equation:
d<-
((0.511396296-(-0.129372871))^2+((-0.0171714934)-(-0.2759703494))^2)^(1/2)

I thought that "d" should be 0.41538462 but unfortunately I get d=0.6910586.
Why?
--
View this message in context: http://www.nabble.com/about-isoMDS-method-tp25211016p25211016.html
Sent from the R help mailing list archive at Nabble.com.
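
[Editor's note] To the question above: `isoMDS` performs non-metric MDS, which tries to reproduce only the rank order of the input dissimilarities, not their values, so the configuration distance (0.691) need not equal the original dissimilarity (0.415). A hedged sketch of the check on made-up data:

```r
## Non-metric MDS preserves ranks of dissimilarities, not their values.
library(MASS)      # isoMDS()
library(cluster)   # daisy()

set.seed(1)
x <- matrix(runif(30), ncol = 3)
w <- daisy(x, metric = "euclidean")

fit <- isoMDS(w, k = 2)

dist(fit$points[1:2, ])   # distance between the first two configuration points
as.matrix(w)[2, 1]        # the original dissimilarity: generally different
fit$stress                # stress: how badly the rank order is violated
```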



------------------------------

Message: 13
Date: Sun, 30 Aug 2009 14:48:49 +0200
From: jaropis <[hidden email]>
Subject: Re: [R] Best R text editors?
To: [hidden email]
Message-ID: <h7dsfh$ufa$[hidden email]>
Content-Type: text/plain; charset="US-ASCII"

Uli Kleinwechter wrote:

> Hi Jonathan,
>
> contributing to your poll: Also Emacs+ESS on Linux.
same here

J



------------------------------

Message: 14
Date: Sun, 30 Aug 2009 02:19:58 -0700 (PDT)
From: loch1 <[hidden email]>
Subject: Re: [R] lrm in Design
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Thank you for your reply, it helped. That thread is about glm, I take it that
lrm behaves in the same way?!?



See:
https://stat.ethz.ch/pipermail/r-help/2009-July/204192.html

and also other posts in that thread.

--
View this message in context: http://www.nabble.com/lrm-in-Design-tp25206737p25209958.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 15
Date: Sun, 30 Aug 2009 10:38:38 -0400
From: "John Fox" <[hidden email]>
Subject: [R] version 1.5-1 of Rcmdr package released
To: <[hidden email]>
Message-ID: <000401ca297f$9048b570$b0da2050$@ca>
Content-Type: text/plain;    charset="us-ascii"

Dear R users,

There is a new version (1.5-1) of the Rcmdr package. Here, from the CHANGES
files, are changes since the last minor version (1.4-0) was released a year
ago:

-------------------------------------------------------

Version 1.4-1

   o  The following updated translations have been added: Catalan (thanks to
Manel Salamero), French (thanks to Philippe Grojean), Japanese (thanks to
Takaharu Araki), Romanian (thanks to Adrian Dusa), and Slovenian (thanks to
Jaro.Lajovic).
   
Version 1.4-2

   o  For cases where translated help files of the form Commander-xx.Rd
won't compile (where "xx" is the language code), translators can now provide
a PDF file named Commander-xx.pdf to be substituted when users select Menu
-> Commander help. (Thanks to Alexey Shipunov for helping me with this
problem.)
   
   o  Dialogs that reset the active data set no longer do so if an error
occurs; previously, resetting the active data set caused error messages to
scroll off out of sight in the Messages window (problem reported by Peter
Dalgaard).
   
   o  Improved error checking and protection in distributions dialogs.
   
Version 1.4-3

   o  Fixed a bug in the scroll-bar for the RHS of model formulas (reported
by Ulrich Halekoh).
   
   o  Fixed problem with some legends in plotMeans() (reported by Rich
Heiberger).
   
   o  Updated Spanish translation (courtesy Rcmdr Spanish translation team).
   
Version 1.4-4
   
   o  Small changes to Hist() to make it more flexible (suggested by Gilles
Marcou).
   
Version 1.4-5

   o  Change to Message(), which posts messages to the Message window, to
prevent the window from freezing in (some) Windows systems under R 2.8.0
(problem reported by Rich Heiberger).
   
   o  Updated Indonesian translation (thanks to I. Made Tirta).
   
Version 1.4-6

   o  Added polr and multinom models to effect-plots, consistent with latest
version of effects package.
   
   o  ttkentry() redefined to trim leading and trailing blanks (to avoid
potential problems when user introduces a leading or trailing blank in a
text field, as reported by Rich Heiberger).
   
   o  An attempt is made to break commands at blanks and commas to fit them
in the script and output windows.
   
   o  Improvements to the one-way and multi-way ANOVA dialogs, including
saving the models that are produced (suggested by Rich Heiberger).
   
   o  Improvements and bug fixes to handling of plug-in menu installation.
   
   o  New function MacOSXP() to test for Mac OS X; used to suppress "Data ->
New data set" menu, since the Mac data editor causes R to hang when called
with an empty data frame (reported by Rob Goedman and Tracy Lightcap).
   
   o  When character values are entered in the data editor after "Data ->
New data set", the corresponding variable is converted to a factor.
   
   o  When console.output is TRUE, messages as well as output now go to the
R Console and the Commander Messages window (as well as the Output window)
is suppressed (with code contributed by Richard Heiberger).
   
   o  Small changes.
   
Version 1.4-7
 
   o  Improved Brazilian Portuguese translation, contributed by Adriano
Azevedo-Filho and Marilia Sa Carvalho.
   
   o  Added function sortVarNames(), which sorts variable names including
numerals into a more "natural" order than does sort(); used in variable
lists when the Rcmdr option "sort.names" is TRUE, which is the default.
   
   o  Small changes.
   
Version 1.4-8

   o  Export data set now allows an arbitrary field separator (suggested by
Marilia Sa Carvalho).
   
   o  Menus can now use translation files supplied by plug-ins.
   
   o  Path names with apostrophes should no longer produce an "unbalanced
parentheses" error.
   
   o  Added files for Debian Linux (from Pietro Battiston and Dirk
Eddelbuettel).
   
   o  Small changes.
   
Version 1.4-9

   o  Updated Russian translation (courtesy of Alexey Shipunov).
   
   o  Fixed bug in BreuschPaganTest() that prevented dialog from functioning
(reported by Michel Pouchain).
   
Version 1.4-10

   o  Corrected problem in paste in Command window in non-Windows systems
(reported by Liviu Andronic).
   
   o  Fixed handling of X11 warnings which was disrupted by showData(), and
added check for "Warning in structure" (problems reported by Liviu
Andronic).
   
   o  Prefixes used when output is directed to the R console are no longer
hard-coded, but can be set by the "prefixes" Rcmdr option (to satisfy a
request by Liviu Andronic).

Version 1.4-11

   o  Correction to .mo file for Brazilian Portuguese translation (supplied
by Adriano Azevedo-Filho following report by Donald Pianto).
   
   o  Added summary() output to principal components (suggested by Liviu
Andronic).
   
Version 1.5-0

   o  Added model-selection dialogs, stepwise and subset (several requests).
   
   o  Modifications to model-definition dialogs (suggested by Ulrike
Gromping): If you select several variables in the variable-list box,
clicking on the
+, *, or : button will enter them into the model formula.
   
   o  The compare-models and Anova-table code for GLMs now uses F-tests when
the dispersion parameter is estimated (after a suggestion by Ulrich Halekoh).
   
   o  Added optional pairwise p-values to correlations dialog (several
requests); new rcorr.adjust() function, adapted from code sent by Robert A.
Muenchen.
   
   o  New Library() command for loading packages. Packages generally
attached in position 4, to avoid masking objects in Rcmdr and car.
   
   o  Messages in the Messages window are now numbered, according to the
Rcmdr number.messages option (suggested by Liviu Andronic).
   
   o  Added dialog box to merge rows and columns, along with mergeRows()
function (after a suggestion by Stephen Shamblen).
   
   o  Added aggregate dialog box to aggregate variables by factor(s) (after
a suggestion by Stephen Shamblen).
   
   o  Added variable.list.width option (suggested by Liviu Andronic).
   
   o  Under Windows, a check is made for the mdi and a warning message
printed, if it's detected (courtesy of code from Uwe Ligges).
   
   o  Updated Commander.Rd.
   
   o  Small changes.
   
Version 1.5-1

   o  Fixes to links in docs (pointed out by Brian Ripley).

-------------------------------------------------------

New version in the 1.5-x series will be released as updated translations
become available and if bugs surface. Of course, bug reports and suggestions
are appreciated.

Regards,
John
   

-------------------------------------------------------
John Fox
Department of Sociology
Senator William McMaster Professor of Social Statistics
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox



------------------------------

Message: 16
Date: Sun, 30 Aug 2009 15:59:06 +0100
From: Liviu Andronic <[hidden email]>
Subject: Re: [R] Best R text editors?
To: Grzes <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=UTF-8

On 8/30/09, Grzes <[hidden email]> wrote:
>  I'm using PLD Linux and in my opinion, if you looking for very easy editor -
>  Geany!!! It'll be fantastic ;)
>
Yeah, Geany is very simple and comfortable. But there is no direct
link to R, and not being able to evaluate code on the fly is a
show-stopper for me.
Liviu



------------------------------

Message: 17
Date: Sun, 30 Aug 2009 12:36:00 -0300
From: "Rodrigo Aluizio" <[hidden email]>
Subject: [R] Change the Order of a Cluster Topology
To: "'R Help'" <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain

Hi list, recently, for didactic reasons, I needed to change a cluster
topology (just the order of my groups, defined at a height of choice), not
the results, obviously. I have looked at hclust and agnes (cluster
package) objects for something that defines the branch ordering of the
plot, so that I could try to change it. But I'm unable to figure that out.

Is there a way to do so?



Here is an example code:



Spp <- matrix(rnorm(750, mean=10, sd=2), ncol=15, nrow=50,
              list(paste('Spp', 1:50, sep=''), paste('C', 1:15, sep='')), byrow=T)

Ward<-hclust(dist(Spp),method='ward')

plot(Ward)



Let’s suppose I want to change the order of the groups formed at the height
of 20 (Obviously, only where it’s possible, without altering the greater
groups composition).

Is it possible?
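
[Editor's note] One possible route, offered as an assumption rather than a reply from the thread: convert the hclust object to a dendrogram and use `stats::reorder()`, which rearranges branches according to leaf weights without changing the merge structure:

```r
## Reordering branches of a dendrogram without altering the clustering.
set.seed(1)
Spp <- matrix(rnorm(750, mean = 10, sd = 2), nrow = 50, ncol = 15,
              dimnames = list(paste('Spp', 1:50, sep = ''),
                              paste('C', 1:15, sep = '')))
Ward <- hclust(dist(Spp), method = 'ward')

dend  <- as.dendrogram(Ward)
dend2 <- reorder(dend, 50:1)   # leaf weights steer the left-to-right order
plot(dend2)
```

Swapping subtrees at a given height only permutes the display; the merge heights and group compositions are unchanged.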



-------------------------------------------------------------

MSc.  <mailto:[hidden email]> Rodrigo Aluizio

Centro de Estudos do Mar/UFPR
Laboratório de Micropaleontologia
Avenida Beira Mar s/n - CEP 83255-000
Pontal do Paraná - PR - Brasil




    [[alternative HTML version deleted]]



------------------------------

Message: 18
Date: Sun, 30 Aug 2009 12:37:23 -0300
From: "Rodrigo Aluizio" <[hidden email]>
Subject: [R]  Best R text editors?
To: "'Liviu Andronic'" <[hidden email]>
Cc: 'R Help' <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain;    charset="iso-8859-1"

Well, on Linux => Emacs + ESS
On Windows => Tinn-R

-------------------------------------------------------------
MSc. Rodrigo Aluizio
Centro de Estudos do Mar/UFPR
Laboratório de Micropaleontologia



------------------------------

Message: 19
Date: Sun, 30 Aug 2009 15:40:51 +0000 (UTC)
From: Jack Tanner <[hidden email]>
Subject: [R] RConsole processing crashes Rgui.exe
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

Using R 2.9.2 on Windows XP SP3.

1. Edit ~/Rconsole, and set

font = TT Bitstream Vera Sans Mono

2. Start Rgui.exe
3. Go to Edit, GUI Preferences
4. Rgui.exe crashes

Rgui.exe does not crash if I do not access GUI Preferences (i.e., if I just use
R), and it does correctly use Bitstream Vera Sans Mono as my font. Nor does it
crash if I edit RConsole to set the font back to Lucida Console and then try to
access GUI Preferences.



------------------------------

Message: 20
Date: Sun, 30 Aug 2009 08:59:55 -0700
From: "Charles C. Berry" <[hidden email]>
Subject: Re: [R] correlation between two 2D point patterns?
To: William Simpson <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed



Bill,

    prod( cancor( A,B )$cor )

perhaps?

Note that this accounts for linear transformations.

HTH,

Chuck

On Sun, 30 Aug 2009, William Simpson wrote:

> Suppose I have two sets of (x,y) points like this:
>
> x1<-runif(n=10)
> y1<-runif(n=10)
> A<-cbind(x1,y1)
>
> x2<-runif(n=10)
> y2<-runif(n=10)
> B<-cbind(x2,y2)
>
> I would like to measure how similar the two sets of points are.
> Something like a correlation coefficient, where 0 means the two
> patterns are unrelated, and 1 means they are identical. And in
> addition I'd like to be able to assign a p-value to the calculated
> statistic.
>
> cor(x1,x2)
> cor(y1,y2)
> gives two numbers instead of one.
>
> cor(A,B)
> gives a correlation matrix
>
> I have looked a little at spatial statistics. I have seen methods
> that, for each point, search in some neighbourhood around it and then
> compute the correlation as a function of search radius. That is not
> what I am looking for. I would like a single number that summarises
> the strength of the relationship between the two patterns.
>
> I will do procrustes on the two point sets first, so that if A is just
> a rotated, translated, scaled, reflected version of B the two patterns
> will superimpose and the statistic I'm looking for will say there is
> perfect correspondence.
>
> Thanks very much for any help in finding such a statistic and
> calculating it using R.
>
> Bill
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Charles C. Berry                            (858) 534-2098
                                             Dept of Family/Preventive Medicine
E mailto:[hidden email].edu                UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/  La Jolla, San Diego 92093-0901
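For readers following along, a minimal sketch of the suggestion above (random data invented for illustration):

```r
# Product of the canonical correlations as a single summary of the
# (linear) association between two 2D point sets.
set.seed(42)
A <- cbind(runif(10), runif(10))
B <- cbind(runif(10), runif(10))
prod(cancor(A, B)$cor)  # a single number; near 0 for unrelated patterns
```

cancor() is in the base stats package, so no extra library is needed.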



------------------------------

Message: 21
Date: Sun, 30 Aug 2009 16:02:20 +0000 (UTC)
From: Jack Tanner <[hidden email]>
Subject: Re: [R] RFE: vectorize URLdecode
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

Gabor Grothendieck <ggrothendieck <at> gmail.com> writes:

> URLdecode.vec <- Vectorize(URLdecode)
>
> On Sat, Aug 29, 2009 at 10:31 AM, Jack Tanner<ihok <at> hotmail.com> wrote:
> >
> > Could URLdecode be modified to actually process all elements of the vector,
> > not just the first?

Sure, that's a fine workaround. But is this a legitimate RFE? Should I file
a bug?

It seems like the warning itself is coming from charToRaw(), but the rest
of URLdecode() might need an upgrade to deal with a charToRaw that processes
all the elements in the vector.
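For reference, the Vectorize() workaround mentioned above, sketched out:

```r
# Vectorize() wraps URLdecode so it maps over every element of the
# input vector instead of decoding only the first.
URLdecode.vec <- Vectorize(URLdecode, USE.NAMES = FALSE)
URLdecode.vec(c("a%20b", "x%2Fy"))  # "a b" "x/y"
```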



------------------------------

Message: 22
Date: Sun, 30 Aug 2009 11:13:54 -0500
From: stephen sefick <[hidden email]>
Subject: Re: [R] about isoMDS method
To: Grzes <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=UTF-8

Are you using euclidean distances?

On Sun, Aug 30, 2009 at 7:33 AM, Grzes<[hidden email]> wrote:

>
> Hi,
> For example:
> I built a half matrix "w" using a daisy(x, metric = c("euclidean"))
>
> http://www.nabble.com/file/p25211016/1.jpg
>
> And next I transformed this matrix "w" using isoMDS function, for example
> isoMDS(w, k=2) and as result I got:
> http://www.nabble.com/file/p25211016/2.jpg
> And now I have two questions:
>
> 1. Does the number in matrix w[2, 1] (= 0.41538462) match the two points (below) in
> the matrix created by isoMDS, [1,1:2] and [2,1:2]?
> http://www.nabble.com/file/p25211016/3.jpg
>
> Is this the same point?
>
> 2. If so, why can't I check my Euclidean distance using this
> equation:
> d<-
> ((0.511396296-(-0.129372871))^2+((-0.0171714934)-(-0.2759703494))^2)^(1/2)
>
> I thought that "d" should be 0.41538462 but unfortunately I get d=0.6910586.
> Why?
> --
> View this message in context: http://www.nabble.com/about-isoMDS-method-tp25211016p25211016.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


--
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

                                -K. Mullis



------------------------------

Message: 23
Date: Sun, 30 Aug 2009 17:25:41 +0100
From: Gábor Pozsgai <[hidden email]>
Subject: [R] abline does not show on plot if slope differs from 0
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

Does anybody have a hint why this does not work?

plot(c(1991,2000), c(-1,10))
abline(a=6, b=-1, col="dark grey", lty="dashed")

It does not generate an error message, but the line does not turn up.

If b=0 it works fine. If I change the axes of the plot to
plot(c(1,10), c(-1,10)) it does work fine as well.

Thanks,
Gabor

--
Pozsgai Gábor
www.coleoptera.hu
www.photogabor.com



------------------------------

Message: 24
Date: Sun, 30 Aug 2009 12:26:08 -0400
From: Gabor Grothendieck <[hidden email]>
Subject: Re: [R] lrm in Design
To: loch1 <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Presumably the same idea would apply to lrm if
appropriately adapted.

On Sun, Aug 30, 2009 at 5:19 AM, loch1<[hidden email]> wrote:

>
> Thank you for your reply, it helped. That thread is about glm, I take it that
> lrm behaves in the same way?!?
>
>
>
> See:
> https://stat.ethz.ch/pipermail/r-help/2009-July/204192.html
>
> and also other posts in that thread.
>
> --
> View this message in context: http://www.nabble.com/lrm-in-Design-tp25206737p25209958.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


------------------------------

Message: 25
Date: Sun, 30 Aug 2009 12:35:19 -0400
From: jim holtman <[hidden email]>
Subject: Re: [R] abline does not show on plot if slope differs from 0
To: Gábor Pozsgai <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Very simple, look at the x-axis you are plotting with.   If you want
the line, then use:

abline(a=1996, b=-1, col="dark grey", lty="dashed")


On Sun, Aug 30, 2009 at 12:25 PM, Gábor Pozsgai<[hidden email]> wrote:

> Hi,
>
> Does anybody have a hint why this does not work?
>
> plot(c(1991,2000), c(-1,10))
> abline(a=6, b=-1, col="dark grey", lty="dashed")
>
> It does not generate an error message, but the line does not turn up.
>
> If b=0 it works fine. If I change the axes of the plot to
> plot(c(1,10), c(-1,10)) it does work fine as well.
>
> Thanks,
> Gabor
>
> --
> Pozsgai Gábor
> www.coleoptera.hu
> www.photogabor.com
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?



------------------------------

Message: 26
Date: Sun, 30 Aug 2009 12:34:29 -0400
From: Gabor Grothendieck <[hidden email]>
Subject: Re: [R] abline does not show on plot if slope differs from 0
To: Gábor Pozsgai <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

The intercept, a, is the value when x = 0.   Thus if you want y = 6
when x = 1991 then the line is y = 6 - (x-1991) = 1997 - x so a = 1997:

abline(a=1997, b=-1, col="dark grey", lty="dashed")


On Sun, Aug 30, 2009 at 12:25 PM, Gábor Pozsgai<[hidden email]> wrote:

> Hi,
>
> Does anybody have a hint why this does not work?
>
> plot(c(1991,2000), c(-1,10))
> abline(a=6, b=-1, col="dark grey", lty="dashed")
>
> It does not generate an error message, but the line does not turn up.
>
> If b=0 it works fine. If I change the axes of the plot to
> plot(c(1,10), c(-1,10)) it does work fine as well.
>
> Thanks,
> Gabor
>
> --
> Pozsgai Gábor
> www.coleoptera.hu
> www.photogabor.com
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
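Putting the intercept correction together, a minimal sketch:

```r
# y = 6 at x = 1991 with slope -1 means y = 6 - (x - 1991) = 1997 - x,
# so the x = 0 intercept that abline() expects is a = 1997.
plot(c(1991, 2000), c(-1, 10))
abline(a = 1997, b = -1, col = "dark grey", lty = "dashed")
```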


------------------------------

Message: 27
Date: Sun, 30 Aug 2009 13:14:34 -0400
From: Duncan Murdoch <[hidden email]>
Subject: Re: [R] RConsole processing crashes Rgui.exe
To: Jack Tanner <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 30/08/2009 11:40 AM, Jack Tanner wrote:

> Using R 2.9.2 on Windows XP SP3.
>
> 1. Edit ~/Rconsole, and set
>
> font = TT Bitstream Vera Sans Mono
>
> 2. Start Rgui.exe
> 3. Go to Edit, GUI Preferences
> 4. Rgui.exe crashes
>
> Rgui.exe does not crash if I do not access GUI Preferences (i.e., if I just use
> R), and it does correctly use Bitstream Vera Sans Mono as my font. Nor does it
> crash if I edit RConsole to set the font back to Lucida Console and then try to
> access GUI Preferences.
>
Thanks for the report.  I can reproduce that; I'll see if I can figure
out what's going wrong.

Duncan Murdoch



------------------------------

Message: 28
Date: Sun, 30 Aug 2009 13:53:33 -0400
From: Sunil Suchindran <[hidden email]>
Subject: Re: [R] standard error associated with correlation
    coefficient
To: Donald Braman <[hidden email]>
Cc: r-help <[hidden email]>
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Here are some options for confidence intervals.
#by hand

sample_r <- .5
n_sample <- 100
df <- n_sample-1
fisher_z <- 0.5*(log(1+sample_r)-log(1-sample_r))
se_z <- 1/sqrt(n_sample-3)
t_crit <- qt(.975,df ,lower.tail=TRUE)
z_lci <- fisher_z - (t_crit * se_z)
z_uci <- fisher_z + (t_crit * se_z)

(r_lci <- tanh(z_lci))
(r_uci <- tanh(z_uci))
# Compare to the CIr function in psychometric library
#that uses standard normal instead of t

library(psychometric)
CIr(sample_r,n_sample)


And see

http://www.stat.umn.edu/geyer/5601/examp/bse.html

for bootstrap options.

On Thu, Aug 27, 2009 at 3:03 PM, Donald Braman <[hidden email]> wrote:

> I want the standard error associated with a correlation.  I can calculate
> using cor & var, but am wondering if there are libraries that already
> provide this function.
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
    [[alternative HTML version deleted]]



------------------------------

Message: 29
Date: Sun, 30 Aug 2009 14:30:29 -0400
From: Wensui Liu <[hidden email]>
Subject: Re: [R] Best R text editors?
To: Rodrigo Aluizio <[hidden email]>
Cc: R Help <[hidden email]>
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Emacs + ESS on Windows is just as powerful as on Linux.
Emacs is the only programming editor I would ever need.

On Sun, Aug 30, 2009 at 11:37 AM, Rodrigo Aluizio <[hidden email]> wrote:

> Well, on Linux => Emacs+ Ess
> On Windows => Tinn-R
>
> -------------------------------------------------------------
> MSc. Rodrigo Aluizio
> Centro de Estudos do Mar/UFPR
> Laboratório de Micropaleontologia
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


--
==============================
WenSui Liu
Blog   : statcompute.spaces.live.com
Tough Times Never Last. But Tough People Do.  - Robert Schuller
==============================

    [[alternative HTML version deleted]]



------------------------------

Message: 30
Date: Sun, 30 Aug 2009 11:30:42 -0700 (PDT)
From: njhuang86 <[hidden email]>
Subject: [R] Removing the numbers of the X/Y axis
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hi all,

Suppose I have some data that I plot using the histogram command - ie.
hist(x)
Is there an option that will allow me to remove the numbers that appear
along the X and Y axis as I'm just interested in the overall distribution of
the data and not the actual values? Anyways, any help is greatly
appreciated!


--
View this message in context: http://www.nabble.com/Removing-the-numbers-of-the-X-Y-axis-tp25214292p25214292.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 31
Date: Sun, 30 Aug 2009 14:46:30 -0400
From: Jorge Ivan Velez <[hidden email]>
Subject: Re: [R] Removing the numbers of the X/Y axis
To: njhuang86 <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hi njhuang86,
Something like this?
x <- rnorm(100)
hist(x, axes = FALSE)

HTH,
Jorge


On Sun, Aug 30, 2009 at 2:30 PM, njhuang86 <[hidden email]> wrote:

>
> Hi all,
>
> Suppose I have some data that I plot using the histogram command - ie.
> hist(x)
> Is there an option that will allow me to remove the numbers that appear
> along the X and Y axis as I'm just interested in the overall distribution
> of
> the data and not the actual values? Anyways, any help is greatly
> appreciated!
>
>
> --
> View this message in context:
> http://www.nabble.com/Removing-the-numbers-of-the-X-Y-axis-tp25214292p25214292.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
    [[alternative HTML version deleted]]



------------------------------

Message: 32
Date: Sun, 30 Aug 2009 18:55:15 +0000
From: Andrew Aldersley <[hidden email]>
Subject: [R] Complexity parameter in rpart
To: <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain


Hi all,

I'm currently using the 'rpart' function to run some regression analysis, and I am at the point where I wish to prune my overfitted trees. Having read the documentation, I understand that this requires the complexity parameter. My question is how to choose the correct complexity parameter for my tree. In some places (http://www.statmethods.net/advstats/cart.html) I have read that it is best to select the complexity parameter that minimises the cross-validated (x) error of the model, but elsewhere I have read that the optimum cp is the first value on the left above the '1+SE' line of the complexity parameter plot.

I was hoping someone might be able to clarify this minor issue for me.

Many thanks,

Andy
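One commonly used recipe, sketched below, is the minimum-xerror rule (the 1-SE rule instead keeps the simplest tree whose xerror is within one standard error of that minimum):

```r
library(rpart)
# car.test.frame ships with rpart; any regression tree works here.
fit <- rpart(Mileage ~ Weight, data = car.test.frame, method = "anova")
printcp(fit)  # cp table; xerror is the cross-validated error
best <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned <- prune(fit, cp = best)
```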


    [[alternative HTML version deleted]]



------------------------------

Message: 33
Date: Sun, 30 Aug 2009 18:24:08 +0200
From: Emmanuel Charpentier <[hidden email]>
Subject: Re: [R] Bootstrap inference for the sample median?
To: [hidden email]
Message-ID: <1251649448.25776.22.camel@PortableToshiba>
Content-Type: text/plain; charset="UTF-8"

On Sunday, 30 August 2009 at 18:43 +0530, Ajay Shah wrote:

> Folks,
>
> I have this code fragment:
>
>   set.seed(1001)
>   x <- c(0.79211363702017, 0.940536712079832, 0.859757602692931, 0.82529998629531,
>          0.973451006822, 0.92378802164835, 0.996679563355802,
>          0.943347739494445, 0.992873542980045, 0.870624707845108, 0.935917364493788)
>   range(x)
>   # from 0.79 to 0.996
>
>   e <- function(x,d) {
>     median(x[d])
>   }
>
>   b <- boot(x, e, R=1000)
>   boot.ci(b)
>
> The 95% confidence interval, as seen with `Normal' and `Basic'
> calculations, has an upper bound of 1.0028 and 1.0121.
>
> How is this possible? If I sample with replacement from values which
> are all lower than 1, then any sample median of these bootstrap
> samples should be lower than 1. The upper cutoff of the 95% confidence
> interval should also be below 1.
Nope. "Normal" and "Basic" try to fit (some form of) a normal
distribution to the bootstrap sample of your statistic. But the median of
such a small sample is quite far from normal (hint: it is a discrete
distribution with only very few possible values, at most as many values
as the sample has elements. Exercise: derive the distribution of median(x)...).

To convince yourself, look at the histogram of the bootstrap
distribution of median(x). Contrast with the bootstrap distribution of
mean(x). Meditate. Conclude...

HTH,

                    Emmanuel Charpentier
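The suggested comparison can be sketched as follows (data simulated to mimic the question):

```r
library(boot)
set.seed(1001)
x <- runif(11, 0.79, 1.0)  # eleven values, all below 1, as in the question
b.med  <- boot(x, function(x, d) median(x[d]), R = 1000)
b.mean <- boot(x, function(x, d) mean(x[d]),   R = 1000)
par(mfrow = c(1, 2))
hist(b.med$t,  main = "bootstrap medians")  # a few discrete spikes
hist(b.mean$t, main = "bootstrap means")    # roughly bell-shaped
```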



------------------------------

Message: 34
Date: Sun, 30 Aug 2009 09:40:11 -0700 (PDT)
From: Grzes <[hidden email]>
Subject: Re: [R] about isoMDS method
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Yes, I'm using euclidean distances
--
View this message in context: http://www.nabble.com/about-isoMDS-method-tp25211016p25213217.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 35
Date: Sun, 30 Aug 2009 22:14:32 +0200
From: Friedericksen Hope <[hidden email]>
Subject: [R] Computer Modern Fonts in R graphic
To: [hidden email]
Message-ID: <h7emj9$3j0$[hidden email]>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hello all,

I am trying to use Computer Modern fonts in my R graphics. I tried to do,
as described here: http://www.stat.auckland.ac.nz/~paul/R/CM/CMR.html 
but unfortunately, it does not work.

First of all I downloaded the cm-lgc package and the AFM and PFB-files
from the page and put them in my R working directory, so far, so good.

Then I tried to run the following code:

> sn <- seq(1,7,length=100)
> sm <- seq(0,4,length=100)
> f <- function(x,y) {5.64080973 + 0.12271038*x - 0.27725481 * y + 0.29281216*x*y}
> z <- outer(sn,sm,f)
>
> nrz <- nrow(z)
> ncz <- ncol(z)
> jet.colors <- colorRampPalette( c("yellow", "red") )
> nbcol <- 100
> color <- jet.colors(nbcol)
> zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]
> facetcol <- cut(zfacet, nbcol)
>
> CM <- Type1Font("CM",
>                      c("cm-lgc/fonts/afm/public/cm-lgc/fcmr8a.afm",
>                        "cm-lgc/fonts/afm/public/cm-lgc/fcmb8a.afm",
>                        "cm-lgc/fonts/afm/public/cm-lgc/fcmri8a.afm",
>                        "cm-lgc/fonts/afm/public/cm-lgc/fcmbi8a.afm",
>                        "cmsyase.afm"))
> postscriptFonts(CM=CM)
> pdf("snxsm.pdf")
> par(family="CM")
> persp(sn,sm,z,xlab="SN", ylab="SM",zlab="VI",theta=-20,phi=20,r=5,shade=0.01,col=color[facetcol])
> dev.off()
It works fine until the persp() function, where I get:

> persp(sn,sm,z,xlab="SN", ylab="SM",zlab="VI",theta=-20,phi=20,r=5,shade=0.01,col=color[facetcol])
> Error in persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>   Invalid font type
> In addition: Warning messages:
> 1: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>   font family not found in PostScript font database
> 2: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>   font family not found in PostScript font database
> 3: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>   font family not found in PostScript font database
> 4: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>   font family not found in PostScript font database
> 5: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>   font family not found in PostScript font database
Any help is appreciated! Also a hint/link how to install the CM fonts
under R for general use, so that I don't have to have it in my wd all
the time.

Thank you very much!

Greetings,
Friedericksen



------------------------------

Message: 36
Date: Mon, 31 Aug 2009 09:37:39 +1200
From: Paul Murrell <[hidden email]>
Subject: Re: [R] Computer Modern Fonts in R graphic
To: "[hidden email]" <[hidden email]>
Cc: "[hidden email]" <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi


Friedericksen Hope wrote:

> Hello all,
>
> I am trying to use Computer Modern fonts in my R graphics. I tried to do,
> as described here: http://www.stat.auckland.ac.nz/~paul/R/CM/CMR.html 
> but unfortunately, it does not work.
>
> First of all I downloaded the cm-lgc package and the AFM and PFB-files
> from the page and put them in my R working directory, so far, so good.
>
> Then I tried to run the following code:
>
>> sn <- seq(1,7,length=100)
>> sm <- seq(0,4,length=100)
>> f <- function(x,y) {5.64080973 + 0.12271038*x - 0.27725481 * y + 0.29281216*x*y}
>> z <- outer(sn,sm,f)
>>
>> nrz <- nrow(z)
>> ncz <- ncol(z)
>> jet.colors <- colorRampPalette( c("yellow", "red") )
>> nbcol <- 100
>> color <- jet.colors(nbcol)
>> zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]
>> facetcol <- cut(zfacet, nbcol)
>>
>> CM <- Type1Font("CM",
>>                      c("cm-lgc/fonts/afm/public/cm-lgc/fcmr8a.afm",
>>                        "cm-lgc/fonts/afm/public/cm-lgc/fcmb8a.afm",
>>                        "cm-lgc/fonts/afm/public/cm-lgc/fcmri8a.afm",
>>                        "cm-lgc/fonts/afm/public/cm-lgc/fcmbi8a.afm",
>>                        "cmsyase.afm"))
>> postscriptFonts(CM=CM)
>> pdf("snxsm.pdf")
>> par(family="CM")
>> persp(sn,sm,z,xlab="SN", ylab="SM",zlab="VI",theta=-20,phi=20,r=5,shade=0.01,col=color[facetcol])
>> dev.off()
>
> It works fine until the persp() function, where I get:
>
>> persp(sn,sm,z,xlab="SN", ylab="SM",zlab="VI",theta=-20,phi=20,r=5,shade=0.01,col=color[facetcol])
>> Error in persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>   Invalid font type
>> In addition: Warning messages:
>> 1: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>   font family not found in PostScript font database
>> 2: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>   font family not found in PostScript font database
>> 3: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>   font family not found in PostScript font database
>> 4: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>   font family not found in PostScript font database
>> 5: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>   font family not found in PostScript font database

Looks like you are trying to produce PDF output, but you've set up the
fonts in the PostScript font database.  Try using pdfFonts() instead.


> Any help is appreciated! Also a hint/link how to install the CM fonts
> under R for general use, so that I don't have to have it in my wd all
> the time.


The cm-lgc package should have instructions for installing the fonts on
your system
(e.g., http://www.ctan.org/tex-archive/fonts/ps-type1/cm-lgc/INSTALL)
but you would then need to adjust the paths that you specify in the call
to Type1Font().

You could then add the calls to Type1Font() and pdfFonts() to your
.Rprofile so that this font is defined every time you start an R session
(see ?Startup).

Paul


[[elided Yahoo spam]]
>
> Greetings,
> Friedericksen
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[hidden email]
http://www.stat.auckland.ac.nz/~paul/



------------------------------

Message: 37
Date: Sun, 30 Aug 2009 22:39:48 +0100
From: Oliver Kullmann <[hidden email]>
Subject: [R] error with summary(vector)??
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

Hello,

I get

> summary(E)
     level             nodes            ave_nodes            time
Min.   :      1   Min.   :    1.00   Min.   :  10.71   Min.   :  0.0000
1st Qu.: 237414   1st Qu.:    2.00   1st Qu.:  19.70   1st Qu.:  0.0100
Median : 749229   Median :    3.00   Median :  27.01   Median :  0.0100
Mean   : 767902   Mean   :   49.85   Mean   :  98.89   Mean   :  0.2296
3rd Qu.:1111288   3rd Qu.:    7.00   3rd Qu.:  80.38   3rd Qu.:  0.0300
Max.   :2097152   Max.   :70632.00   Max.   :4451.67   Max.   :338.9200
    ave_time          singles    autarkies     depth    ave_reductions
Min.   : 0.0490   Min.   :0   Min.   :0   Min.   :33   Min.   : 0.20
1st Qu.: 0.0910   1st Qu.:0   1st Qu.:0   1st Qu.:52   1st Qu.: 7.50
Median : 0.1270   Median :0   Median :0   Median :52   Median : 9.92
Mean   : 0.4636   Mean   :0   Mean   :0   Mean   :52   Mean   :11.76
3rd Qu.: 0.3860   3rd Qu.:0   3rd Qu.:0   3rd Qu.:52   3rd Qu.:13.46
Max.   :16.2720   Max.   :0   Max.   :0   Max.   :52   Max.   :70.00
> summary(E$nodes)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
    1.00     2.00     3.00    49.85     7.00 70630.00

The max-value for E$nodes is incorrect --- it is 70632, as computed by
summary(E)?

Oliver



------------------------------

Message: 38
Date: Mon, 31 Aug 2009 09:40:12 +1200
From: "John Sansom" <[hidden email]>
Subject: [R] test for bimodality
To: <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=US-ASCII

Has a test for bimodality been implemented in R?

Thanks, John

NIWA is the trading name of the National Institute of Water & Atmospheric Research Ltd.



------------------------------

Message: 39
Date: Sun, 30 Aug 2009 17:59:31 -0400
From: Duncan Murdoch <[hidden email]>
Subject: Re: [R] error with summary(vector)??
To: Oliver Kullmann <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 30/08/2009 5:39 PM, Oliver Kullmann wrote:

> Hello,
>
> I get
>
>> summary(E)
>      level             nodes            ave_nodes            time
>  Min.   :      1   Min.   :    1.00   Min.   :  10.71   Min.   :  0.0000
>  1st Qu.: 237414   1st Qu.:    2.00   1st Qu.:  19.70   1st Qu.:  0.0100
>  Median : 749229   Median :    3.00   Median :  27.01   Median :  0.0100
>  Mean   : 767902   Mean   :   49.85   Mean   :  98.89   Mean   :  0.2296
>  3rd Qu.:1111288   3rd Qu.:    7.00   3rd Qu.:  80.38   3rd Qu.:  0.0300
>  Max.   :2097152   Max.   :70632.00   Max.   :4451.67   Max.   :338.9200
>     ave_time          singles    autarkies     depth    ave_reductions
>  Min.   : 0.0490   Min.   :0   Min.   :0   Min.   :33   Min.   : 0.20
>  1st Qu.: 0.0910   1st Qu.:0   1st Qu.:0   1st Qu.:52   1st Qu.: 7.50
>  Median : 0.1270   Median :0   Median :0   Median :52   Median : 9.92
>  Mean   : 0.4636   Mean   :0   Mean   :0   Mean   :52   Mean   :11.76
>  3rd Qu.: 0.3860   3rd Qu.:0   3rd Qu.:0   3rd Qu.:52   3rd Qu.:13.46
>  Max.   :16.2720   Max.   :0   Max.   :0   Max.   :52   Max.   :70.00
>> summary(E$nodes)
>     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
>     1.00     2.00     3.00    49.85     7.00 70630.00
>
> The max-value for E$nodes is incorrect --- it is 70632, as computed by
> summary(E)?
Read ?summary.  The "digits" parameter is handled differently in your
two cases:

> format(c(49.85, 70632), digits=4)
[1] "   49.85" "70632.00"

> signif(c(49.85, 70632), digits=4)
[1]    49.85 70630.00


Duncan Murdoch



------------------------------

Message: 40
Date: Sun, 30 Aug 2009 18:05:40 -0400
From: Duncan Murdoch <[hidden email]>
Subject: Re: [R] RConsole processing crashes Rgui.exe
To: Jack Tanner <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 30/08/2009 11:40 AM, Jack Tanner wrote:

> Using R 2.9.2 on Windows XP SP3.
>
> 1. Edit ~/Rconsole, and set
>
> font = TT Bitstream Vera Sans Mono
>
> 2. Start Rgui.exe
> 3. Go to Edit, GUI Preferences
> 4. Rgui.exe crashes
>
> Rgui.exe does not crash if I do not access GUI Preferences (i.e., if I just use
> R), and it does correctly use Bitstream Vera Sans Mono as my font. Nor does it
> crash if I edit RConsole to set the font back to Lucida Console and then try to
> access GUI Preferences.
>
I've found the bug, and a partial fix will appear in R-devel and
R-patched shortly.  The problem is that the dialog assumed you're using
one of the fonts in its list, and the Bitstream font isn't there.  If
you'd like to edit the code to allow a more general font selection (any
fixed width font on your system would make sense) I'd consider a patch,
but I don't have time to do that.  All I've done is allow you to type in
any valid font name, or put it into Rconsole, which is more or less
equivalent.

A problem with my partial fix is that the sample text doesn't get
updated after you change fonts:  because of limitations in the ancient
GUI toolkit we use, there are no notifications sent when you select a
font other than by choosing from the predefined list.  The whole thing
needs rewriting (and has for years), but that's certainly not something
I'm planning to take on.

Duncan Murdoch



------------------------------

Message: 41
Date: Sun, 30 Aug 2009 15:08:17 -0700
From: Noah Silverman <[hidden email]>
Subject: [R] Sapply
To: r help <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi,

I need a bit of guidance with the sapply function.  I've read the help
page, but am still a bit unsure how to use it.

I have a large data frame with about 100 columns and 30,000 rows.  One
of the columns is "group" of which there are about 2,000 distinct "groups".

I want to normalize (sum to 1) one of my variables per-group.

Normally, I would just write a huge "for each" loop, but have read that
is hugely inefficient with R.

The old way would be (just an example, syntax might not be perfect):

for (g in unique(data$group)) {
    idx <- data$group == g
    data$new_score[idx] <- data$score[idx] / sum(data$score[idx])
}

How would I simplify this with sapply?

Thanks!

--
Noah
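One vectorized way to do this, sketched with ave() (column names taken from the question; toy data invented):

```r
df <- data.frame(group = rep(c("a", "b"), each = 3), score = 1:6)
# ave() applies the function within each group and returns a vector
# aligned with the original rows.
df$new_score <- ave(df$score, df$group, FUN = function(s) s / sum(s))
tapply(df$new_score, df$group, sum)  # each group now sums to 1
```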



------------------------------

Message: 42
Date: Sun, 30 Aug 2009 15:10:09 -0700
From: Noah Silverman <[hidden email]>
Subject: [R] SVM coefficients
To: r help <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello,

I'm using the svm function from the e1071 package.

It works well and gives me nice results.

I'm very curious to see the actual coefficients calculated for each
input variable.  (Other packages, like RapidMiner, show you this
automatically.)

I've tried looking at attributes of the model and do see a
"coefficients" item, but printing it returns a NULL result.

Can someone point me in the right direction?

Thanks

--
Noah
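For a linear kernel, the primal weights can be recovered from the fitted object (a widely used recipe, not an official e1071 accessor):

```r
library(e1071)
m <- svm(Species ~ ., data = iris, kernel = "linear")
# Rows of w are the weight vectors of the binary sub-problems;
# m$rho holds the corresponding negative intercepts.
w <- t(m$coefs) %*% m$SV
b <- -m$rho
```

For non-linear kernels there is no single weight per input variable, which is why no such component is populated.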



------------------------------

Message: 43
Date: Mon, 31 Aug 2009 10:17:13 +1200
From: Rolf Turner <[hidden email]>
Subject: Re: [R] test for bimodality
To: John Sansom <[hidden email]>
Cc: "[hidden email]" <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset="US-ASCII"; format=flowed


On 31/08/2009, at 9:40 AM, John Sansom wrote:

> Has a test for bimodality been implemented in R?

Doing RSiteSearch("test for bimodality") yields one hit,
which points to

http://finzi.psych.upenn.edu/Rhelp08/2008-September/173308.html

It looks like it might be *some* help to you.

    cheers,

        Rolf Turner




------------------------------

Message: 44
Date: Sun, 30 Aug 2009 23:18:47 +0100
From: Gavin Simpson <[hidden email]>
Subject: Re: [R] about isoMDS method
To: Grzes <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain

On Sun, 2009-08-30 at 09:40 -0700, Grzes wrote:
> Yes, I'm using euclidean distances

Sorry, I deleted the original. Here are some responses:

First, to be concrete, here is a reproducible example...

set.seed(123)
D <- data.frame(matrix(rnorm(100), ncol = 10))
Dij <- dist(D)
require(MASS)
Dnmds <- isoMDS(Dij, k = 2)

Now look at structure of Dnmds:

> str(Dnmds)
List of 2
$ points: num [1:10, 1:2] -0.356 1.058 -2.114 -0.177 0.575 ...
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : NULL
  .. ..$ : NULL
$ stress: num 12.2

The second matrix you talk about is component $points of the object
returned by isoMDS.

1) The rows of $points pertain to your n samples. In the example I give,
there are 10 rows as we have ten samples. Row 1 is the location of
sample 1 in the 2d nMDS solution. Likewise for Row 2.

2) Why do you think that? nMDS aims to preserve the rank correlation of
dissimilarities, not of the actual dissimilarities. You are also working
with a 2D solution of the nMDS fit to your full dissimilarity matrix.
There will be some level of approximation going on.

You can look at how well the dissimilarities in the nMDS solution
reflect the original distances using the function Shepard:

## continuing from the above
S <- Shepard(Dij, Dnmds$points)
plot(S)
lines(yf ~ x, data = S, type = "S", col = "blue", lwd = 2)

You can also compute some measures to help understand how well the nMDS
distances reflect the original distances.

## Note, these are taken from function stressplot in the vegan package
## written by Jari Oksanen, which uses isoMDS from package MASS
## internally.
## you might want to look at metaMDSiter() in that package to do random
## starts to check you haven't converged to a sub-optimal local solution
(stress <- sum((S$y - S$yf)^2) / sum(S$y^2)) ## intermediate calc
(rstress <- 1 - stress) ## non-metric fit, R^2
(ralscal <- cor(S$y, S$yf)^2) ## Linear fit, R2
(stress <- sqrt(stress) * 100)
## compare with
Dnmds$stress ## statistic minimised in nMDS algorithm

In this example, the stress is quite low, hence quite a good fit.

HTH

G
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson             [t] +44 (0)20 7679 0522
ECRC, UCL Geography,          [f] +44 (0)20 7679 0565
Pearson Building,             [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London          [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT.                 [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%



------------------------------

Message: 45
Date: Mon, 31 Aug 2009 01:28:43 +0300
From: Tal Galili <[hidden email]>
Subject: [R] Combining: R + Condor in 2009 ? (+foreach maybe?)
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hello dear R-help group (and David Smith from REvolution),

I would like to perform parallel computing using R with Condor (hopefully
using foreach or other recommended solutions, if available) for some
"Embarrassingly parallel" problem.
I will start by listing what I found so far, and then go on asking for help.

So far I found a manual by Xianhong Xie in Rnews_2005-2 (see page 13)
talking about R and Condor:
http://cran.r-project.org/doc/Rnews/Rnews_2005-2.pdf

I also found several references for R and Condor in the High Performance
Computing task view:
http://cran.r-project.org/web/views/HighPerformanceComputing.html
It states: "The GridR package
(http://cran.r-project.org/web/packages/GridR/index.html)
by Wegener et al. can be used in a grid computing environment via a web
service, via ssh or via Condor or Globus."
I then found a 2008 lecture slides on the subject here:
http://www.statistik.uni-dortmund.de/useR-2008/tutorials/GridR.pdf

And an article showing it was already done:
http://www.ecmlpkdd2008.org/files/pdf/workshops/ubiqkd/3.pdf
(but without code examples, to my dismay)



What I wish from you is some guidance.
Is there more up-to-date (formal) material on Condor and R than Xianhong
Xie's article from 2005?
Is GridR a good way of making the connection?
Is using the foreach package relevant or useful here?

I am not a UNIX person. I never ran R in "batch", and any "step by step"
instructions (either by referring to links or explaining here) would be of
great help.

Thanks in advance,
Tal



----------------------------------------------


My contact information:
Tal Galili
Phone number: 972-50-3373767
FaceBook: Tal Galili
My Blogs:
http://www.r-statistics.com/
http://www.talgalili.com
http://www.biostatistics.co.il

    [[alternative HTML version deleted]]



------------------------------

Message: 46
Date: Sun, 30 Aug 2009 18:28:55 -0400
From: Gabor Grothendieck <[hidden email]>
Subject: Re: [R] Sapply
To: Noah Silverman <[hidden email]>
Cc: r help <[hidden email]>
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Try this:

data$score <- ave(data$score, data$group, FUN = prop.table)
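
For instance, on a tiny made-up data frame (a sketch, not from the original
message), ave() applies prop.table() within each group so that each group's
scores sum to 1:

```r
# toy data: two groups with known scores (hypothetical example)
data <- data.frame(group = c("a", "a", "b", "b", "b"),
                   score = c(1, 3, 2, 2, 6))

# ave() replaces each value by FUN applied within that value's group
data$score <- ave(data$score, data$group, FUN = prop.table)

data$score                            # 0.25 0.75 0.20 0.20 0.60
tapply(data$score, data$group, sum)   # both groups now sum to 1
```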

On Sun, Aug 30, 2009 at 6:08 PM, Noah Silverman<[hidden email]> wrote:

> Hi,
>
> I need a bit of guidance with the sapply function.  I've read the help page,
> but am still a bit unsure how to use it.
>
> I have a large data frame with about 100 columns and 30,000 rows.  One of
> the columns is "group" of which there are about 2,000 distinct "groups".
>
> I want to normalize (sum to 1) one of my variables per-group.
>
> Normally, I would just write a huge "for each" loop, but have read that is
> hugely inefficient with R.
>
> The old way would be (just an example, syntax might not be perfect):
>
> for (group in data$group){
>    for (score in data[data$group == group]){
>        new_score <- score / sum(data$score[data$group==group])
>    }
> }
>
> How would I simplify this with sapply?
>
> Thanks!
>
> --
> Noah
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


------------------------------

Message: 47
Date: Sun, 30 Aug 2009 18:47:34 -0400
From: Duncan Murdoch <[hidden email]>
Subject: Re: [R] Sapply
To: Noah Silverman <[hidden email]>
Cc: r help <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 30/08/2009 6:08 PM, Noah Silverman wrote:

> Hi,
>
> I need a bit of guidance with the sapply function.  I've read the help
> page, but am still a bit unsure how to use it.
>
> I have a large data frame with about 100 columns and 30,000 rows.  One
> of the columns is "group" of which there are about 2,000 distinct "groups".
>
> I want to normalize (sum to 1) one of my variables per-group.
>
> Normally, I would just write a huge "for each" loop, but have read that
> is hugely inefficient with R.
Don't believe what you read, try it.  If the for loop takes 100 times
longer than the fastest method, but it still only takes 10 seconds, is
it worth optimizing?
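
Along those lines, a quick sketch (toy data invented here, not from the
original thread) that times the loop against a vectorized ave() approach:

```r
# made-up data frame roughly matching the sizes in the question
set.seed(1)
data <- data.frame(group = sample(1:2000, 30000, replace = TRUE),
                   score = runif(30000))

# per-group loop: normalize score within each group
t.loop <- system.time({
    new1 <- numeric(nrow(data))
    for (g in unique(data$group)) {
        idx <- data$group == g
        new1[idx] <- data$score[idx] / sum(data$score[idx])
    }
})

# vectorized alternative
t.ave <- system.time(new2 <- ave(data$score, data$group, FUN = prop.table))

all.equal(new1, new2)   # same answer; compare t.loop and t.ave yourself
```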

Duncan Murdoch

>
> The old way would be (just an example, syntax might not be perfect):
>
> for (group in data$group){
>      for (score in data[data$group == group]){
>          new_score <- score / sum(data$score[data$group==group])
>      }
> }
>
> How would I simplify this with sapply?
>
> Thanks!
>
> --
> Noah
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


------------------------------

Message: 48
Date: Sun, 30 Aug 2009 15:54:08 -0700 (PDT)
From: Tim Clark <[hidden email]>
Subject: [R] Trying to rename spatial pts data frame slot that isn't a
    slot()
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

Dear List,

I am analyzing the home range area of fish and seem to have lost the
individuals' ID names during my manipulations, and can't find out how to
restore them.  I calculated the MCP of the fish using mcp() in adehabitat.
The MCPs were converted to a spatial polygons data frame and exported to
qGIS for manipulation.  At this point the ID names were lost.  I brought
the manipulated shapefiles back into R, but can't figure out how to rename
the individuals.

#Calculate MCP and save as a shapefile
    my.mcp <- mcp(xy, id=id, percent=100)
    spol <- area2spol(my.mcp)
    spdf <- SpatialPolygonsDataFrame(spol,
        data=data.frame(getSpPPolygonsLabptSlots(spol),
                        row.names=getSpPPolygonsIDSlots(spol)),
        match.ID = TRUE)
    writeOGR(spdf, dsn=mcp.dir, layer="All Mantas MCP",
        driver="ESRI Shapefile")

#Read shapefile manipulated in qGIS
    mymcp<-readOGR(dsn=mcp.dir,layer="All mantas MCP land differenc")


My spatial points data frame has a number of "Slot"s, including one that contained the original names, called Slot "ID".  However, I cannot access this slot using slot() or slotNames().

> slotNames(mymcp)
[1] "data"        "polygons"    "plotOrder"   "bbox"      "proj4string"

What am I missing here?  Is Slot "ID" not a slot?  Can I export the ID's with the shapefiles to qGIS?  Can I rename the ID's when I bring them back into R?  When is a slot not a slot()?

Thanks,

TIm




Tim Clark
Department of Zoology
University of Hawaii



------------------------------

Message: 49
Date: Mon, 31 Aug 2009 00:22:56 +0100 (BST)
From: Christian Hennig <[hidden email]>
Subject: Re: [R] test for bimodality
To: [hidden email]
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

Dear John,

you may check dip in package diptest, which tests unimodality vs.
multimodality (you'll need qDiptab for that, too).
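
A minimal sketch of that (made-up data; assumes the diptest package is
installed):

```r
library(diptest)

set.seed(42)
x <- c(rnorm(100, mean = 0), rnorm(100, mean = 5))  # clearly bimodal sample

dip(x)         # Hartigan's dip statistic; larger values suggest multimodality
data(qDiptab)  # table of dip quantiles, for comparison with critical values
```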

Apologies if this misses the point; I only saw the last responses but not
the original question so this may not be what you are after, or already
mentioned, but I decided that I risk making a fool of myself for a small
chance that it helps you.

Cheers,
Christian

On Mon, 31 Aug 2009, Rolf Turner wrote:

>
> On 31/08/2009, at 9:40 AM, John Sansom wrote:
>
>> Has a test for bimodality been implemented in R?
>
> Doing RSiteSearch("test for bimodality") yields one hit,
> which points to
>
> http://finzi.psych.upenn.edu/Rhelp08/2008-September/173308.html
>
> It looks like it might be *some* help to you.
>
>     cheers,
>
>         Rolf Turner
>
> ######################################################################
> Attention:\ This e-mail message is privileged and confid...{{dropped:9}}
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
*** --- ***
Christian Hennig
University College London, Department of Statistical Science
Gower St., London WC1E 6BT, phone +44 207 679 1698
[hidden email], www.homepages.ucl.ac.uk/~ucakche



------------------------------

Message: 50
Date: Sun, 30 Aug 2009 16:40:06 -0700
From: Crantastic <[hidden email]>
Subject: [R] CRAN (and crantastic) updates this week
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=utf-8

CRAN (and crantastic) updates this week

New packages
------------


Updated packages
----------------



New reviews
-----------



This email provided as a service for the R community by
http://crantastic.org.

Like it?  Hate it?  Please let us know: [hidden email].



------------------------------

Message: 51
Date: Mon, 31 Aug 2009 11:42:23 +1200
From: Rolf Turner <[hidden email]>
Subject: Re: [R] Sapply
To: Achim Zeileis <[hidden email]>
Cc: r help <[hidden email]>, Duncan Murdoch
    <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset="US-ASCII"; delsp=yes; format=flowed


Fortune candidate?

    cheers,

        Rolf Turner

> On 30/08/2009 6:08 PM, Noah Silverman wrote:

    <snip>

>> Normally, I would just write a huge "for each" loop, but have read 
>> that
>> is hugely inefficient with R.
>
On 31/08/2009, at 10:47 AM, Duncan Murdoch wrote:

> Don't believe what you read, try it.  If the for loop takes 100 times
> longer than the fastest method, but it still only takes 10 seconds, is
> it worth optimizing?

######################################################################
Attention:\ This e-mail message is privileged and confid...{{dropped:9}}



------------------------------

Message: 52
Date: Sun, 30 Aug 2009 18:51:57 -0500
From: hadley wickham <[hidden email]>
Subject: Re: [R] Sapply
To: Noah Silverman <[hidden email]>
Cc: r help <[hidden email]>
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Aug 30, 2009 at 5:08 PM, Noah Silverman<[hidden email]> wrote:

> Hi,
>
> I need a bit of guidance with the sapply function.  I've read the help page,
> but am still a bit unsure how to use it.
>
> I have a large data frame with about 100 columns and 30,000 rows.  One of
> the columns is "group" of which there are about 2,000 distinct "groups".
>
> I want to normalize (sum to 1) one of my variables per-group.
>
> Normally, I would just write a huge "for each" loop, but have read that is
> hugely inefficient with R.
>
> The old way would be (just an example, syntax might not be perfect):
>
> for (group in data$group){
>    for (score in data[data$group == group]){
>        new_score <- score / sum(data$score[data$group==group])
>    }
> }
It might be easier to use ddply from the plyr package.  The command
you want would be:

data <- ddply(data, "group", transform, score = score / sum(score))

More information at http://had.co.nz/plyr.
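
On a tiny made-up data frame (assuming plyr is installed), that would look
like:

```r
library(plyr)

data <- data.frame(group = c("a", "a", "b"), score = c(1, 3, 2))

# transform() is applied to each group's sub-data-frame in turn
data <- ddply(data, "group", transform, score = score / sum(score))

data$score   # 0.25 0.75 1.00
```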

Hadley

--
http://had.co.nz/



------------------------------

Message: 53
Date: Sun, 30 Aug 2009 21:38:44 -0400
From: R_help Help <[hidden email]>
Subject: [R] Aggregating irregular time series
To: [hidden email], [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I have a couple of aggregation operations that I don't know how to
accomplish. Let me give an example. I have the following irregular
time series

time             x
10:00:00.021     20
10:00:00.224     20
10:00:01.002     19
10:00:02:948     20

1) For each entry time, I'd like to get sum of x for the next 2
seconds (excluding itself). Using the above example, the output should
be

time             sumx
10:00:00.021     39
10:00:00.224     19
10:00:01.442     20
10:00:02:948      0

2) For each i-th of x in the series, what's the first passage time to
x[i]-1. I.e. the output should be

time             firstPassageTime
10:00:00.021     0.981
10:00:00.224     0.778
10:00:01.442     NA
10:00:02:948     NA

Is there any shortcut function that allows me to do the above? Thank you.

adschai



------------------------------

Message: 54
Date: Sun, 30 Aug 2009 22:47:07 -0400
From: Steve Lianoglou <[hidden email]>
Subject: Re: [R] SVM coefficients
To: Noah Silverman <[hidden email]>
Cc: r help <[hidden email]>
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

On Sun, Aug 30, 2009 at 6:10 PM, Noah Silverman<[hidden email]> wrote:

> Hello,
>
> I'm using the svm function from the e1071 package.
>
> It works well and gives me nice results.
>
> I'm very curious to see the actual coefficients calculated for each input
> variable.  (Other packages, like RapidMiner, show you this automatically.)
>
> I've tried looking at attributes for the model and do see a "coefficients"
> item, but printing it returns an NULL result.
Hmm .. I don't see a "coefficients" attribute, but rather a "coefs"
attribute, which I guess is what you're looking for (?)

Run "example(svm)" to its end and type:

R> m$coefs
             [,1]
[1,]  1.00884130
[2,]  1.27446460
[3,]  2.00000000
[4,] -1.00000000
[5,] -0.35480340
[6,] -0.74043692
[7,] -0.87635311
[8,] -0.04857869
[9,] -0.03721980
[10,] -0.64696793
[11,] -0.57894605

HTH,

-steve

--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact



------------------------------

Message: 55
Date: Sun, 30 Aug 2009 23:08:11 -0400
From: Gabor Grothendieck <[hidden email]>
Subject: Re: [R] [R-SIG-Finance] Aggregating irregular time series
To: R_help Help <[hidden email]>
Cc: [hidden email], [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Try this for the first question:

neighborapply <- function(z, width, FUN) {
    out <- z
    ix <- seq_along(z)
    jx <- findInterval(time(z) + width, time(z))
    out[] <- mapply(function(i, j) FUN(c(0, z[seq(i+1, length = j-i)])), ix, jx)
    out
}

# test - corrected :948 in last line

library(zoo)
library(chron)

Lines <- "Time                              x
10:00:00.021                20
10:00:00.224                20
10:00:01.002                19
10:00:02.948                20"

z <- read.zoo(textConnection(Lines), header = TRUE, FUN = times)

neighborapply(z, times("00:00:02"), sum)

# and here is an alternative neighborapply
# using loops.  The non-loop solution does
# have the disadvantage that it does not as
# readily extend to other situations, which
# is why we add this second solution to
# the first question.

neighborapply <- function(z, width, FUN) {
    out <- z
    tt <- time(z)
    for(i in seq_along(tt)) {
        for(j in seq_along(tt)) {
            if (tt[j] - tt[i] > width) break
        }
        if (j == length(tt) && tt[j] - tt[i] <= width) j <- j+1
        out[i] <- FUN(c(0, z[seq(i+1, length = j-i-1)]))
    }
    out
}

The second question can be answered along the lines
of the first by modifying neighborapply in the loop solution
appropriately.

On Sun, Aug 30, 2009 at 9:38 PM, R_help Help<[hidden email]> wrote:

> Hi,
>
> I have a couple of aggregation operations that I don't know how to
> accomplish. Let me give an example. I have the following irregular
> time series
>
> time             x
> 10:00:00.021     20
> 10:00:00.224     20
> 10:00:01.002     19
> 10:00:02:948     20
>
> 1) For each entry time, I'd like to get sum of x for the next 2
> seconds (excluding itself). Using the above example, the output should
> be
>
> time             sumx
> 10:00:00.021     39
> 10:00:00.224     19
> 10:00:01.442     20
> 10:00:02:948      0
>
> 2) For each i-th of x in the series, what's the first passage time to
> x[i]-1. I.e. the output should be
>
> time             firstPassageTime
> 10:00:00.021     0.981
> 10:00:00.224     0.778
> 10:00:01.442     NA
> 10:00:02:948     NA
>
> Is there any shortcut function that allows me to do the above? Thank you.
>
> adschai
>
> _______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only.
> -- If you want to post, subscribe first.
>


------------------------------

Message: 56
Date: Sun, 30 Aug 2009 17:20:52 -0700 (PDT)
From: phil smart <[hidden email]>
Subject: [R]  Quadratcount image plot colors
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hi,

I have two quadrat counts of two different sets of spatial locations in the
UK. I then take the difference of these to produce a final quadrat count. In
this final quadrat count some values can be negative (if there were more in
the first count) or positive (if there were more in the second). Basically I
want to plot this using the image.plot function, but I want all values under
0 to be one gradient (color ramp) of color, and all values above 0 to be
another. Is there a way to specify this in R?

Phil
--
View this message in context: http://www.nabble.com/Quadratcount-image-plot-colors-tp25216951p25216951.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 57
Date: Sun, 30 Aug 2009 20:24:28 -0700 (PDT)
From: Raoul <[hidden email]>
Subject: [R] RE xcel - two problems - Running RExcel & Working
    Directory error
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hi,

I am using RExcel both at work and at home and am facing a couple of
challenges. The one at home is that when using RExcel and trying to run any
code that involves changing the working directory, I get the following
error.

Error in command:
setwd("C:\\Raoul\\R\\R Maps_UK Maps_Test\\NUTS_03M_2006_SH\\shape\\data")\n
  cannot change working directory

The one at the office is more complicated: I have had R, RExcel & the DCOM
package installed and re-installed numerous times, but I still get the error
"there is no R server connected to Excel", and when I try to set the server
it does not work.

Could someone please help me out with these problems?

Thanks in advance,
Raoul
--
View this message in context: http://www.nabble.com/RExcel---two-problems---Running-RExcel---Working-Directory-error-tp25217926p25217926.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 58
Date: Mon, 31 Aug 2009 00:00:54 +0200
From: Wolfgang RAFFELSBERGER <[hidden email]>
Subject: [R] Re : Re:  Best R text editors?
To: Liviu Andronic <[hidden email]>
Cc: [hidden email], Grzes <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain

There's also a page on the R-Wiki devoted to the collection of editors.
You'll find it in the section "Windows GUI + other editors":
http://wiki.r-project.org/rwiki/doku.php?id=getting-started:system-setup:system-setup&s=editor

Wolfgang

----- Original Message -----
From: Liviu Andronic <[hidden email]>
Date: Sunday, 30 August 2009, 17:01
Subject: Re: [R] Best R text editors?
To: Grzes <[hidden email]>
Cc: [hidden email]

> On 8/30/09, Grzes <[hidden email]> wrote:
> >  I'm using PLD Linux and in my opinion, if you're looking for a very
> >  easy editor - Geany!!! It'll be fantastic ;)
> >
> Yeah, Geany is very simple and comfortable. But there is no direct
> link to R, and not being able to evaluate code on the fly is a
> show-stopper for me.
> Liviu
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

    [[alternative HTML version deleted]]



------------------------------

Message: 59
Date: Sun, 30 Aug 2009 20:56:43 -0700
From: Dylan Beaudette <[hidden email]>
Subject: [R] clarificatin on validate.ols method='cross'
To: r-help <[hidden email]>
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I was hoping to clarify the exact behavior associated with this incantation:

validate(fit.ols, method='cross', B=50)

Output:

          index.orig training    test optimism index.corrected  n
R-square      0.5612   0.5613  0.5171   0.0442          0.5170 50
MSE           1.3090   1.3086  1.3547  -0.0462          1.3552 50
Intercept     0.0000   0.0000 -0.0040   0.0040         -0.0040 50
Slope         1.0000   1.0000  0.9899   0.0101          0.9899 50

Questions:
1. Does this perform 50 replicate, 10-fold CV operations?

2. What do the slope and intercept terms refer to?

3. How can I interpret the 'test R2' ?


Thanks in advance!

Cheers,
Dylan



------------------------------

Message: 60
Date: Mon, 31 Aug 2009 00:25:35 -0400
From: "Richard M. Heiberger" <[hidden email]>
Subject: Re: [R] RE xcel - two problems - Running RExcel & Working
    Directory error
To: Raoul <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

1. The embedded blanks often cause problems.  Try the 8.3 version of the
filepath.  To find the 8.3 name, open an MSDOS cmd window and enter
"dir /x 'C:\\Raoul\\R"
There will be a line something like
RMAPS_~1  R Maps_UK Maps_Test

Use the value that appears in the RMAPS_~1 position.

2. try loading rcom at the R prompt before starting RExcel from the
Excel window.

require(rcom)

If that works you will need to modify your Rprofile to make that happen
automatically.


Further questions on RExcel should be sent to the rcom mailing list
[hidden email]
If you need further help, please include the information from RExcel:
In Excel, click on RExcel > About RExcel > Copy to Clipboard > OK
Then move to your email and paste that information.

Rich



------------------------------

Message: 61
Date: Sun, 30 Aug 2009 21:29:43 -0700
From: Charlie Sharpsteen <[hidden email]>
Subject: Re: [R] Computer Modern Fonts in R graphic
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

If you are trying to use Computer Modern fonts because the R graphics will
be included in a LaTeX report, you could try the TikZ Device Cameron Bracken
and I wrote:
http://r-forge.r-project.org/projects/tikzdevice

The tikzDevice translates R graphics instructions into a LaTeX-friendly
format that can be included directly into documents where the font used in
the figure will match that used in the rest of the text.

It can be installed from R with the following command:

install.packages( 'tikzDevice', repos='http://r-forge.r-project.org')

The package is still in beta but we have had very good results so far- tests
we have run with the persp function look great.

For your case, you would replace the call to pdf() with:

tikz("snxsm.tex")

And inside your tex document the figure code stored in snxsm.tex is included
using the \input{} command:

\documentclass{whatever}

....
\usepackage{tikz}

....
\begin{document}

\begin{figure}

  \input{snxsm}

\end{figure}

....
\end{document}

Note you will need the TikZ package installed in your LaTeX distribution.
Installation of LaTeX packages is covered in Part 2 of the tikzDevice
vignette:

vignette( 'tikzDevice' )

Hope this helps!

-Charlie


On Sun, Aug 30, 2009 at 1:14 PM, Friedericksen Hope <
[hidden email]> wrote:

> Hello all,
>
> I am trying to use computer modern fonts in my r grahics. I tried to do, as
> described here: http://www.stat.auckland.ac.nz/~paul/R/CM/CMR.html but
> unfortunately, it does not work.
>
> First of all I downloaded the cm-lgc package and the AFM and PFB-files from
> the page and put them in my R working directory, so far, so good.
>
> Then I tried to run the following code:
>
>  sn <- seq(1,7,length=100)
>> sm <- seq(0,4,length=100)
>> f <- function(x,y) {5.64080973 + 0.12271038*x - 0.27725481 * y +
>> 0.29281216*x*y}
>> z <- outer(sn,sm,f)
>>
>> nrz <- nrow(z)
>> ncz <- ncol(z)
>> jet.colors <- colorRampPalette( c("yellow", "red") )
>> nbcol <- 100
>> color <- jet.colors(nbcol)
>> zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]
>> facetcol <- cut(zfacet, nbcol)
>>
>> CM <- Type1Font("CM",
>>                     c("cm-lgc/fonts/afm/public/cm-lgc/fcmr8a.afm",
>>                       "cm-lgc/fonts/afm/public/cm-lgc/fcmb8a.afm",
>>                       "cm-lgc/fonts/afm/public/cm-lgc/fcmri8a.afm",
>>                       "cm-lgc/fonts/afm/public/cm-lgc/fcmbi8a.afm",
>>                       "cmsyase.afm"))
>> postscriptFonts(CM=CM)
>> pdf("snxsm.pdf")
>> par(family="CM")
>> persp(sn,sm,z,xlab="SN",
>> ylab="SM",zlab="VI",theta=-20,phi=20,r=5,shade=0.01,col=color[facetcol])
>> dev.off()
>>
>
> It works fine, until the persp() function, there I get:
>
>  persp(sn,sm,z,xlab="SN",
>> ylab="SM",zlab="VI",theta=-20,phi=20,r=5,shade=0.01,col=color[facetcol])
>> Error in persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",
>>  :  Invalid font type
>> In addition: Warning messages:
>> 1: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>  font family not found in PostScript font database
>> 2: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>  font family not found in PostScript font database
>> 3: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>  font family not found in PostScript font database
>> 4: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>  font family not found in PostScript font database
>> 5: In persp.default(sn, sm, z, xlab = "SN", ylab = "SM", zlab = "VI",  :
>>  font family not found in PostScript font database
>>
>
> Any help is appreciated! Also a hint/link how to install the CM fonts under
> R for general use, so that I don't have to have it in my wd all the time.
>
[[elided Yahoo spam]]

>
> Greetings,
> Friedericksen
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
    [[alternative HTML version deleted]]



------------------------------

Message: 62
Date: Mon, 31 Aug 2009 06:06:10 +0100
From: Oliver Kullmann <[hidden email]>
Subject: Re: [R] error with summary(vector)??
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

Ah, thank you! (A bit strange.)

Now, looking more closely at it, I wonder whether I could achieve
the following output formatting:

> fivenum(E$nodes,1)
[1]     1     2     3     7 70632
> round(mean(E$nodes),2)
[1] 49.85

That's a reasonable way of showing the numbers involved, since
E$nodes is declared as integer data.

Now could I get "summary" to show me the numbers in this way, i.e.
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
        1        2        3    49.85        7    70632

?? I guess not. Perhaps it would be helpful to see how "summary"
achieves its result here --- how to display the code of this
function? (Spent some time with the documentation and on the
Internet, but can't find the right combination of key words,
and a nice section "List everything related to functions" doesn't
seem to be available.)

Thanks for your help.

Oliver


On Sun, Aug 30, 2009 at 10:39:48PM +0100, Oliver Kullmann wrote:

> Hello,
>
> I get
>
> > summary(E)
>      level             nodes            ave_nodes            time
>  Min.   :      1   Min.   :    1.00   Min.   :  10.71   Min.   :  0.0000
>  1st Qu.: 237414   1st Qu.:    2.00   1st Qu.:  19.70   1st Qu.:  0.0100
>  Median : 749229   Median :    3.00   Median :  27.01   Median :  0.0100
>  Mean   : 767902   Mean   :   49.85   Mean   :  98.89   Mean   :  0.2296
>  3rd Qu.:1111288   3rd Qu.:    7.00   3rd Qu.:  80.38   3rd Qu.:  0.0300
>  Max.   :2097152   Max.   :70632.00   Max.   :4451.67   Max.   :338.9200
>     ave_time          singles    autarkies     depth    ave_reductions
>  Min.   : 0.0490   Min.   :0   Min.   :0   Min.   :33   Min.   : 0.20
>  1st Qu.: 0.0910   1st Qu.:0   1st Qu.:0   1st Qu.:52   1st Qu.: 7.50
>  Median : 0.1270   Median :0   Median :0   Median :52   Median : 9.92
>  Mean   : 0.4636   Mean   :0   Mean   :0   Mean   :52   Mean   :11.76
>  3rd Qu.: 0.3860   3rd Qu.:0   3rd Qu.:0   3rd Qu.:52   3rd Qu.:13.46
>  Max.   :16.2720   Max.   :0   Max.   :0   Max.   :52   Max.   :70.00
> > summary(E$nodes)
>     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
>     1.00     2.00     3.00    49.85     7.00 70630.00
>
> The max-value for E$nodes is incorrect --- it it 70632, as computed by
> summary(E) ??
>
> Oliver
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


------------------------------

Message: 63
Date: Sun, 30 Aug 2009 22:50:08 -0700
From: Luna Laurent <[hidden email]>
Subject: [R] online classes or online education in statistics? esp.
    time series analysis and cointegration?
To: [hidden email], [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hi all,

I am looking for low cost online education in statistics. I am thinking of
taking online classes on time series analysis and cointegration, etc.

Of course, if there are free video lectures, that would be great. However, I
couldn't find any free video lectures at the upper-undergraduate or graduate
level that formally go through the whole time series curriculum... That's
why I would like to enroll in some sort of online degree classes. However, I
don't want to earn the certificate or the degree; I just want to audit an
online class specifically in time series analysis and cointegration. Could
anybody recommend such online education in statistics, esp. in time series
and cointegration, at low cost? Hopefully it's not going to be like a few
thousand dollars for one class.

[[elided Yahoo spam]]

    [[alternative HTML version deleted]]



------------------------------

Message: 64
Date: Mon, 31 Aug 2009 01:57:19 -0400
From: [hidden email]
Subject: [R] How to extract the theta values from coxph frailty models
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain;    charset=ISO-8859-1;    DelSp="Yes";
    format="flowed"

Hello,
I am working on a frailty model using the coxph function. I am running
some simulations and want to store the variance of frailty (theta)
from each simulation result. Can anyone tell me how to extract
the theta values from the results? I appreciate any help.
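
A hedged sketch of one place to look, assuming the survival package's coxph (the exact layout of the history component may differ between versions):

```r
library(survival)

# Fit a Cox model with a gamma frailty term on the bundled lung data.
fit <- coxph(Surv(time, status) ~ age + frailty(inst), data = lung)

# The frailty estimation history is stored in fit$history; inspecting
# it should reveal the final variance of the frailty (theta).
str(fit$history)
theta <- fit$history[[1]]$theta  # assumed location of the final theta
theta
```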

Thanks
Shankar Viswanathan



------------------------------

Message: 65
Date: Mon, 31 Aug 2009 02:07:49 -0400
From: Robert A LaBudde <[hidden email]>
Subject: Re: [R] online classes or online education in statistics? esp.
    time series analysis and cointegration?
To: Luna Laurent <[hidden email]>
Cc: [hidden email], [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii; format=flowed

http://www.statistics.com/ for online courses without college credit,
taught by well known faculty typically using their own books.



At 01:50 AM 8/31/2009, Luna Laurent wrote:

>Hi all,
>
>I am looking for low cost online education in statistics. I am thinking of
>taking online classes on time series analysis and cointegration, etc.
>
>Of course, if there are free video lectures, that would be great. However I
>couldn't find any free video lectures at upper-undergraduate and graduate
>level which formally going through the whole timeseries education... That's
>why I would like to enroll in some sort of online degree classes. However, I
>don't want to earn the certificate or the degree; I just want to audit the
>online class specifically in time series analysis and cointegration. Could
>anybody recommend such online education in statistics esp. in time series
>and cointegration, at low cost? Hopefully it's not going to be like a few
>thousand dollars for one class.
>
[[elided Yahoo spam]]
>
>         [[alternative HTML version deleted]]
>
>______________________________________________
>[hidden email] mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.

================================================================
Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: [hidden email]
Least Cost Formulations, Ltd.            URL: http://lcfltd.com/
824 Timberlake Drive                     Tel: 757-467-0954
Virginia Beach, VA 23464-3239            Fax: 757-467-2947

"Vere scire est per causas scire"



------------------------------

Message: 66
Date: Sun, 30 Aug 2009 23:12:46 -0700
From: Luna Moon <[hidden email]>
Subject: [R] where to find jobs in Datamining and Machine Learning?
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hi all,

Could anybody give me some pointers to good job sites for
statistician/datamininer and machine learners?

Thanks a lot!

    [[alternative HTML version deleted]]



------------------------------

Message: 67
Date: Sun, 30 Aug 2009 23:13:41 -0700
From: Luna Laurent <[hidden email]>
Subject: Re: [R] online classes or online education in statistics? esp.
    time series analysis and cointegration?
To: Robert A LaBudde <[hidden email]>
Cc: [hidden email], [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

What's their fee structures?

Thanks a lot Robert!

2009/8/30 Robert A LaBudde <[hidden email]>

> http://www.statistics.com/ for online courses without college credit,
> taught by well known faculty typically using their own books.
>
>
>
>
> At 01:50 AM 8/31/2009, Luna Laurent wrote:
>
>> Hi all,
>>
>> I am looking for low cost online education in statistics. I am thinking of
>> taking online classes on time series analysis and cointegration, etc.
>>
>> Of course, if there are free video lectures, that would be great. However
>> I
>> couldn't find any free video lectures at upper-undergraduate and graduate
>> level which formally going through the whole timeseries education...
>> That's
>> why I would like to enroll in some sort of online degree classes. However,
>> I
>> don't want to earn the certificate or the degree; I just want to audit the
>> online class specifically in time series analysis and cointegration. Could
>> anybody recommend such online education in statistics esp. in time series
>> and cointegration, at low cost? Hopefully it's not going to be like a few
>> thousand dollars for one class.
>>
[[elided Yahoo spam]]

>>
>>        [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> [hidden email] mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> ================================================================
> Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: [hidden email]
> Least Cost Formulations, Ltd.            URL: http://lcfltd.com/
> 824 Timberlake Drive                     Tel: 757-467-0954
> Virginia Beach, VA 23464-3239            Fax: 757-467-2947
>
> "Vere scire est per causas scire"
> ================================================================
>
>
    [[alternative HTML version deleted]]



------------------------------

Message: 68
Date: Mon, 31 Aug 2009 08:25:46 +0200
From: Schalk Heunis <[hidden email]>
Subject: [R] Two way joining vs heatmap
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1

Hi

STATISTICA has a function called "Two-way joining" (see
http://www.statsoft.com/TEXTBOOK/stcluan.html#twotwo) and the
reference material states that this is based on the method as
published by Hartigan (found this paper:
http://www.jstor.org/pss/2284710 through wikipedia).

What is the relationship (if any) between the "heatmap" function in R
and this technique? Is there an alternative function to use?

Thanks for the help!
Schalk Heunis
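
As a rough sketch of what base R offers: heatmap() reorders both rows and columns by hierarchical clustering, which is related in spirit to, though not the same algorithm as, Hartigan's two-way joining.

```r
# heatmap() clusters rows and columns hierarchically and displays the
# reordered matrix -- two-way reordering, not Hartigan's block joining.
m <- as.matrix(mtcars)
h <- heatmap(m, scale = "column")

# The row/column orderings chosen by the clustering:
h$rowInd
h$colInd
```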



------------------------------

Message: 69
Date: Mon, 31 Aug 2009 08:40:16 +0200
From: Oscar Bayona <[hidden email]>
Subject: Re: [R] Two way joining vs heatmap
To: Schalk Heunis <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hi all, how can I unsubscribe from this forum?

2009/8/31 Schalk Heunis <[hidden email]>

> Hi
>
> STATISTICA has a function called "Two-way joining" (see
> http://www.statsoft.com/TEXTBOOK/stcluan.html#twotwo) and the
> reference material states that this is based on the method as
> published by Hartigan (found this paper:
> http://www.jstor.org/pss/2284710 through wikipedia).
>
> What is the relationship (if any) between the "heatmap" function in R
> and this technique? Is there an alternative function to use?
>
[[elided Yahoo spam]]
> Schalk Heunis
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

    [[alternative HTML version deleted]]



------------------------------

Message: 70
Date: Mon, 31 Aug 2009 00:32:19 -0700
From: Noah Silverman <[hidden email]>
Subject: Re: [R] SVM coefficients
Cc: r help <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Steve,

That doesn't work.

I just trained an SVM with 80 variables.
svm_model$coefs gives me  a list of 10,000 items.  My training set is
30,000 examples of 80 variables, so I have no idea what the 10,000 items
represent.

There should be some attribute that lists the "weights" for each of the
80 variables.

--
Noah

On 8/30/09 7:47 PM, Steve Lianoglou wrote:

> Hi,
>
> On Sun, Aug 30, 2009 at 6:10 PM, Noah Silverman<[hidden email]>  wrote:
>   
>> Hello,
>>
>> I'm using the svm function from the e1071 package.
>>
>> It works well and gives me nice results.
>>
>> I'm very curious to see the actual coefficients calculated for each input
>> variable.  (Other packages, like RapidMiner, show you this automatically.)
>>
>> I've tried looking at attributes for the model and do see a "coefficients"
>> item, but printing it returns a NULL result.
>>     
> Hmm .. I don't see a "coefficients" attribute, but rather a "coefs"
> attribute, which I guess is what you're looking for (?)
>
> Run "example(svm)" to its end and type:
>
> R>  m$coefs
>               [,1]
>   [1,]  1.00884130
>   [2,]  1.27446460
>   [3,]  2.00000000
>   [4,] -1.00000000
>   [5,] -0.35480340
>   [6,] -0.74043692
>   [7,] -0.87635311
>   [8,] -0.04857869
>   [9,] -0.03721980
> [10,] -0.64696793
> [11,] -0.57894605
>
> HTH,
>
> -steve
>
>


------------------------------

Message: 71
Date: Mon, 31 Aug 2009 09:37:06 +0200
From: Peter Dalgaard <[hidden email]>
Subject: Re: [R] Two way joining vs heatmap
To: Oscar Bayona <[hidden email]>
Cc: [hidden email], Schalk Heunis <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=UTF-8

Oscar Bayona wrote:
> Hi all, how can I unsubscribe from this forum?

By following the instructions on

> https://stat.ethz.ch/mailman/listinfo/r-help



--
   O__  ---- Peter Dalgaard             ?ster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - ([hidden email])              FAX: (+45) 35327907



------------------------------

Message: 72
Date: Mon, 31 Aug 2009 09:39:17 +0200
From: Petr PIKAL <[hidden email]>
Subject: Re: [R] Sequence generation
To: Barry Rowlingson <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
   
Content-Type: text/plain; charset="US-ASCII"

Hi

[hidden email] napsal dne 29.08.2009 21:49:54:

> On Sat, Aug 29, 2009 at 8:14 PM, njhuang86<[hidden email]> wrote:
> >
> > Hey guys,
> >
> > I was wondering how to create this sequence: 1, 2, 3, 1, 2, 3, 1, 2,
3, 1,

> > 2, 3... with the '1, 2, 3' repeated over 10 times.
>
> rep(1:3,10)  # rep repeats its first argument according to the number
> in its second argument
>
> > Also, is there a simple method to generate 1, 1, 1, 2, 2, 2, 3, 3, 3?
>
>  The second argument can be a vector, so what you want here is:
>
>  rep(1:3,c(3,3,3))
or

rep(1:10, each=3)
 [1]  1  1  1  2  2  2  3  3  3  4  4  4  5  5  5  6  6  6  7  7  7  8  8  8
[25]  9  9  9 10 10 10

Regards
Petr


>
>  but you can create the second vector here also using rep! Hence:
>
>  rep(1:3,rep(3,3))
>
> Barry
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 73
Date: Mon, 31 Aug 2009 09:50:57 +0200
From: Paul Hiemstra <[hidden email]>
Subject: Re: [R] Trying to rename spatial pts data frame slot that
    isn't a slot()
To: Tim Clark <[hidden email]>
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Tim,

I don't know the answer to your problem, but I do know that you might
consider reposting this question to the r-sig-geo mailing list. There
you will find a geographically oriented audience, which will probably
lead to better and faster answers.

cheers,
Paul

Tim Clark wrote:

> Dear List,
>
> I am analyzing the home range area of fish and seem to have lost the individuals' ID names during my manipulations, and can't find out how to restore them.  I calculated the MCP of the fish using mcp() in adehabitat.  The MCPs were converted to a spatial points data frame and exported to qGIS for manipulation.  At this point the ID names were lost.  I brought the manipulated shapefiles back into R, but can't figure out how to rename the individuals.
>
> #Calculate MCP and save as a shapefile
>     my.mcp <- mcp(xy, id=id, percent=100)
>     spol <- area2spol(my.mcp)
>     spdf <- SpatialPolygonsDataFrame(spol,
>         data=data.frame(getSpPPolygonsLabptSlots(spol),
>             row.names=getSpPPolygonsIDSlots(spol)),
>         match.ID=TRUE)
>     writeOGR(spdf, dsn=mcp.dir, layer="All Mantas MCP",
>         driver="ESRI Shapefile")
>
> #Read shapefile manipulated in qGIS
>     mymcp<-readOGR(dsn=mcp.dir,layer="All mantas MCP land differenc")
>
>
> My spatial points data frame has a number of "Slot"s, including one that contained the original names called Slot "ID".  However, I can not access this slot using slot() or slotNames(). 
>
>   
>> slotNames(mymcp)
>>     
> [1] "data"        "polygons"    "plotOrder"   "bbox"      "proj4string"
>
> What am I missing here?  Is Slot "ID" not a slot?  Can I export the ID's with the shapefiles to qGIS?  Can I rename the ID's when I bring them back into R?  When is a slot not a slot()?
>
> Thanks,
>
> TIm
>
>
>
>
> Tim Clark
> Department of Zoology
> University of Hawaii
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>   

--
Drs. Paul Hiemstra
Department of Physical Geography
Faculty of Geosciences
University of Utrecht
Heidelberglaan 2
P.O. Box 80.115
3508 TC Utrecht
Phone:  +3130 274 3113 Mon-Tue
Phone:  +3130 253 5773 Wed-Fri
http://intamap.geo.uu.nl/~paul



------------------------------

Message: 74
Date: Mon, 31 Aug 2009 09:54:06 +0200 (CEST)
From: Achim Zeileis <[hidden email]>
Subject: Re: [R] SVM coefficients
To: Noah Silverman <[hidden email]>
Cc: r help <[hidden email]>, [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Mon, 31 Aug 2009, Noah Silverman wrote:

> Steve,
>
> That doesn't work.
>
> I just trained an SVM with 80 variables.
> svm_model$coefs gives me  a list of 10,000 items.  My training set is 30,000
> examples of 80 variables, so I have no idea what the 10,000 items represent.

Presumably, the coefficients of the support vectors times the training
labels, see help("svm", package = "e1071"). See also
   http://www.jstatsoft.org/v15/i09/
for some background information and the different formulations available.
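
For a linear kernel, one common sketch (assuming the e1071 package; names below follow its fitted-model components) recovers one primal weight per input variable from the coefs and SV components:

```r
library(e1071)

# Two-class linear SVM on a subset of iris.
d <- droplevels(subset(iris, Species != "setosa"))
m <- svm(Species ~ ., data = d, kernel = "linear")

# m$coefs holds (coefficient * label) per support vector; for a linear
# kernel the primal weight vector is their combination with the SVs.
w <- t(m$coefs) %*% m$SV  # one weight per input variable (on scaled data)
b <- -m$rho               # intercept
w
```

For non-linear kernels no such per-variable weight vector exists; the coefs only have meaning in the kernel expansion.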

> There should be some attribute that lists the "weights" for each of the 80
> variables.

Not sure what you are looking for. Maybe David, the author of svm() (and
now Cc'd), can help.
Z

> --
> Noah
>
> On 8/30/09 7:47 PM, Steve Lianoglou wrote:
>> Hi,
>>
>> On Sun, Aug 30, 2009 at 6:10 PM, Noah Silverman<[hidden email]>
>> wrote:
>>
>>> Hello,
>>>
>>> I'm using the svm function from the e1071 package.
>>>
>>> It works well and gives me nice results.
>>>
>>> I'm very curious to see the actual coefficients calculated for each input
>>> variable.  (Other packages, like RapidMiner, show you this automatically.)
>>>
>>> I've tried looking at attributes for the model and do see a "coefficients"
>>> item, but printing it returns a NULL result.
>>>
>> Hmm .. I don't see a "coefficients" attribute, but rather a "coefs"
>> attribute, which I guess is what you're looking for (?)
>>
>> Run "example(svm)" to its end and type:
>>
>> R>  m$coefs
>>               [,1]
>>   [1,]  1.00884130
>>   [2,]  1.27446460
>>   [3,]  2.00000000
>>   [4,] -1.00000000
>>   [5,] -0.35480340
>>   [6,] -0.74043692
>>   [7,] -0.87635311
>>   [8,] -0.04857869
>>   [9,] -0.03721980
>> [10,] -0.64696793
>> [11,] -0.57894605
>>
>> HTH,
>>
>> -steve
>>
>>
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>


------------------------------

Message: 75
Date: Mon, 31 Aug 2009 09:19:54 +0100
From: Liviu Andronic <[hidden email]>
Subject: Re: [R] Computer Modern Fonts in R graphic
To: Charlie Sharpsteen <[hidden email]>
Cc: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=UTF-8

Hello
And sorry for the brief highjacking.

On 8/31/09, Charlie Sharpsteen <[hidden email]> wrote:
>  The tikzDevice translates R graphics instructions into a LaTeX-friendly
>  format that can be included directly into documents where the font used in
>  the figure will match that used in the rest of the text.
>
Is there any support for Sweave? What would be the preferred way to
include TikZ graphics in an Sweave document?
Thank you
Liviu



------------------------------

Message: 76
Date: Mon, 31 Aug 2009 11:20:57 +0200
From: Martin Maechler <[hidden email]>
Subject: Re: [R] Best R text editors?
To: [hidden email], Liviu Andronic
    <[hidden email]>
Cc: [hidden email], Uli Kleinwechter
    <[hidden email]>
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii

>>>>> "LA" == Liviu Andronic <[hidden email]>
>>>>>     on Sun, 30 Aug 2009 11:59:52 +0100 writes:

    LA> Hello,
    LA> On 8/30/09, Uli Kleinwechter <[hidden email]> wrote:
    >> contributing to your poll: Also Emacs+ESS on Linux.
    >>
    LA> Could someone give a brief and subjective overview of ESS? I notice
    LA> that many people use it, and many describe it as the tool for the
    LA> power user. As far as I'm concerned, I've once again looked at Emacs
    LA> (after couple of years) and I still don't feel like using it for my
    LA> editing purposes.

I'm forwarding this to the ESS-dedicated mailing list
'ESS-help'.

Note however that you can get quite a bit of documentation from the
ESS web page, http://ESS.r-project.org/
Note that the tab 'Documentation' has four subtabs,
"Manuals", "Articles", "Presentations" and "Reference Cards".

Further, on the "Getting Help" tab, there are also links to
the ESS section of the R Wiki and the Emacs Wiki.

Martin Maechler, ETH Zurich



------------------------------

Message: 77
Date: Mon, 31 Aug 2009 02:30:31 -0700 (PDT)
From: Mark Difford <[hidden email]>
Subject: Re: [R] test for bimodality
To: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=us-ascii


Hi John,

>> Has a test for bimodality been implemented in R?

You may find the code at the URL below useful. It was written by Jeremy
Tantrum (a PhD of Werner Stuetzle's). Amongst other things there is a
function to plot the unimodal and bimodal Gaussian smoothers closest to the
observed data. A dip-test statistic is also calculated.

Regards, Mark.

http://www.stat.washington.edu/wxs/Stat593-s03/Code/jeremy-unimodality.R
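
For reference, Hartigan's dip test is also packaged on CRAN; a minimal sketch, assuming the diptest package is installed:

```r
library(diptest)

set.seed(42)
x.uni <- rnorm(200)                       # unimodal sample
x.bi  <- c(rnorm(100, 0), rnorm(100, 5))  # clearly bimodal sample

dip.test(x.uni)  # expect a large p-value: no evidence against unimodality
dip.test(x.bi)   # expect a small p-value: unimodality rejected
```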



John Sansom wrote:

>
> Has a test for bimodality been implemented in R?
>
> Thanks, John
>
> NIWA is the trading name of the National Institute of Water & Atmospheric
> Research Ltd.
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
View this message in context: http://www.nabble.com/test-for-bimodality-In-Reply-To%3D-tp25216164p25220627.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 78
Date: Mon, 31 Aug 2009 12:37:54 +0300
From: Yonatan Nissenbaum <[hidden email]>
Subject: [R] permutation test - query
To: [hidden email]
Message-ID:
    <[hidden email]>
Content-Type: text/plain

Hi,

My query is regarding permutation tests and the reshuffling of
genotype/phenotype data.
I have been using the haplo.stats package in R for haplotype analysis, and I
would like to perform an analysis about which I'm requesting your advice.
I have a data set of individuals genotyped for 12 SNPs and a dichotomous
phenotype.
At first, I tested each of those SNPs independently in order to bypass
multiple testing issues, and
I found that 5 out of those 12 SNPs had a p-value < 0.05.
The analysis that I would like to run is a permutation simulation (say,
10,000 times) that will estimate the probability of obtaining
significant results by chance for 5 SNPs or more.
I'm aware that permutation tests are available in R; however, I could not
find a way of writing the right algorithm,
one that counts at each run only the cases in which at least 5 SNPs (out of
12) were significant.
Any assistance on that matter will be most appreciated.

Many thanks,

Jonathan
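
A minimal base-R sketch of one way to structure such a loop, on simulated data (illustrative only, not the poster's haplo.stats workflow): permute the phenotype, re-test all 12 SNPs, and count the permutations in which at least 5 reach p < 0.05.

```r
set.seed(1)
n     <- 200
pheno <- rbinom(n, 1, 0.5)                          # dichotomous phenotype
snps  <- matrix(rbinom(n * 12, 2, 0.3), ncol = 12)  # 12 simulated SNPs

n.perm <- 200  # use e.g. 10000 in practice
hits <- replicate(n.perm, {
  p.perm <- sample(pheno)  # break any genotype-phenotype association
  pvals  <- apply(snps, 2, function(g)
    summary(glm(p.perm ~ g, family = binomial))$coefficients[2, 4])
  sum(pvals < 0.05) >= 5
})
mean(hits)  # estimated probability of >= 5 "significant" SNPs by chance
```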

    [[alternative HTML version deleted]]



------------------------------

Message: 79
Date: Mon, 31 Aug 2009 21:54:34 +1200
From: David Scott <[hidden email]>
Subject: Re: [R] Google's R Style Guide
To: [hidden email]
Cc: [hidden email]
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

(Ted Harding) wrote:

> On 29-Aug-09 17:51:54, diegol wrote:
>> Max Kuhn wrote:
>>> Perhaps this is obvious, but Ive never understood why this is the
>>> general convention:
>>>
>>>> An opening curly brace should never go on its own line;
>>> I tend to do this:
>>>
>>> f <- function()
>>> {
>>>   if (TRUE)
>>>     {
>>>       cat("TRUE!!\n")
>>>     } else {
>>>       cat("FALSE!!\n")
>>>     }
>>> }
>> I favor your approach. BUT I add one more level of indentation.
>> Your function would look like:
>>
>> f <- function()
>>   {
>>     if (TRUE)
>>       {
>>         cat("TRUE!!\n")
>>       } else {
>>         cat("FALSE!!\n")
>>       }
>>   }
>>
>> This way I quickly identify the beginning of the function, which is
>> the one line at the top of the expression AND sticking to the left
>> margin.
>> In your code you use this same indentation in the if/else construct.
>> I find it also useful for the function itself.
>
> When I want to rely on indentation and vertical alignments to keep
> track of program structure, I would tend to write the above like
>
>   f <-
>   function()
>   { if (TRUE)
>     {
>       cat("TRUE!!\n")
>     } else
>     {
>       cat("FALSE!!\n")
>     }
>   }
>
> so that an opening "{" is aligned with the keyword it is associated
> with, and then at the end of the block so also is the closing "}".
>
> However, in this case (if I keep all the "{...}" for the sake of
> structure) I would also tend to "save on lines" with
>
>   f <-
>   function()
>   { if (TRUE)
>     { cat("TRUE!!\n")  } else
>     { cat("FALSE!!\n") }
>   }
>
> which is still clear enough for me. This probably breaks most
> "guidelines"! But in practice it depends on what it is, and on
> how readily I find I can read it.
>
> Ted.
>
I have to say Ted, I find this as ugly as sin and you would have to
break my legs to make me code like this.

I am with Hadley on not taking extra lines and I think this is really
unclear because it is so disjointed. And the 'else' way over to the
right I just think is crazy.

It just goes to show how personal this can be because despite my
loathing this code I know Ted to be a thoughtful and experienced R user.

I think this discussion is valuable, and I have previously asked about
style, which I think is very important. Base R does suffer from very
inconsistent naming and, as I think Duncan said, that makes it very
difficult sometimes to remember names when you have variations in case
and separators, as with the functions related to system.

David



_________________________________________________________________
David Scott    Department of Statistics
        The University of Auckland, PB 92019
        Auckland 1142,    NEW ZEALAND
Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
Email:    [hidden email],  Fax: +64 9 373 7018

Director of Consulting, Department of Statistics



------------------------------

_______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

End of R-help Digest, Vol 78, Issue 34
**************************************



     
        [[alternative HTML version deleted]]


______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: Logistic Politomic Regression in R

Juliet Hannah
Check out Chapter 7 of Laura Thompson's R Companion to Agresti (you can find it
online).

It will show you how to fit proportional odds models (polr in MASS,
and lrm in the Design library) and multinomial
regression models.
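
As a quick sketch of both routes using only packages shipped with R (the housing data from MASS, as in the Venables & Ripley examples):

```r
library(MASS)   # polr and the housing data
library(nnet)   # multinom

# Proportional odds model for an ordered response.
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
summary(house.plr)

# Multinomial (baseline-category) logit for an unordered response.
house.mult <- multinom(Sat ~ Infl + Type + Cont, weights = Freq,
                       data = housing)
summary(house.mult)
```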

______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.