Rating R Helpers


Rating R Helpers

Doran, Harold
Since R is open source and help may come from varied levels of
experience on R-Help, I wonder if it might be helpful to construct a
method that can be used to "rate" those who provide help on this list.

This is something that is done on other comp lists, like
http://www.experts-exchange.com/.

I think some of the reasons for this are pretty transparent, but I
suppose one reason is that one could decide to implement the advice of
those with "superior" or "expert" levels. In other words, you can place
more trust in the advice of someone who is experienced than in that of
someone who is not. Currently, there is no way to discern who on this
list is really an R expert and who is not. Of course, there is R core,
but most people don't actually know who these people are (at least I
surmise that to be true).

If this is potentially useful, maybe one way to begin the development of
such ratings is to allow the original poster to "rate" the level of help
from those who responded. Maybe something like a very simple
questionnaire on a Likert-type scale that the original poster would
complete upon receiving help, which would lead to the accumulation of
points for the responders. Higher points would result in higher levels
of expertise (e.g., novice, ..., wizaRd).

Just a random thought. What do others think?

Harold
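
As a rough sketch of how such points might accumulate into levels in R
(the responder names, cut points, and level labels below are invented
purely for illustration):

## toy ratings: one row per completed questionnaire
ratings <- data.frame(
  responder = c("alice", "bob", "alice", "carol", "bob", "alice"),
  score     = c(5, 3, 4, 2, 5, 5)    # 1 = not helpful ... 5 = very helpful
)
points <- tapply(ratings$score, ratings$responder, sum)   # accumulate points
level  <- cut(points, breaks = c(-Inf, 5, 10, Inf),       # arbitrary cut points
              labels = c("novice", "useR", "wizaRd"))
data.frame(responder = names(points), points = as.vector(points), level = level)

With these toy numbers, alice ends up a wizaRd, bob a useR, and carol a
novice.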





Re: Rating R Helpers

Frank Harrell
Doran, Harold wrote:

> [Harold's original message quoted in full; snipped]

Not necessarily a bad idea, but how do you take into account how much
each responder is paid to respond?

Frank

--
Frank E Harrell Jr   Professor and Chair           School of Medicine
                      Department of Biostatistics   Vanderbilt University


Re: Rating R Helpers

hadley wickham
In reply to this post by Doran, Harold
A nice model might be http://www.workingwithrails.com/.

It also suggests other possible measures of authority:

 * member of R core
 * packages written/contributed to
 * R conferences attended/presented at
 * contributor to the wiki

Hadley

On 11/30/07, Doran, Harold <[hidden email]> wrote:

> [Harold's original message snipped]


--
http://had.co.nz/


Re: Rating R Helpers

markleeds
In reply to this post by Doran, Harold
>From: "Doran, Harold" <[hidden email]>
>Date: 2007/11/30 Fri PM 02:12:36 CST
>To: R Help <[hidden email]>
>Subject: [R] Rating R Helpers

Of course, it's just MHO, but I think your suggestion is more work than necessary because it becomes pretty obvious who the R experts are once you've been on the list for more than 3 months.

Also, the rating system wouldn't work well
because the quality of the rating would depend
on the experience of the person who asked the
question. A beginner might think a particular answer
is good while a more experienced person might think
it was bad.

                               Mark

>[Harold's original message snipped]


Re: Rating R Helpers

Matthew Keller
Also just MHO, but I think it is a good idea, if for no other reason
than it creates an additional incentive for people to respond and to
do so in a thoughtful way. I think the rating systems on some other
boards, and those used at commercial sites such as Amazon, testify to
the fact that people often do care about their ratings, and this, in
turn, tends to bring the discourse to a higher level. Of course, it
would also help newbies get some idea about how to weight responses -
something that takes quite a long time otherwise (longer than 3 months
I'd argue). However, I'm not sure that the ratings should be
restricted only to those who asked the question, given that it isn't
only the questioner who has a valid opinion on the responses. Why not
open it to everyone (and increase sample sizes)?

Mark - I agree that the quality of the ratings themselves is going to
depend on the questioners/raters. But the noise will wash out over
time, or in stats-speak, your mean rating should be an unbiased estimate
of your response helpfulness and its variance should decrease as more
people rate your responses :). I guess that assumes independence
though...

In any event, I'd love to see it happen,

Matt
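
A quick simulation illustrates the wash-out point; the "true
helpfulness" value and the noise level below are arbitrary:

## spread of the observed mean rating shrinks as the number of raters grows
set.seed(1)
se_of_mean <- function(n, true = 4, noise = 1, reps = 2000)
  sd(replicate(reps, mean(rnorm(n, mean = true, sd = noise))))
sapply(c(5, 20, 100), se_of_mean)   # falls roughly as 1/sqrt(n)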



On Nov 30, 2007 1:53 PM,  <[hidden email]> wrote:

> [Mark Leeds' message and the original post snipped]



--
Matthew C Keller
Asst. Professor of Psychology
University of Colorado at Boulder
www.matthewckeller.com


Re: Rating R Helpers

Max Nevill
In reply to this post by Doran, Harold
Maybe I haven't been on the list long enough, but how bad is the R
advice on this list? Is a rating system even necessary?

As Mark Leeds pointed out, there is the issue that what a beginner
thinks is good advice isn't what an expert thinks is good advice. There
is also the converse: expert advice is sometimes quite confusing to
beginners, but perfectly sensible to other experts.



Doran, Harold wrote:

> [Harold's original message snipped]


Re: Rating R Helpers

Bill.Venables
In reply to this post by Doran, Harold
This seems a little impractical to me.  People respond so much at random
and most only tackle questions with which they feel comfortable.  As
it's not a competition in any sense, it's going to be hard to rank
people in any effective way.  But suppose you succeed in doing so, then
what?

To me a much more urgent initiative is some kind of online user review
system for packages, even something as simple as what Amazon.com has
for customer reviews of books.

I think the need for this is rather urgent, in fact.  Most packages are
very good, but I regret to say some are pretty inefficient and others
downright dangerous.  You don't want to discourage people from
submitting their work to CRAN, but at the same time you do want some
mechanism that allows users to relate their experience with it, good or
bad.  


Bill Venables
CSIRO Laboratories
PO Box 120, Cleveland, 4163
AUSTRALIA
Office Phone (email preferred): +61 7 3826 7251
Fax (if absolutely necessary):  +61 7 3826 7304
Mobile:                         +61 4 8819 4402
Home Phone:                     +61 7 3286 7700
mailto:[hidden email]
http://www.cmis.csiro.au/bill.venables/ 

-----Original Message-----
From: [hidden email] [mailto:[hidden email]]
On Behalf Of Doran, Harold
Sent: Saturday, 1 December 2007 6:13 AM
To: R Help
Subject: [R] Rating R Helpers

[Harold's original message snipped]


Re: Rating R Helpers

Gabor Grothendieck
On Dec 1, 2007 2:21 AM,  <[hidden email]> wrote:

> To me a much more urgent initiative is some kind of online user review
> system for packages, even something as simple as what Amazon.com has
> for customer reviews of books. [...]

You can get a very rough idea of this automatically by making a list of
which packages other packages depend on.

library(pkgDepTools)  # from Bioconductor
AP <- available.packages()   # matrix describing packages available in the repositories
dep <- AP[, "Depends"]       # raw Depends field of each package
# split each Depends field into individual package names, dropping version requirements
deps <- unlist(sapply(dep, pkgDepTools:::cleanPkgField))
sort(table(deps))            # packages ranked by how many other packages depend on them

Of course, some packages are more naturally end-user oriented and so
would never make such a list, and others may be good but simply unknown,
so they have never been used. Some packages might also score highly
because an author has two packages and one uses the other, so an
enhancement could be to eliminate dependencies that share a common
author.
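
For what it's worth, a rough version of the same count can be done in
base R alone, without pkgDepTools; like the code above it only looks at
the Depends field, and the parsing is deliberately crude:

AP  <- available.packages()
dep <- AP[, "Depends"]
dep <- dep[!is.na(dep)]
dep <- gsub("\\s*\\([^)]*\\)", "", dep)            # drop version requirements like "(>= 2.6.0)"
deps <- unlist(strsplit(dep, ","))                 # split into individual package names
deps <- gsub("^\\s+|\\s+$", "", deps)              # trim surrounding whitespace
deps <- deps[nzchar(deps) & deps != "R"]           # drop empties and the dependency on R itself
head(sort(table(deps), decreasing = TRUE), 20)     # most-depended-upon packages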


Re: Rating R Helpers

Mark Kimpel
In reply to this post by Bill.Venables
I'll throw one more idea into the mix. I agree with Bill that a rating
system for respondents is probably not that practical and not of the
highest importance. It also seems like a recipe for creating
interpersonal problems that the list doesn't need.

I do like Bill's idea of a review system for packages, which could be
incorporated into my idea that follows...

What I would find useful would be some sort of tagging system for messages.
I can't count the times I've remembered seeing a message that addresses a
question I have down the road but, when Googled, I can't find it. It would
be so nice, for example, to reliably be able to find all messages related to
a certain package or package function posted within the last X days. This
could be implemented as simply as asking posters to provide keywords at the
end of a message, but it would be great if they could somehow be pulled out
of a message and stored in a DB. For instance, keywords could be surrounded
by a sequence of special characters, which a parser could then extract and
store in a DB along with the message.

Of course, this would be work to set up, but how many of our "experts",
who so kindly give of their time, get exasperated when similar questions
keep popping up on the list? Also, if we had a web-accessible DB, the
responses, not the responders, could be rated as to how well a reply
takes care of an issue. Thus, over time, a sort of auto-wiki could be
born. I can think of more uses for this as well. For example, a
developer could quickly check to see what usability problems or
suggestions have cropped up for an individual package.

Mark
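
A sketch of what the keyword extraction might look like; the
"{{tag: ...}}" convention below is made up for illustration and is not
an existing list feature:

msg  <- "How do I refit a model after update()? {{tag: lme4}} {{tag: update}}"
## pull out everything wrapped in the (hypothetical) {{tag: ...}} markers
hits <- regmatches(msg, gregexpr("\\{\\{tag:\\s*[^}]+\\}\\}", msg))[[1]]
tags <- gsub("\\{\\{tag:\\s*|\\}\\}", "", hits)
tags   # "lme4" "update" -- ready to be stored in a DB alongside the message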

On Dec 1, 2007 2:21 AM, <[hidden email]> wrote:

> [Bill Venables' message and the original post snipped]



--
Mark W. Kimpel MD
Neuroinformatics
Department of Psychiatry
Indiana University School of Medicine


Re: Rating R Helpers

Gabor Grothendieck
I don't think r-help is really intended for package-specific support,
although questions about some very popular packages appear on it anyway.

On Dec 1, 2007 11:28 AM, Mark Kimpel <[hidden email]> wrote:

> [Mark Kimpel's message and earlier quotes snipped]


Re: Rating R Helpers

Johannes Huesing
In reply to this post by Mark Kimpel
Mark Kimpel <[hidden email]> [Sat, Dec 01, 2007 at 05:28:28PM CET]:
> What I would find useful would be some sort of tagging system for messages.

Hrm. I find tags immensely useful for entities which do not contain
primarily text, such as photos. I doubt how useful keywords are when
they are not found in the text itself. There are situations where the
first keyword that comes to mind is tiptoed around in the message, but I
don't know if this is often the case.

> I can't count the times I've remembered seeing a message that addresses a
> question I have down the road but, when Googled, I can't find it.

Is it a problem of way too many false positives or a problem of false
negatives? Tags may help out in the second case, but in my experience it
is rare.

[...]
>
> Of course, this would be work to set up, but how many of our "experts" who
> so kindly give of their time, get exasperated when similar questions keep
> popping up on the list?

Do you think that people who keep asking similar questions do so because
they didn't do their homework first, or because they Googled and failed?

--
Johannes Hüsing               There is something fascinating about science.
                              One gets such wholesale returns of conjecture
mailto:[hidden email]  from such a trifling investment of fact.                
http://derwisch.wikidot.com         (Mark Twain, "Life on the Mississippi")



Re: Rating R Helpers

slre
In reply to this post by Doran, Harold
Package review is a nice idea. But you raise a worrying point.
Are any of the 'downright dangerous' packages on CRAN?
If so, er... why?


>>> <[hidden email]> 12/01/07 7:21 AM >>>
>I think the need for this is rather urgent, in fact.  Most packages are
>very good, but I regret to say some are pretty inefficient and others
>downright dangerous.


Re: Rating R Helpers

John Sorkin
In reply to this post by Bill.Venables
I believe we need to know the following about packages:
(1) Does the package do what it purports to do, i.e. are the results valid?
(2) Have the results generated by the package been validated against some other statistical package or a hand-worked example?
(3) Are the methods used in the package soundly based?
(4) Does the package documentation refer to refereed papers or textbooks?
(5) In addition to the principal result, does the package return ancillary values that allow for proper interpretation of the main result (e.g. lm gives estimates of the betas and their SEs, but also generates residuals)?
(6) Is the package easy to use, i.e. are the parameters used when invoking the package chosen so as to allow the package to be flexible?
(7) Are the error messages produced by the package helpful?
(8) Does the package conform to standards of R coding and good programming principles in general?
(9) Does the package interact well with the larger R environment, e.g. does it have a plot method etc.?
(10) Is the package well documented internally, i.e. is the code easy to follow, are the comments in the code adequate?
(11) Is the package well documented externally, i.e. through man pages and perhaps other documentation (e.g. MASS and its associated textbook)?

In addition to package evaluation and reviews, we also need some plan for the future of R. Who will maintain, modify, and extend packages after the principal author, or authors, retire? Software is never "done". Errors need to be corrected, and programs need to be modified to accommodate changes in software and hardware. I have reasonable certainty that commercial software (e.g. SAS) will be available in 10 years (and that PROC MIXED will still be a part of SAS). I am far less sanguine about any number of R packages.
John
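
Point (2) is the easiest to make concrete: a package's output can be
checked against a hand-worked computation. A small sketch, using a
built-in data set rather than any particular contributed package:

## compare lm() coefficients with the least-squares solution computed by hand
fit  <- lm(dist ~ speed, data = cars)
X    <- cbind(1, cars$speed)                    # design matrix with intercept
beta <- solve(t(X) %*% X, t(X) %*% cars$dist)   # normal equations
all.equal(as.vector(beta), unname(coef(fit)))   # should be TRUE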



John Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

>>> <[hidden email]> 12/1/2007 2:21 AM >>>
[Bill Venables' message and the original post snipped]


Re: Rating R Helpers

Pablo G Goicoechea
In reply to this post by Mark Kimpel

I'm just an R user who joined the list searching for the solution to a
problem. I do not think rating helpers is a good idea. For one, they do
it freely; no need to harass anybody. On the other hand, it could have
the opposite effect; people afraid of getting a bad rating might not
post their potentially valid answers.
But more importantly, the list is full of examples of how to accomplish
the same result with different approaches. Some might be more elegant
than others, but they certainly show R's potential. And I have seen
several corrections/discussions among the old timers themselves.
Package reviews are another issue. But if anybody is going through all
that work, why not make the appropriate corrections to the packages?
They are GPL, aren't they?
Best
Pablo

Mark Kimpel wrote:

[Mark Kimpel's message and earlier quotes snipped]

--
Pablo G Goicoechea
Bioteknología Saila / Dpto Biotecnología
NEIKER-Tecnalia
Apdo 46
01080 Vitoria-Gasteiz (SPAIN)
Phone: +34 902 540 546 Fax: +34 902 540 547
[hidden email]


Re: Rating R Helpers

Mike Prager
In reply to this post by John Sorkin
"John Sorkin" <[hidden email]> wrote:

> [John Sorkin's list of package review criteria (1)-(11) snipped; see above]
> In addition to package evaluation and reviews, we also need some plan for the future of R. Who will maintain, modify, and extend packages after the principle author, or authors, retire? Software is never "done". Errors need to be corrected, programs need to be modified to accommodate changes in software and hardware. I have reasonable certainty that commercial software (e.g. SAS) will be available in 10-years (and that PROC MIXED will still be a part of SAS). I am far less sanguine about any number of R packages.
> John

Interesting questions.

Re the future: LaTeX provides an example. The more complex
packages tend to stop developing when the original programmer
loses interest. Sometimes another person picks one up, but not
frequently. I think, for example, of the many slide-preparation
packages, each more complex than the last, that have come and
gone during my relatively short (15 yr) professional use of
LaTeX.

At its root, this is a rather deep question: how open-source,
largely volunteer-developed software can survive over the long
term, while continuing to improve and maintain high standards.
It is rather early in the history of free software development
to know the answer.

--
Mike Prager, NOAA, Beaufort, NC
* Opinions expressed are personal and not represented otherwise.
* Any use of tradenames does not constitute a NOAA endorsement.


Re: Rating R Helpers

p_connolly
In reply to this post by slre
On Sun, 02-Dec-2007 at 11:20PM +0000, S Ellison wrote:

|> Package review is a nice idea. But you raise a worrying point.
|> Are any of the 'downright dangerous' packages on CRAN?

Don't know about "dangerous", but I would like the opportunity to
provide feedback, or to have seen feedback from others, when I am
contemplating using a package I see on CRAN.  This is particularly
true for packages whose maintainers seem to have vanished.

Such a "rating" would have a natural place.  I don't see where ratings
for helpers would exist in cyberspace for enquirers to the list.  A
compulsory part of everyone's email signatures?  Don't like that idea.


--
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.  
   ___    Patrick Connolly  
 {~._.~}           Great minds discuss ideas    
 _( Y )_            Middle minds discuss events
(:_~*~_:)       Small minds discuss people  
 (_)-(_)                             ..... Anon
         
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.


Re: Rating R Helpers

Ben Bolker
In reply to this post by slre

S Ellison wrote:
> Package review is a nice idea. But you raise a worrying point.
> Are any of the 'downright dangerous' packages on CRAN?
> If so, er... why?
>
> >>> <Bill.Venables@csiro.au> 12/01/07 7:21 AM >>>
> >I think the need for this is rather urgent, in fact.  Most packages are
> >very good, but I regret to say some are pretty inefficient and others
> >downright dangerous.
Presumably because the primary requirement for packages being
accepted on CRAN is that they pass "R CMD check".  This is a fine
minimum standard -- it means that packages will definitely install --
but there's nothing to stop anyone posting a package full of
statistical nonsense to CRAN, as far as I know.   I'm _not_ suggesting
that R-core should take up this challenge, but this is where ratings
come in.

  Ben Bolker

Re: Rating R Helpers

Ben Bolker
In reply to this post by John Sorkin

John Sorkin wrote:
> [John Sorkin's 11-point list of package review criteria snipped; see above]
Numbers 1 to 3 are critical.  The rest would be very nice to know (and should be part of a rating
system), but in the end are more likely to lead to frustration than to outright errors ... (i.e., you'll
find out soon enough if a package is poorly documented, and then you just won't use it).

Re: Rating R Helpers

p_connolly
In reply to this post by Ben Bolker
On Tue, 04-Dec-2007 at 05:32PM -0800, Ben Bolker wrote:

|> [earlier quotes snipped]
|> Presumably because the primary requirement for packages being
|> accepted on CRAN is that they pass "R CMD check".  This is a fine
|> minimum standard -- it means that packages will definitely install --

That's not quite true.  Package BRugs will go halfway through the
installation before a Linux user is given the information that it will
not work with Linux.  The automated way the packages are listed
doesn't manage to collect that bit of information (and that's nothing
anyone should be ashamed of).  

Somewhere to add information such as that could help avoid the need for
many people to find it out for themselves.

best

--
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.  
   ___    Patrick Connolly  
 {~._.~}           Great minds discuss ideas    
 _( Y )_            Middle minds discuss events
(:_~*~_:)       Small minds discuss people  
 (_)-(_)                             ..... Anon
         
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.


Re: Rating R Helpers

Gabor Grothendieck
On Dec 5, 2007 12:49 AM, Patrick Connolly <[hidden email]> wrote:

> On Tue, 04-Dec-2007 at 05:32PM -0800, Ben Bolker wrote:
> |> [earlier quotes snipped]
> |> Presumably because the primary requirement for packages being
> |> accepted on CRAN is that they pass "R CMD check".  This is a fine
> |> minimum standard -- it means that packages will definitely install --
>
> That's not quite true.  Package BRugs will go halfway through the
> installation before a Linux user is given the information that it will
> not work with Linux.  The automated way the packages are listed
> doesn't manage to collect that bit of information (and that's nothing
> anyone should be ashamed of).
>
> Somewhere for adding information such as that could help avoid the
> need for many people finding that out for themselves.
>

The Bioconductor packages have the DESCRIPTION file's SystemRequirements
field listed on the net, so you can know what they are prior to
downloading the file.
For CRAN packages this information seems not to be shown on the net.
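
In recent versions of R one can at least ask for the field; whether
anything useful comes back depends on what the repository's PACKAGES
index actually carries, which is the limitation noted above:

## request the SystemRequirements field along with the usual metadata
AP <- available.packages(fields = "SystemRequirements")
sysreq <- AP[, "SystemRequirements"]
head(sysreq[!is.na(sysreq)])   # empty if the repository does not publish the field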
