R Square Help (this debate again, i know!)


surreyj
Hello everyone,

I have been using R to do some behavioural economic analysis for my master's thesis, specifically fitting demand curves using nls.

E.g.

Formula: y ~ c + b * x - a * exp(x)

Parameters:
   Estimate Std. Error t value Pr(>|t|)    
c -0.445097   0.080823  -5.507 0.005304 **
b -0.777105   0.059528 -13.054 0.000199 ***
a  0.011908   0.003886   3.064 0.037495 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1046 on 4 degrees of freedom

Number of iterations to convergence: 1
Achieved convergence tolerance: 1.119e-06
  (1 observation deleted due to missingness)

Now, all of the other demand literature reports the proportion of variance accounted for (R squared) as well as the parameter values and standard errors.

From reading these forums I can see that R squared is deliberately not provided for nls fits, and that there is reasoning backing this up.

I had a chat with my supervisor and he suggested I post here and see if someone can give me a reference or two backing up why I shouldn't use R squared. However, he also said that it is used throughout the rest of the demand literature and it would appear odd not to have it in my thesis.
My second supervisor agrees it needs to be included in the analysis.

Can someone please advise me how to do this in R? I suppose I could always use Excel or something, but I would rather make use of the code I already have and get it looping through quickly. (The R squared is the last thing holding up the results section.)
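[For anyone searching the archives: a minimal sketch of the usual pseudo-R^2, defined as 1 - RSS/TSS, computed from an nls object. The data, starting values, and the parameter name `c0` (renamed from `c` to avoid clashing with R's `c()` function) are made up for illustration; as discussed elsewhere in this thread, this quantity lacks its linear-model interpretation for nonlinear fits and can even fall outside [0, 1].]

```r
## Simulated data roughly matching the demand-curve form in the post.
set.seed(1)
x <- seq(0, 3, length.out = 8)
y <- -0.45 - 0.78 * x - 0.012 * exp(x) + rnorm(length(x), sd = 0.1)

## Fit the same model form; starting values are guesses near the truth.
fit <- nls(y ~ c0 + b * x - a * exp(x),
           start = list(c0 = 0, b = -1, a = 0.01))

## Pseudo-R^2: one minus residual sum of squares over total sum of squares.
rss <- sum(residuals(fit)^2)
tss <- sum((y - mean(y))^2)
pseudo_r2 <- 1 - rss / tss
pseudo_r2
```

Since `residuals()` and `coef()` work on any nls object, this drops straight into a loop over fitted models if you are batch-fitting curves.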

Any advice would be greatly appreciated.

Regards

Surrey :)


Re: R Square Help (this debate again, i know!)

Dieter Menne
surreyj wrote
Now all the other demand literature reports the %/proportion of variance accounted for (or R squared) as well as the parameter values and standard error.
...
I had a chat to a supervisor and he suggested I post to here and see if someone can give me a reference/references backing up why I shouldn't use r-squared.
..
For references to the printed literature, see

http://markmail.org/message/qoup5oerbxchejmy

I sympathize with the point you mention, in a more general context. There are many "against-all-reviewers" pieces of wisdom in this community

-- F-test in ANOVA (early years)
-- Exegesis/Venables
-- "Nesting" (most helpful sentence by D Bates: "I never understood this")
-- p-values (most recently brought up by lmer's refusal to produce these)
-- r-squared for nonlinear

that will prevail in the long run, but we (oldies) make students and colleagues in applied fields suffer by not producing these. It is easy for FoxBatsRipley to tell statistical reviewers that they are wrong, but what about surreyj's master's thesis against the sheer mass of papers with nonlinear R^2 and "more-p-better-paper" reviewers? Or Frank Harrell against a Mayo Clinic medical professor? (Sorry, Frank, I made up the example.)

Surprisingly, it's mostly the not-so-top journals that cause problems here. When The Lancet, the New England Journal, or the BMJ reject some statistical argument, they have good reasons.

Dieter