# how calculation degrees freedom

11 messages

## how calculation degrees freedom

Hi, I'm having a hard time understanding the computation of degrees of freedom when running nlme() on the following model:

```r
> formula(mydata.gd)
dLt ~ Lt | ID

TasavB <- function(Lt, Linf, K) (K * (Linf - Lt))

mymodel.nlme <- nlme(dLt ~ TasavB(Lt, Linf, K),
  data = mydata.gd,
  fixed = list(Linf ~ 1, K ~ 1),
  start = list(fixed = c(70, 0.4)),
  na.action = na.include, naPattern = ~!is.na(dLt))

> summary(mymodel.nlme)
Nonlinear mixed-effects model fit by maximum likelihood
  Model: dLt ~ TasavB(Lt, Linf, K)
 Data: mydata.gd
       AIC      BIC    logLik
  13015.63 13051.57 -6501.814

Random effects:
 Formula: list(Linf ~ 1, K ~ 1)
 Level: ID
 Structure: General positive-definite
            StdDev   Corr
    Linf 7.3625291 Linf
       K 0.0845886 -0.656
Residual 1.6967358

Fixed effects: list(Linf + K ~ 1)
        Value Std.Error   DF  t-value p-value
Linf 69.32748 0.4187314  402 165.5655  <.0001
   K  0.31424 0.0047690 2549  65.8917  <.0001

Standardized Within-Group Residuals:
      Min         Q1         Med        Q3      Max
 -3.98674 -0.5338083 -0.02783649 0.5261591 4.750609

Number of Observations: 2952
Number of Groups: 403
```

Why are the DF of Linf and K different? I would appreciate it if you could point me to a reference.

Note: I am working with S-PLUS 6.1 for Windows.

Lic. Gabriela Escati Peñaloza
Biología y Manejo de Recursos Acuáticos
Centro Nacional Patagónico (CENPAT), CONICET
Pto. Madryn, Chubut, Argentina

______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

## Re: how calculation degrees freedom

On 1/27/06, gabriela escati peñaloza <[hidden email]> wrote:

> Hi, I'm having a hard time understanding the computation of degrees of freedom

So do I, and I'm one of the authors of the package :-)

> when running nlme() on the following model:
>
> [model call and summary output snipped; see above]
>
> Why are the DF of Linf and K different? I would appreciate it if you
> could point me to a reference.

The algorithm is described in Pinheiro and Bates (2000), "Mixed-effects Models in S and S-PLUS", published by Springer. See section 2.4.2.

I would point out that there is effectively no difference between a t-distribution with 402 df and a t-distribution with 2549 df, so the actual number of degrees of freedom is irrelevant in this case. All you need to know is that it is "large".

I will defer to any of the "degrees of freedom police" who post to this list to give you an explanation of why there should be different degrees of freedom. I have been studying mixed-effects models for nearly 15 years and I still don't understand.

> Note: I am working with S-PLUS 6.1 for Windows.

Technically this email list is for questions about R. There is another list, [hidden email], for questions about S-PLUS.
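For this particular fit, the two DF values happen to match a simple between/within count, which is one way to read the rule in section 2.4.2 of Pinheiro and Bates: a term whose gradient is constant within a group (here Linf) is tested against between-group degrees of freedom, while a term whose gradient varies within groups (K, which multiplies Lt) is tested against within-group degrees of freedom. The arithmetic below is my reading of that rule applied to the summary output above, not an authoritative derivation:

```r
# Sketch: reproducing the two DF values reported in the summary.
# Assumption: Linf is counted at the grouping (ID) level, K at the
# observation level, per the scheme in Pinheiro & Bates, sec. 2.4.2.
n_obs    <- 2952   # Number of Observations
n_groups <- 403    # Number of Groups (ID)

df_linf <- n_groups - 1       # between-group df: 402, as reported for Linf
df_k    <- n_obs - n_groups   # within-group df: 2549, as reported for K

c(Linf = df_linf, K = df_k)
```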

## Re: how calculation degrees freedom

Søren Højsgaard <[hidden email]> writes:

> Along similar lines, I've noticed that the anova() function for lmer
> models now only reports the mean squares that go into the numerator,
> but "nothing for the denominator" of an F-statistic; probably in
> recognition of the degrees-of-freedom problem. It could be nice,
> however, if anova() produced even an approximate anova table, which
> can be obtained from Wald tests. The anova function could then print
> that "these p-values are large-sample ones and hence only
> approximate"...

I'm reasonably convinced by now that the relevant denominator is always the residual variance, but it is happening via deep magic that I don't quite understand... (and it is quite counterintuitive to people who are used to the traditional ANOVA decompositions in orthogonal designs).

While we're on the subject: it would be desirable to have Wald tests for specific terms rather than the "Type 1" (sorry, Bill) progressive ANOVA table, just like we already have in lme().

-- Peter Dalgaard, Dept. of Biostatistics, University of Copenhagen
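A term-wise Wald test of the kind discussed here can be assembled by hand from the fixed-effect estimates and their covariance matrix. A minimal sketch using lme() on the Orthodont data shipped with nlme (an illustrative model of my own, not one from this thread):

```r
library(nlme)

fm <- lme(distance ~ age + Sex, random = ~ 1 | Subject, data = Orthodont)

b <- fixef(fm)   # fixed-effect estimates
V <- vcov(fm)    # their estimated covariance matrix

# Wald chi-square for the 'age' coefficient (a 1-df term);
# the p-value is a large-sample approximation, as noted above
i <- which(names(b) == "age")
W <- as.numeric(b[i]^2 / V[i, i])
p <- pchisq(W, df = 1, lower.tail = FALSE)
```

For a multi-df term, the same idea generalizes to W = b' V^{-1} b over the rows/columns belonging to that term.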

## Re: how calculation degrees freedom

On 27 Jan 2006 23:08:28 +0100, Peter Dalgaard <[hidden email]> wrote:

> I'm reasonably convinced by now that the relevant denominator is
> always the residual variance, but it is happening via deep magic that
> I don't quite understand... (and is quite counterintuitive to people
> who are used to the traditional ANOVA decompositions in orthogonal
> designs)

Not deep magic for you, Peter. The slot called rXy in the fitted model is analogous to the first p components of the "effects" component in an lm model. Cut it up according to the terms and sum the squares.

> While we're on the subject: it would be desirable to have Wald tests
> for specific terms rather than the "Type 1" (sorry, Bill) progressive
> ANOVA table, just like we already have in lme().

I think this is the point where I mention the Open Source nature of the project. Sorry to say that it is not a priority for me right now.
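The lm analogue Douglas refers to can be checked directly: the first p components of the "effects" component of an lm fit (Q'y from the QR decomposition) are the orthogonal contributions of the model terms, and summing their squares term by term reproduces the sequential (Type 1) ANOVA sums of squares. A small check on a built-in dataset (my example, not from the thread):

```r
fm <- lm(weight ~ group, data = PlantGrowth)

e <- effects(fm)   # Q'y from the QR decomposition of the model matrix
a <- anova(fm)

# Components 1..p correspond to the model terms in order:
# component 1 -> intercept, components 2:3 -> the 2-df 'group' term.
ss_group <- sum(e[2:3]^2)

all.equal(ss_group, a["group", "Sum Sq"])   # TRUE
```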

## Re: how calculation degrees freedom

Douglas Bates <[hidden email]> writes:

> > Of course, Monte Carlo p-values have their problems, but the world
> > is not perfect....
>
> Another approach is to use mcmcsamp to derive a sample from the
> posterior distribution of the parameters using Markov Chain Monte
> Carlo sampling. If you are interested in intervals rather than
> p-values, the HPDinterval function from the coda package can create
> those.

We (Søren and I) actually had a look at that, and it seems not to solve the problem. Rather, mcmcsamp tends to reproduce the Wald-style inference (infinite DF) if you use a suitably vague prior.

It's a bit hard to understand clearly, but I think the crux is that any Bayes inference depends on the data only through the likelihood function. The distribution of the likelihood never enters (the hardcore Bayesian of course won't care). However, the nature of DF corrections is that the LRT does not have its asymptotic distribution, and MCMC has no way of picking that up.

-- Peter Dalgaard, Dept. of Biostatistics, University of Copenhagen
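For readers unfamiliar with the mechanics being discussed: HPDinterval() simply computes highest-posterior-density intervals from a matrix of posterior draws, however those draws were produced. A sketch with synthetic draws standing in for an MCMC sample (the parameter names and values are placeholders loosely echoing the fit above, not real output):

```r
library(coda)

set.seed(1)
# Stand-in posterior sample: 2000 draws of two parameters.
draws <- mcmc(cbind(Linf = rnorm(2000, mean = 69.3, sd = 0.42),
                    K    = rnorm(2000, mean = 0.314, sd = 0.0048)))

HPDinterval(draws, prob = 0.95)   # 95% HPD interval for each column
```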

## Re: how calculation degrees freedom

Douglas Bates <[hidden email]> writes:

> Not deep magic for you, Peter. The slot called rXy in the fitted
> model is analogous to the first p components of the "effects"
> component in an lm model. Cut it up according to the terms and sum
> the squares.

Yes, that's the bit I do understand. The magic bit is that, at that point, you have removed the effect of the Z, in a penalized fashion, and somehow this manages to give the right thing.

Consider a balanced design with a number of observations for each subject, and subjects divided into groups, with subject effects considered random. In traditional aov, you end up dividing the "group" SS by the "subject" SS. In lmer, however, you remove the subject effect (in a BLUP sort of way) from everything, and this gets the "group" SS adjusted so that it matches the residual SS instead. I'm quite prepared to believe it; I just don't think it is trivial.

-- Peter Dalgaard, Dept. of Biostatistics, University of Copenhagen
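Peter's balanced-design example can be made concrete with base R: in aov() with an Error() term, the group effect is tested in the between-subject stratum, i.e. group MS over subject MS, which is the classical F test he describes. A simulated sketch (the data and variable names are my own, for illustration only):

```r
set.seed(42)
# 20 subjects split into 2 groups of 10, with 5 observations each.
d <- data.frame(
  subject = factor(rep(1:20, each = 5)),
  group   = factor(rep(c("A", "B"), each = 50))
)
d$y <- rnorm(20, sd = 1)[d$subject] +      # random subject effect
       ifelse(d$group == "B", 0.5, 0) +    # fixed group effect
       rnorm(100, sd = 0.5)                # residual noise

# 'group' is tested against the between-subject error stratum
# (group MS / subject MS), not against the within-subject residual:
summary(aov(y ~ group + Error(subject), data = d))
```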

## Re: how calculation degrees freedom

In connection with calculating Monte Carlo p-values based on sampled data sets: the calculations involve something like

```r
update(lmer.model, data = newdata)
```

where newdata is a simulated dataset coming from simulate(lmer.model). I guess the update could be faster if one could supply the update function with the parameter estimates from the original fit of lmer.model as starting values. Is this possible to achieve?

Best,
Søren
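For what it's worth, later versions of lme4 (well after this 2006 thread) added exactly this shortcut: refit() swaps in a new response vector and re-estimates while reusing the structures, and starting from the estimates, of the original fit, which is the usual way to speed up a simulate-and-refit loop. A sketch with the sleepstudy data from lme4, using the modern API rather than the one discussed above:

```r
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# One parametric-bootstrap iteration: simulate a response from the
# fitted model, then refit cheaply instead of calling update().
ystar   <- simulate(fm)[[1]]
fm_star <- refit(fm, newresp = ystar)

fixef(fm_star)   # estimates for the simulated data
```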