inconsistency between anova() and summary() of glmmPQL


inconsistency between anova() and summary() of glmmPQL

I.Szentirmai
Dear All,

Could anyone explain to me how it is possible that a factor in a
glmmPQL model is non-significant according to the anova() function,
yet turns out to be significant (or at least some of its levels differ
significantly from other levels) according to the summary() function?
Which result should I believe? And is there any way other than anova()
to test for the overall effect of a factor in a glmmPQL model?

Thanks for help,
Istvan


Re: inconsistency between anova() and summary() of glmmPQL

Liaw, Andy
To quote one of my professors, it usually doesn't make sense to ask
questions like `Is variable X significant?'  (Or, somewhat more formally,
testing H0: beta_j = 0 vs. H1: beta_j != 0.)  If `X' is the _only_ variable
you will ever consider, then the question can make sense.  Otherwise, you
need more context: what other variables are you putting into the model?

The `inconsistency' you saw comes from a difference in context.  The test
you see in summary() adds terms to the model sequentially, so it provides
tests of a sequence of nested models.  OTOH, each row in the output of
anova() compares two models: the model with all terms (the `full model')
vs. the one with all terms except the term being considered (the `reduced
model').  Which one is `right' depends on which hypothesis matches your
research question.
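
For reference, here is a minimal sketch of the two outputs being
compared, using the bacteria data shipped with MASS rather than your
own data (trt there is a three-level factor, so it has a single
term-level test but several level-wise coefficients).  Depending on
your MASS version, anova() may be disabled for PQL fits:

library(MASS)   # glmmPQL()
library(nlme)   # methods for the lme-type object glmmPQL() returns

fit <- glmmPQL(y ~ trt + week, random = ~ 1 | ID,
               family = binomial, data = bacteria)

anova(fit)    # one row per term: a single test for the trt factor
summary(fit)  # one row per coefficient: each trt level vs. the baseline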

HTH,
Andy


Re: inconsistency between anova() and summary() of glmmPQL

Liaw, Andy
In reply to this post by I.Szentirmai
My apologies:  I got the descriptions of tests in summary() and anova()
backward.
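
Concretely: for a glmmPQL fit (an lme-type object), anova() gives
sequential tests, so the order of the fixed-effect terms matters, while
summary() tests each coefficient given all the other terms.  A small
sketch of that order dependence, reusing the bacteria data from the
earlier example (your own factors f1, f2, f3 would behave the same way):

fit_ab <- glmmPQL(y ~ trt + week, random = ~ 1 | ID,
                  family = binomial, data = bacteria)
fit_ba <- glmmPQL(y ~ week + trt, random = ~ 1 | ID,
                  family = binomial, data = bacteria)

anova(fit_ab)    # sequential tests, trt entered first
anova(fit_ba)    # sequential tests, week entered first; rows can differ
summary(fit_ab)  # per-coefficient t-tests; the same for either term order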

Cheers,
Andy

From: I.Szentirmai

>
> Dear Andy,
>
> Thanks a lot for clarifying this for me.
>
> However, it seems to me that anova() is the one that
> provides sequential tests, since whether a factor is
> significant depends on its position among the factors in
> the model. I ran my model with f1, f2 and f3 in different
> orders (model 1: y = f1 + f2 + f3; model 2: y = f3 + f2 + f1),
> and got quite different results. The results in the summary()
> output, however, did not depend on the order of the factors.
>
> Maybe I misunderstood you, but this seems to contradict
> what you wrote.
>
> Thanks,
> Istvan


glmmML

I.Szentirmai
In reply to this post by Liaw, Andy
Dear R Users,

I'm looking for a function similar to step() or drop1() for glmmML models,
but haven't been able to find one. I would appreciate it if anyone could
point me to such a function.

Thanks,
Istvan


Re: glmmML

Prof Brian Ripley

If you read the help for those functions, you will see that 'all' you need
is to specify an extractAIC() function for such models.

However, I don't think even defining AIC for them is that easy (what do
you do about zero variance parameters, for example?).
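
For illustration only, the shape such a method might take is sketched
below.  It assumes the glmmML fit stores its deviance and coefficient
vector as $deviance and $coefficients (check str() of the fitted object
for your version of the package), and it naively counts the
random-effect standard deviation as one extra parameter, which runs
straight into the boundary issue mentioned above:

## Hedged sketch of an extractAIC() method for glmmML fits, so that
## step()/drop1() have something to call.
extractAIC.glmmML <- function(fit, scale = 0, k = 2, ...) {
    edf <- length(fit$coefficients) + 1  # fixed effects + 1 variance parameter
    c(edf, fit$deviance + k * edf)
}

Even with this in place, step() also relies on update() and terms()
working for the fit object, so it may not be sufficient on its own.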

--
Brian D. Ripley,                  [hidden email]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595


why does lmer not give p values for quasibinomial family?

I.Szentirmai
Dear All,

I'm fitting a binomial model with the lmer() function, and I get p values
for the parameter estimates only with family=binomial, not with
family=quasibinomial. Why is that? I wanted to use the quasibinomial family
because my data are overdispersed.

Thanks,
Istvan


Re: why does lmer not give p values for quasibinomial family?

Spencer Graves
          Please provide a reproducible example.  Consider the following:

 > library(lme4)
 > library(mlmRev)
 > fit1q <- lmer(use~1+(1|district), data=Contraception,
family=quasibinomial)
 > fit2q <- lmer(use~age+(1|district), data=Contraception,
family=quasibinomial)
 > anova(fit1q, fit2q)
Data: Contraception
Models:
fit1q: use ~ 1 + (1 | district)
fit2q: use ~ age + (1 | district)
       Df     AIC     BIC  logLik  Chisq Chi Df Pr(>Chisq)
fit1q  2  2538.5  2549.6 -1267.2
fit2q  3  2537.9  2554.7 -1266.0 2.5254      1     0.1120
 >
          hope this helps,
          spencer graves

