

Dear members,
Let's do an example of a simple GLMM with x and G as fixed factors and R
as a random factor (note that the question is the same with a GLM or
even an LM):
x <- rnorm(100)
y <- rnorm(100)
G <- as.factor(sample(c("A", "B", "C", "D"), 100, replace = TRUE))
R <- as.factor(rep(1:25, 4))
library(lme4)
m <- lmer(y ~ x + G + (1 | R))
summary(m)$coefficients
I get the fixed-effect estimates and their SEs:
> summary(m)$coefficients
              Estimate Std. Error   t value
(Intercept) 0.07264454  0.1952380 0.3720820
x           0.02519892  0.1238621 0.2034433
GB          0.10969225  0.3118371 0.3517614
GC          0.09771555  0.2705523 0.3611706
GD          0.12944760  0.2740012 0.4724344
The estimate for GA is not shown, as it is fixed to 0. That is normal:
it is the reference level.
But is there a way to get an SE for GA, or is it a nonsensical question
because GA is fixed to 0?
______________
I propose a solution here, but I don't know if it is correct. It is
based on releveling the factor and averaging the SEs over all the
reorderings:
G <- relevel(G, "A")
m <- lmer(y ~ x + G + (1 | R))
sA <- summary(m)$coefficients

G <- relevel(G, "B")
m <- lmer(y ~ x + G + (1 | R))
sB <- summary(m)$coefficients

G <- relevel(G, "C")
m <- lmer(y ~ x + G + (1 | R))
sC <- summary(m)$coefficients

G <- relevel(G, "D")
m <- lmer(y ~ x + G + (1 | R))
sD <- summary(m)$coefficients

# note: mean() averages a single vector, so the values must be wrapped in c()
seA <- mean(c(sB["GA", "Std. Error"], sC["GA", "Std. Error"], sD["GA", "Std. Error"]))
seB <- mean(c(sA["GB", "Std. Error"], sC["GB", "Std. Error"], sD["GB", "Std. Error"]))
seC <- mean(c(sA["GC", "Std. Error"], sB["GC", "Std. Error"], sD["GC", "Std. Error"]))
seD <- mean(c(sA["GD", "Std. Error"], sB["GD", "Std. Error"], sC["GD", "Std. Error"]))
seA; seB; seC; seD
Thanks,
Marc
______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


On 12/30/18 5:31 PM, Marc Girondot via R-help wrote:
> [original message quoted in full; see above]
>
> But is there a way to get SE for GA or is it a nonsense question because
> GA is fixed to 0 ?
In a way, yes it's a nonsense question, as you say.
If you really want an SE for GA then reparametrise so that GA is
meaningful:
m2 <- lmer(y ~ x + 0 + G + (1 | R))
Note that with this formulation GA will be there, "(Intercept)" will
disappear, and GB, GC and GD will now mean something different.
GA from m2 = (Intercept) from m
GB from m2 = (Intercept) + GB from m
GC from m2 = (Intercept) + GC from m
GD from m2 = (Intercept) + GD from m
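The correspondence above is easy to check with a plain lm() fit (the
original post notes that the question is the same for an LM; the data
below are simulated in the same way, with an arbitrary seed added for
reproducibility):

```r
set.seed(1)                       # simulated data, as in the original post
x <- rnorm(100)
y <- rnorm(100)
G <- as.factor(sample(c("A", "B", "C", "D"), 100, replace = TRUE))

m  <- lm(y ~ x + G)               # treatment contrasts: GA absorbed into the intercept
m2 <- lm(y ~ x + 0 + G)           # cell-means parametrisation: one coefficient per level

# GA from m2 equals the intercept of m; GB..GD equal intercept + contrast
stopifnot(all.equal(unname(coef(m2)["GA"]),
                    unname(coef(m)["(Intercept)"])))
stopifnot(all.equal(unname(coef(m2)["GB"]),
                    unname(coef(m)["(Intercept)"] + coef(m)["GB"])))

# and every level, GA included, now has its own standard error
summary(m2)$coefficients
```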
I haven't followed what you've done below, but I think that you are
making things unnecessarily complicated and life difficult for yourself.
cheers,
Rolf

Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276
>
> [proposed relevel-and-averaging solution quoted in full; see the
> original message above]


Maybe qvcalc (https://cran.r-project.org/web/packages/qvcalc/index.html)
is useful for you.
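For completeness, a minimal sketch of what that could look like
(assuming the qvcalc package is installed; its main entry point is
qvcalc(model, factorname), and the example below uses a glm rather than
the poster's lmer fit):

```r
library(qvcalc)                   # quasi-variances for factor effects (Firth & de Menezes)

set.seed(1)                       # simulated data, as in the original post
x <- rnorm(100)
y <- rnorm(100)
G <- as.factor(sample(c("A", "B", "C", "D"), 100, replace = TRUE))

m  <- glm(y ~ x + G)
qv <- qvcalc(m, "G")              # a quasi-SE for every level, reference level A included
qv$qvframe                        # data frame with estimate, SE, quasiSE, quasiVar
```

The quasi-SEs approximate the SE of any pairwise contrast between levels
via sqrt(qse_i^2 + qse_j^2), which is close in spirit to the averaging
idea proposed above.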
Marc Girondot via R-help wrote on 30.12.2018 05:31:
> [original message quoted in full; see above]


On 1/2/19 9:35 AM, Marc Girondot wrote:
> Hello members of the list,
>
> I asked a question 3 days ago about "how to get the SE of all effects
> after a glm or glmm". I post here a synthesis of the answers and a new
> solution:
>
> For example:
>
> x <- rnorm(100)
> y <- rnorm(100)
> G <- as.factor(sample(c("A", "B", "C", "D"), 100, replace = TRUE))
> G <- relevel(G, "A")
>
> m <- glm(y ~ x + G)
> summary(m)$coefficients
>
>
> No SE is calculated for the A level of the G factor.
>
>
> * Here is a synthesis of the answers:
>
>
> 1/ The first solution was proposed by Rolf Turner
> <[hidden email]>. It was to add a + 0 to the formula; then it is
> possible to get the SE for all 4 levels (it also works with objects
> obtained with lme4::lmer()):
>
> m1 <- glm(y ~ x + G + 0)
> summary(m1)$coefficients
>
> However, this solution using + 0 does not work if more than one
> categorical predictor is included. Only the levels of the first one get
> all their SEs estimated.
Well, you only asked about the setting in which there was only one
categorical predictor. If there are, e.g., two (say "G" and "H"), try
m2 <- glm(y ~ x + G:H + 0)
I would suggest that you learn a bit about how the formula structure
works in linear models.
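A quick sketch of that two-factor version (simulated data; the factor
names G and H are the ones from the message above):

```r
set.seed(1)                       # simulated data for illustration
x <- rnorm(100)
y <- rnorm(100)
G <- as.factor(sample(c("A", "B"), 100, replace = TRUE))
H <- as.factor(sample(c("u", "v"), 100, replace = TRUE))

# no intercept and no main effects, so G:H expands to a full set of
# cell indicators: one coefficient per G x H cell, plus the slope for x
m2 <- glm(y ~ x + G:H + 0)
summary(m2)$coefficients          # rows x, GA:Hu, GB:Hu, GA:Hv, GB:Hv
```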
cheers,
Rolf Turner
P.S. Your use of relevel() is redundant/irrelevant in this context.
R. T.

Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276
______________________________________________
[hidden email] mailing list  To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/rhelpPLEASE do read the posting guide http://www.Rproject.org/postingguide.htmland provide commented, minimal, selfcontained, reproducible code.


Please keep communications on-list.
On 1/2/19 10:57 AM, Marc Girondot wrote:
> On 01/01/2019 at 22:35, Rolf Turner wrote:
>> [previous exchange quoted in full; see above]
> Thanks for the advice. But based on my limited knowledge of formula
> structure in linear models, A + B is not the same as A:B.
That is very true! But I never suggested using "A+B". In the context
of an additive model there is *NO WAY* to make sense of parameters
corresponding to each level of each factor. Consequently there can be
no way to form estimates of such parameters or of the standard errors of
such estimates. They cannot be made meaningful. (This is, in effect,
the reason for the existence of the, rather confusing, overparametrised
model.)
> The first structure used 6 parameters and the second one 14.
Well, it depends on how many levels each of A and B has! But yes, the
numbers of parameters will be different. They are different models.
> Then adding + 0 does not solve the problem... or perhaps I am wrong?
>
> Thanks for your time.
For an *additive* model "+0" does indeed not solve the problem. In this
context the "problem" has no solution.
You might get some insight by reading about "the cell means model" in
"Linear Models" by Shayle R. Searle:
@book{searle1997,
  title     = {Linear Models},
  author    = {Searle, S.R.},
  isbn      = {9780471184997},
  year      = {1997},
  publisher = {Wiley}
}
If you use the model I suggested (i.e. G:H + 0) you get an explicit
estimate for each cell mean, and the standard errors of these estimates.
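To illustrate the cell-means idea, here is a minimal sketch without the
covariate x, so the correspondence with raw cell means is exact
(simulated data with a balanced two-by-two layout):

```r
set.seed(1)
y <- rnorm(40)
G <- factor(rep(c("A", "B"), each  = 20))
H <- factor(rep(c("u", "v"), times = 20))

m  <- lm(y ~ G:H + 0)             # cell-means model: one parameter per G x H cell
cm <- tapply(y, list(G, H), mean) # raw cell means for comparison

# the fitted coefficients are exactly the cell means
stopifnot(all.equal(unname(coef(m)["GA:Hu"]), unname(cm["A", "u"])))
summary(m)$coefficients           # each cell mean comes with its own SE
```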
cheers,
Rolf Turner

Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

