I am wondering whether it is possible to "translate" a simple mixed-models analysis that uses the function lme (package nlme) with a pdIdent structure into an equivalent analysis with lmer (package lme4). In short, I am looking for a simple copy of the following analysis using the lmer function. The data are from another example on the net:
library(nlme)
Q <- factor(c(1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2))
R <- factor(c(1, 1 , 2, 2, 3, 3, 1, 1, 2, 2, 3, 3))
y <- c(51.43, 51.28, 50.93, 50.75, 50.47, 50.83, 51.91, 52.43, 52.26, 52.33, 51.58, 51.23)
DS <- data.frame(Q, R, y)
DS$Q <- as.factor(DS$Q)
DS$R <- as.factor(DS$R)
lme3 <- lme(y ~ Q, random = list(R = pdIdent(~ Q - 1)), data = DS)
summary(lme3)
and this yields a standard deviation of 0.3979 for both Q1 and Q2:
Random effects:
Formula: ~Q - 1 | R
Structure: Multiple of an Identity
Q1 Q2 Residual
StdDev: 0.3979283 0.3979283 0.2202835
The random effects form a 3x2 matrix (one row per level of R, one column per level of Q):
> ranef(lme3)
Q1 Q2
1 0.35263487 0.1849888
2 -0.09393962 0.2933806
3 -0.25869525 -0.4783694
I am aware that in lmer one gets close by specifying a random intercept for the Q:R interaction:
library(lme4)
lmer3 <- lmer(y ~ Q + (-1 + 1 | Q:R), data = DS)
summary(lmer3)
which yields an interaction term whose standard deviation (0.3979) matches the lme fit:
Random effects:
Groups Name Variance Std.Dev.
Q:R (Intercept) 0.15835 0.3979
Residual 0.04852 0.2203
Number of obs: 12, groups: Q:R, 6
and a 6x1 random-effects matrix:
$`Q:R`
(Intercept)
1:1 0.35263436
1:2 -0.09393948
1:3 -0.25869488
2:1 0.18498852
2:2 0.29338022
2:3 -0.47836874
with conditional variances for “Q:R”
However, I am not quite sure whether the two models really are the same: for example, the normality of the random effects has to be checked twice (once per column) in the first model, but only once in the second. Any suggestions on these results, or on how to improve the translation? Maybe by using a dummy variable?
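For concreteness, this is roughly what I mean by the normality checks (a rough sketch using the fits above):
## lme fit: two columns of random effects, so two separate QQ-plots
re_lme <- ranef(lme3)
qqnorm(re_lme$Q1)
qqnorm(re_lme$Q2)
## lmer fit: a single column of random intercepts, so one QQ-plot
re_lmer <- ranef(lmer3)$`Q:R`[, "(Intercept)"]
qqnorm(re_lmer)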
A good way to confirm that two models are equivalent is to compare the log-likelihoods:
> logLik(lme3)
'log Lik.' -4.889587 (df=4)
> logLik(lmer3)
'log Lik.' -4.889587 (df=4)
Roughly speaking, this is a sufficient condition to consider the two models the same. (However, different packages may drop parameter-independent constants from log-likelihoods, so you may get different log-likelihoods from different packages even for equivalent models ...)
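If you want a bit more reassurance than the log-likelihoods alone, here is a quick sketch of further checks (assuming lme3 and lmer3 from above are still in the workspace and both nlme and lme4 are loaded):
## Fitted values should agree up to numerical tolerance
all.equal(as.numeric(fitted(lme3)), as.numeric(fitted(lmer3)))
## Variance components: pdIdent SDs / residual SD vs. Q:R intercept SD / residual SD
VarCorr(lme3)
VarCorr(lmer3)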
You'll also get an equivalent model this way:
DS <- transform(DS, QR = interaction(Q, R))
summary(lme(y ~ Q, random = ~ 1 | QR, data = DS))
Finally, it's not clear what the -1 + 1 in your lmer call is doing: the -1 removes the intercept and the +1 puts it straight back, so y ~ Q + (1 | Q:R) should work fine.
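As a quick check (a sketch, reusing DS from above), the simplified call reproduces the same fit:
lmer3b <- lmer(y ~ Q + (1 | Q:R), data = DS)
logLik(lmer3b)  ## should match the log-likelihoods above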