Tags: r, lme4, lsmeans, posthoc, pairwise

Lsmeans producing identical p values when controlling for two other factors


Previously I have had no issues using lsmeans to identify significant differences between groups while controlling for other factors with lme4 models. However, with the following dataset looking at fluorescence, lsmeans produces identical p values regardless of the levels of the other factors.

A subset of the data used in this example can be found here: https://drive.google.com/file/d/0B3-esLisG8EbTzA3cjVpRGtjREU/view?usp=sharing

Data

Response(s): here 1/0 presence/absence (but also average pixel intensity, and percentage cover supplied via cbind())

Fixed factor 1: heat treatment - 2 levels

Fixed factor 2: competition treatment - 2 levels

Fixed factor 3: time treatment - 2 levels

Random factor: none

Model creation

library(lme4)  # not actually needed for this model: glm() is in base R's stats package
model <- glm(presence ~ heat.treatment + competition.treatment + time.post.mating.hrs,
             family = binomial(link = "logit"), data = gfptest)

Originally interaction terms were included, but their inclusion was not supported by AIC comparison. Using drop1() to test the effect of removing each fixed factor, heat is important:

drop1(model, test= "Chi")
# presence ~ heat.treatment + competition.treatment + time.post.mating.hrs
#                      Df Deviance    AIC    LRT Pr(>Chi)   
#<none>                     30.589 38.589                   
#heat.treatment         1   39.114 45.114 8.5251 0.003503 **
#competition.treatment  1   30.876 36.876 0.2868 0.592297   
#time.post.mating.hrs   1   32.410 38.410 1.8206 0.177237   
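The interaction-versus-additive AIC comparison described above can be sketched as follows. The gfptest data frame here is simulated stand-in data (the real data are behind the link above); the column names are taken from the model call in the question:

```r
# Simulated stand-in for the real gfptest data frame.
set.seed(1)
gfptest <- data.frame(
  heat.treatment        = factor(rep(c("control", "heat"), each = 20)),
  competition.treatment = factor(rep(c("single", "competition"), times = 20)),
  time.post.mating.hrs  = factor(rep(c("0.5", "24"), each = 2, times = 10)),
  presence              = rbinom(40, 1, 0.5)
)

# Additive model versus the model with all interactions.
additive <- glm(presence ~ heat.treatment + competition.treatment + time.post.mating.hrs,
                family = binomial, data = gfptest)
full     <- glm(presence ~ heat.treatment * competition.treatment * time.post.mating.hrs,
                family = binomial, data = gfptest)

AIC(additive, full)  # the extra interaction parameters must earn their keep
```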

I would like to test for the difference between the control and heat treatments while controlling for the competition treatment and the time treatment, e.g. is presence significantly different between controls and heats at timepoint 0.5 hours with no competition, is presence significantly different between controls and heats at timepoint 24 hours with no competition, etc.? I've tried the lsmeans functions (multcomp yields similar results):

lsmeans(model, pairwise~heat.treatment+competition.treatment+time.post.mating.hrs, adjust="tukey")

and more explicitly

# store the lsmeans result under a new name so the fitted glm is not overwritten
lsm <- lsmeans(model, "heat.treatment", by = "competition.treatment",
               at = list(time.post.mating.hrs = "0.5"))
lsmsum <- summary(lsm, infer = c(TRUE, TRUE), level = .90, adjust = "bon",
                  by = "competition.treatment")
lsmsum

pairs(lsm)

However, both give identical p values within each group combination, which does not seem accurate when looking at boxplots and doing pairwise Mann–Whitney U tests on the data ranks:

$contrasts
 contrast                                            estimate           SE df z.ratio p.value
control,single,0.5 - heat,single,0.5              18.3718560 2224.3464134 NA   0.008  1.0000
control,competition,0.5 - heat,competition,0.5    18.3718560 2224.3464134 NA   0.008  1.0000
control,single,24 - heat,single,24                18.3718560 2224.3464134 NA   0.008  1.0000
control,competition,24 - heat,competition,24      18.3718560 2224.3464134 NA   0.008  1.0000

I have tried exploring the data frame to isolate the cause of the identical p values. The issue persists after reducing the number of factors to two and after using a different response variable/error distribution.

Any help with resolving this lsmeans (or similar package) issue would be appreciated. As a secondary option, any advice on whether it's acceptable to fit Poisson/binomial glm()s and then follow up with post-hoc t-tests/Mann–Whitney tests would also be welcome.


Solution

  • It seems very odd that you would note that the P values are all the same and apparently did not notice that the estimated differences and standard errors are all the same. Most people look at those first. (If you didn't, I recommend it highly; we should talk about effects and such, the P values are just accessories.)

    Anyway, the explanation is that you fitted an additive model -- one with no interactions. Such a model specifies that the effect of one factor is exactly the same, regardless of the levels of any other factor. And that is exactly what you are seeing.

    In short, this has nothing to do with lsmeans() and everything to do with the model you fitted.
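    A minimal way to see this, and to obtain cell-specific comparisons, is to refit with the interactions included. The sketch below uses simulated stand-in data (column names assumed from the question's model call). The first part checks directly that the additive fit forces a single control − heat difference on the link scale; the second refits with the three-way interaction so lsmeans can estimate a separate contrast per cell:

    ```r
    library(lsmeans)  # or its successor, emmeans

    # Simulated stand-in for the real gfptest data frame.
    set.seed(1)
    gfptest <- data.frame(
      heat.treatment        = factor(rep(c("control", "heat"), each = 20)),
      competition.treatment = factor(rep(c("single", "competition"), times = 20)),
      time.post.mating.hrs  = factor(rep(c("0.5", "24"), each = 2, times = 10)),
      presence              = rbinom(40, 1, 0.5)
    )

    additive <- glm(presence ~ heat.treatment + competition.treatment + time.post.mating.hrs,
                    family = binomial, data = gfptest)

    # In the additive model, the control - heat difference on the link
    # (log-odds) scale is one coefficient, whatever the other levels are:
    grid <- expand.grid(heat.treatment        = c("control", "heat"),
                        competition.treatment = c("single", "competition"),
                        time.post.mating.hrs  = c("0.5", "24"))
    eta <- predict(additive, grid, type = "link")
    diff_by_cell <- eta[grid$heat.treatment == "control"] -
                    eta[grid$heat.treatment == "heat"]
    diff_by_cell  # four identical numbers -- exactly what lsmeans reported

    # Refitting with the interactions lets each cell have its own contrast:
    full <- glm(presence ~ heat.treatment * competition.treatment * time.post.mating.hrs,
                family = binomial, data = gfptest)
    lsm  <- lsmeans(full, ~ heat.treatment | competition.treatment * time.post.mating.hrs)
    pairs(lsm)  # one control - heat contrast per cell, no longer forced equal
    ```

    Whether those interaction terms belong in the model is a separate question (the AIC comparison above suggested they do not for these data), but this is what would be needed for the contrasts to differ by cell.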