Get p-values for effects/variables from (g)lmer models

2019/08/21

Tags: psychology R

For a number of reasons (see the GLMM FAQ and here for more detail), the lmer function in the lme4 package doesn’t provide p-values for the coefficients (i.e., tests of whether the betas you got differ from 0).

If you use lme4::glmer, you’ll find p-values listed in the summary of the model. However, I found two problems with them.

  1. First, those p-values test the significance of the coefficients of the linear model, not the effects of our independent variables (i.e., whether changing the variable makes a difference). Admittedly, if each of your variables has only two levels, the significance of the coefficients can roughly stand in for the significance of the effects.

  2. Another problem with the p-values provided by glmer is that they come from Wald Z-tests, which are not a very good method (see the GLMM FAQ); the likelihood ratio test (LRT) is essentially always preferable to the Wald test. See the example here showing that Wald and LRT p-values can lead to different significance conclusions; a minimal sketch below also shows how to pull both kinds of p-values from the same model.
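
To make the contrast concrete, here is a minimal, self-contained sketch (a plain glm on the built-in mtcars data, nothing to do with this post’s experiment) of how to get both kinds of p-values for the same terms:

## Wald vs. LRT p-values on a simple logistic regression (illustration only)
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
summary(fit)$coefficients   ## "Pr(>|z|)" column = Wald Z-test p-values
drop1(fit, test = "Chisq")  ## "Pr(>Chi)" column = LRT p-values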

So, since I mainly deal with data collected from cognitive psychology experiments, I need a convenient way to calculate p-values for the experimental variables from linear mixed models.

Some suggest using the Anova function in the car package, but the p-values it provides also come from Wald chi-square tests. If you are OK with the Wald test, they can be helpful: for (g)lmer models, car::Anova addresses the first problem listed above (i.e., it gives you the significance of the effects instead of the coefficients). Note that if every factor of your glmer model has only two levels, the p-values in the Anova results (type III Wald chi-square tests) will be the same as the Wald Z-test p-values in the summary of your glmer model.

Example (data is here):

library(tidyverse)
library(lme4)
library(car)

## read the data
## DV = dependent variable
## A, B, C = fixed effects factors
## S, I = random effects factors
rtdata <- read_csv("exampledata.csv")
## convert A, B, C, S, I to factors
fctr_cols <- colnames(rtdata)[-1]
rtdata[fctr_cols] <- lapply(rtdata[fctr_cols], as_factor)

## fit an lmm and a glmm
options(contrasts = c("contr.sum", "contr.poly"))
lmm1 <- lmer(DV ~ A * B * C + (1 | S) + (1 | I), rtdata)
glmm1 <- glmer(DV ~ A * B * C + (1 | S) + (1 | I), rtdata, 
               family = Gamma(link=identity), 
               control = glmerControl(optimizer="bobyqa"))

summary(lmm1)

# Linear mixed model fit by REML ['lmerMod']
# Formula: DV ~ A * B * C + (1 | S) + (1 | I)
#    Data: rtdata

# REML criterion at convergence: 46222.6

# Scaled residuals: 
#     Min      1Q  Median      3Q     Max 
# -2.5634 -0.5519 -0.1915  0.2768  7.9204 

# Random effects:
#  Groups   Name        Variance Std.Dev.
#  I        (Intercept)  4639     68.11  
#  S        (Intercept)  9718     98.58  
#  Residual             30157    173.66  
# Number of obs: 3494, groups:  I, 118; S, 32

# Fixed effects:
#             Estimate Std. Error t value
# (Intercept) 617.2975    18.7566  32.911
# A1          -20.5304     2.9422  -6.978
# B1          -12.8787     2.9507  -4.365
# C1           -6.0019     6.9360  -0.865
# A1:B1        -8.4169     2.9450  -2.858
# A1:C1         1.3357     2.9431   0.454
# B1:C1         2.8175     2.9479   0.956
# A1:B1:C1     -0.9589     2.9447  -0.326

# Correlation of Fixed Effects:
#          (Intr) A1     B1     C1     A1:B1  A1:C1  B1:C1 
# A1       -0.002                                          
# B1       -0.001  0.001                                   
# C1        0.002 -0.003  0.001                            
# A1:B1     0.000 -0.009 -0.009  0.001                     
# A1:C1    -0.001  0.012  0.002 -0.005  0.004              
# B1:C1     0.001  0.002  0.012 -0.003 -0.005  0.001       
# A1:B1:C1  0.000  0.004 -0.005  0.000  0.013 -0.009 -0.009


summary(glmm1)

# Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
#  Family: Gamma  ( identity )
# Formula: DV ~ A * B * C + (1 | S) + (1 | I)
#    Data: rtdata
# Control: glmerControl(optimizer = "bobyqa")

#      AIC      BIC   logLik deviance df.resid 
#  44518.0  44585.8 -22248.0  44496.0     3483 

# Scaled residuals: 
#     Min      1Q  Median      3Q     Max 
# -1.6330 -0.5973 -0.2142  0.2849  9.3370 

# Random effects:
#  Groups   Name        Variance  Std.Dev.
#  I        (Intercept) 2.542e+03 50.4143 
#  S        (Intercept) 2.650e+03 51.4763 
#  Residual             7.222e-02  0.2687 
# Number of obs: 3494, groups:  I, 118; S, 32

# Fixed effects:
#             Estimate Std. Error z value Pr(>|z|)
# (Intercept)  654.865      6.926  94.545  < 2e-16 ***
# A1           -21.591      2.196  -9.832  < 2e-16 ***
# B1           -13.413      2.225  -6.030 1.64e-09 ***
# C1            -3.338      6.022  -0.554    0.579    
# A1:B1        -10.278      2.208  -4.654 3.26e-06 ***
# A1:C1          0.495      2.236   0.221    0.825    
# B1:C1          2.688      2.214   1.214    0.225    
# A1:B1:C1      -1.054      2.215  -0.476    0.634    
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Correlation of Fixed Effects:
#          (Intr) A1     B1     C1     A1:B1  A1:C1  B1:C1 
# A1        0.003                                          
# B1       -0.019 -0.027                                   
# C1        0.028  0.008 -0.013                            
# A1:B1     0.006 -0.053 -0.083  0.014                     
# A1:C1     0.002 -0.024  0.004 -0.026  0.018              
# B1:C1    -0.042  0.007 -0.030 -0.032  0.010 -0.029       
# A1:B1:C1  0.010  0.015  0.007 -0.004 -0.037 -0.070 -0.064

Call car::Anova on the lmer model:

## in this example, I want to investigate the interaction, so use type=3
car::Anova(lmm1, type=3)

# Analysis of Deviance Table (Type III Wald chisquare tests)
# 
# Response: DV
#                 Chisq Df Pr(>Chisq)    
# (Intercept) 1083.1362  1  < 2.2e-16 ***
# A             48.6906  1  2.997e-12 ***
# B             19.0495  1  1.274e-05 ***
# C              0.7488  1   0.386864    
# A:B            8.1682  1   0.004263 ** 
# A:C            0.2060  1   0.649944    
# B:C            0.9135  1   0.339185    
# A:B:C          0.1060  1   0.744694    
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Call car::Anova on the glmer model:

car::Anova(glmm1, type=3)

# Analysis of Deviance Table (Type III Wald chisquare tests)
# 
# Response: DV
#                 Chisq Df Pr(>Chisq)    
# (Intercept) 8938.7224  1  < 2.2e-16 ***
# A             96.6718  1  < 2.2e-16 ***
# B             36.3555  1  1.644e-09 ***
# C              0.3072  1     0.5794    
# A:B           21.6573  1  3.260e-06 ***
# A:C            0.0490  1     0.8248    
# B:C            1.4737  1     0.2248    
# A:B:C          0.2264  1     0.6342    
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

## Note that the values in the "Pr(>Chisq)" column here are the same as
## the "Pr(>|z|)" column in the "Fixed effects" part of summary(glmm1)

Again, the Wald test is not very good. There are other ways to get p-values for each of the variables in your experimental design; see Mixed Models: Testing Significance of Effects. Besides those, I found the afex package very useful. It combines functions from lme4 and other packages to calculate p-values for variables, making model fitting and p-value calculation relatively easy. You can use the afex::mixed function to fit the model and choose the method you want for obtaining p-values. The default method “KR” (Kenward-Roger), like method = “S” (Satterthwaite), supports only LMMs, while the other methods (“LRT” = likelihood-ratio tests and “PB” = parametric bootstrap) support both LMMs (estimated via lmer) and GLMMs (estimated via glmer). For more detail, see the afex documentation.
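
As a minimal sketch (my own, not taken from the afex docs), calling afex::mixed on the same data and formulas as above might look like this; method = "LRT" is chosen here because it works for both model types, and the family/control arguments are passed through to glmer:

library(afex)

## LRT p-values for the LMM: one row per effect, not per coefficient
lmm_afex <- mixed(DV ~ A * B * C + (1 | S) + (1 | I), data = rtdata,
                  method = "LRT")
lmm_afex

## for the Gamma GLMM, pass family (and control) through to glmer
glmm_afex <- mixed(DV ~ A * B * C + (1 | S) + (1 | I), data = rtdata,
                   family = Gamma(link = identity),
                   control = glmerControl(optimizer = "bobyqa"),
                   method = "LRT")
glmm_afex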

One big problem I encountered with afex is that fitting a very complex (G)LMM sometimes produced a lot of convergence warnings. Because I don’t know the technical details of fitting (G)LMMs, I can only search for people’s instructions and answers online and follow them to resolve the warnings. But almost all of that advice covers convergence warnings for a (g)lmer model, not for an afex::mixed model. So I decided to stay with the lme4 package and learn how to calculate p-values with the LRT method for the newly fitted model after solving the convergence problem (a sketch of typical first steps for the warnings themselves follows below).
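
For completeness, here is a sketch of the kind of first steps ?convergence suggests, applied to a hypothetical warning-producing model glmm_warn (the model name, the optimizer choice, and the iteration limit are all illustrative assumptions; none of this guarantees the warnings go away):

## retry the hypothetical model `glmm_warn` with a different
## optimizer and a higher iteration limit
glmm_retry <- update(glmm_warn,
                     control = glmerControl(optimizer = "bobyqa",
                                            optCtrl = list(maxfun = 2e5)))
## or refit with all available optimizers and compare the fits
summary(allFit(glmm_warn))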

In an LRT, the full model and a restricted model are compared using the anova function. The question and answers here show that the model generated by update(lm, .~.-FactorA) is not ideal for this purpose if the model has interactions (value ~ FactorA*FactorB), because the design matrix of the generated model is changed rather than being a subset of the design matrix used in the original model. I reproduce the example here:

set.seed(101)
d <- data.frame(A = rep(c("a1", "a2"), each = 50), B = c("b1", "b2"), value = rnorm(100))
options(contrasts=c('contr.sum','contr.poly'))
m1 <- lm(value ~ A * B, data = d)
m1

# Call:
# lm(formula = value ~ A * B, data = d)
#
# Coefficients:
# (Intercept)           A1           B1        A1:B1
#    -0.03719     -0.08678      0.12290      0.04380

m2 <- update(m1, .~. - A)
m2

# Call:
# lm(formula = value ~ B + A:B, data = d)
#
# Coefficients:
# (Intercept)           B1       Bb1:A1       Bb2:A1
#    -0.03719      0.12290     -0.04298     -0.13058

Look at the coefficients of m2: they are “B1, Bb1:A1, Bb2:A1”, not “B1, A1:B1”. According to the answers, in order to get “type 3”-like p-values, the restricted model should keep the higher-order terms of the full model. The correct way to achieve this is: (1) use the model.matrix function to manually create a design matrix, removing only the columns of the variable you want to test; (2) fit the restricted model with this manually created design matrix.
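
To see those two steps on the toy lm example above (a sketch reusing m1 and d from the snippet):

## step (1): take the full model's design matrix;
## its columns are "(Intercept)", "A1", "B1", "A1:B1"
Xd <- model.matrix(m1)
## step (2): refit, dropping only the "A1" column (column 2)
m3 <- lm(value ~ 0 + Xd[, -2], data = d)
m3
## the coefficients now correspond to (Intercept), B1 and A1:B1, a true
## subset of m1's parameterization, so anova(m1, m3) compares nested models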

By the way, the author of that question is also the author of afex, and I believe this is the method afex adopts to calculate p-values when using the LRT method.

Below is my example of how to manually conduct an LRT for a variable (A) in an lmer model:

## Say we had a glmer model that gave convergence warnings. We followed
## `?convergence` and finally fit a new model that doesn't give warnings.
## Now we want to use the LRT to test that model's fixed effects
## (illustrated below with lmer; the same steps work for glmer).

## create design matrix
X <- model.matrix(~ A * B * C, data = rtdata)

## fit restricted models dropping one design-matrix column at a time
## (dropping column 1 removes the intercept; use 2:ncol(X) instead of
## 1:ncol(X) if you want to skip that no-intercept model)
mm <- lapply(1:ncol(X), function(i) {
  lmer(DV ~ 0 + X[, -i] + (1 | S) + (1 | I), data = rtdata)
})
full_model <- lmer(DV ~ A * B * C + (1 | S) + (1 | I), data = rtdata)
names(mm) <- colnames(X)
## mm stores 8 restricted models, each of which excludes one column
## of the design matrix (the first one drops the intercept)

## use `anova` to test. For example, test the effect of A:
anova(full_model, mm$A1)

# refitting model(s) with ML (instead of REML)
# Data: rtdata
# Models:
# mm$A1: DV ~ 0 + X[, -i] + (1 | S) + (1 | I)
# full_model: DV ~ A * B * C + (1 | S) + (1 | I)
#            Df   AIC   BIC logLik deviance  Chisq Chi Df Pr(>Chisq)    
# mm$A1      10 46328 46390 -23154    46308                             
# full_model 11 46282 46350 -23130    46260 48.377      1  3.516e-12 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
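
Since mm already holds one restricted model per design-matrix column, the same comparison can be mapped over every effect at once (a convenience sketch; mm[[1]], which drops the intercept column, is skipped):

## collect the LRT p-value for each fixed effect
p_vals <- sapply(names(mm)[-1], function(nm) {
  anova(full_model, mm[[nm]])[2, "Pr(>Chisq)"]
})
p_vals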

What if a variable has more than 2 levels? Just exclude all the columns that correspond to it from the design matrix:

## generate some data: S = random effects factor; A and B = fixed effects
## factors, with A having 3 levels; DV = dependent variable
rtdata2 <- data.frame(
  S = factor(S <- rep(seq_along(n <- sample(3:8, 60, TRUE)), n)),
  A = sample(c("a1", "a2", "a3"), length(S), TRUE),
  B = sample(c("b1", "b2"), length(S), TRUE),
  DV = rnorm(length(S), 3) + rep(rnorm(length(n)), n))

full_model2 <- lmer(DV ~ A * B + (1 | S), rtdata2)

## create design matrix
X2 <- model.matrix(~ A * B, data = rtdata2)
## exclude the columns for the main effect of A
X2_A <- X2[, !grepl(pattern = "A[0-9]$", colnames(X2))]
res_model_A <- lmer(DV ~ 0 + X2_A + (1 | S), data = rtdata2)

## test the effect of A
anova(full_model2, res_model_A)

# refitting model(s) with ML (instead of REML)
# Data: rtdata2
# Models:
# res_model_A: DV ~ 0 + X2_A + (1 | S)
# full_model2: DV ~ A * B + (1 | S)
#             Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)
# res_model_A  6 1047.8 1070.8 -517.89   1035.8                         
# full_model2  8 1048.1 1078.9 -516.07   1032.1 3.6432      2     0.1618

## similarly, exclude all interaction columns containing "A" if we
## want to test the interaction effect
X2_AB <- X2[, !grepl(pattern = "A[0-9]:", colnames(X2))]
res_model_AB <- lmer(DV ~ 0 + X2_AB + (1 | S), data = rtdata2)
anova(full_model2, res_model_AB)

# refitting model(s) with ML (instead of REML)
# Data: rtdata2
# Models:
# res_model_AB: DV ~ 0 + X2_AB + (1 | S)
# full_model2: DV ~ A * B + (1 | S)
#              Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)  
# res_model_AB  6 1050.2 1073.2 -519.08   1038.2                           
# full_model2   8 1048.1 1078.9 -516.07   1032.1 6.0137      2    0.04945 *
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Take-home

The lme4::lmer function doesn’t provide p-values for coefficients. lme4::glmer provides p-values for coefficients (but not for factors/effects) using the Wald Z-test, which is not a very good method. So for lmer models, or glmer models that have factor(s) with more than 2 levels, we need other methods to obtain p-values for the effects/variables.

We can use car::Anova or other packages (e.g., pbkrtest, afex) to obtain p-values. For the LRT, in which the full model and a restricted model are compared, the restricted model should keep the same higher-order terms as the full model (e.g., “A1, B1, A1:B1” -> “B1, A1:B1”). The model.matrix function should be used to create the correct restricted models, as in the examples shown above.


