I want to conduct a meta-analysis of pre-post mean changes and stumbled upon different results from metafor's rma() and meta's metamean() functions.
First, I computed the raw mean change with escalc(), assuming a correlation between the pre and post measures of r = 0.5 because it is not reported:
library(metafor)

data$r <- 0.5
es <- escalc(
  measure = "MC",
  m1i = M_pre, sd1i = SD_pre,
  m2i = M_post, sd2i = SD_post,
  ni = n, ri = r,
  data = data
)
This results in the following data for a reprex:
structure(list(M_pre = c(0.551, 0.6, 1.32, 1.293, 1.055357, 1.7125,
0.983, 1.1, 0.581), SD_pre = c(0.261, 0.274, 0.07, 0.462, 0.4004462,
0.1875, 0.273, 0.316, 0.17475), M_post = c(0.92, 1, 1.13, 0.975,
1.067857, 1.125, 0.813, 1.111, 0.376), SD_post = c(0.352, 0.334,
0.06, 0.409, 0.3856812, 0.243, 0.488, 0.344, 0.16325), n = c(7,
7, 39, 14, 56, 8, 6, 9, 11), r = c(0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5), yi = c(-0.369, -0.4, 0.19, 0.318, -0.0125, 0.5875,
0.17, -0.011, 0.205), vi = c(0.0143075714, 0.013588, 0.0001102564,
0.0136976429, 0.0027618317, 0.0060803438, 0.0299081667, 0.0121653333,
0.0026054716)), class = "data.frame", row.names = c(NA, -9L))
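As a sanity check on the escalc() output (the formula here is my reading of the "MC" measure, but it reproduces the values above): vi is the squared standard error of the raw mean change, (SD_pre^2 + SD_post^2 - 2*r*SD_pre*SD_post)/n, e.g. for the first study:

(0.261^2 + 0.352^2 - 2 * 0.5 * 0.261 * 0.352) / 7  # = 0.01430757, equals vi[1]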
Now I used rma() and metamean() in the following way, making sure both use REML for the random-effects model; in both functions the default is inverse-variance weighting for pooling, and confidence intervals are based on the standard normal distribution by default.
meta_rma <- rma(yi, vi, data = es, method = "REML")

library(meta)

meta_metamean <- metamean(n = n,
                          mean = yi,
                          sd = sqrt(vi),  # meta uses the SD, not the variance
                          data = es,
                          fixed = FALSE,
                          method.tau = "REML")
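As a quick check of the pooled estimates (TE.random is where meta stores the random-effects estimate; object names as above):

coef(meta_rma)            # 0.0809
meta_metamean$TE.random   # 0.0763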
Alas, not only is the pooled effect size different, but so are the confidence intervals of the individual studies and the heterogeneity estimates.
Summary output rma:
Random-Effects Model (k = 9; tau^2 estimator: REML)
logLik deviance AIC BIC AICc
-2.1154 4.2307 8.2307 8.3896 10.6307
tau^2 (estimated amount of total heterogeneity): 0.0885 (SE = 0.0491)
tau (square root of estimated tau^2 value): 0.2975
I^2 (total heterogeneity / total variability): 96.22%
H^2 (total variability / sampling variability): 26.48
Test for Heterogeneity:
Q(df = 8) = 92.0790, p-val < .0001
Model Results:
estimate se zval pval ci.lb ci.ub
0.0809 0.1045 0.7737 0.4391 -0.1240 0.2858
Summary output metamean:
mean 95%-CI %W(random)
1 -0.3690 [-0.4576; -0.2804] 11.0
2 -0.4000 [-0.4864; -0.3136] 11.0
3 0.1900 [ 0.1867; 0.1933] 11.3
4 0.3180 [ 0.2567; 0.3793] 11.2
5 -0.0125 [-0.0263; 0.0013] 11.3
6 0.5875 [ 0.5335; 0.6415] 11.2
7 0.1700 [ 0.0316; 0.3084] 10.7
8 -0.0110 [-0.0831; 0.0611] 11.1
9 0.2050 [ 0.1748; 0.2352] 11.2
Number of studies: k = 9
Number of observations: o = 157
mean 95%-CI
Random effects model 0.0763 [-0.1300; 0.2825]
Quantifying heterogeneity:
tau^2 = 0.0983 [0.0441; 0.3649]; tau = 0.3135 [0.2100; 0.6041]
I^2 = 99.4% [99.3%; 99.5%]; H = 13.11 [11.82; 14.54]
Test of heterogeneity:
Q d.f. p-value
1374.87 8 < 0.0001
Details on meta-analytical method:
- Inverse variance method
- Restricted maximum-likelihood estimator for tau^2
- Q-Profile method for confidence interval of tau^2 and tau
- Untransformed (raw) means
Do you have any idea where the differences come from and how I can resolve them? Are there other solutions, or do you know which result is the "more accurate" one? I would prefer to stick with the meta package, because its forest plot is already much prettier / easier to adjust.
The sd argument of metamean() is for the SDs, not the standard errors of the means. In this case, you need to supply the SDs of the change scores, which you can get by multiplying sqrt(vi) by the square root of the sample sizes. So, with sd = sqrt(vi)*sqrt(n), you will get the same results.
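For completeness, a minimal sketch of the corrected call (the object name meta_fixed is mine; es, fixed = FALSE, and method.tau = "REML" are taken from the question):

library(meta)

meta_fixed <- metamean(n = n,
                       mean = yi,
                       sd = sqrt(vi) * sqrt(n),  # SD of the change scores (SE * sqrt(n)), not the SE
                       data = es,
                       fixed = FALSE,
                       method.tau = "REML")
summary(meta_fixed)  # pooled estimate, tau^2, etc. now match rma()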