What’s the difference between fixed and random effects meta-analysis?

If you’re familiar with the meta-analysis literature you will have heard of ‘fixed’ and ‘random’ effect meta-analysis (Borenstein et al., 2009; Gurevitch and Hedges, 1999; Gurevitch et al., 2018; Koricheva et al., 2013; Nakagawa and Santos, 2012; Nakagawa et al., 2017). Understanding the differences between these two models is fundamentally important to understanding meta-analysis more generally. Also, once you understand the difference you’ll realise why fixed-effect meta-analyses are not something we should normally apply in ecology and evolution. The assumptions are not too sensible when we are talking about highly variable meta-analytic data taken from many, many species. In fact, even a random effects meta-analysis is not going to cut it in most cases (more on that later).

Despite the limitations, these models have an important place in the meta-analytic literature. They are also excellent stepping stones to understanding the assumptions inherent to meta-analytic models and how we can build these models more appropriately for our kind of data.

This tutorial will walk you through the differences between fixed and random effect models with a focus on understanding how we are synthesising different effect sizes across studies. At the end of this tutorial I hope you understand:

  1. The distinct difference between fixed and random effect meta-analytic models
  2. How to fit these models using metafor (the most popular package for meta-analysis in R)
  3. How to interpret the output from each model
  4. How you can reproduce the results of the metafor models yourself

Simulating some meta-analytic data

To demonstrate, we can simulate some data. This is also quite useful for getting a handle on what the difference between fixed and random effects meta-analytic models looks like in practice. We’ll explore the data in depth as we move through the tutorial.

# Generate some simulated effect size data with known sampling variance assumed
# to come from a common underlying distribution
set.seed(86)  # Set seed so that we all get the same simulated results
# We will have 5 studies
stdy <- 1:5
# We know the variance for each effect
Ves <- c(0.05, 0.1, 0.02, 0.1, 0.09)
# We'll need this later but these are weights
W <- 1/Ves
# We assume they are sampled from a normal distribution with a mean effect size
# of 2
es <- rnorm(length(Ves), 2, sqrt(Ves))
# Data for our fixed effect meta-analysis
dataFE <- data.frame(stdy = stdy, es, Ves)

# Generate a second set of effect sizes, but now assume that each study effect
# does not come from the same distribution, but from a population of effect
# sizes.

# Adding 0.8 to the sampling variance adds a between-study variance of 0.8. In
# other words, each study's true effect is itself drawn from a distribution of
# effects with a variance of 0.8, and the observed effect size then deviates
# from that true effect because of sampling error.
esRE <- rnorm(length(Ves), 2, sqrt(Ves + 0.8))
# Data for our random effect meta-analysis
dataRE <- data.frame(stdy = stdy, esRE, Ves)

We can get a look at what these two simulated datasets look like (Figure 1). The black circles are the effect sizes and their standard errors (square root of the sampling error variance) from our fixed effect meta-analysis (FE) data set. In contrast, the red circles are the effect sizes and standard errors from the data generated for the random effect meta-analysis (RE) dataset. The black line is the average, true effect size (which we have set to the value 2).


Figure 1: Mean effect size for each study (error bars show the sampling standard deviation). Data simulated under a fixed effect model in black and data simulated under a random effect model in red.
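(If you want to recreate a figure like this yourself, a minimal base R sketch along the following lines would do it; this is an illustration, not the original plotting code.)

# Plot each study's simulated effect size with +/- 1 sampling standard
# deviation: FE data in black and RE data in red, offset slightly so the
# two data sets don't overlap
yrange <- range(c(dataFE$es - sqrt(Ves), dataFE$es + sqrt(Ves),
                  dataRE$esRE - sqrt(Ves), dataRE$esRE + sqrt(Ves)))
plot(stdy - 0.1, dataFE$es, xlim = c(0.5, 5.5), ylim = yrange, pch = 16,
     xlab = "Study", ylab = "Effect size")
points(stdy + 0.1, dataRE$esRE, pch = 16, col = "red")
arrows(stdy - 0.1, dataFE$es - sqrt(Ves), stdy - 0.1, dataFE$es + sqrt(Ves),
       angle = 90, code = 3, length = 0.05)
arrows(stdy + 0.1, dataRE$esRE - sqrt(Ves), stdy + 0.1, dataRE$esRE + sqrt(Ves),
       angle = 90, code = 3, length = 0.05, col = "red")
abline(h = 2)  # the true mean effect size we simulated (2)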

We notice a few important differences here. The variance across studies in the RE data set is much larger than in the FE data set. This is what we would expect, because we have added in a between-study variance. Now, let’s use a common meta-analysis package, metafor, to analyse these data sets.

Fixed Effect Meta-Analysis

What is a fixed effect meta-analysis anyway? If it wasn’t clear from the simulated data above, let’s be more formal. We simulated the dataFE data according to the following model, the fixed effect meta-analytic model:

\[ y_{i} = \mu + m_{i} \\ m_{i} \sim N(0, v_{i}) \] where \(y_{i}\) is the ith effect size estimate and \(m_{i}\) is the sampling error (deviation from \(\mu\)) for effect size i. Sampling deviations are assumed to be drawn from a normal distribution (N) with a mean of 0 and a sampling variance, \(v_{i}\). You may notice from this model, and the simulated data, that we assume there is a single overall mean, \(\mu\), underlying the pooled effect sizes, that effects are independently and identically distributed, and that the only deviation from \(\mu\) is the result of sampling error, which is related to the sample size (and hence power) of each study. For example, the study with \(v_{i} = 0.02\) will, on average, land closer to \(\mu\) than the studies with \(v_{i} = 0.1\). There are no additional sources of variance added to effect size estimates (Figure 2a).

Fixed effect meta-analysis with metafor

Now that we understand the fixed effect model, both from formal statistical notation and by looking at how the data were generated, let’s fit a fixed effect model in metafor.

# Run a fixed effect meta-analysis using the FE dataset.
metafor::rma(yi = es, vi = Ves, method = "FE", data = dataFE)
## 
## Fixed-Effects Model (k = 5)
## 
## I^2 (total heterogeneity / total variability):   0.00%
## H^2 (total variability / sampling variability):  0.56
## 
## Test for Heterogeneity:
## Q(df = 4) = 2.2340, p-val = 0.6928
## 
## Model Results:
## 
## estimate      se     zval    pval   ci.lb   ci.ub
##   2.0731  0.0994  20.8459  <.0001  1.8782  2.2680  *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

How to interpret the model output?

Task

What do the model results and Q values (under Test for Heterogeneity) mean?


Answer

The output suggests that the average estimate is 2.07 with a standard error of 0.1. It also provides us with a Q statistic, which measures the amount of heterogeneity among effect sizes. If Q is large and its p-value is small, there is more heterogeneity among effect sizes than we would expect if they were all generated from a common underlying distribution, and that common distribution is the core assumption of a fixed effect meta-analysis. Here Q is small and the p-value is large, so the homogeneity assumption is supported. This is what we expect, because we specifically generated these data under the assumption that they come from the same distribution. We can see this if we look at how we generated the data: rnorm(length(Ves), 2, sqrt(Ves)). This draws each effect size from a normal distribution whose standard deviation, sqrt(Ves), is defined only by its sampling variability.


Fixed effect meta-analysis by hand!

What is the model above doing, though? And what is the logic behind the calculations? The best way to understand this is to calculate the values by hand. Basically, the effect sizes are weighted by the inverse of their sampling error variance when deriving the pooled estimate and its variance. Let’s write down what we are computing and then calculate it by hand to see what’s happening:
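With weights \(w_{i} = 1/v_{i}\) (the W we defined when simulating the data), the pooled estimate, its variance, and its standard error are:

\[ \hat{\mu} = \frac{\sum_{i} w_{i} y_{i}}{\sum_{i} w_{i}}, \qquad Var(\hat{\mu}) = \frac{1}{\sum_{i} w_{i}}, \qquad SE(\hat{\mu}) = \sqrt{\frac{1}{\sum_{i} w_{i}}} \]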

# Calculate pooled effect size
EsP.FE <- sum(W * dataFE$es)/sum(W)
EsP.FE
## [1] 2.073
# Calculate the pooled variance around estimate
VarEsP.FE <- 1/sum(W)
VarEsP.FE
## [1] 0.00989
# Calculate the standard error around estimate
SE.EsP.FE <- sqrt(VarEsP.FE)
SE.EsP.FE
## [1] 0.09945
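If you want to convince yourself the numbers line up, you can store the metafor fit and compare it directly to the hand calculations (a small sketch; fe_fit is just a name made up here):

# Store the fixed effect fit and compare it against the hand calculations
fe_fit <- metafor::rma(yi = es, vi = Ves, method = "FE", data = dataFE)
coef(fe_fit)   # pooled estimate from metafor
EsP.FE         # our hand-calculated pooled estimate
fe_fit$se      # standard error from metafor
SE.EsP.FE      # our hand-calculated standard error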

Wow! This is so cool. We just did a fixed effect meta-analysis by hand…and, look, it matches the model output perfectly. The math is not so scary after all! But what about this extra statistic, Q? How do we derive this?
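Q is an inverse-variance weighted sum of squared deviations of the effect sizes from the pooled estimate. The second form below is the computational shortcut used in the code, and under the homogeneity assumption Q follows a chi-squared distribution with \(k - 1\) degrees of freedom (here df = 4 for our 5 studies):

\[ Q = \sum_{i} w_{i} (y_{i} - \hat{\mu})^2 = \sum_{i} w_{i} y_{i}^2 - \frac{\left(\sum_{i} w_{i} y_{i}\right)^2}{\sum_{i} w_{i}} \]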

# Now let's calculate our Q. Q is the total amount of heterogeneity in our data.
Q.fe <- sum(W * (dataFE$es^2)) - (sum(W * dataFE$es)^2/sum(W))
Q.fe
## [1] 2.234

Cool. So this value also matches up nicely. If you didn’t catch all that, we can summarise it in Table 1 below.

Random Effect Meta-Analysis

Now let’s turn to the formal definition of a random effects meta-analysis. At the outset it looks very similar to our fixed effect model, but with a critically important addition: \[ y_{i} = \mu + s_{i} + m_{i} \\ m_{i} \sim N(0, v_{i}) \\ s_{i} \sim N(0, \tau^2) \] Again, \(y_{i}\) is the ith effect size estimate and \(m_{i}\) is the sampling error (deviation from \(\mu\)) for effect size i. Sampling deviations are assumed to be drawn from a normal distribution (N) with a mean of 0 and a sampling variance, \(v_{i}\). However, you will notice that we now have an additional source of variation, \(s_{i}\), which is the study-specific deviation. Here we assume that every study has its own true mean effect size, which is itself sampled from a normal distribution with a between-study variance, \(\tau^2\) (read: tau-squared).
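Marginally, this means each observed effect size is drawn from a normal distribution whose variance combines both sources of variation:

\[ y_{i} \sim N(\mu, \, v_{i} + \tau^2) \]

which is exactly how we simulated the RE data above: rnorm(length(Ves), 2, sqrt(Ves + 0.8)), i.e. \(\tau^2 = 0.8\).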

You’ll notice from this model, and the simulated data, that we are again assuming that there is a single overall mean and that effects are independently and identically distributed. However, now any deviation from the overall mean \(\mu\) is the result not only of sampling error (\(m_{i}\)) but also of studies varying in their true mean effect sizes (\(s_{i}\)). Beyond these two sources, there are no additional sources of variance added to effect size estimates.
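To see how this model is fit in practice, here is a minimal sketch using metafor (assuming its default REML estimator for \(\tau^2\); the output is left for you to inspect):

# Fit a random effects meta-analysis to the RE dataset. REML is metafor's
# default estimator for the between-study variance tau^2. With only 5 studies
# the tau^2 estimate will be noisy, but it should reflect the 0.8 we simulated.
metafor::rma(yi = esRE, vi = Ves, method = "REML", data = dataRE)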