Why do we do meta-analysis?

Meta-analysis is the gold standard for research synthesis across disciplines. We synthesise studies to gain broader insight into the efficacy of treatments and/or the relationships between variables. In physiology, ecology and evolution, we are also dealing with many different populations and species. As such, we're not just interested in understanding what the overall effect is (in many cases we may not even care); we are mainly focused on understanding what factors (e.g., biological, methodological) explain variation in effects (Gurevitch et al., 2018; Lajeunesse, 2010; Noble et al., 2022). Indeed, reporting measures of variability among effects, referred to as 'heterogeneity' in meta-analysis, is essential for meta-analysis reporting and for placing effects in context (Borenstein, 2019; Gurevitch et al., 2018; Nakagawa and Santos, 2012; Nakagawa et al., 2017; O'Dea et al., 2021). A number of heterogeneity metrics are commonly used and reported in the meta-analytic literature (Borenstein, 2019; Nakagawa and Santos, 2012; Nakagawa et al., 2017). We'll discuss some of the key ones.
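As a taste of what these metrics look like in practice, here is a minimal sketch using metafor. It assumes the example dataset `dat.curtis1998` from the metadat package; with your own data, you would supply your effect sizes and sampling variances instead.

```r
library(metafor)

# Compute log response ratios (ROM) from the dat.curtis1998 example data
# (CO2 effects on plant biomass), shipped with the metadat package
dat <- escalc(measure = "ROM",
              m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i,
              data = metadat::dat.curtis1998)

# Fit a random-effects model: yi = effect sizes, vi = sampling variances
res <- rma(yi, vi, data = dat)

# Commonly reported heterogeneity metrics stored in the fitted model:
res$tau2  # tau^2: estimated between-study variance (absolute heterogeneity)
res$I2    # I^2: % of total variance attributable to heterogeneity
res$QE    # Cochran's Q statistic (p-value in res$QEp)
```

Printing `summary(res)` shows the same quantities alongside the overall effect estimate.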

Meta-analysts have also worked hard to develop tools for understanding different forms of publication practices and biases in the scientific literature. Such biases can occur when studies reporting non-significant results, or results opposite to what was predicted, never turn up in systematic searches (i.e., the 'file-drawer' problem; Jennions et al., 2013). Alternatively, biases can result from selective reporting or 'p-hacking'. Visual and quantitative tools have been developed to try to identify such biases and 'correct' for their effects on meta-analytic results (Jennions et al., 2013; Nakagawa et al., 2022; Rothstein et al., 2005). Having said that, aside from working hard to incorporate 'grey literature' (unpublished theses, government reports, etc.) and work published in languages other than English, there is little one can truly do to counteract publication bias beyond a few simple tools. In many cases we cannot know for certain what isn't published, or how a sample of existing work on a topic might be biased. Nonetheless, exploring the possibility of publication bias and its possible effects on conclusions is a core component of meta-analysis (O'Dea et al., 2021).
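A few of these visual and quantitative tools are available directly in metafor. The sketch below assumes a fitted random-effects model on the metadat example data `dat.curtis1998`; it is illustrative only, since the interpretation of each tool depends heavily on the dataset.

```r
library(metafor)

# Fit a simple random-effects model to example data from metadat
dat <- escalc(measure = "ROM",
              m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i,
              data = metadat::dat.curtis1998)
res <- rma(yi, vi, data = dat)

# Funnel plot: asymmetry can hint at publication bias
# (though true heterogeneity can also produce asymmetry)
funnel(res)

# Egger-style regression test for funnel plot asymmetry
regtest(res)

# Trim-and-fill: imputes putatively 'missing' effects and
# re-estimates the overall effect with them included
trimfill(res)
```

None of these tools can prove or rule out publication bias; they are best treated as sensitivity analyses.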

Applying the statistical tools of meta-analysis is therefore fundamental to establishing the generality of findings, testing hypotheses about what might drive variation among effects and identifying publication biases that may result in a skewed picture of existing work. Such insights have the potential to shape future research in important ways (Koricheva and Gurevitch, 2013).

How does meta-analysis work?

Meta-analysis is a way to synthesise effect sizes using a special (or maybe not so special) set of models that account for each effect size's sampling variance. In other words, meta-analyses are:

Statistical methods and techniques for aggregating, summarizing, and drawing inferences from collections of studies

Why would we want to account for a given study's sampling variance? Studies vary greatly in their sample size, and thus in their power to detect effects. As meta-analysts, we want to give greater weight to effect sizes from studies with higher power, because their estimates are less affected by sampling variance.

Marc Lajeunesse does a brilliant job explaining the goals and types of models that meta-analysts use. In the following video he describes why weighting in meta-analysis is so important.

We can effectively think of a meta-analysis as a weighted regression model in which the weights are the inverse sampling variance of each effect size. Weights are calculated differently depending on the meta-analytic model in question (more on that in later tutorials).
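To make the weighting concrete, here is a small sketch (again assuming the metadat example data `dat.curtis1998`) contrasting the weights under an equal-effects model, where weights are proportional to 1/vi, with those under a random-effects model, where the between-study variance tau² is added to each sampling variance.

```r
library(metafor)

dat <- escalc(measure = "ROM",
              m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i,
              data = metadat::dat.curtis1998)

# Equal-effects model: each effect is weighted by 1 / vi
ee <- rma(yi, vi, data = dat, method = "EE")
head(weights(ee))   # weights (as percentages) used in the model
head(1 / dat$vi)    # the raw inverse sampling variances they reflect

# Random-effects model: weights become 1 / (vi + tau^2), so precise
# studies dominate less once between-study variance is accounted for
re <- rma(yi, vi, data = dat)
head(1 / (dat$vi + re$tau2))
```

Comparing the two sets of weights shows why model choice matters: as tau² grows, the random-effects weights become more even across studies.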

Throughout this workshop we will highlight the features of meta-analysis discussed above. We'll focus on dissecting how meta-analytic models work, and on the aspects of effect sizes and modelling that users should be aware of when meta-analysing a body of work for their specific question of interest. An important part of doing meta-analysis well (which can be hard) is being critical of the data and of the assumptions inherent to both effect sizes and statistical models. Meta-analysis also has some unique sources of non-independence (Gleser and Olkin, 2009; Gurevitch and Hedges, 1999; Nakagawa and Santos, 2012; Noble et al., 2017) that aren't always apparent. We'll try to cover these and discuss some ways you can protect your inferences from their impacts.

Meta-analysis with metafor!

Throughout our workshop we will make use of the metafor package (Viechtbauer, 2010), which has substantial capabilities. If you're not familiar with metafor, we suggest having a look at Wolfgang Viechtbauer's fantastic useR! talk below. Having said that, we will go over the various functions as we move through the tutorials.


Borenstein, M. (2019). Heterogeneity in meta-analysis. In: The Handbook of Research Synthesis and Meta-Analysis (eds Cooper, H., Hedges, L. V. and Valentine, J. C.), pp. 454–466. New York: Russell Sage Foundation.
Gleser, L. J. and Olkin, I. (2009). Stochastically dependent effect sizes. In: The Handbook of Research Synthesis and Meta-analysis (eds Cooper H, Hedges LV, Valentine JC). Russell Sage Foundation, New York. pp 357–376.
Gurevitch, J. and Hedges, L. V. (1999). Statistical issues in ecological meta-analyses. Ecology 80, 1142–1149.
Gurevitch, J., Koricheva, J., Nakagawa, S. and Stewart, G. (2018). Meta-analysis and the science of research synthesis. Nature 555, 176–182.
Jennions, M. D., Lortie, C. J., Rosenberg, M. S. and Rothstein, H. R. (2013). Publication and related biases. In: Handbook of Meta-Analysis in Ecology and Evolution (eds J. Koricheva, J. Gurevitch & K. Mengersen). Princeton University Press, Princeton and Oxford. pp 207–236.
Koricheva, J. and Gurevitch, J. (2013). Place of meta-analysis among other methods of research synthesis. In: Handbook of Meta-Analysis in Ecology and Evolution (eds J. Koricheva, J. Gurevitch & K. Mengersen), Princeton University Press, Princeton, New Jersey. pp 3–13.
Lajeunesse, M. J. (2010). Achieving synthesis with meta-analysis by combining and comparing all available studies. Ecology 91, 2561–2564.
Nakagawa, S. and Santos, E. S. (2012). Methodological issues and advances in biological meta-analysis. Evolutionary Ecology 26, 1253–1274.
Nakagawa, S., Noble, D. W. A., Senior, A. M. and Lagisz, M. (2017). Meta-evaluation of meta-analysis: Ten appraisal questions for biologists. BMC Biology 15, 18. DOI: 10.1186/s12915-017-0357-7.
Nakagawa, S., Lagisz, M., Jennions, M. D., Koricheva, J., Noble, D. W. A., Parker, T. H., Sánchez-Tójar, A., Yang, Y. and O’Dea, R. E. (2022). Methods for testing publication bias in ecological and evolutionary meta-analyses. Methods in Ecology and Evolution 13, 4–21.
Noble, D. W. A., Lagisz, M., O’Dea, R. E. and Nakagawa, S. (2017). Non‐independence and sensitivity analyses in ecological and evolutionary meta‐analyses. Molecular Ecology 26, 2410–2425.
Noble, D. W. A., Pottier, P., Lagisz, M., Burke, S., Drobniak, S. M., O’Dea, R. E. and Nakagawa, S. (2022). Meta-analytic approaches and effect sizes to account for “nuisance heterogeneity” in comparative physiology. Journal of Experimental Biology 225, jeb243225.
O’Dea, R. E., Lagisz, M., Jennions, M. D., Koricheva, J., Noble, D. W. A., Parker, T. H., Gurevitch, J., Page, M. J., Stewart, G., Moher, D., et al. (2021). Preferred reporting items for systematic reviews and meta-analyses in ecology and evolutionary biology: A PRISMA extension. Biological Reviews, doi: 10.1111/brv.12721.
Rothstein, H. R., Sutton, A. J. and Borenstein, M. (2005). Publication bias in meta-analysis: Prevention, assessment and adjustments. Wiley, Chichester.
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software 36, 1–48. URL: https://www.jstatsoft.org/v36/i03/.

Session Information

R version 4.0.5 (2021-03-31)

Platform: x86_64-apple-darwin17.0 (64-bit)

attached base packages: stats, graphics, grDevices, utils, datasets, methods and base

other attached packages: vembedr(v.0.1.5), equatags(v.0.1.1), mathjaxr(v.1.6-0), pander(v.0.6.4), orchaRd(v.2.0), forcats(v.0.5.1), stringr(v.1.4.0), dplyr(v.1.0.9), purrr(v.0.3.4), readr(v.2.1.2), tidyr(v.1.2.0), tibble(v.3.1.7), ggplot2(v.3.3.6), tidyverse(v.1.3.1), flextable(v.0.6.10), metafor(v.3.5-4), metadat(v.1.2-0) and Matrix(v.1.4-0)

loaded via a namespace (and not attached): httr(v.1.4.3), sass(v.0.4.1), jsonlite(v.1.8.0), modelr(v.0.1.8), bslib(v.0.3.1), assertthat(v.0.2.1), cellranger(v.1.1.0), yaml(v.2.3.5), remotes(v.2.4.2), gdtools(v.0.2.3), locatexec(v.0.1.1), pillar(v.1.7.0), backports(v.1.4.1), lattice(v.0.20-45), glue(v.1.6.2), uuid(v.1.1-0), digest(v.0.6.29), rvest(v.1.0.2), colorspace(v.2.0-3), htmltools(v.0.5.2), pkgconfig(v.2.0.3), broom(v.0.8.0), haven(v.2.5.0), bookdown(v.0.24), scales(v.1.2.0), xslt(v.1.4.3), officer(v.0.4.1), tzdb(v.0.3.0), generics(v.0.1.2), ellipsis(v.0.3.2), pacman(v.0.5.1), withr(v.2.5.0), klippy(v., cli(v.3.3.0), magrittr(v.2.0.3), crayon(v.1.5.1), readxl(v.1.4.0), evaluate(v.0.15), fs(v.1.5.2), fansi(v.1.0.3), nlme(v.3.1-152), xml2(v.1.3.3), tools(v.4.0.5), data.table(v.1.14.2), hms(v.1.1.1), lifecycle(v.1.0.1), munsell(v.0.5.0), reprex(v.2.0.1), zip(v.2.2.0), compiler(v.4.0.5), jquerylib(v.0.1.4), systemfonts(v.1.0.3), rlang(v.1.0.2), grid(v.4.0.5), rstudioapi(v.0.13), base64enc(v.0.1-3), rmarkdown(v.2.14), gtable(v.0.3.0), DBI(v.1.1.3), curl(v.4.3.2), R6(v.2.5.1), lubridate(v.1.8.0), knitr(v.1.39), fastmap(v.1.1.0), utf8(v.1.2.2), stringi(v.1.7.6), Rcpp(v., vctrs(v.0.4.1), dbplyr(v.2.2.0), tidyselect(v.1.1.2) and xfun(v.0.31)