Meta-analysis is the gold standard for research synthesis across disciplines. We synthesise studies to gain broader insights into the efficacy of treatments and/or the relationships between variables. In physiology, ecology and evolution, we are also dealing with many different populations and species. As such, we're not just interested in estimating the overall effect (in many cases, we may not even care about it); we are mainly focused on understanding what factors (e.g., biological, methodological) explain variation in effects (Gurevitch et al., 2018; Lajeunesse, 2010; Noble et al., 2022). Indeed, reporting measures of variability among effects, referred to as 'heterogeneity' in meta-analysis, is essential to meta-analysis reporting and to placing effects within context (Borenstein, 2019; Gurevitch et al., 2018; Nakagawa and Santos, 2012; Nakagawa et al., 2017; O'Dea et al., 2021). A number of heterogeneity metrics are commonly reported in the meta-analytic literature (Borenstein, 2019; Nakagawa and Santos, 2012; Nakagawa et al., 2017), and we'll discuss some of the key ones.
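Two of the most commonly reported heterogeneity metrics, Cochran's Q and I², can be computed directly from a set of effect sizes and their sampling variances. Here is a minimal base-R sketch using made-up effect sizes purely for illustration (in practice, `metafor` reports these for you):

```r
# Hypothetical effect sizes (yi) and sampling variances (vi); illustrative only
yi <- c(0.30, 0.10, 0.55, -0.05, 0.42)
vi <- c(0.02, 0.05, 0.01, 0.04, 0.03)

w  <- 1 / vi                       # inverse-variance weights
mu <- sum(w * yi) / sum(w)         # common-effect pooled estimate
Q  <- sum(w * (yi - mu)^2)         # Cochran's Q statistic
df <- length(yi) - 1
I2 <- max(0, (Q - df) / Q) * 100   # I^2: % of variation beyond sampling error
round(c(Q = Q, I2 = I2), 2)
```

I² rescales Q to describe the proportion of total variation among effects that is not attributable to sampling error, which is why it is so widely reported alongside Q.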

Meta-analysts have also worked hard to develop tools for understanding different forms of publication practices and biases within the scientific literature. Such biases can occur if studies reporting non-significant results, or results opposite to what was predicted, are not found in systematic searches (i.e., the 'file-drawer' problem; Jennions et al., 2013). Alternatively, biases could result from selective reporting or 'p-hacking'. Visual and quantitative tools have been developed to identify such biases and 'correct' for their effects on meta-analytic results (Jennions et al., 2013; Nakagawa et al., 2022; Rothstein et al., 2005). Having said that, aside from working hard to incorporate 'gray literature' (unpublished theses, government reports, etc.) and to include work published in languages other than English, there is little one can truly do to counteract publication biases beyond a few simple tools. In many cases, we cannot know for certain what isn't published, or how a sample of existing work on a topic might be biased. Nonetheless, exploring the possibility of publication bias and its possible effects on conclusions is a core component of meta-analysis (O'Dea et al., 2021).
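One widely used quantitative tool is Egger-style regression, which tests for funnel-plot asymmetry by regressing effect sizes on their standard errors, weighted by precision. A minimal base-R sketch with made-up data (the `metafor` package provides a full implementation via `regtest()`):

```r
# Hypothetical effect sizes and sampling variances; illustrative only.
# Smaller (noisier) studies here report larger effects, mimicking the
# pattern expected under publication bias / small-study effects.
yi  <- c(0.62, 0.35, 0.28, 0.15, 0.10, 0.05)
vi  <- c(0.090, 0.050, 0.030, 0.015, 0.008, 0.004)
sei <- sqrt(vi)

# Egger-style regression: effect size on standard error, weighted by precision.
# A slope far from zero suggests funnel-plot asymmetry.
fit <- lm(yi ~ sei, weights = 1 / vi)
summary(fit)$coefficients
```

A significant positive slope here would be consistent with small-study effects, though asymmetry can also arise from genuine heterogeneity, so such tests should be interpreted cautiously.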

Applying the statistical tools of meta-analysis is therefore fundamental to establishing the generality of findings, testing hypotheses about what might drive variation among effects and identifying publication biases that may result in a skewed picture of existing work. Such insights have the potential to shape future research in important ways (Koricheva and Gurevitch, 2013).

Meta-analysis is a way to synthesise effect sizes using a special (or maybe not so special) set of models that account for each effect size's sampling variance. In other words, meta-analyses are:

> Statistical methods and techniques for aggregating, summarizing, and drawing inferences from collections of studies

Why would we want to account for a given study's sampling variance? Studies vary greatly in their sample size, and thus in their power to detect effects. As meta-analysts, we want to give greater weight to effect sizes from high-powered studies, because their estimates are less affected by sampling variance and therefore more likely to be close to the true effect.

Marc Lajeunesse does a brilliant job explaining the goals and types of models that meta-analysts use. In the following video he describes why weighting in meta-analysis is so important.

We can effectively think of a meta-analysis as a weighted regression model with the weights being the inverse sampling variance for each effect size. Weights are calculated differently depending on the meta-analytic model in question (more on that in later tutorials).
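To make that concrete, here is a minimal base-R sketch (with made-up effect sizes) showing that the common-effect pooled estimate, computed by hand with inverse-variance weights, matches the intercept of a weighted regression:

```r
# Hypothetical effect sizes and sampling variances; illustrative only
yi <- c(0.30, 0.10, 0.55, -0.05, 0.42)
vi <- c(0.02, 0.05, 0.01, 0.04, 0.03)

# Common-effect pooled estimate by hand: weights are the inverse variances
w <- 1 / vi
mu_hand <- sum(w * yi) / sum(w)

# The same estimate falls out of an intercept-only weighted regression
fit <- lm(yi ~ 1, weights = w)
c(by_hand = mu_hand, weighted_lm = coef(fit)[[1]])
```

Note that the equivalence holds for the point estimate only: `lm()` scales its standard errors by the residual variance, whereas meta-analytic models treat the sampling variances as known, which is one reason dedicated packages such as `metafor` are used in practice.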

Throughout this workshop we will highlight the various features of meta-analysis discussed above. We'll focus on dissecting how meta-analytic models work, and on important aspects of effect sizes and modelling that users should be aware of when meta-analysing a body of work for their specific question of interest. An important aspect of doing meta-analysis well (which can be hard) is being critical of the data and the assumptions inherent to both effect sizes and statistical models. Meta-analysis also has some unique sources of non-independence (Gleser and Olkin, 2009; Gurevitch and Hedges, 1999; Nakagawa and Santos, 2012; Noble et al., 2017) that aren't always apparent. We'll cover these and discuss some ways in which you can protect your inferences from their impacts.

`metafor`

Throughout our workshop we will make use of the `metafor` package (Viechtbauer, 2010). It has substantial capabilities. If you're not familiar with `metafor`, we would suggest having a look at Wolfgang's fantastic `UseR` talk below. Having said that, we will go over the various functions as we move through the tutorials.

**R version 4.0.5 (2021-03-31)**

**Platform:** x86_64-apple-darwin17.0 (64-bit)

**attached base packages:**
*stats*, *graphics*, *grDevices*, *utils*, *datasets*, *methods* and *base*

**other attached packages:**
*vembedr(v.0.1.5)*, *equatags(v.0.1.1)*, *mathjaxr(v.1.6-0)*, *pander(v.0.6.4)*, *orchaRd(v.2.0)*, *forcats(v.0.5.1)*, *stringr(v.1.4.0)*, *dplyr(v.1.0.9)*, *purrr(v.0.3.4)*, *readr(v.2.1.2)*, *tidyr(v.1.2.0)*, *tibble(v.3.1.7)*, *ggplot2(v.3.3.6)*, *tidyverse(v.1.3.1)*, *flextable(v.0.6.10)*, *metafor(v.3.5-4)*, *metadat(v.1.2-0)* and *Matrix(v.1.4-0)*

**loaded via a namespace (and not attached):**
*httr(v.1.4.3)*, *sass(v.0.4.1)*, *jsonlite(v.1.8.0)*, *modelr(v.0.1.8)*, *bslib(v.0.3.1)*, *assertthat(v.0.2.1)*, *cellranger(v.1.1.0)*, *yaml(v.2.3.5)*, *remotes(v.2.4.2)*, *gdtools(v.0.2.3)*, *locatexec(v.0.1.1)*, *pillar(v.1.7.0)*, *backports(v.1.4.1)*, *lattice(v.0.20-45)*, *glue(v.1.6.2)*, *uuid(v.1.1-0)*, *digest(v.0.6.29)*, *rvest(v.1.0.2)*, *colorspace(v.2.0-3)*, *htmltools(v.0.5.2)*, *pkgconfig(v.2.0.3)*, *broom(v.0.8.0)*, *haven(v.2.5.0)*, *bookdown(v.0.24)*, *scales(v.1.2.0)*, *xslt(v.1.4.3)*, *officer(v.0.4.1)*, *tzdb(v.0.3.0)*, *generics(v.0.1.2)*, *ellipsis(v.0.3.2)*, *pacman(v.0.5.1)*, *withr(v.2.5.0)*, *klippy(v.0.0.0.9500)*, *cli(v.3.3.0)*, *magrittr(v.2.0.3)*, *crayon(v.1.5.1)*, *readxl(v.1.4.0)*, *evaluate(v.0.15)*, *fs(v.1.5.2)*, *fansi(v.1.0.3)*, *nlme(v.3.1-152)*, *xml2(v.1.3.3)*, *tools(v.4.0.5)*, *data.table(v.1.14.2)*, *hms(v.1.1.1)*, *lifecycle(v.1.0.1)*, *munsell(v.0.5.0)*, *reprex(v.2.0.1)*, *zip(v.2.2.0)*, *compiler(v.4.0.5)*, *jquerylib(v.0.1.4)*, *systemfonts(v.1.0.3)*, *rlang(v.1.0.2)*, *grid(v.4.0.5)*, *rstudioapi(v.0.13)*, *base64enc(v.0.1-3)*, *rmarkdown(v.2.14)*, *gtable(v.0.3.0)*, *DBI(v.1.1.3)*, *curl(v.4.3.2)*, *R6(v.2.5.1)*, *lubridate(v.1.8.0)*, *knitr(v.1.39)*, *fastmap(v.1.1.0)*, *utf8(v.1.2.2)*, *stringi(v.1.7.6)*, *Rcpp(v.1.0.8.3)*, *vctrs(v.0.4.1)*, *dbplyr(v.2.2.0)*, *tidyselect(v.1.1.2)* and *xfun(v.0.31)*