orchaRd 2.0 allows users to create pretty orchard plots that contain both confidence intervals (CIs) and prediction intervals (PIs) around point estimates for different categorical moderators, plots the effect-size distribution over top of such estimates, and weights effect sizes by their precision (1/standard error, SE) or by sample size. orchaRd takes a metafor object of class rma.mv or rma (Viechtbauer 2010) and plots the results of the meta-analytic or meta-regression model.

orchaRd 2.0 now allows users to take meta-regression models with a multitude of moderator variables, categorical or continuous, along with heterogeneous residual variance models. This is achieved by allowing the user to produce marginalised mean estimates using the emmeans package. For plotting, orchaRd uses ggplot2 (Wickham 2009), and as such, layers can be added directly to make plots customizable to the user’s needs. The function orchard_plot uses the principles of bee swarm plots to visualise individual effect sizes (Eklund 2012; see also Clarke and Sherrill-Mix 2017; Sherrill-Mix and Clarke 2017).
In addition to orchard plots, we have another plotting function caterpillars
to draw caterpillar(s) plots. Furthermore, the package orchaRd
comes with two functions that are useful for multilevel meta-analysis and meta-regression (i.e., rma.mv
): 1) i2_ml
to calculate the ‘total’ I2 along with I2 for each level (sensu Nakagawa S and Santos 2012; see also A. M. Senior et al. 2016) and 2) r2_ml
to calculate marginal and conditional R2 for mixed models (sensu S. Nakagawa and Schielzeth 2013).
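For orientation, here is a minimal sketch of how these two helpers are called (it assumes mod is a fitted metafor rma.mv object; both functions are demonstrated on real models later in this vignette):

# Minimal sketch; `mod` is assumed to be a fitted metafor::rma.mv object
I2 <- orchaRd::i2_ml(mod)   # total I2 plus I2 for each random-effect level
R2 <- orchaRd::r2_ml(mod)   # marginal and conditional R2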
To cite orchaRd
2.0 in publications one can use the following reference:
Shinichi Nakagawa, Malgorzata Lagisz, Rose E. O’Dea, Patrice Pottier, Joanna Rutkowska, Alistair M. Senior, Yefeng Yang, Daniel W.A. Noble. 2023. orchaRd 2.0: An R package for visualizing meta-analyses with orchard plots. Methods in Ecology and Evolution, https://doi.org/10.1111/2041-210X.14152 (preprint = EcoEvoRxiv, https://doi.org/10.32942/X2QC7).
For earlier versions please cite:
Nakagawa, S., Lagisz, M., O’Dea, R. E., Rutkowska, J., Yang, Y., Noble, D. W., & Senior, A. M. (2021). The Orchard Plot: Cultivating Forest Plots for Use in Ecology, Evolution and Beyond. Research Synthesis Methods, 12: 4-12, https://doi.org/10.1002/jrsm.1424 (preprint = EcoEvoRxiv https://doi.org/10.32942/osf.io/epqa7)
To install orchaRd use the following code in R:

#install.packages("pacman")
rm(list = ls())
devtools::install_github("daniel1noble/orchaRd", ref = "main", force = TRUE)
pacman::p_load(devtools, tidyverse, metafor, patchwork, R.rsp, orchaRd, emmeans, ape, phytools, flextable)

If you want to download the development branch, which contains newer features, you can execute the following code:

rm(list = ls())
devtools::install_github("daniel1noble/orchaRd", ref = "dev", force = TRUE)
pacman::p_load(devtools, tidyverse, metafor, patchwork, R.rsp, orchaRd, emmeans, ape, phytools, flextable, rotl, clubSandwich)

It’s important to note that this version may not be as stable as the master branch.

Installation will make the primary functions accessible to users along with their help files. You will also need the tidyverse and metafor packages.
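If you prefer not to use pacman, a minimal equivalent using library() calls (covering only the core packages used throughout this vignette) is:

library(orchaRd)
library(metafor)
library(tidyverse)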
In this vignette we walk through a number of case studies and show how to create beautiful-looking orchard plots. We cover three case studies that make use of different effect sizes and moderators. The data sets associated with each case study come as part of the orchaRd package.
English and Uller (2016) performed a systematic review and meta-analysis on the effects of early-life dietary restriction (a reduction in a major component of the diet without malnutrition; e.g. caloric restriction) on average age at death, using the standardised mean difference (often called d). They found that, across the whole data set, there was little evidence for an effect of dietary restriction on mean age at death. Here we’ll use the data set to first calculate the effect size measures and then fit an intercept-only meta-analytic model.
data(english)
# We need to calculate the effect sizes, in this case d
english <- escalc(measure = "SMD", n1i = NStartControl, sd1i = SD_C, m1i = MeanC,
                  n2i = NStartExpt, sd2i = SD_E, m2i = MeanE,
                  var.names = c("SMD", "vSMD"), data = english)

english_MA <- rma.mv(yi = SMD, V = vSMD,
                     random = list(~ 1 | StudyNo, ~ 1 | EffectID), data = english)
summary(english_MA)
Multivariate Meta-Analysis Model (k = 77; method: REML)
logLik Deviance AIC BIC AICc
-44.5943 89.1887 95.1887 102.1809 95.5220
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.0614 0.2478 21 no StudyNo
sigma^2.2 0.0760 0.2756 77 no EffectID
Test for Heterogeneity:
Q(df = 76) = 297.4722, p-val < .0001
Model Results:
estimate se zval pval ci.lb ci.ub
0.0572 0.0729 0.7845 0.4327 -0.0856 0.2000
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
We have fit a meta-analytic model, and thus the only estimate we see is the overall effect of caloric restriction on mean age at death across all studies examined. Now that we have fit our meta-analytic model we can get the confidence intervals and prediction intervals with a few functions in the orchaRd package. If one is interested in getting the table of results we can use the mod_results function. This will allow users to make nice tables of the results if needed. We can do that as follows:
In orchaRd version 2.1 and above, the data argument has been removed from the mod_results, orchard_plot, bubble_plot and caterpillars functions. This is because the data are already contained within the metafor object.
model_results <- orchaRd::mod_results(english_MA, mod = "1", at = NULL, group = "StudyNo")
model_results
name estimate lowerCL upperCL lowerPR upperPR
1 Intrcpt 0.057 -0.086 0.2 -0.68 0.8
If we instead want to create an orchard plot and visualise the results we can do so quite simply as:
orchaRd::orchard_plot(english_MA, mod = "1", group = "StudyNo", xlab = "Standardised mean difference",
                      transfm = "none", twig.size = 0.5, trunk.size = 1)
In Fig. 1, we simply pass in the metafor model and it will create a default orchard plot. Alternatively, we could also pass in the table of results. Also, some people may wish to present the heterogeneity index, I2, and, at the same time, change the colours of the ‘trunk and fruit’ by adding a few arguments.
# Use the i2_ml function to obtain the total I^2
I2 <- orchaRd::i2_ml(english_MA)

orchaRd::orchard_plot(model_results, mod = "1", xlab = "Standardised mean difference") +
  annotate(geom = "text", x = 0.80, y = 1,
           label = paste0("italic(I)^{2} == ", round(I2[1], 4), "*\"%\""),
           color = "black", parse = TRUE, size = 5) +
  scale_fill_manual(values = "grey") +
  scale_colour_manual(values = "grey")
Fig. 2 and Fig. 1 above show that the overall estimate from a random-effects meta-analysis of 77 effect sizes is centered on zero, with a 95% CI that spans the line of no effect. The prediction intervals clearly demonstrate the high level of heterogeneity (over 75%), with effect sizes less than -0.5 and greater than 0.5 predicted to be observed.
We can now draw a comparable caterpillar plot by using the function caterpillars
.
# a caterpillar plot (not a caterpillars plot)
orchaRd::caterpillars(model_results, mod = "1", xlab = "Standardised mean difference")
It is now interesting to compare how differently the same data set can be visualised by looking at Fig. 2 and Fig. 3. A nice feature of the caterpillar plot is that it shows CIs for each individual effect size; in the orchard plot, an equivalent role is played by the size of each fruit, which is scaled by precision.
In a subsequent publication, Alistair M. Senior et al. (2017) analysed this data set for effects of dietary restriction on among-individual variation in the age at death, using the log coefficient of variation ratio (lnCVR). A major prediction in both English and Uller (2016) and Alistair M. Senior et al. (2017) was that the type of manipulation (whether the study manipulated the quality of food versus the quantity of food) would be important. As such, we can fit a meta-regression model to test whether the moderator “Manipulation Type” (ManipType) impacts our results on the mean and the variance.
# First we need to calculate the lnCVR
english <- escalc(measure = "CVR", n1i = NStartControl, sd1i = SD_C, m1i = MeanC,
                  n2i = NStartExpt, sd2i = SD_E, m2i = MeanE,
                  var.names = c("lnCVR", "vlnCVR"), data = english)

# Now we can fit the meta-regression model (contrast)
english_MR0 <- rma.mv(yi = SMD, V = vSMD, mods = ~ ManipType,
                      random = list(~ 1 | StudyNo, ~ 1 | EffectID), data = english)
summary(english_MR0)
Multivariate Meta-Analysis Model (k = 77; method: REML)
logLik Deviance AIC BIC AICc
-44.2108 88.4216 96.4216 105.6916 96.9931
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.0655 0.2560 21 no StudyNo
sigma^2.2 0.0761 0.2758 77 no EffectID
Test for Residual Heterogeneity:
QE(df = 75) = 295.5324, p-val < .0001
Test of Moderators (coefficient 2):
QM(df = 1) = 0.0414, p-val = 0.8388
Model Results:
estimate se zval pval ci.lb ci.ub
intrcpt 0.0469 0.0924 0.5079 0.6115 -0.1342 0.2281
ManipTypeQuantity 0.0283 0.1390 0.2035 0.8388 -0.2442 0.3008
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Again, we can create a table of results
res2 <- orchaRd::mod_results(english_MR0, mod = "ManipType", group = "StudyNo")
res2
name estimate lowerCL upperCL lowerPR upperPR
1 Quality 0.047 -0.13 0.23 -0.71 0.81
2 Quantity 0.075 -0.14 0.30 -0.69 0.84
# we fit a meta-regression (contrast) for the log coefficient of variation
senior_MR0 <- rma.mv(yi = lnCVR, V = vlnCVR, mods = ~ ManipType,
                     random = list(~ 1 | StudyNo, ~ 1 | EffectID), data = english)
summary(senior_MR0)
Multivariate Meta-Analysis Model (k = 77; method: REML)
logLik Deviance AIC BIC AICc
-32.0740 64.1481 72.1481 81.4180 72.7195
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.0275 0.1657 21 no StudyNo
sigma^2.2 0.0470 0.2169 77 no EffectID
Test for Residual Heterogeneity:
QE(df = 75) = 215.7242, p-val < .0001
Test of Moderators (coefficient 2):
QM(df = 1) = 1.3308, p-val = 0.2487
Model Results:
estimate se zval pval ci.lb ci.ub
intrcpt -0.1310 0.0678 -1.9333 0.0532 -0.2639 0.0018 .
ManipTypeQuantity 0.1217 0.1055 1.1536 0.2487 -0.0851 0.3285
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# creating a table of results
res3 <- orchaRd::mod_results(senior_MR0, mod = "ManipType", group = "StudyNo",
                             at = list(ManipType = "Quality"), subset = TRUE)
res3
name estimate lowerCL upperCL lowerPR upperPR
1 Quality -0.13 -0.26 0.0018 -0.68 0.42
# We can now plot SMD and lnCVR beside each other and compare the results
p1 <- orchaRd::orchard_plot(res2, mod = "ManipType", group = "StudyNo",
                            xlab = "Standardised mean difference")
p2 <- orchaRd::orchard_plot(res3, mod = "ManipType", group = "StudyNo",
                            xlab = "log(CV ratio) (lnCVR)")
(p1 / p2)
Our orchard plot for the log coefficient of variation demonstrates that, while restrictions on dietary quality and quantity do not affect the average age at death (top of Fig. 4), among-individual variation may be altered by quality restrictions (bottom of Fig. 4). The effect is negative suggesting that the coefficient of variation in the control group is lower than that in the treatment group, and the 95% CI does not span zero. Again though, the effect is heterogeneous; a substantial number of positive effects are still predicted.
Let’s draw comparable caterpillars plots to compare these two results again.
# We can now plot SMD and lnCVR beside each other and compare the results
p1c <- orchaRd::caterpillars(res2, mod = "ManipType", group = "StudyNo",
                             xlab = "Standardised mean difference")
p2c <- orchaRd::caterpillars(res3, mod = "ManipType", group = "StudyNo",
                             xlab = "log(CV ratio) (lnCVR)")
p1c / p2c
Now compare between Fig. 4 and Fig. 5. We think they both look nice.
Eklof et al. (2012) evaluated the effects of predation on benthic invertebrate communities. Using the log response ratio they quantified differences in abundance and/or biomass of gastropods and amphipods in groups with and without predation in an experimental setting.
Here again, we can create orchard plots of the model results, but we’ll show how a few simple things can be modified. Again, we can fit the meta-analytic model first:
data(eklof)
# Calculate the effect size
eklof <- escalc(measure = "ROM", n1i = N_control, sd1i = SD_control, m1i = mean_control,
                n2i = N_treatment, sd2i = SD_treatment, m2i = mean_treatment,
                var.names = c("lnRR", "vlnRR"), data = eklof)

# Add the observation-level factor
eklof$Datapoint <- as.factor(seq(1, dim(eklof)[1], 1))

# Also, we can get the sample size, which we can use for weighting if we would like
eklof$N <- rowSums(eklof[, c("N_control", "N_treatment")])

# Fit a meta-regression with the intercept (contrast)
eklof_MR0 <- rma.mv(yi = lnRR, V = vlnRR, mods = ~ Grazer.type,
                    random = list(~ 1 | ExptID, ~ 1 | Datapoint), data = eklof)
summary(eklof_MR0)
Multivariate Meta-Analysis Model (k = 34; method: REML)
logLik Deviance AIC BIC AICc
-51.9349 103.8698 111.8698 117.7328 113.3513
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.5075 0.7124 18 no ExptID
sigma^2.2 0.6703 0.8187 34 no Datapoint
Test for Residual Heterogeneity:
QE(df = 32) = 195.9943, p-val < .0001
Test of Moderators (coefficient 2):
QM(df = 1) = 0.8891, p-val = 0.3457
Model Results:
estimate se zval pval ci.lb ci.ub
intrcpt -0.8095 0.3099 -2.6119 0.0090 -1.4170 -0.2021 **
Grazer.typegastropod 0.3285 0.3484 0.9429 0.3457 -0.3544 1.0114
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Above we have fit a meta-regression model using “Grazer Type” as a moderator, which is predicted to explain variation in log response ratios. We can demonstrate a few simple changes users can make; far more complex changes are possible down the line, but we’ll save explaining those for the last example. The first is the angle at which the y-axis labels are positioned (bottom of Fig. 6):
f <- orchaRd::mod_results(eklof_MR0, mod = "Grazer.type", group = "ExptID")

p3 <- orchaRd::orchard_plot(eklof_MR0, mod = "Grazer.type", group = "ExptID",
                            xlab = "log(Response ratio) (lnRR)", transfm = "none")
p4 <- orchaRd::orchard_plot(eklof_MR0, mod = "Grazer.type", group = "ExptID",
                            xlab = "log(Response ratio) (lnRR)", transfm = "none",
                            angle = 45, g = FALSE)
p3 / p4
The other thing we can change is the type of scaling we wish to use. Let’s say we are interested in scaling the effect sizes by the total sample size of each study; to do this we supply a vector of sample sizes, N (bottom of Fig. 7):
p5 <- orchaRd::orchard_plot(eklof_MR0, mod = "Grazer.type", group = "ExptID",
                            xlab = "log(Response ratio) (lnRR)")

p6 <- orchaRd::orchard_plot(eklof_MR0, mod = "Grazer.type", group = "ExptID",
                            xlab = "log(Response ratio) (lnRR)", angle = 45, N = "N", g = FALSE)

p5 / p6
Overall, our orchard plot shows the results of a re-analysis of their data. The estimated mean effects are negative for both gastropods and amphipods suggesting that mean abundance/biomass in the control group is lower than in the treatment groups, although the effect is largest, and is statistically significant, for amphipods. In both cases the prediction intervals reveal the extent of heterogeneity, with positive effects predicted to be observed.
Again, we can also draw the caterpillars-plot counterpart for this data set (bottom of Fig. 8):
eklof_MR_result <- orchaRd::mod_results(eklof_MR0, mod = "Grazer.type", group = "ExptID")
orchaRd::caterpillars(eklof_MR_result, mod = "Grazer.type", group = "ExptID",
                      xlab = "log(Response ratio) (lnRR)")
Like last time, we compare between Fig. 7 and Fig. 8. Again, they both look quite informative.
The last thing we can do is transform these data to percentage change. You can also make other transformations depending on what effect size you are working with; see ?orchard_plot.
p6_per <- orchaRd::orchard_plot(eklof_MR0, mod = "Grazer.type", group = "ExptID",
                                xlab = "Percentage Change (%)", angle = 45, N = "N",
                                g = FALSE, transfm = "percentr")
p6_per
Finally, we also look at the case discussed by Lim, Senior, and Nakagawa (2014), who meta-analysed the strength of correlation between maternal and offspring size within species, across a very wide range of taxa. They found that, typically, there is a moderate positive correlation between maternal size and offspring size within species (i.e. larger mothers have larger offspring). However, they also found evidence for relatively strong phylogenetic effects, suggesting that the strength of the association is dependent on evolutionary lineage.
Here we have used an orchard plot to represent the results obtained when meta-analysing the data from Lim, Senior, and Nakagawa (2014) by taxonomic Phylum.
data(lim)
# Add in the sampling variance
lim$vi <- (1 / sqrt(lim$N - 3))^2

# Let's fit a meta-regression; we account for Article non-independence. The phylogenetic
# model found phylogenetic effects; here we instead fit Phylum as a fixed effect and
# explore it with an orchard plot
lim_MR <- metafor::rma.mv(yi = yi, V = vi, mods = ~ Phylum - 1,
                          random = list(~ 1 | Article, ~ 1 | Datapoint), data = lim)
summary(lim_MR)
Multivariate Meta-Analysis Model (k = 357; method: REML)
logLik Deviance AIC BIC AICc
-97.6524 195.3049 213.3049 248.0263 213.8343
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.0411 0.2029 220 no Article
sigma^2.2 0.0309 0.1757 357 no Datapoint
Test for Residual Heterogeneity:
QE(df = 350) = 1912.9637, p-val < .0001
Test of Moderators (coefficients 1:7):
QM(df = 7) = 356.6775, p-val < .0001
Model Results:
estimate se zval pval ci.lb ci.ub
PhylumArthropoda 0.2690 0.0509 5.2829 <.0001 0.1692 0.3687 ***
PhylumChordata 0.3908 0.0224 17.4190 <.0001 0.3468 0.4347 ***
PhylumEchinodermata 0.8582 0.3902 2.1992 0.0279 0.0934 1.6230 *
PhylumMollusca 0.4867 0.1275 3.8175 0.0001 0.2368 0.7366 ***
PhylumNematoda 0.4477 0.3054 1.4658 0.1427 -0.1509 1.0463
PhylumPlatyhelminthes 0.4935 0.2745 1.7980 0.0722 -0.0444 1.0314 .
PhylumRotifera 0.4722 0.3021 1.5634 0.1180 -0.1198 1.0642
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Now we can plot a default orchard plot, scaling each effect size by N. Also, because we are using Zr, we can use transfm = "tanh" and it will do the conversions for us:
#Plot the meta-regression model
orchaRd::orchard_plot(lim_MR, mod = "Phylum", group = "Article", xlab = "Correlation coefficient",
                      alpha = 0.5, transfm = "tanh", angle = 45, N = "N", cb = FALSE)
Now that we have Fig. 10, we can make some small changes to make it prettier and add more detail, such as between-study heterogeneity and the R2 for Phylum. Currently, the cb argument is FALSE; we can change this to TRUE to use colour-blind-friendly colours. Additionally, because we are using ggplot, we can add elements to the figure to make it look pretty.
# Let's add the R2 to the figure, along with a few small modifications:
R2 <- orchaRd::r2_ml(lim_MR)

# We can draw an orchard plot with R2
orchaRd::orchard_plot(lim_MR, mod = "Phylum", group = "Article", xlab = "Correlation coefficient (r)",
                      alpha = 0.5, transfm = "tanh", angle = 45, N = "N", cb = TRUE) +
  theme(legend.position = c(0.05, 0.99), legend.justification = c(0, 1),
        legend.key.size = unit(1, "mm")) +
  theme(legend.direction = "horizontal", legend.title = element_text(size = 8),
        legend.text = element_text(size = 10)) +
  annotate(geom = "text", x = 0.8, y = 0.7,
           label = paste0("italic(R)^{2} == ", round(R2[1], 4) * 100),
           color = "black", parse = TRUE, size = 4)
As in Fig. 11, new elements can be added to the orchard_plot to modify it as one sees fit; newly added layers will overwrite existing elements where they overlap. From our orchard plots above, it is clear that the analysis is dominated by data from Chordates and Arthropods, with the other Phyla being much more poorly represented. Second, there is a difference in the strength of a typical correlation between these two well-represented groups (the correlation is stronger in Chordates), which arguably would explain the phylogenetic signals detected by Lim, Senior, and Nakagawa (2014). Lastly, although there are differences in the typical correlation between Chordates and Arthropods, there remains a large overlap in the predicted range of individual effect sizes; individual species within Phyla are still highly variable.
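For example, here is a hedged sketch of overriding an existing layer rather than adding a new one (it replaces the default fill colours of the plot above; ggplot2 may message that the existing scale is being replaced, and the colour choice is arbitrary):

# Override the fill scale of the orchard plot above
orchaRd::orchard_plot(lim_MR, mod = "Phylum", group = "Article",
                      xlab = "Correlation coefficient (r)",
                      alpha = 0.5, transfm = "tanh", angle = 45, N = "N", cb = TRUE) +
  scale_fill_manual(values = rep("darkorange", 7))   # one colour per Phylum level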
Finally, we make a comparable caterpillars plot.
#Plot the meta-regression model
lim_MR_results <- orchaRd::mod_results(lim_MR, mod = "Phylum", group = "Article")
orchaRd::caterpillars(lim_MR_results, mod = "Phylum", group = "Article",
                      xlab = "Correlation coefficient", transfm = "tanh", g = FALSE)
Compared to the orchard plot (Fig. 11), Fig. 12 does not look as elegant, unlike the previous comparisons. This is because the groups with small sample sizes are packed at the top of the figure.
From Fig. 11 we can see that many taxonomic groups have very little data. We may instead want to focus the reader’s attention on the means of groups with sufficient data. The new mod_results function allows us to do this using the marginalised means approach. We can do that as follows:
#Plot the meta-regression model
lim_MR_results <- orchaRd::mod_results(lim_MR, mod = "Phylum", group = "Article",
                                       at = list(Phylum = c("Chordata", "Arthropoda", "Mollusca")),
                                       subset = TRUE)
orchaRd::orchard_plot(lim_MR_results, xlab = "Fishers z-transformed Correlation (Zr)",
                      g = TRUE, angle = 45)
We can now see that Fig. 13 limits what it plots to only the three levels relevant for presentation.
We have noted a bug at this point in the orchaRd package. In updating the transf pipeline we noticed a problem when combining subsetting of levels using the at argument with the subset argument. This still works, but you cannot currently combine it with any back-transformation of the effect size, which is why we have reverted to Zr here. We’re working on fixing this.
We may also at times want to revert to a normal plot with categorical variables on the x-axis. We can do this by setting the flip argument to FALSE.
#Plot the meta-regression model
lim_MR_results <- orchaRd::mod_results(lim_MR, mod = "Phylum", group = "Article",
                                       at = list(Phylum = c("Chordata", "Arthropoda", "Mollusca")),
                                       subset = TRUE)
orchaRd::orchard_plot(lim_MR_results, xlab = "Fishers z-transformed Correlation (Zr)",
                      g = TRUE, flip = FALSE)
In orchaRd 2.0 we’ve made some major updates to the types of models that can be used with the package. Meta-analyses in ecology and evolution often fit a multitude of moderators and random effects within a single model. This is even desirable in many cases to deal with high heterogeneity and to disentangle the independent contributions of moderators to changes in the mean effect size. The challenge with the previous version of the package was that it was difficult to accommodate these models. However, the emmeans package is an excellent solution to the problem. It will produce marginalised or conditional mean estimates for given levels of a moderator. This includes models with categorical and continuous moderators.
An additional issue that is of importance with meta-regression models is the need to deal with heteroscedastic variance across levels of a moderator. Homogeneity of variance assumptions are often violated in meta-analysis. orchaRd
2.0 allows the user to fit models that explicitly estimate different residual variances reducing the probability of type I errors.
In this section, we show users: 1) how to fit these models using metafor
and 2) how these models interface with the orchaRd
package to produce pretty plots that capture the model predictions.
Meta-regression models with single categorical moderators are extremely common. Meta-analysts want to estimate the mean effect size within levels of a moderator and test whether these means are significantly different from each other. One inherent assumption when fitting these models is that the within-study (or residual) variance within these groups is the same (i.e., the “homogeneity of variance” assumption). This assumption will likely be violated in many situations in meta-regression models. Unless one conducts a sub-group analysis for each level of the moderator (although this is less powerful and makes it hard to test whether levels are significantly different), different residual variances need to be explicitly modelled in metafor. Fortunately, this can be done, and orchaRd 2.0 now allows plotting of these models with their corresponding CIs and PIs.
We’ll work with a meta-analysis by O’Dea et al. (2019). In this meta-analysis, the authors were interested in testing whether developmental temperature in fish had an effect on the mean and variance of a host of phenotypic traits. They used experimental studies manipulating developmental temperatures, with the log response ratio (lnrr) and the log coefficient of variation ratio (lncvr) as their effect sizes. We’ll only focus on lnrr here for demonstration purposes. Importantly, they categorised effect sizes as having come from one of 4 trait categories: Behaviour, Life-history, Morphology and Physiology. They were interested in whether developmental temperature impacts trait types differently. We’ll first explore how to fit a meta-regression model that explicitly estimates residual variance in each of these trait levels.
First, load the data.
# Load the dataset that comes with orchaRd
data(fish)
# Subset data for demonstration purposes.
warm_dat <- fish
To fit a heterogeneous variance model in metafor we need to use some additional arguments and specify the random-effect structure differently (i.e., specify an inner argument). To do this, we need to add struct = "HCS" to the argument list, which specifies the variance structure of an ~inner|outer formula for a particular random effect. In this case, we’re assuming a heteroscedastic compound symmetric structure. To avoid estimating the correlation / covariance between levels we also need to add the rho argument and set it to 0. This assumes that the covariance between the levels is 0 (although it could be estimated).

Once we have these additional arguments we need to modify our random-effect structure. Note that metafor does not estimate a residual / within-study variance. We need to explicitly add an observation-level random effect, in this case es_ID. If we have more than one effect size per study (which is common) then we want to have this random effect in our models.
model_het <- metafor::rma.mv(yi = lnrr, V = lnrr_vi,
                             mods = ~ trait.type, method = "REML", test = "t",
                             random = list(~ 1 | group_ID, ~ 1 + trait.type | es_ID),
                             rho = 0, struct = "HCS",
                             data = warm_dat,
                             control = list(optimizer = "optim", optmethod = "Nelder-Mead"))
model_het
Multivariate Meta-Analysis Model (k = 410; method: REML)
Variance Components:
estim sqrt nlvls fixed factor
sigma^2 0.0143 0.1196 84 no group_ID
outer factor: es_ID (nlvls = 410)
inner factor: trait.type (nlvls = 4)
estim sqrt k.lvl fixed level
tau^2.1 0.2144 0.4630 4 no behaviour
tau^2.2 0.2121 0.4605 4 no life-history
tau^2.3 0.0232 0.1522 323 no morphology
tau^2.4 0.0320 0.1790 79 no physiology
rho 0.0000 yes
Test for Residual Heterogeneity:
QE(df = 406) = 39294.5728, p-val < .0001
Test of Moderators (coefficients 2:4):
F(df1 = 3, df2 = 406) = 3.0890, p-val = 0.0271
Model Results:
estimate se tval df pval ci.lb ci.ub
intrcpt 0.1848 0.2713 0.6811 406 0.4962 -0.3485 0.7180
trait.typelife-history 0.5576 0.3701 1.5068 406 0.1326 -0.1699 1.2851
trait.typemorphology -0.1846 0.2714 -0.6801 406 0.4968 -0.7181 0.3489
trait.typephysiology -0.1712 0.2730 -0.6272 406 0.5309 -0.7079 0.3655
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
We can see from above that we now have four variances: one estimated for each level of the trait.type moderator. Now that we have fit our model and estimated the relevant parameters, we can produce the orchard plot using the mod_results function:
het_model <- orchaRd::mod_results(model_het, group = "group_ID", mod = "trait.type")
orchaRd::orchard_plot(het_model, xlab = "lnRR")
We can now see from Fig. 15 that the variances in effects for the morphological and physiological trait categories are smaller than those for the life-history and behavioural trait categories, likely because of the very small sample sizes in the latter trait categories.
As a second example, we can turn to Pottier et al. (2021) who did a meta-analysis comparing acclimation differences between males and females. Pottier et al. (2021) found that aquatic and terrestrial systems differed in their variability of effects. Here, we’ll also show how phylogenetic correlation matrices can be incorporated within the models and work with orchaRd.
# Clear working space
rm(list = ls())
# Load the data
data("pottier")
# Load the phylogenetic correlation matrix
data("phylo_matrix")
# Load the sampling variance matrix
data("VCV_dARR")
Now that we have all the data and matrices loaded, we can fit the multi-level meta-regression model.
##### Heteroscedasticity modeled at the effect size level
mod.habitat_het <- rma.mv(yi = dARR, V = VCV_dARR, mods = ~ habitat,
                          method = "REML", test = "t", dfs = "contain",
                          random = list(~ 1 | species_ID, ~ 1 | phylogeny, ~ habitat | es_ID),
                          struct = "HCS", rho = 0,
                          R = list(phylogeny = phylo_matrix), data = pottier)
mod.habitat_het
Multivariate Meta-Analysis Model (k = 1089; method: REML)
Variance Components:
estim sqrt nlvls fixed factor R
sigma^2.1 0.0090 0.0948 138 no species_ID no
sigma^2.2 0.0129 0.1136 138 no phylogeny yes
outer factor: es_ID (nlvls = 1089)
inner factor: habitat (nlvls = 2)
estim sqrt k.lvl fixed level
tau^2.1 0.0716 0.2677 929 no aquatic
tau^2.2 0.0082 0.0907 160 no terrestrial
rho 0.0000 yes
Test for Residual Heterogeneity:
QE(df = 1087) = 56246.7341, p-val < .0001
Test of Moderators (coefficient 2):
F(df1 = 1, df2 = 136) = 9.2383, p-val = 0.0028
Model Results:
                     estimate      se     tval   df    pval    ci.lb    ci.ub
intrcpt                0.2087  0.0655   3.1856  136  0.0018   0.0792   0.3383  **
habitatterrestrial    -0.1492  0.0491  -3.0394  136  0.0028  -0.2463  -0.0521  **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Now we can produce an orchaRd
plot using this model to show the unequal variance in the two habitat types.
orchard_plot(mod.habitat_het, group = "species_ID", mod = "habitat", xlab = "dARR", angle = 45)
We can also create the model results table (Tab. 1).
flextable::flextable(mod_results(mod.habitat_het, group = "species_ID", mod = "habitat")$mod_table)
Warning in mod_results(mod.habitat_het, group = "species_ID", mod = "habitat"):
We noticed you're fitting an ~inner|outer rma model ('random slope'). There are
circumstances where the prediction intervals for such models are calculated
incorrectly. Please check your results carefully.
name | estimate | lowerCL | upperCL | lowerPR | upperPR |
---|---|---|---|---|---|
Aquatic | 0.21 | 0.079 | 0.34 | -0.41 | 0.83 |
Terrestrial | 0.06 | -0.091 | 0.21 | -0.32 | 0.43 |
We might also think about heterogeneous variance existing at multiple random-effect levels. These models require a lot of data and need to be thought about carefully, but orchaRd 2.0 can also handle them.
##### Heteroscedasticity modeled at the species and effect size levels
mod.habitat_het2 <- rma.mv(yi = dARR, V = VCV_dARR, mods = ~ habitat,
                           method = "REML", test = "t", dfs = "contain",
                           random = list(~ habitat | species_ID, ~ 1 | phylogeny, ~ habitat | es_ID),
                           struct = "HCS", rho = 0, phi = 0,
                           R = list(phylogeny = phylo_matrix), data = pottier)
mod.habitat_het2
Multivariate Meta-Analysis Model (k = 1089; method: REML)
Variance Components:
estim sqrt nlvls fixed factor R
sigma^2 0.0098 0.0989 138 no phylogeny yes
outer factor: species_ID (nlvls = 138)
inner factor: habitat (nlvls = 2)
estim sqrt k.lvl fixed level
tau^2.1 0.0116 0.1077 118 no aquatic
tau^2.2 0.0022 0.0469 20 no terrestrial
rho 0.0000 yes
outer factor: es_ID (nlvls = 1089)
inner factor: habitat (nlvls = 2)
estim sqrt k.lvl fixed level
gamma^2.1 0.0715 0.2673 929 no aquatic
gamma^2.2 0.0084 0.0918 160 no terrestrial
phi 0.0000 yes
Test for Residual Heterogeneity:
QE(df = 1087) = 56246.7341, p-val < .0001
Test of Moderators (coefficient 2):
F(df1 = 1, df2 = 136) = 14.6883, p-val = 0.0002
Model Results:
                     estimate      se     tval   df    pval    ci.lb    ci.ub
intrcpt                0.2098  0.0581   3.6096  136  0.0004   0.0949   0.3247  ***
habitatterrestrial    -0.1606  0.0419  -3.8325  136  0.0002  -0.2434  -0.0777  ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Again, we can plot this new model normally.
orchard_plot(mod.habitat_het2, group = "species_ID", mod = "habitat", xlab = "dARR", angle = 45)
We can also view the model results table (Tab. 2)
flextable::flextable(mod_results(mod.habitat_het2, group = "species_ID", mod = "habitat")$mod_table)
Warning in mod_results(mod.habitat_het2, group = "species_ID", mod =
"habitat"): We noticed you're fitting an ~inner|outer rma model ('random
slope'). There are circumstances where the prediction intervals for such models
are calculated incorrectly. Please check your results carefully.
name | estimate | lowerCL | upperCL | lowerPR | upperPR |
---|---|---|---|---|---|
Aquatic | 0.210 | 0.095 | 0.32 | -0.40 | 0.82 |
Terrestrial | 0.049 | -0.079 | 0.18 | -0.26 | 0.36 |
orchaRd
2.0 can now also create bubble plots. Bubble plots are extremely useful when one has a continuous moderator variable. For example, this might be publication year or sampling variance if one is testing for various forms of publication bias. We’ll demonstrate using two examples.
First, we’ll return to the Lim, Senior, and Nakagawa (2014) dataset. Here, we can fit a model assessing time-lag effects for studies on captive and wild populations. Say, for the sake of argument, that we expect the time-lag effect to vary by lab/wild conditions. We can fit an interaction model and create bubble plots for these situations (Fig. 18):
data(lim)
"year"] <- as.numeric(lim$year)
lim[, $vi<- 1/(lim$N - 3)
lim<-metafor::rma.mv(yi=yi, V=vi, mods= ~Environment*year,
modelrandom=list(~1|Article,~1|Datapoint), data=na.omit(lim))
<- orchaRd::mod_results(model, mod = "year", group = "Article",
lim_bubble weights = "prop", by = "Environment")
::bubble_plot(lim_bubble, group = "Article", mod = "year", xlab = "Year", legend.pos = "top.left") orchaRd
As a second example, we’ll return to Pottier et al. (2021), who collected data on body mass for the species in the data set for which the acclimation response ratio (ARR) was calculated. We might expect the ARR to vary by body mass because small species might be able to acclimate faster than larger species. Using a simplified version of Pottier et al. (2021)’s model, we can look at this effect (Fig. 19):
# extract complete cases
data(pottier)
pottier2 <- pottier[complete.cases(pottier$body_mass), ]

# Model body mass effects
mod.body_mass <- metafor::rma.mv(yi = dARR, V = Var_dARR, mods = ~ body_mass,
                                 method = "REML", test = "t", dfs = "contain",
                                 random = list(~ 1 | species_ID, ~ 1 | es_ID), data = pottier2)

orchaRd::bubble_plot(mod.body_mass, mod = "body_mass", group = "species_ID",
                     xlab = "Body mass (g)", ylab = "Acclimation Response Ratio (ARR)",
                     legend.pos = "top.left")
We may want to colour code the points based on some grouping variable, such as habitat. As indicated through the vignette, we can override layers of the orchard plot to do this fairly easily (Fig. 20).
orchaRd::bubble_plot(mod.body_mass, mod = "body_mass", group = "species_ID",
                     xlab = "Body mass (g)", ylab = "Acclimation Response Ratio (ARR)",
                     legend.pos = "top.left") +
  geom_point(data = pottier, aes(x = body_mass, y = dARR, color = habitat,
                                 size = 1 / sqrt(Var_dARR)), alpha = 0.6) +
  scale_color_discrete()
We’ll now demonstrate how multi-moderator meta-regression models can be used with orchaRd
2.0, by plotting conditional mean effects within categories of a single moderator.
To demonstrate how to do this we’ll again turn to O’Dea et al. (2019) as our example data set. We’ll first load the data set, if you haven’t already done so:
# Load the dataset that comes with orchaRd
data(fish)
# Subset data for demonstration purposes.
warm_dat <- fish
Now that we have the data set, assume we are interested in modelling multiple moderators that we hypothesize will explain effect-size variance. We want a model that will allow us to understand the effects of each of multiple moderators on the mean effect size. We know that experimental_design and the temperature difference between treatments (deg_dif) are expected to impact the mean effect size. We also expect experiments that last longer to have stronger effects (treat_end_days). We’ll now extend our model from Section 5.1 above to include these moderators so we can look at the effects of each while controlling for the others.
# Fit the meta-regression model
model <- metafor::rma.mv(yi = lnrr, V = lnrr_vi,
                         mods = ~ experimental_design + trait.type + deg_dif + treat_end_days,
                         method = "REML", test = "t",
                         random = list(~ 1 | group_ID, ~ 1 + trait.type | es_ID),
                         rho = 0, struct = "HCS",
                         data = warm_dat,
                         control = list(optimizer = "optim", optmethod = "Nelder-Mead"))
model
Multivariate Meta-Analysis Model (k = 410; method: REML)
Variance Components:
estim sqrt nlvls fixed factor
sigma^2 0.0143 0.1196 84 no group_ID
outer factor: es_ID (nlvls = 410)
inner factor: trait.type (nlvls = 4)
estim sqrt k.lvl fixed level
tau^2.1 0.2145 0.4631 4 no behaviour
tau^2.2 0.2147 0.4634 4 no life-history
tau^2.3 0.0226 0.1503 323 no morphology
tau^2.4 0.0320 0.1790 79 no physiology
rho 0.0000 yes
Test for Residual Heterogeneity:
QE(df = 402) = 38537.0410, p-val < .0001
Test of Moderators (coefficients 2:8):
F(df1 = 7, df2 = 402) = 2.4871, p-val = 0.0165
Model Results:
                                            estimate      se     tval   df    pval    ci.lb   ci.ub
intrcpt                                       0.1637  0.2726   0.6003  402  0.5486  -0.3723  0.6997
experimental_designsplit pooled families      0.0339  0.0503   0.6728  402  0.5015  -0.0651  0.1328
experimental_designsplit single family        0.0654  0.0687   0.9513  402  0.3420  -0.0697  0.2005
trait.typelife-history                        0.5532  0.3721   1.4870  402  0.1378  -0.1782  1.2847
trait.typemorphology                         -0.1847  0.2726  -0.6775  402  0.4985  -0.7206  0.3512
trait.typephysiology                         -0.1689  0.2745  -0.6153  402  0.5387  -0.7085  0.3707
deg_dif                                      -0.0091  0.0051  -1.7891  402  0.0744  -0.0190  0.0009  .
treat_end_days                                0.0007  0.0003   2.2183  402  0.0271   0.0001  0.0014  *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Our model is complicated. It not only estimates the effects of trait type, but also of the type of experimental design, the degree difference and the treatment duration. But the effects of the moderators are now conditional on each other. What happens if we just want to plot mean effects for trait type, but we want to condition these effects on a given temperature difference? We can do this by calculating conditional meta-analytic means. We can see that there’s not much going on for experimental design, so we will marginalize the means across each level of this moderator. Degree difference is continuous; to deal with continuous moderators we need to specify the values at which we want predictions to be made.
HetModel <- orchaRd::mod_results(model, group = "group_ID", mod = "trait.type",
                                 at = list(deg_dif = c(5, 10, 15)), by = "deg_dif",
                                 weights = "prop")

orchaRd::orchard_plot(HetModel, xlab = "lnRR", angle = 45, g = FALSE,
                      legend.pos = "top.left", condition.lab = "Temperature Difference") +
  theme(legend.direction = "vertical")
We can see the orchard plot changes. Here we have a ‘condition’ label, which we have renamed to “Temperature Difference” using the condition.lab argument. This new legend specifies the degree difference for each predicted mean. Note that these means effectively average over all levels of experimental design and all possible values of treat_end_days (i.e., marginalize). We have also made some small changes to the plot. First, we set g = FALSE, which suppresses the group sample sizes on the plot (e.g. the number of studies); we also moved the position of the legend and angled the labels.
We can also specify a particular value of treat_end_days, further conditioning our mean estimates. We can do that as follows:
HetModel2 <- orchaRd::mod_results(model, group = "group_ID", mod = "trait.type",
                                  at = list(deg_dif = c(5, 10, 15), treat_end_days = c(10)),
                                  by = "deg_dif", weights = "prop")

orchaRd::orchard_plot(HetModel2, xlab = "lnRR", angle = 45, g = FALSE,
                      legend.pos = "top.left", condition.lab = "Temperature Difference") +
  theme(legend.direction = "vertical")
Now the orchard plot depicts the mean effect sizes (plus 95% CI and PI) for each trait category, at 5, 10 and 15 \(^{\circ}\)C when the treatment duration is 10 days.
One way to assess the robustness of a meta-analytic model is by re-running the model without one or more effect sizes and then comparing the results. A common approach is to re-run the model multiple times, each time removing the effect sizes from a different study. This allows you to see how the overall effect estimate changes when each study is excluded.
This is known as a leave-one-out analysis (Shinichi Nakagawa et al. 2023). It leaves out one study at a time, but you can leave out any group you think could have a large influence on the overall effect estimate. In ecology and evolution meta-analyses, it can be useful to check how the model changes when certain species are left out.
Thanks to Facundo Decunta, there are now two functions for this: leave_one_out()
and orchard_leave1out()
. The first one is to run the analysis, and the second one is to plot the results.
Using the data from English and Uller (2016), here is how to run a leave-one-out and plot the results.
First, calculate the effect sizes and fit the model.
data(english)
# Create a new column with 'Author, year' format
english$author_year <- paste(english$Author, english$Year)

# We need to calculate the effect sizes, in this case d
english <- escalc(measure = "SMD",
                  n1i = NStartControl, sd1i = SD_C, m1i = MeanC,
                  n2i = NStartExpt, sd2i = SD_E, m2i = MeanE,
                  var.names = c("SMD", "vSMD"),
                  data = english)

english_MA <- rma.mv(yi = SMD, V = vSMD,
                     random = list(~ 1 | StudyNo, ~ 1 | EffectID),
                     data = english)
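Conceptually, a leave-one-out analysis is just a loop that refits the model once per study. Here is a minimal sketch of what that amounts to, using the english data and english_MA model fitted above (leave_one_out() below automates this and also returns CIs and PIs):

# Refit the model once per study, each time dropping that study's effect sizes,
# and store the overall meta-analytic mean
loo_means <- sapply(unique(english$author_year), function(s) {
  dat <- english[english$author_year != s, ]
  fit <- metafor::rma.mv(yi = SMD, V = vSMD,
                         random = list(~ 1 | StudyNo, ~ 1 | EffectID), data = dat)
  fit$b[1]   # overall mean without that study
})
round(loo_means, 3)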
leave_one_out()
will re-run the model without one study at a time. The group
argument is used to specify the grouping variable. In this case, we use the author_year
variable:
loo_test <- leave_one_out(english_MA, group = "author_year")
loo_test
name estimate lowerCL upperCL lowerPR upperPR
1 Barrett 2009 0.0368 -0.108 0.18 -0.65 0.72
2 Cheney 1983 0.0731 -0.074 0.22 -0.67 0.82
3 Langley-Evans 2006 0.0655 -0.085 0.22 -0.70 0.83
4 Lynch 1983 0.0510 -0.097 0.20 -0.70 0.81
5 Partadiredja 2010 0.0562 -0.091 0.20 -0.69 0.81
6 Sayer 2001 0.0451 -0.100 0.19 -0.70 0.79
7 Sun 2009 0.0736 -0.072 0.22 -0.67 0.81
8 Tu 2003 0.0558 -0.093 0.20 -0.70 0.81
9 Zajistchek 2009 0.0677 -0.083 0.22 -0.69 0.83
10 Mestre 2012 0.0058 -0.110 0.12 -0.61 0.62
11 Saasatamoinen 2013 0.0450 -0.110 0.20 -0.75 0.84
12 Saasatamoinen 2010 0.0664 -0.085 0.22 -0.70 0.83
13 Ozanne 2004 0.0554 -0.100 0.21 -0.66 0.77
14 Zwaan 1991 0.0603 -0.091 0.21 -0.72 0.84
15 Boggs 2005 0.0405 -0.101 0.18 -0.69 0.77
16 May 2015 0.0796 -0.060 0.22 -0.65 0.81
17 Dmitriew 2009 0.0668 -0.083 0.22 -0.70 0.83
18 Heidinger 2012 0.0678 -0.080 0.22 -0.68 0.82
19 Houslay 2015 0.0639 -0.087 0.21 -0.70 0.83
20 Kelly 2014 0.0739 -0.074 0.22 -0.67 0.82
We can plot the results with orchard_leave1out()
. This function takes the output of leave_one_out()
and creates an orchard plot. The y-axis shows which study was left out.
orchard_leave1out(leave1out = loo_test,
xlab = "SMD",
ylab = "Study left out",
trunk.size = 1.2,
branch.size = 1.5,
alpha = 0.08,
legend.pos = "top.out")
The dashed red lines show the 95% confidence interval for the overall effect size in the original model. You can remove them by setting ci_lines = FALSE. To change their color, use ci_lines_color = "blue", or whatever color you want.

The empty points, called ‘ghost points’, show the effect sizes that were left out of that model. You can remove them by setting ghost_points = FALSE.

The rest of the arguments are the same as in orchard_plot().
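For instance, here is a hedged sketch combining the options just described (all other arguments are taken from the call above):

orchard_leave1out(leave1out = loo_test,
                  xlab = "SMD",
                  ylab = "Study left out",
                  ci_lines_color = "blue",   # recolour the overall-CI reference lines
                  ghost_points = FALSE,      # drop the left-out ('ghost') effect sizes
                  legend.pos = "top.out")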
There are cases in which the meta-analytic model is more complicated. leave_one_out()
can handle some common cases that account for non-independence, including when the models have (a sketch of how such inputs are typically constructed follows this list):
A variance-covariance matrix for sampling variances, implemented with metafor::vcalc()
Phylogenetic matrix, calculated with ape::vcv()
Robust variance estimation, implemented with metafor::robust()
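For reference, a hedged sketch of how these inputs are often built; the tree object and the column names here are assumptions based on the arguments used further below:

# Sampling variance-covariance matrix, assuming effect sizes within a study share rho = 0.5
VCV <- metafor::vcalc(vi = Var_dARR, cluster = study_ID, obs = es_ID,
                      rho = 0.5, data = pottier_subset)

# Phylogenetic correlation matrix from a phylo object (here assumed to be called `tree`)
phylo_matrix <- ape::vcv(tree, corr = TRUE)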
To demonstrate how this works, we will use a subset of the Pottier et al. (2021) data and fit a fictitious model.
# Fictitious model just for the example
res <- rma.mv(yi = dARR,
              V = VCV,
              test = "t",
              random = list(~ 1 | study_ID,
                            ~ 1 | es_ID,
                            ~ 1 | species_ID,
                            ~ 1 | phylogeny),
              R = list(phylogeny = phylo_matrix),
              data = pottier_subset)

robust_res <- robust(res, cluster = study_ID)
mod_results(robust_res, group = "study_ID")
name estimate lowerCL upperCL lowerPR upperPR
1 Intrcpt 0.084 -0.29 0.46 -0.86 1
To include those correlations in the leave-one-out analysis, we need to pass some extra arguments to the leave_one_out()
function, so they can be re-calculated in each iteration. To see the details of the arguments, run ?leave_one_out()
.
# vcalc_args must be a list with the arguments for vcalc. They must be
# passed as strings, except for rho, which is a number.
<- list(vi = "Var_dARR",
vcalc_args cluster = "study_ID",
obs = "es_ID",
rho = 0.5)
# phylo_args must be a list with the arguments for vcv. They must be
# the phylogenetic tree object, and a string with the name of the
# species names column that is used as random effect.
<- list(tree = tree,
phylo_args species_colname = "phylogeny")
# robust_args must be a list with the arguments for robust. For the moment,
# it only accepts cluster as a string, and clubSandwich as TRUE or FALSE.
<- list(cluster = "study_ID")
robust_args
loo_test_phylo <- leave_one_out(res,
                                group = "phylogeny",
                                vcalc_args = vcalc_args,
                                phylo_args = phylo_args,
                                robust_args = robust_args)
loo_test_phylo
name estimate lowerCL upperCL lowerPR upperPR
1 Amalosia_lesueurii 0.095 -0.46 0.65 -1.18 1.37
2 Tor_putitora -0.013 -0.39 0.36 -0.88 0.85
3 Labeo_rohita 0.135 -0.20 0.47 -0.77 1.04
4 Danio_rerio 0.042 -0.49 0.57 -1.17 1.26
5 Propylea_japonica 0.110 -0.43 0.65 -1.13 1.35
orchard_leave1out(leave1out = loo_test_phylo,
xlab = "dARR",
ylab = "Species left out",
trunk.size = 1.2,
branch.size = 1.5,
alpha = 0.2,
legend.pos = "top.out")
Note that this time the leave-one-out was done by species, not by study. You can leave out one study at a time and still use phylogeny:
loo_test_stdy <- leave_one_out(res,
                               group = "study_ID",
                               vcalc_args = vcalc_args,
                               phylo_args = phylo_args,
                               robust_args = robust_args)
loo_test_stdy
name estimate lowerCL upperCL lowerPR upperPR
1 1 0.095 -0.46 0.65 -1.18 1.37
2 2 -0.013 -0.39 0.36 -0.88 0.85
3 4 0.135 -0.20 0.47 -0.77 1.04
4 5 0.042 -0.49 0.57 -1.17 1.26
5 6 0.110 -0.43 0.65 -1.13 1.35
Finally, you can use the output from leave_one_out()
to plot the results using ggplot2
if you like:
library(ggplot2)
loo_test_phylo$mod_table |>
  ggplot(aes(x = name, y = estimate, ymin = lowerCL, ymax = upperCL)) +
  geom_pointrange() +
  coord_flip()
Publication bias is a common issue in meta-analysis but can be challenging to deal with in multi-level meta-analyses (Shinichi Nakagawa et al. 2022). Yang et al. (2024) proposed a new method to correct for publication bias in multi-level meta-analyses. We have implemented a new plot which can visualise how overall meta-analytic means change when correcting for publication bias. This is done using the pub_bias_plot() function.
# Data
data(english)
# We need to calculate the effect sizes, in this case d
english <- metafor::escalc(measure = "SMD", n1i = NStartControl, sd1i = SD_C, m1i = MeanC,
                           n2i = NStartExpt, sd2i = SD_E, m2i = MeanE,
                           var.names = c("SMD", "vSMD"), data = english)

# Our MLMA model
english_MA1 <- metafor::rma.mv(yi = SMD, V = vSMD,
                               random = list(~ 1 | StudyNo, ~ 1 | EffectID),
                               test = "t", data = english)

# Now apply the approach of Yang et al. 2024
# Step 1: Fit the fixed-effect model
english_MA2 <- metafor::rma(yi = SMD, vi = vSMD, data = english, test = "t", method = "FE")

# Step 2: Correct for dependency
english_MA2_1 <- metafor::robust(english_MA2, cluster = english$StudyNo, clubSandwich = TRUE)

# Step 3: Test a modified Egger's regression. We need the intercept, following Nakagawa et al. 2022
english_MA4 <- metafor::rma.mv(yi = SMD, V = vSMD, mods = ~ vSMD,
                               random = list(~ 1 | StudyNo, ~ 1 | EffectID),
                               test = "t", data = english)
We can then plot the results using the pub_bias_plot() function. This function takes the output of orchard_plot() and adds the bias-corrected mean estimates (Fig. 26).
plot <- orchard_plot(english_MA1, group = "StudyNo", xlab = "Standardized Mean Difference")
plot2 <- pub_bias_plot(plot, english_MA2_1, english_MA4)

plot2