Exploring consequences of simulation design for apparent performance of methods of meta-analysis

Elena Kulinskaya, David C. Hoaglin, Ilyas Bakbergenuly

Research output: Contribution to journal › Article › peer-review


Abstract

Contemporary statistical publications rely on simulation to evaluate the performance of new methods and compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we find differences among data-generation models in estimation of the heterogeneity variance and the overall log-odds-ratio. This sensitivity to design poses challenges for the use of simulation in choosing methods of meta-analysis.
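To make the data-generation setup described in the abstract concrete, the following Python sketch simulates one random-effects meta-analysis of log-odds-ratios and pools the studies with inverse-variance weights. It assumes the customary normal distribution of study-level effects mentioned above; the number of studies, the uniform distribution for control-arm probabilities, the fixed per-arm sample size, and the 0.5 continuity correction are illustrative assumptions, not the paper's exact simulation settings or its recommended estimator.

```python
import numpy as np

rng = np.random.default_rng(2021)


def simulate_meta(K=10, theta=0.5, tau2=0.1, n=100, p_ctrl_low=0.1, p_ctrl_high=0.5):
    """Generate one simulated meta-analysis of K two-arm studies with binary outcomes.

    Study-level true log-odds-ratios are drawn from N(theta, tau2), the customary
    normal random-effects model; the control-arm probability range and the common
    per-arm sample size n are illustrative choices, not the paper's settings.
    """
    theta_i = rng.normal(theta, np.sqrt(tau2), size=K)      # true study effects
    p_c = rng.uniform(p_ctrl_low, p_ctrl_high, size=K)      # control-arm probabilities
    odds_t = p_c / (1 - p_c) * np.exp(theta_i)              # treatment-arm odds
    p_t = odds_t / (1 + odds_t)
    x_c = rng.binomial(n, p_c)                              # control-arm event counts
    x_t = rng.binomial(n, p_t)                              # treatment-arm event counts
    return x_t, x_c, n


def inverse_variance_pooled(x_t, x_c, n):
    """Two-stage pooling of study log-odds-ratios with inverse-variance weights.

    Uses a 0.5 continuity correction and large-sample variances; a full random-effects
    analysis would also estimate tau^2 (e.g., DerSimonian-Laird) before weighting.
    """
    a, b = x_t + 0.5, n - x_t + 0.5
    c, d = x_c + 0.5, n - x_c + 0.5
    lor = np.log(a * d / (b * c))                           # study log-odds-ratio estimates
    w = 1 / (1 / a + 1 / b + 1 / c + 1 / d)                 # inverse of large-sample variance
    return np.sum(w * lor) / np.sum(w)


x_t, x_c, n = simulate_meta()
print(f"pooled log-odds-ratio estimate: {inverse_variance_pooled(x_t, x_c, n):.3f}")
```

Repeating this over many replications, and varying the overall log-odds-ratio, the control-arm probability distribution, and fixed versus random sample sizes, is the kind of design whose consequences the paper examines.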
Original language: English
Pages (from-to): 1667-1690
Number of pages: 24
Journal: Statistical Methods in Medical Research
Volume: 30
Issue number: 7
Early online date: 10 Jun 2021
DOIs
Publication status: Published - 1 Jul 2021