===Comparability and validity of included studies===
Meta-analysis may often not be a substitute for an adequately powered primary study, particularly in the biological sciences.<ref>{{cite journal | vauthors = Munafò MR, Flint J | title = Meta-analysis of genetic association studies | journal = Trends in Genetics | volume = 20 | issue = 9 | pages = 439–444 | date = September 2004 | pmid = 15313553 | doi = 10.1016/j.tig.2004.06.014 }}</ref> Heterogeneity of methods used may lead to faulty conclusions.<ref>{{cite journal | vauthors = Stone DL, Rosopa PJ |title=The Advantages and Limitations of Using Meta-analysis in Human Resource Management Research |journal=Human Resource Management Review |date=1 March 2017 |volume=27 |issue=1 |pages=1–7 |doi=10.1016/j.hrmr.2016.09.001 |language=en |issn=1053-4822}}</ref> For instance, differences in the forms of an intervention or in the cohorts that are thought to be minor, or that are unknown to the scientists, could lead to substantially different outcomes that distort the meta-analysis's results or are not adequately reflected in its data.
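Between-study heterogeneity of this kind is commonly quantified with Cochran's ''Q'' and Higgins' ''I''² statistics. The sketch below (with hypothetical effect sizes and standard errors, using inverse-variance fixed-effect weights) illustrates the computation; it is an illustrative example, not part of the cited studies:

```python
# Illustrative sketch: quantifying between-study heterogeneity with
# Cochran's Q and Higgins' I^2 under an inverse-variance fixed-effect model.
# The effect estimates and standard errors below are made-up example data.

def heterogeneity(effects, std_errors):
    """Return (Q, I^2 in percent) for a set of study effect estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]                 # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0            # Higgins' I^2 (%)
    return q, i2

# Five hypothetical studies reporting the same intervention effect
effects = [0.30, 0.45, 0.10, 0.60, 0.25]
ses = [0.10, 0.12, 0.15, 0.11, 0.09]
q, i2 = heterogeneity(effects, ses)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

An ''I''² above roughly 50% is conventionally read as substantial heterogeneity, signalling that the pooled estimate may be averaging over genuinely different effects.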
Conversely, results from meta-analyses may also make certain hypotheses or interventions seem nonviable and preempt further research or approvals, despite certain modifications – such as intermittent administration, [[personalized medicine|personalized criteria]] and [[combination therapy|combination measures]] – leading to substantially different results, even in cases where such modifications have been successfully identified and applied in small-scale studies that were considered in the meta-analysis.{{citation needed|date=January 2022}} [[Standardization]], [[Reproducibility|reproduction of experiments]], [[open science|open data and open protocols]] may often not mitigate such problems, for instance because relevant factors and criteria could be unknown or unrecorded.{{citation needed|date=January 2022}} There is a debate about the appropriate balance between testing with as few animals or humans as possible and the need to obtain robust, reliable findings. It has been argued that unreliable research is inefficient and wasteful and that studies are not just wasteful when they stop too late but also when they stop too early. In large clinical trials, planned, sequential analyses are sometimes used if there is considerable expense or potential harm associated with testing participants.<ref>{{cite journal | vauthors = Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, Munafò MR | title = Power failure: why small sample size undermines the reliability of neuroscience | journal = Nature Reviews.
Neuroscience | volume = 14 | issue = 5 | pages = 365–376 | date = May 2013 | pmid = 23571845 | doi = 10.1038/nrn3475 | s2cid = 455476 | doi-access = free }}</ref> In [[applied science|applied]] behavioural science, "megastudies" have been proposed to investigate the efficacy of many different interventions designed in an interdisciplinary manner by separate teams.<ref name="10.1038/s41586-021-04128-4">{{cite journal | vauthors = Milkman KL, Gromet D, Ho H, Kay JS, Lee TW, Pandiloski P, Park Y, Rai A, Bazerman M, Beshears J, Bonacorsi L, Camerer C, Chang E, Chapman G, Cialdini R, Dai H, Eskreis-Winkler L, Fishbach A, Gross JJ, Horn S, Hubbard A, Jones SJ, Karlan D, Kautz T, Kirgios E, Klusowski J, Kristal A, Ladhania R, Loewenstein G, Ludwig J, Mellers B, Mullainathan S, Saccardo S, Spiess J, Suri G, Talloen JH, Taxer J, Trope Y, Ungar L, Volpp KG, Whillans A, Zinman J, Duckworth AL | display-authors = 6 | title = Megastudies improve the impact of applied behavioural science | journal = Nature | volume = 600 | issue = 7889 | pages = 478–483 | date = December 2021 | pmid = 34880497 | doi = 10.1038/s41586-021-04128-4 | pmc = 8822539 | s2cid = 245047340 | bibcode = 2021Natur.600..478M | author40-link = Kevin Volpp }}</ref> One such study used a fitness chain to recruit a large number of participants. It has been suggested that behavioural interventions are often hard to compare [in meta-analyses and reviews], as "different scientists test different intervention ideas in different samples using different outcomes over different time intervals", causing a lack of comparability of such individual investigations which limits "their potential to inform [[policy]]".<ref name="10.1038/s41586-021-04128-4"/>