==Challenges==
A meta-analysis of several small studies does not always predict the results of a single large study.<ref>{{cite journal | vauthors = LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F | title = Discrepancies between meta-analyses and subsequent large randomized, controlled trials | journal = The New England Journal of Medicine | volume = 337 | issue = 8 | pages = 536–542 | date = August 1997 | pmid = 9262498 | doi = 10.1056/NEJM199708213370806 | doi-access = free }}</ref> Some have argued that a weakness of the method is that it does not control for sources of bias: a good meta-analysis cannot correct for poor design or bias in the original studies.<ref name=Slavin>{{cite journal | vauthors = Slavin RE | title = Best-Evidence Synthesis: An Alternative to Meta-Analytic and Traditional Reviews | journal = Educational Researcher | volume = 15 | issue = 9 | pages = 5–9 | year = 1986 | doi = 10.3102/0013189X015009005| s2cid = 146457142 }}</ref> This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'.<ref name=Slavin/> Other meta-analysts would include weaker studies and add a study-level predictor variable that reflects the methodological quality of the studies, in order to examine the effect of study quality on the effect size.<ref>{{cite book | vauthors = Hunter JE, Schmidt FL, Jackson GB | collaboration = American Psychological Association. Division of Industrial-Organizational Psychology |title=Meta-analysis: cumulating research findings across studies |date=1982 |publisher=Sage |location=Beverly Hills, California |isbn=978-0-8039-1864-1}}</ref> However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.<ref>{{cite book | vauthors = Glass GV, McGaw B, Smith ML|title=Meta-analysis in social research |date=1981 |publisher=Sage Publications |location=Beverly Hills, California |isbn=978-0-8039-1633-3}}</ref> More recently, and under the influence of a push for open practices in science, tools have been developed to create "crowd-sourced" living meta-analyses that are updated by communities of scientists,<ref>{{Cite journal |last1=Wolf |first1=Vinzent |last2=Kühnel |first2=Anne |last3=Teckentrup |first3=Vanessa |last4=Koenig |first4=Julian |last5=Kroemer |first5=Nils B. |date=2021 |title=Does transcutaneous auricular vagus nerve stimulation affect vagally mediated heart rate variability? A living and interactive Bayesian meta-analysis |journal=Psychophysiology |language=en |volume=58 |issue=11 |pages=e13933 |doi=10.1111/psyp.13933 |pmid=34473846 |issn=0048-5772|doi-access=free }}</ref><ref>{{Cite journal |last1=Allbritton |first1=David |last2=Gómez |first2=Pablo |last3=Angele |first3=Bernhard |last4=Vasilev |first4=Martin |last5=Perea |first5=Manuel |date=2024-07-22 |title=Breathing Life Into Meta-Analytic Methods |journal=Journal of Cognition |language=en |volume=7 |issue=1 |page=61 |doi=10.5334/joc.389 |issn=2514-4820 |pmc=11276543 |pmid=39072210 |doi-access=free}}</ref> in hopes of making all the subjective choices more explicit.
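As an illustrative sketch of the quality-moderator approach described above (the data here are hypothetical; in practice dedicated meta-regression software such as the R package ''metafor'' is used), effect sizes can be regressed on study quality scores with inverse-variance weights:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical per-study inputs: effect sizes, their variances,
# and a methodological quality score for each study.
effect = np.array([0.45, 0.30, 0.60, 0.12, 0.25])
variance = np.array([0.040, 0.055, 0.090, 0.020, 0.035])
quality = np.array([3.0, 4.0, 2.0, 5.0, 4.0])  # e.g. 1 (weak) to 5 (strong)

# Weighted least squares: regress effect size on quality,
# weighting each study by the inverse of its variance.
w = 1.0 / variance
X = np.column_stack([np.ones_like(quality), quality])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * effect))

print(f"intercept = {beta[0]:.3f}, quality slope = {beta[1]:.3f}")
# A slope clearly different from zero suggests that reported
# effect sizes vary with methodological quality.
</syntaxhighlight>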
===Publication bias: the file drawer problem===
[[File:Example of a symmetrical funnel plot created with MetaXL Sept 2015.jpg|thumb|right|A funnel plot expected without the file drawer problem. The largest studies converge at the tip while smaller studies show more or less symmetrical scatter at the base.]]
[[File:Funnel plot depicting asymmetry Sept 2015.jpg|thumb|right|A funnel plot expected with the file drawer problem. The largest studies still cluster around the tip, but the bias against publishing negative studies has caused the smaller studies as a whole to show an unjustifiably favorable result for the hypothesis.]]
Another potential pitfall is the reliance on the available body of published studies, which may create exaggerated outcomes due to [[publication bias]],<ref>{{Cite journal |last=Wagner |first=John A |date=2022-09-03 |title=The influence of unpublished studies on results of recent meta-analyses: publication bias, the file drawer problem, and implications for the replication crisis |url=https://www.tandfonline.com/doi/full/10.1080/13645579.2021.1922805 |journal=International Journal of Social Research Methodology |language=en |volume=25 |issue=5 |pages=639–644 |doi=10.1080/13645579.2021.1922805 |issn=1364-5579}}</ref> as studies which show [[Null result|negative results]] or [[statistically insignificant|insignificant]] results are less likely to be published.<ref>{{Cite journal | vauthors = Polanin JR, Tanner-Smith EE, Hennessy EA |date=2016 |title=Estimating the Difference Between Published and Unpublished Effect Sizes: A Meta-Review |url=http://journals.sagepub.com/doi/10.3102/0034654315582067 |journal=Review of Educational Research |language=en |volume=86 |issue=1 |pages=207–236 |doi=10.3102/0034654315582067 |s2cid=145513046 |issn=0034-6543}}</ref> For example, pharmaceutical companies have been known to hide negative studies,<ref>{{Cite journal |last1=Nassir Ghaemi |first1=S. |last2=Shirzadi |first2=Arshia A. |last3=Filkowski |first3=Megan |date=2008-09-10 |title=Publication Bias and the Pharmaceutical Industry: The Case of Lamotrigine in Bipolar Disorder |journal=The Medscape Journal of Medicine |volume=10 |issue=9 |pages=211 |issn=1934-1997 |pmc=2580079 |pmid=19008973}}</ref> and researchers may have overlooked unpublished studies such as dissertation studies or conference abstracts that did not reach publication.<ref>{{Cite journal |last1=Martin |first1=José Luis R. |last2=Pérez |first2=Víctor |last3=Sacristán |first3=Montse |last4=Álvarez |first4=Enric |date=2005 |title=Is grey literature essential for a better control of publication bias in psychiatry? An example from three meta-analyses of schizophrenia |url=https://www.cambridge.org/core/product/identifier/S0924933800066967/type/journal_article |journal=European Psychiatry |language=en |volume=20 |issue=8 |pages=550–553 |doi=10.1016/j.eurpsy.2005.03.011 |pmid=15994063 |issn=0924-9338}}</ref> This is not easily solved, as one cannot know how many studies have gone unreported.<ref name=Rosenthal1979>{{cite journal |doi=10.1037/0033-2909.86.3.638 |year=1979 | vauthors = Rosenthal R |author-link=Robert Rosenthal (psychologist) |title=The "File Drawer Problem" and the Tolerance for Null Results |journal=[[Psychological Bulletin]] |volume=86 |issue=3 |pages=638–641|s2cid=36070395 }}</ref><ref name="Publication bias in research synthe" /> This [[file drawer problem]], characterized by negative or non-significant results being tucked away in a cabinet, can result in a biased distribution of effect sizes, creating a serious [[base rate fallacy]] in which the significance of the published studies is overestimated because other studies were either not submitted for publication or were rejected.
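Rosenthal's "fail-safe N" (from the 1979 paper cited above) gives one rough gauge of this problem: it estimates how many unpublished null studies would have to sit in file drawers to bring a significant combined result down to non-significance. For <math>k</math> published studies with standard normal deviates <math>Z_i</math>, the fail-safe number at a one-tailed <math>\alpha = 0.05</math> is

<math display="block">N_{fs} = \frac{\left(\sum_{i=1}^{k} Z_i\right)^2}{2.706} - k,</math>

where <math>2.706 = 1.645^2</math> is the squared critical value. A fail-safe number that is large relative to <math>k</math> suggests robustness to the file drawer problem, although the method assumes that the unreported studies average to a null effect.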
Such bias should be seriously considered when interpreting the outcomes of a meta-analysis.<ref name=Rosenthal1979/><ref name=Hunter&Schmidt1990>{{Cite book|year=1990 | vauthors = Hunter JE, Schmidt FL |author-link1=John E. Hunter |author-link2=Frank L. Schmidt |title=Methods of Meta-Analysis: Correcting Error and Bias in Research Findings |place=Newbury Park, California; London; New Delhi |publisher=[[SAGE Publications]]}}</ref> The distribution of effect sizes can be visualized with a [[funnel plot]], which (in its most common version) is a scatter plot of standard error versus effect size.<ref>{{Cite journal |last1=Nakagawa |first1=Shinichi |last2=Lagisz |first2=Malgorzata |last3=Jennions |first3=Michael D. |last4=Koricheva |first4=Julia |last5=Noble |first5=Daniel W. A. |last6=Parker |first6=Timothy H. |last7=Sánchez-Tójar |first7=Alfredo |last8=Yang |first8=Yefeng |last9=O'Dea |first9=Rose E. |date=2022 |title=Methods for testing publication bias in ecological and evolutionary meta-analyses |url=https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13724 |journal=Methods in Ecology and Evolution |language=en |volume=13 |issue=1 |pages=4–21 |doi=10.1111/2041-210X.13724 |bibcode=2022MEcEv..13....4N |hdl=1885/294436 |s2cid=241159497 |issn=2041-210X|hdl-access=free }}</ref> It makes use of the fact that smaller studies (with larger standard errors) show more scatter in the magnitude of effect (being less precise), while larger studies show less scatter and form the tip of the funnel. If many negative studies were not published, the remaining positive studies give rise to a funnel plot in which the base is skewed to one side (asymmetry of the funnel plot). In contrast, when there is no publication bias, the effects of the smaller studies have no reason to be skewed to one side, and so a symmetric funnel plot results. This also means that if no publication bias is present, there would be no relationship between standard error and effect size.<ref>{{cite book | vauthors = Light RJ, Pillemer DB |title=Summing up: the science of reviewing research |date=1984 |publisher=Harvard University Press |location=Cambridge, Massachusetts |isbn=978-0-674-85431-4 |url=https://archive.org/details/summingupscience00ligh }}</ref> A negative or positive relation between standard error and effect size would imply that smaller studies that found effects in one direction only were more likely to be published and/or submitted for publication.
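The funnel plot's logic can be illustrated by simulating many studies around one true effect and then censoring non-significant results (an illustrative sketch only; the censoring rule below is a crude stand-in for real publication decisions):

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
true_effect, n_studies = 0.2, 400

# Smaller studies have larger standard errors.
se = rng.uniform(0.02, 0.5, n_studies)
effect = rng.normal(true_effect, se)  # each estimate scatters by its SE

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
for ax, biased in zip(axes, (False, True)):
    keep = np.ones(n_studies, dtype=bool)
    if biased:
        # Crude publication rule: keep significantly positive studies
        # (z > 1.96) and only a random 20% of the rest.
        keep = (effect / se > 1.96) | (rng.random(n_studies) < 0.2)
    ax.scatter(effect[keep], se[keep], s=8)
    ax.set_title("file drawer bias" if biased else "no bias")
    ax.set_xlabel("effect size")
axes[0].set_ylabel("standard error")
axes[0].invert_yaxis()  # precise studies at the top, as is conventional
plt.show()
</syntaxhighlight>

In the censored panel, the base of the funnel is visibly skewed toward positive effects, the asymmetry described above.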
Apart from visual inspection of the funnel plot, statistical methods for detecting publication bias have also been proposed.<ref name="Publication bias in research synthe">{{cite journal | vauthors = Vevea JL, Woods CM | title = Publication bias in research synthesis: sensitivity analysis using a priori weight functions | journal = Psychological Methods | volume = 10 | issue = 4 | pages = 428–443 | date = December 2005 | pmid = 16392998 | doi = 10.1037/1082-989X.10.4.428 }}</ref> These are controversial because they typically have low power for detecting bias, but may also produce false positives under some circumstances.<ref name="Ioannidis and Trikalinos">{{cite journal | vauthors = Ioannidis JP, Trikalinos TA | title = The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey | journal = CMAJ | volume = 176 | issue = 8 | pages = 1091–1096 | date = April 2007 | pmid = 17420491 | pmc = 1839799 | doi = 10.1503/cmaj.060410 }}</ref> For instance, small-study effects (bias in the smaller studies), wherein methodological differences between smaller and larger studies exist, may cause asymmetry in effect sizes that resembles publication bias. However, small-study effects may be just as problematic for the interpretation of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources of bias.<ref>{{Cite journal | vauthors = Hedges LV, Vevea JL |date=1996 |title=Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model |url=http://journals.sagepub.com/doi/10.3102/10769986021004299 |journal=Journal of Educational and Behavioral Statistics |language=en |volume=21 |issue=4 |pages=299–332 |doi=10.3102/10769986021004299 |s2cid=123680599 |issn=1076-9986}}</ref> The problem of publication bias is not trivial: it has been suggested that 25% of meta-analyses in the psychological sciences may have suffered from publication bias.<ref name="Ferguson and Brannick">{{cite journal |vauthors=Ferguson CJ, Brannick MT |date=March 2012 |title=Publication bias in psychological science: prevalence, methods for identifying and controlling, and implications for the use of meta-analyses |journal=Psychological Methods |volume=17 |issue=1 |pages=120–128 |doi=10.1037/a0024445 |pmid=21787082}}</ref> However, the low power of existing tests and problems with the visual appearance of the funnel plot remain an issue, and estimates of publication bias may remain lower than what truly exists.
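One well-known asymmetry test of this kind is Egger's regression test, which regresses the standardized effect (effect divided by its standard error) on precision (the reciprocal of the standard error) and checks whether the intercept differs from zero. A minimal sketch with hypothetical summary data follows; production analyses use dedicated packages such as ''metafor'':

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def egger_test(effect, se):
    """Egger's regression: regress standardized effect (effect/se)
    on precision (1/se); an intercept far from zero signals
    funnel plot asymmetry."""
    y, x = effect / se, 1.0 / se
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    k = len(y)
    sigma2 = resid @ resid / (k - 2)
    se_intercept = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t = beta[0] / se_intercept
    p = 2 * stats.t.sf(abs(t), df=k - 2)  # two-sided p-value
    return beta[0], p

# Hypothetical study-level summary data.
effect = np.array([0.51, 0.40, 0.62, 0.15, 0.22, 0.48, 0.33])
se = np.array([0.29, 0.22, 0.35, 0.08, 0.11, 0.26, 0.18])
intercept, p = egger_test(effect, se)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
</syntaxhighlight>

As the surveys cited above note, such tests have low power in small meta-analyses, so a non-significant result does not rule out publication bias.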
Most discussions of publication bias focus on journal practices favoring the publication of statistically significant findings. However, questionable research practices, such as reworking statistical models until significance is achieved, may also favor statistically significant findings in support of researchers' hypotheses.<ref name=Simmons>{{cite journal | vauthors = Simmons JP, Nelson LD, Simonsohn U | title = False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant | journal = Psychological Science | volume = 22 | issue = 11 | pages = 1359–1366 | date = November 2011 | pmid = 22006061 | doi = 10.1177/0956797611417632 | doi-access = free }}</ref><ref name=LeBel>{{Cite journal |year=2011 | vauthors = LeBel E, Peters K |title=Fearing the future of empirical psychology: Bem's (2011) evidence of psi as a case study of deficiencies in modal research practice |journal=[[Review of General Psychology]] |volume=15 |issue=4 |pages=371–379 |url=http://publish.uwo.ca/~elebel/documents/l&p(2011,rgp).pdf |doi=10.1037/a0025172 |s2cid=51686730 |url-status=dead |archive-url=https://web.archive.org/web/20121124154834/http://publish.uwo.ca/~elebel/documents/l%26p%282011%2Crgp%29.pdf |archive-date=24 November 2012}}</ref>

===Problems related to studies not reporting non-statistically significant effects===
Studies often do not report the effects when they do not reach statistical significance.<ref>{{Cite journal |last1=Schober |first1=Patrick |last2=Bossers |first2=Sebastiaan M. |last3=Schwarte |first3=Lothar A. |date=2018 |title=Statistical Significance Versus Clinical Importance of Observed Effect Sizes: What Do P Values and Confidence Intervals Really Represent? |journal=Anesthesia & Analgesia |language=en |volume=126 |issue=3 |pages=1068–1072 |doi=10.1213/ANE.0000000000002798 |issn=0003-2999 |pmc=5811238 |pmid=29337724}}</ref> For example, they may simply state that the groups did not show statistically significant differences, without reporting any other information (e.g. a statistic or p-value).<ref>{{Cite journal |last1=Gates |first1=Simon |last2=Ealing |first2=Elizabeth |date=2019 |title=Reporting and interpretation of results from clinical trials that did not claim a treatment difference: survey of four general medical journals |journal=BMJ Open |language=en |volume=9 |issue=9 |pages=e024785 |doi=10.1136/bmjopen-2018-024785 |issn=2044-6055 |pmc=6738699 |pmid=31501094}}</ref> Excluding these studies would lead to a situation similar to publication bias, but including them (assuming null effects) would also bias the meta-analysis.

===Problems related to the statistical approach===
Another weakness is that it has not been determined whether the statistically most accurate method for combining results is the fixed, IVhet, random, or quality effect model, though criticism of the random effects model is mounting because of the perception that the new random effects (used in meta-analysis) are essentially formal devices to facilitate smoothing or shrinkage, and that prediction may be impossible or ill-advised.<ref>{{cite web | vauthors = Hodges JS, Clayton MK | title = Random effects old and new | date = February 2011 | pages = 1–23 | citeseerx = 10.1.1.225.2685 }}</ref> The main problem with the random effects approach is that it uses the classic statistical idea of a "compromise estimator", whose weights are close to those of the naturally weighted estimator if [[study heterogeneity|heterogeneity]] across studies is large, but close to those of the inverse-variance weighted estimator if the between-study heterogeneity is small.
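In the standard random effects formulation, each study estimate <math>\hat\theta_i</math> with within-study variance <math>v_i</math> receives the weight

<math display="block">w_i = \frac{1}{v_i + \hat\tau^2},</math>

where <math>\hat\tau^2</math> is the estimated between-study variance. When <math>\hat\tau^2 = 0</math> these reduce to the inverse-variance (fixed effect) weights, while as <math>\hat\tau^2</math> grows large relative to the <math>v_i</math> the weights become nearly equal, approaching the naturally weighted (unweighted average) estimator described above.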
However, what has been ignored is the distinction between the model ''we choose'' to analyze a given dataset and the ''mechanism by which the data came into being''.<ref name="ReferenceB">{{cite book | vauthors = Hodges JS | chapter = Random effects old and new. |title=Richly parameterized linear models: additive, time series, and spatial models using random effects |date=2014 |publisher=CRC Press |location=Boca Raton |isbn=978-1-4398-6683-2 |pages=285–302}}</ref> A random effect can be present in either of these roles, but the two roles are quite distinct. There is no reason to think the analysis model and the data-generation mechanism (model) are similar in form, but many sub-fields of statistics have developed the habit of assuming, for theory and simulations, that the data-generation mechanism (model) is identical to the analysis model we choose (or would like others to choose). As a hypothesized mechanism for producing the data, the random effect model for meta-analysis is silly; it is more appropriate to think of this model as a superficial description, something we choose as an analytical tool. But this choice may not work for meta-analysis, because the study effects are a fixed feature of the respective meta-analysis and the probability distribution is only a descriptive tool.<ref name="ReferenceB"/>

===Problems arising from agenda-driven bias===
The most severe fault in meta-analysis often occurs when the person or persons doing the meta-analysis have an [[economic]], [[Social issues|social]], or [[political]] agenda, such as the passage or defeat of [[legislation]].<ref>{{Cite news |title=Research into trans medicine has been manipulated |url=https://www.economist.com/united-states/2024/06/27/research-into-trans-medicine-has-been-manipulated |access-date=2024-09-28 |newspaper=The Economist |issn=0013-0613}}</ref> People with these types of agendas may be more likely to abuse meta-analysis due to personal [[bias]]. For example, researchers favorable to the author's agenda are likely to have their studies [[Cherry picking (fallacy)|cherry-picked]] while those not favorable will be ignored or labeled as "not credible". In addition, the favored authors may themselves be biased or paid to produce results that support their overall political, social, or economic goals, for instance by selecting small favorable data sets and not incorporating larger unfavorable data sets. The influence of such biases on the results of a meta-analysis is possible because the methodology of meta-analysis is highly malleable.<ref name="Stegenga">{{cite journal | vauthors = Stegenga J | title = Is meta-analysis the platinum standard of evidence? | journal = Studies in History and Philosophy of Biological and Biomedical Sciences | volume = 42 | issue = 4 | pages = 497–507 | date = December 2011 | pmid = 22035723 | doi = 10.1016/j.shpsc.2011.07.003 | url = https://philpapers.org/rec/STEIMT | author-link = Stegenga J }}</ref> A 2011 study done to disclose possible conflicts of interests in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and three from the [[Cochrane Database of Systematic Reviews]]. The 29 meta-analyses reviewed a total of 509 [[randomized controlled trials]] (RCTs).
Of these, 318 RCTs reported funding sources, with 219 (69%) receiving funding from industry. Of the 509 RCTs, 132 reported author conflict of interest disclosures, with 91 studies (69%) disclosing one or more authors having financial ties to the pharmaceutical industry. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources, and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised."<ref>{{citation | title=Reporting of Conflicts of Interest in Meta-analyses of Trials of Pharmacological Treatments | journal = Journal of the American Medical Association | vauthors = Roseman M, Milette K, Bero LA, Coyne JC, Lexchin J, Turner EH, Thombs BD | volume = 305 | issue = 10 | pages = 1008–1017 | year = 2011 | doi = 10.1001/jama.2011.257 | pmid = 21386079 | url = https://www.rug.nl/research/portal/en/publications/reporting-of-conflicts-of-interest-in-metaanalyses-of-trials-of-pharmacological-treatments(d4a95ee2-429f-45a4-a917-d794ee954797).html | hdl = 11370/d4a95ee2-429f-45a4-a917-d794ee954797 | s2cid = 11270323 | hdl-access = free }}</ref> For example, in 1998, a US federal judge found that the United States [[United States Environmental Protection Agency|Environmental Protection Agency]] had abused the meta-analysis process to produce a study claiming cancer risks to non-smokers from environmental tobacco smoke (ETS), with the intent to influence policy makers to pass smoke-free workplace laws.<ref>{{Cite journal |last=Spink |first=Paul |date=1999 |title=Challenging Environmental Tobacco Smoke in the Workplace |url=https://journals.sagepub.com/doi/10.1177/146145299900100402 |journal=Environmental Law Review |language=en |volume=1 |issue=4 |pages=243–265 |doi=10.1177/146145299900100402 |bibcode=1999EnvLR...1..243S |issn=1461-4529}}</ref><ref>{{Cite web |last=Will |first=George |date=1998-07-30 |title=Polluted by the anti-tobacco crusade |url=https://www.tampabay.com/archive/1998/07/30/polluted-by-the-anti-tobacco-crusade/ |access-date=2024-09-28 |website=Tampa Bay Times |language=en}}</ref><ref>{{Cite journal |last1=Nelson |first1=Jon P. |last2=Kennedy |first2=Peter E. |date=2009 |title=The Use (and Abuse) of Meta-Analysis in Environmental and Natural Resource Economics: An Assessment |url=https://link.springer.com/article/10.1007/s10640-008-9253-5 |journal=Environmental and Resource Economics |language=en |volume=42 |issue=3 |pages=345–377 |doi=10.1007/s10640-008-9253-5 |bibcode=2009EnREc..42..345N |issn=0924-6460}}</ref>
===Comparability and validity of included studies===
Meta-analysis may often not be a substitute for an adequately powered primary study, particularly in the biological sciences.<ref>{{cite journal | vauthors = Munafò MR, Flint J | title = Meta-analysis of genetic association studies | journal = Trends in Genetics | volume = 20 | issue = 9 | pages = 439–444 | date = September 2004 | pmid = 15313553 | doi = 10.1016/j.tig.2004.06.014 }}</ref> Heterogeneity of methods used may lead to faulty conclusions.<ref>{{cite journal | vauthors = Stone DL, Rosopa PJ |title=The Advantages and Limitations of Using Meta-analysis in Human Resource Management Research |journal=Human Resource Management Review |date=1 March 2017 |volume=27 |issue=1 |pages=1–7 |doi=10.1016/j.hrmr.2016.09.001 |language=en |issn=1053-4822}}</ref> For instance, differences in the forms of an intervention or in the cohorts that are thought to be minor, or that are unknown to the scientists, could lead to substantially different results, including results that distort the meta-analysis' outcome or are not adequately considered in its data. Conversely, results from meta-analyses may also make certain hypotheses or interventions seem nonviable and preempt further research or approvals, despite certain modifications – such as intermittent administration, [[personalized medicine|personalized criteria]] and [[combination therapy|combination measures]] – leading to substantially different results, including in cases where such modifications have been successfully identified and applied in small-scale studies that were considered in the meta-analysis.{{citation needed|date=January 2022}} [[Standardization]], [[Reproducibility|reproduction of experiments]], [[open science|open data and open protocols]] may often not mitigate such problems, for instance because relevant factors and criteria could be unknown or not recorded.{{citation needed|date=January 2022}}

There is a debate about the appropriate balance between testing with as few animals or humans as possible and the need to obtain robust, reliable findings. It has been argued that unreliable research is inefficient and wasteful, and that studies are not just wasteful when they stop too late but also when they stop too early. In large clinical trials, planned, sequential analyses are sometimes used if there is considerable expense or potential harm associated with testing participants.<ref>{{cite journal | vauthors = Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, Munafò MR | title = Power failure: why small sample size undermines the reliability of neuroscience | journal = Nature Reviews Neuroscience | volume = 14 | issue = 5 | pages = 365–376 | date = May 2013 | pmid = 23571845 | doi = 10.1038/nrn3475 | s2cid = 455476 | doi-access = free }}</ref>
In [[applied science|applied]] behavioural science, "megastudies" have been proposed to investigate the efficacy of many different interventions designed in an interdisciplinary manner by separate teams.<ref name="10.1038/s41586-021-04128-4">{{cite journal | vauthors = Milkman KL, Gromet D, Ho H, Kay JS, Lee TW, Pandiloski P, Park Y, Rai A, Bazerman M, Beshears J, Bonacorsi L, Camerer C, Chang E, Chapman G, Cialdini R, Dai H, Eskreis-Winkler L, Fishbach A, Gross JJ, Horn S, Hubbard A, Jones SJ, Karlan D, Kautz T, Kirgios E, Klusowski J, Kristal A, Ladhania R, Loewenstein G, Ludwig J, Mellers B, Mullainathan S, Saccardo S, Spiess J, Suri G, Talloen JH, Taxer J, Trope Y, Ungar L, Volpp KG, Whillans A, Zinman J, Duckworth AL | display-authors = 6 | title = Megastudies improve the impact of applied behavioural science | journal = Nature | volume = 600 | issue = 7889 | pages = 478–483 | date = December 2021 | pmid = 34880497 | doi = 10.1038/s41586-021-04128-4 | pmc = 8822539 | s2cid = 245047340 | bibcode = 2021Natur.600..478M | author40-link = Kevin Volpp }}</ref> One such study used a fitness chain to recruit a large number of participants. It has been suggested that behavioural interventions are often hard to compare [in meta-analyses and reviews], as "different scientists test different intervention ideas in different samples using different outcomes over different time intervals", causing a lack of comparability of such individual investigations, which limits "their potential to inform [[policy]]".<ref name="10.1038/s41586-021-04128-4"/>

===Weak inclusion standards lead to misleading conclusions===
Meta-analyses in education are often not restrictive enough with regard to the methodological quality of the studies they include. For example, the inclusion of studies with small samples or researcher-made measures leads to inflated effect size estimates.<ref>{{Cite journal| vauthors = Cheung AC, Slavin RE |date=2016-06-01|title=How Methodological Features Affect Effect Sizes in Education |journal=Educational Researcher|language=en|volume=45|issue=5|pages=283–292|doi=10.3102/0013189X16656615|s2cid=148531062|issn=0013-189X}}</ref> However, this problem also troubles meta-analyses of clinical trials. The use of different quality assessment tools (QATs) leads to including different studies and obtaining conflicting estimates of average treatment effects.<ref>{{cite journal | vauthors = Jüni P, Witschi A, Bloch R, Egger M | title = The hazards of scoring the quality of clinical trials for meta-analysis | journal = JAMA | volume = 282 | issue = 11 | pages = 1054–1060 | date = September 1999 | pmid = 10493204 | doi = 10.1001/jama.282.11.1054 | doi-access = free }}</ref><ref>{{cite journal | vauthors = Armijo-Olivo S, Fuentes J, Ospina M, Saltaji H, Hartling L | title = Inconsistency in the items included in tools used in general health research and physical therapy to evaluate the methodological quality of randomized controlled trials: a descriptive analysis | journal = BMC Medical Research Methodology | volume = 13 | issue = 1 | pages = 116 | date = September 2013 | pmid = 24044807 | pmc = 3848693 | doi = 10.1186/1471-2288-13-116 | doi-access = free }}</ref>
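How strongly inclusion decisions can move a pooled estimate is easy to demonstrate with a toy fixed-effect computation (hypothetical numbers; the quality scores stand in for the output of a quality assessment tool):

<syntaxhighlight lang="python">
import numpy as np

def pooled_fixed_effect(effect, variance):
    """Inverse-variance weighted (fixed effect) pooled estimate."""
    w = 1.0 / variance
    return np.sum(w * effect) / np.sum(w)

# Hypothetical studies: the small, low-quality studies report larger effects.
effect = np.array([0.80, 0.65, 0.70, 0.15, 0.10])
variance = np.array([0.20, 0.15, 0.18, 0.01, 0.02])
quality = np.array([2, 2, 3, 5, 5])  # 1 (weak) to 5 (strong)

for threshold in (1, 3, 4):
    keep = quality >= threshold
    est = pooled_fixed_effect(effect[keep], variance[keep])
    print(f"quality >= {threshold}: k = {keep.sum()}, pooled = {est:.2f}")
</syntaxhighlight>

Different quality thresholds admit different study sets and therefore yield different pooled estimates, the conflict described above.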