===Publication bias: the file drawer problem===
[[File:Example of a symmetrical funnel plot created with MetaXL Sept 2015.jpg|thumb|right|A funnel plot expected without the file drawer problem. The largest studies converge at the tip while smaller studies show more or less symmetrical scatter at the base.]]
[[File:Funnel plot depicting asymmetry Sept 2015.jpg|thumb|right|A funnel plot expected with the file drawer problem. The largest studies still cluster around the tip, but the bias against publishing negative studies has caused the smaller studies as a whole to show an unjustifiably favorable result with respect to the hypothesis.]]
Another potential pitfall is the reliance on the available body of published studies, which may create exaggerated outcomes due to [[publication bias]],<ref>{{Cite journal |last=Wagner |first=John A |date=2022-09-03 |title=The influence of unpublished studies on results of recent meta-analyses: publication bias, the file drawer problem, and implications for the replication crisis |url=https://www.tandfonline.com/doi/full/10.1080/13645579.2021.1922805 |journal=International Journal of Social Research Methodology |language=en |volume=25 |issue=5 |pages=639–644 |doi=10.1080/13645579.2021.1922805 |issn=1364-5579}}</ref> as studies that report [[Null result|negative]] or [[statistically insignificant|insignificant]] results are less likely to be published.<ref>{{Cite journal | vauthors = Polanin JR, Tanner-Smith EE, Hennessy EA |date=2016 |title=Estimating the Difference Between Published and Unpublished Effect Sizes: A Meta-Review |url=http://journals.sagepub.com/doi/10.3102/0034654315582067 |journal=Review of Educational Research |language=en |volume=86 |issue=1 |pages=207–236 |doi=10.3102/0034654315582067 |s2cid=145513046 |issn=0034-6543}}</ref> For example, pharmaceutical companies have been known to hide negative studies<ref>{{Cite journal |last1=Nassir Ghaemi |first1=S. |last2=Shirzadi |first2=Arshia A. |last3=Filkowski |first3=Megan |date=2008-09-10 |title=Publication Bias and the Pharmaceutical Industry: The Case of Lamotrigine in Bipolar Disorder |journal=The Medscape Journal of Medicine |volume=10 |issue=9 |pages=211 |issn=1934-1997 |pmc=2580079 |pmid=19008973}}</ref> and researchers may overlook unpublished studies, such as dissertations or conference abstracts, that never reach publication.<ref>{{Cite journal |last1=Martin |first1=José Luis R. |last2=Pérez |first2=Víctor |last3=Sacristán |first3=Montse |last4=Álvarez |first4=Enric |date=2005 |title=Is grey literature essential for a better control of publication bias in psychiatry? An example from three meta-analyses of schizophrenia
|url=https://www.cambridge.org/core/product/identifier/S0924933800066967/type/journal_article |journal=European Psychiatry |language=en |volume=20 |issue=8 |pages=550–553 |doi=10.1016/j.eurpsy.2005.03.011 |pmid=15994063 |issn=0924-9338}}</ref> This is not easily solved, as one cannot know how many studies have gone unreported.<ref name=Rosenthal1979>{{cite journal |doi=10.1037/0033-2909.86.3.638 |year=1979 | vauthors = Rosenthal R |author-link=Robert Rosenthal (psychologist) |title=The "File Drawer Problem" and the Tolerance for Null Results |journal=[[Psychological Bulletin]] |volume=86 |issue=3 |pages=638–641|s2cid=36070395 }}</ref><ref name="Publication bias in research synthe" /> This [[file drawer problem]], characterized by negative or non-significant results being tucked away in a cabinet rather than published, can result in a biased distribution of effect sizes, creating a serious [[base rate fallacy]] in which the significance of the published studies is overestimated because other studies were either not submitted for publication or were rejected. This should be seriously considered when interpreting the outcomes of a meta-analysis.<ref name=Rosenthal1979/><ref name=Hunter&Schmidt1990>{{Cite book|year=1990 | vauthors = Hunter JE, Schmidt FL |author-link1=John E. Hunter |author-link2=Frank L. Schmidt |title=Methods of Meta-Analysis: Correcting Error and Bias in Research Findings |place=Newbury Park, California; London; New Delhi |publisher=[[SAGE Publications]]}}</ref>

The distribution of effect sizes can be visualized with a [[funnel plot]], which (in its most common version) is a scatter plot of standard error versus effect size.<ref>{{Cite journal |last1=Nakagawa |first1=Shinichi |last2=Lagisz |first2=Malgorzata |last3=Jennions |first3=Michael D. |last4=Koricheva |first4=Julia |last5=Noble |first5=Daniel W. A. |last6=Parker |first6=Timothy H. |last7=Sánchez-Tójar |first7=Alfredo |last8=Yang |first8=Yefeng |last9=O'Dea |first9=Rose E. |date=2022 |title=Methods for testing publication bias in ecological and evolutionary meta-analyses |url=https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13724 |journal=Methods in Ecology and Evolution |language=en |volume=13 |issue=1 |pages=4–21 |doi=10.1111/2041-210X.13724 |bibcode=2022MEcEv..13....4N |hdl=1885/294436 |s2cid=241159497 |issn=2041-210X|hdl-access=free }}</ref> It makes use of the fact that smaller studies (with larger standard errors) show more scatter in the magnitude of effect, being less precise, while larger studies show less scatter and form the tip of the funnel. If many negative studies go unpublished, the remaining positive studies give rise to a funnel plot whose base is skewed to one side (funnel plot asymmetry). In contrast, when there is no publication bias, the effects of the smaller studies have no reason to be skewed to one side, and so a roughly symmetric funnel plot results.
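
As an illustration of how such a plot is constructed, the following minimal Python sketch (assuming the NumPy and Matplotlib libraries and entirely simulated study data, not any real meta-analysis) plots each study's effect size against its standard error, with the vertical axis inverted so that the most precise studies sit at the tip of the funnel.

<syntaxhighlight lang="python">
# Minimal, illustrative funnel plot using simulated (hypothetical) study data.
# Each point is one study: effect size on the x-axis, standard error on the
# inverted y-axis, so the largest (most precise) studies form the tip.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.3                        # arbitrary common effect size
se = rng.uniform(0.05, 0.5, size=60)     # hypothetical standard errors
effects = rng.normal(true_effect, se)    # less precise studies scatter more

fig, ax = plt.subplots()
ax.scatter(effects, se)
ax.axvline(true_effect, linestyle="--")  # reference line at the simulated effect
ax.invert_yaxis()                        # small standard errors at the top
ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
ax.set_title("Symmetric funnel plot (no publication bias simulated)")
plt.show()
</syntaxhighlight>

Dropping the simulated studies with small or negative effects before plotting would reproduce the asymmetric pattern shown in the second figure above.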
An implication is that, if no publication bias is present, there should be no relationship between standard error and effect size.<ref>{{cite book | vauthors = Light RJ, Pillemer DB |title=Summing up: the science of reviewing research |date=1984 |publisher=Harvard University Press |location=Cambridge, Massachusetts |isbn=978-0-674-85431-4 |url=https://archive.org/details/summingupscience00ligh }}</ref> A negative or positive relationship between standard error and effect size would imply that smaller studies that found effects in one direction only were more likely to be published and/or submitted for publication. Apart from the visual funnel plot, statistical methods for detecting publication bias have also been proposed (one such asymmetry test is sketched at the end of this section).<ref name="Publication bias in research synthe">{{cite journal | vauthors = Vevea JL, Woods CM | title = Publication bias in research synthesis: sensitivity analysis using a priori weight functions | journal = Psychological Methods | volume = 10 | issue = 4 | pages = 428–443 | date = December 2005 | pmid = 16392998 | doi = 10.1037/1082-989X.10.4.428 }}</ref> These are controversial because they typically have low power for detecting bias but may also produce false positives under some circumstances.<ref name="Ioannidis and Trikalinos">{{cite journal | vauthors = Ioannidis JP, Trikalinos TA | title = The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey | journal = CMAJ | volume = 176 | issue = 8 | pages = 1091–1096 | date = April 2007 | pmid = 17420491 | pmc = 1839799 | doi = 10.1503/cmaj.060410 }}</ref> For instance, small-study effects (bias in smaller studies), in which methodological differences between smaller and larger studies exist, may cause asymmetry in effect sizes that resembles publication bias. However, small-study effects may be just as problematic for the interpretation of meta-analyses, and the onus is on meta-analytic authors to investigate potential sources of bias.<ref>{{Cite journal | vauthors = Hedges LV, Vevea JL |date=1996 |title=Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model |url=http://journals.sagepub.com/doi/10.3102/10769986021004299 |journal=Journal of Educational and Behavioral Statistics |language=en |volume=21 |issue=4 |pages=299–332 |doi=10.3102/10769986021004299 |s2cid=123680599 |issn=1076-9986}}</ref> The problem of publication bias is not trivial: it has been suggested that 25% of meta-analyses in the psychological sciences may have suffered from publication bias.<ref name="Ferguson and Brannick">{{cite journal |vauthors=Ferguson CJ, Brannick MT |date=March 2012 |title=Publication bias in psychological science: prevalence, methods for identifying and controlling, and implications for the use of meta-analyses |journal=Psychological Methods |volume=17 |issue=1 |pages=120–128 |doi=10.1037/a0024445 |pmid=21787082}}</ref> However, the low power of existing tests and problems with the visual appearance of the funnel plot remain an issue, and estimates of publication bias may remain lower than what truly exists. Most discussions of publication bias focus on journal practices favoring publication of statistically significant findings.
However, questionable research practices, such as reworking statistical models until significance is achieved, may also favor statistically significant findings in support of researchers' hypotheses.<ref name=Simmons>{{cite journal | vauthors = Simmons JP, Nelson LD, Simonsohn U | title = False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant | journal = Psychological Science | volume = 22 | issue = 11 | pages = 1359–1366 | date = November 2011 | pmid = 22006061 | doi = 10.1177/0956797611417632 | doi-access = free }}</ref><ref name=LeBel>{{Cite journal |year=2011 | vauthors = LeBel E, Peters K |title=Fearing the future of empirical psychology: Bem's (2011) evidence of psi as a case study of deficiencies in modal research practice |journal=[[Review of General Psychology]] |volume=15 |issue=4 |pages=371–379 |url=http://publish.uwo.ca/~elebel/documents/l&p(2011,rgp).pdf |doi=10.1037/a0025172 |s2cid=51686730 |url-status=dead |archive-url=https://web.archive.org/web/20121124154834/http://publish.uwo.ca/~elebel/documents/l%26p%282011%2Crgp%29.pdf |archive-date=24 November 2012}}</ref>
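
One widely used asymmetry test of the kind referred to above (though not specifically named in this section) is Egger's regression test, which regresses each study's standard normal deviate on its precision; an intercept far from zero suggests funnel-plot asymmetry. The following is a minimal Python sketch, assuming the statsmodels library and the same kind of simulated study data as in the earlier funnel-plot example.

<syntaxhighlight lang="python">
# Minimal sketch of Egger's regression test on simulated (hypothetical) data.
# Regress the standard normal deviate (effect / SE) on precision (1 / SE);
# the intercept estimates funnel-plot asymmetry.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.5, size=40)   # hypothetical standard errors
effects = rng.normal(0.3, se)          # hypothetical (unbiased) effect sizes

z = effects / se                       # standard normal deviates
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")
# An intercept clearly different from zero (small p-value) suggests asymmetry,
# which may reflect publication bias or small-study effects.
</syntaxhighlight>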
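Rosenthal's 1979 paper cited above also proposed a simple "fail-safe N" calculation: the number of unreported null studies that would have to sit in file drawers to bring a significant combined result down to non-significance. The sketch below is a minimal Python illustration of that calculation, assuming SciPy and entirely hypothetical one-tailed p-values.

<syntaxhighlight lang="python">
# Minimal sketch of Rosenthal's fail-safe N with hypothetical p-values.
# Combine the studies' z-scores and ask how many zero-effect studies would
# be needed for the combined result to drop to p = 0.05 (one-tailed).
import numpy as np
from scipy import stats

p_values = np.array([0.010, 0.030, 0.040, 0.002, 0.200])  # hypothetical studies
z_scores = stats.norm.isf(p_values)   # convert each p-value to a z-score
k = len(z_scores)

z_alpha = stats.norm.isf(0.05)        # about 1.645 for one-tailed alpha = 0.05
fail_safe_n = (z_scores.sum() ** 2) / (z_alpha ** 2) - k
print(f"Fail-safe N = {fail_safe_n:.0f} unreported null studies needed "
      "to bring the combined result down to p = 0.05")
</syntaxhighlight>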