=== Empirical ===

Occam's razor has gained strong empirical support in helping to converge on better theories (see the [[#Uses|Uses]] section below for examples). In the related concept of [[overfitting]], excessively complex models are affected by [[statistical noise]] (a problem also known as the [[bias–variance tradeoff]]), whereas simpler models may capture the underlying structure better and may thus have better [[predictive inference|predictive]] performance. It is, however, often difficult to determine which part of the data is noise (cf. [[model selection]], [[test set]], [[minimum description length]], [[Bayesian inference]], etc.).

==== Testing the razor ====

{{Original research section|reason=Author of this section cites very few reliable sources, and also consistently conflates simplicity with (logical) truth. Occam's razor is not built to differentiate true hypotheses from false ones.|date=January 2023}}

The razor's statement that "other things being equal, simpler explanations are generally better than more complex ones" is amenable to empirical testing. Another interpretation is that "simpler hypotheses are generally better than complex ones". Testing the first interpretation would involve comparing the track records of simple and comparatively complex explanations: Occam's razor would have to be rejected as a tool if the more complex explanations were correct more often than the less complex ones, while the converse would lend support to its use. Under the second interpretation, the razor could be accepted as a tool if the simpler hypotheses led to correct conclusions more often than not. Even if some increases in complexity are sometimes necessary, there remains a justified general bias toward the simpler of two competing explanations.
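The overfitting point above can be illustrated with a small simulation (a hedged sketch, not from the article; the data-generating process, seed, and model choices are all illustrative). Data are drawn from a noisy straight line; a two-parameter least-squares line is compared against a six-parameter interpolating polynomial that fits the training points exactly. The complex model achieves zero training error precisely because it has modeled the noise, and it typically generalizes far worse:

```python
import random

random.seed(0)

# Illustrative true process: y = 2x + Gaussian noise (sigma = 1).
def sample(x):
    return 2 * x + random.gauss(0, 1)

train_x = [0, 1, 2, 3, 4, 5]
train_y = [sample(x) for x in train_x]
test_x = [6, 7, 8, 9]
test_y = [sample(x) for x in test_x]

# Simple model: least-squares line (two parameters).
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx

def line(x):
    return slope * x + intercept

# Complex model: degree-5 Lagrange interpolating polynomial (six
# parameters) passing exactly through all six training points --
# zero training error, so it reproduces the noise as well as the trend.
def lagrange(x):
    total = 0.0
    for i, xi in enumerate(train_x):
        term = train_y[i]
        for j, xj in enumerate(train_x):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def mse(model, xs, ys):
    """Mean squared error of a model on a data set."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

print("line test MSE:", mse(line, test_x, test_y))
print("polynomial test MSE:", mse(lagrange, test_x, test_y))
```

On held-out points the simple line stays close to the underlying trend, while the interpolating polynomial extrapolates wildly — the quantitative face of the razor's preference for the simpler of two models that fit the data comparably well.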
To understand why, consider that for each accepted explanation of a phenomenon, there is always an infinite number of possible, more complex, and ultimately incorrect, alternatives. This is so because one can always burden a failing explanation with an [[ad hoc hypothesis]]. Ad hoc hypotheses are justifications that prevent theories from being falsified. [[File:Celtic Fairy Tales-1892-048-1.jpg|thumb|Possible explanations can become needlessly complex. It might be coherent, for instance, to add the involvement of [[leprechaun]]s to any explanation, but Occam's razor would prevent such additions unless they were necessary.]] For example, if a man, accused of breaking a vase, makes [[supernatural]] claims that [[leprechaun]]s were responsible for the breakage, a simple explanation might be that the man did it, but ongoing ad hoc justifications (e.g., "... and that's not me breaking it on the film; they tampered with that, too") could successfully prevent complete disproof. This endless supply of elaborate competing explanations, called saving hypotheses, cannot be technically ruled out – except by using Occam's razor.<ref name="Stanovich2007">Stanovich, Keith E. (2007). ''How to Think Straight About Psychology''. Boston: Pearson Education, pp. 19–33.</ref><ref>{{Cite web |url=http://skepdic.com/adhoc.html |title=ad hoc hypothesis - The Skeptic's Dictionary - Skepdic.com |website=skepdic.com |url-status=dead |archive-url=https://web.archive.org/web/20090427010136/http://www.skepdic.com/adhoc.html |archive-date=27 April 2009}}</ref><ref>Swinburne 1997 and Williams, Gareth T, 2008.</ref> Any more complex theory might still possibly be true.

A study of the predictive validity of Occam's razor found 32 published papers that included 97 comparisons of economic forecasts from simple and complex forecasting methods. None of the papers provided a balance of evidence that complexity of method improved forecast accuracy.
In the 25 papers with quantitative comparisons, complexity increased forecast errors by an average of 27 percent.<ref>{{Cite journal |last1=Green |first1=K. C. |last2=Armstrong |first2=J. S. |year=2015 |title=Simple versus complex forecasting: The evidence |url=https://repository.upenn.edu/marketing_papers/366 |journal=Journal of Business Research |volume=68 |issue=8 |pages=1678–1685 |doi=10.1016/j.jbusres.2015.03.026 |access-date=22 January 2019 |archive-date=8 June 2020 |archive-url=https://web.archive.org/web/20200608134337/https://repository.upenn.edu/marketing_papers/366/ |url-status=live }}{{subscription required}}</ref>