==Rationality==
Rationality, the practice of sound reasoning, is a cornerstone not only of philosophical discourse but also of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This view underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours.{{sfnp|Gauch Jr|2002|pp=29–31}} The following section first explores beliefs and biases, and then turns to the rational reasoning most associated with the sciences.

===Beliefs and biases===
{{multiple image
 | align = right
 | direction = vertical
 | width = 220
 | image1 = Jean Louis Théodore Géricault 001.jpg
 | caption1 = Flying gallop as shown by this painting ([[Théodore Géricault]], 1821) is [[Falsifiability|falsified]]; see below.
 | image2 = The Horse in Motion high res.jpg
 | caption2 = [[Sallie Gardner at a Gallop|Muybridge's photographs]] of ''The Horse in Motion'', 1878, were used to answer the question of whether all four feet of a galloping horse are ever off the ground at the same time. This demonstrates a use of photography as an experimental tool in science.
}}
Scientific methodology often directs that [[Hypothesis|hypotheses]] be tested in [[Scientific control|controlled]] conditions wherever possible. This is often possible in areas such as the biological sciences, and more difficult in areas such as astronomy. The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance and, to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in [[confirmation bias]]; this is a [[heuristic]] that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).<ref name="beliefCreatesReality">{{cite book | chapter-url=https://doi.org/10.1016/S0065-2601(08)60146-X | doi=10.1016/S0065-2601(08)60146-X | chapter=When Belief Creates Reality | title=Advances in Experimental Social Psychology Volume 18 | date=1984 | last1=Snyder | first1=Mark | volume=18 | pages=247–305 | isbn=978-0-12-015218-6 }}</ref>

{{Blockquote|text=[T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained.|author=[[C.S. Peirce]]|source=''How to Make Our Ideas Clear'' (1877)<ref name= How/>}}

A historical example is the belief that the legs of a [[Horse gallop|galloping]] horse are splayed at the moment when none of the horse's legs touch the ground, an image its supporters even included in paintings. However, the first stop-action pictures of a horse's gallop by [[Eadweard Muybridge]] showed this to be false: the legs are instead gathered together.<ref>{{harvp|Needham |Wang|1954|p=166}} shows how the 'flying gallop' image propagated from China to the West.</ref>

Another important human bias that plays a role is a preference for new, surprising statements (see ''[[Appeal to novelty]]''), which can result in a search for evidence that the new is true.{{sfnp|Goldhaber|Nieto|2010|page=940}} Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.<ref name= mythIsAbelief >Ronald R. Sims (2003). ''Ethics and corporate social responsibility: Why giants fall.'' p. 21: {{"'}}A myth is a belief given uncritical acceptance by members of a group ...' – Weiss, ''Business Ethics'' p. 15."</ref>

{{anchor|robustTheory}}Goldhaber and Nieto published in 2010 the observation that if "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard{{snd}}though certainly never impossible{{snd}}to overturn".{{sfnp|Goldhaber|Nieto|2010|page=942}} When a narrative is constructed, its elements become easier to believe.{{sfnp|Lakatos|1976|pp=1–19}}<ref name= narrativeFallacy >{{harvp|Taleb|2007|p=72}} lists ways to avoid the narrative fallacy and confirmation bias; the narrative fallacy being a substitute for explanation.</ref>

{{anchor|genesisOfScientificFact}}{{harvp|Fleck|1979|p=27}} notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumed ''[[A priori and a posteriori|a priori]]'', or contain some other logical or methodological flaw in the process that ultimately produced them. [[Donald M. MacKay]] has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.{{efn-lg|name= macKay| 1=The scientific method requires testing and validation [[Empirical evidence|''a posteriori'']] before ideas are accepted.<ref name= conjugatePairs>{{cite book |quote=Invariably one came up against fundamental physical limits to the accuracy of measurement. ... The art of physical measurement seemed to be a matter of compromise, of choosing between reciprocally related uncertainties. ... Multiplying together the conjugate pairs of uncertainty limits mentioned, however, I found that they formed invariant products of not one but two distinct kinds. ... The first group of limits were calculable ''a priori'' from a specification of the instrument. The second group could be calculated only ''a posteriori'' from a specification of what was ''done'' with the instrument. ... In the first case each unit [of information] would add one additional ''dimension'' (conceptual category), whereas in the second each unit would add one additional ''atomic fact''. |pages=1–4 |last=MacKay |first=Donald M. |year=1969 |title=Information, Mechanism, and Meaning |place=Cambridge, MA |publisher=MIT Press |isbn=0-262-63032-X}} </ref>}}

=== Deductive and inductive reasoning{{anchor|i&d}} ===
{{Main|Deductive reasoning|Inductive reasoning}}
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative/ampliative, or even confirmation and verification. (And there are other kinds of reasoning.)
One uses what is observed to build towards fundamental truths, and the other derives more specific principles from those fundamental truths.<ref name="SEP_SM">{{cite web | last1=Hepburn | first1=Brian | last2=Andersen | first2=Hanne | title=Scientific Method | website=Stanford Encyclopedia of Philosophy | date=13 November 2015 | url=https://plato.stanford.edu/archives/sum2021/entries/scientific-method | access-date=21 April 2024}}</ref>

Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires facts established beforehand and, given the truth of those assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.<ref name="Gauch Jr 2002 p30/ch4"/>

[[File:Perihelio.svg|right|thumb|[[Apsidal precession|Precession]] of the [[Perihelion and aphelion|perihelion]]{{snd}}exaggerated in the case of Mercury, but observed in the case of [[S2 (star)|S2]]'s [[apsidal precession]] around [[Sagittarius A*]]<ref>{{cite web |date=16 April 2020 |title=ESO Telescope Sees Star Dance Around Supermassive Black Hole, Proves Einstein Right |url=https://www.eso.org/public/news/eso2006/ |url-status=live |archive-url=https://web.archive.org/web/20200515210420/https://www.eso.org/public/news/eso2006/ |archive-date=2020-05-15 |access-date=2020-04-17 |work=Science Release |publisher=[[European Southern Observatory]]}}</ref>]]
[[File:Inductive Deductive Reasoning.svg|thumb|Inductive and deductive reasoning]]

{{anchor|precession of Mercury}}An example of how inductive and deductive reasoning works can be found in the [[history of gravitational theory]].{{efn|The philosophy of knowledge arising through observation is also called [[inductivism]]. A radical proponent of this approach to knowledge was [[John Stuart Mill]], who took all knowledge – even mathematical knowledge – to arise from experience through induction. The inductivist approach is still commonplace, though Mill's extreme views are outdated today.<ref name="Psillos 2013">{{cite book | last=Psillos | first=Stathis | title=Reason and Rationality | chapter=1. Reason and Science | publisher=DE GRUYTER | date=2013-12-31 | isbn=978-3-11-032514-0 | doi=10.1515/9783110325867.33 | pages=33–52}}</ref>{{rp|35}}}} It took thousands of years of measurements, from the [[Chaldea]]n, [[India]]n, [[History of Iran|Persian]], [[Greece|Greek]], [[Arabs|Arabic]], and [[Ethnic groups in Europe|European]] astronomers, to fully record the motion of planet [[Earth]].{{efn|name=Astronomy101 |1= [[Hipparchus]] used his own observations of the stars, as well as the observations by Chaldean and Babylonian astronomers, to estimate Earth's precession.<ref name=astron101 >Brad Snowder's Astronomy Pages, [https://astro101.wwu.edu/a101_precession.html Precession of the Equinox]</ref>}} [[Johannes Kepler|Kepler]] and others were then able to build their early theories by [[Inductive reasoning#inductive generalization|generalizing the collected data inductively]], and [[Isaac Newton|Newton]] was able to unify prior theory and measurements into the consequences of his [[Newton's laws of motion|laws of motion]] in 1727.{{efn|name= keplerNewton |1= Isaac Newton (1727) [[Philosophiæ Naturalis Principia Mathematica#Book 3, De mundi systemate|On the System of the World]] condensed Kepler's laws for the planetary motion of Mars, Galileo's law of falling bodies, the motion of the planets of the Solar system, etc. into consequences of his three laws of motion.<ref name= systOfWorld >[[Isaac Newton]] (1727) [[Philosophiæ Naturalis Principia Mathematica#Book 3, De mundi systemate|On the System of the World]]</ref> ''See Motte's translation ([https://en.wikisource.org/wiki/The_Mathematical_Principles_of_Natural_Philosophy_(1846)/The_System_of_the_World 1846])''}}

Another common example of inductive reasoning is the observation of a [[counterexample]] to current theory inducing the need for new ideas. [[Urbain Le Verrier|Le Verrier]] in 1859 pointed out problems with the [[Perihelion and aphelion|perihelion]] of [[Mercury (planet)|Mercury]] that showed Newton's theory to be at least incomplete. The discrepancy between Newtonian theory and the observed [[Apsidal precession|precession]] of Mercury was one of the things that occurred to [[Albert Einstein|Einstein]] as a possible early test of his [[theory of relativity]]. His relativistic calculations matched observation much more closely than Newtonian theory did.{{efn|name=LeVerrier1859 |1=The difference is approximately 43 arc-seconds per century. The precession of Mercury's orbit is cited in [[Tests of general relativity]]: U. Le Verrier (1859), (in French), [https://archive.org/stream/comptesrendusheb49acad#page/378/mode/2up "Lettre de M. Le Verrier à M. Faye sur la théorie de Mercure et sur le mouvement du périhélie de cette planète"], Comptes rendus hebdomadaires des séances de l'Académie des sciences (Paris), vol. 49 (1859), pp. 379–383.}} Though today's [[Standard Model]] of physics suggests that there are aspects of Einstein's theory we still do not fully understand, it holds to this day and is being built on deductively.

A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it has been properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning is used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases while working towards proving a larger theory. This often sees hypotheses being revised again and again as new evidence emerges.

This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that the cycle's foundations lie in reasoning, and not wholly in the following of procedure.
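
The inductive-then-deductive pattern described in this subsection can be made concrete with a small numerical sketch. The following Python fragment is illustrative only and is not drawn from the sources cited above; the rounded orbital values and the choice of a simple least-squares fit are assumptions made for the example. It first generalises Kepler's third law inductively from planetary observations, then applies the assumed law deductively to predict a specific period.

<syntaxhighlight lang="python">
import math

# Approximate, rounded observations: semi-major axis a (astronomical units)
# and orbital period T (years). Illustrative values only.
observations = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

# Induction: generalise from the data. Fit log T = k * log a through the
# origin by least squares; the recovered exponent k is close to 3/2,
# i.e. Kepler's third law emerges from the observations.
logs = [(math.log(a), math.log(T)) for a, T in observations.values()]
k = sum(x * y for x, y in logs) / sum(x * x for x, _ in logs)
print(f"inductively estimated exponent: {k:.3f}")  # ~1.5

# Deduction: assume the generalised law T = a**1.5 holds and derive a
# specific consequence, e.g. the period of a body orbiting at 2.77 AU
# (roughly the distance of Ceres, used here only as an illustration).
a_new = 2.77
print(f"deduced period at {a_new} AU: {a_new ** 1.5:.2f} years")  # ~4.6
</syntaxhighlight>

A counterexample that resists such a deduction, like Mercury's anomalous precession, is what pushes the cycle back towards the inductive side.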
===Certainty, probabilities, and statistical inference===
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent.{{efn| ...simplified and (post-modern) philosophy notwithstanding.{{harvp|Gauch Jr|2002|p=33}}}} Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation;<ref name= trialAndErr/> certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily.

Measurements in scientific work are usually accompanied by estimates of their [[uncertainty]].<ref name="conjugatePairs" /> The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to [[data collection]] limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the [[sampling method]] used and the number of samples taken. In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions.

Statistics are different. [[Inductive reasoning#Statistical generalisation|Inductive statistical generalisation]] will take sample data and extrapolate more general conclusions, which has to be justified and scrutinised. It can even be said that statistical models are only ever useful, [[All models are wrong|but never a complete representation of circumstances]]. In statistical analysis, expected and unexpected bias is a large factor.<ref name="Welsby Weatherall 2022 pp. 793–798">{{cite journal | last1=Welsby | first1=Philip D | last2=Weatherall | first2=Mark | title=Statistics: an introduction to basic principles | journal=Postgraduate Medical Journal | volume=98 | issue=1164 | date=2022-10-01 | issn=0032-5473 | doi=10.1136/postgradmedj-2020-139446 | pages=793–798 | pmid=34039698 }}</ref> [[Research question]]s, the collection of data, and the interpretation of results all attract greater scrutiny than they would in a comfortably logical environment. Statistical models go through a [[Statistical model validation|process for validation]], for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find in [[peer review]], after all.{{efn|... and [[John Ioannidis]], in 2005,<ref name="mostRwrong" /> has shown that not everybody respects the principles of statistical analysis; whether they be the principles of inference or otherwise.{{Broader|#Relationship with statistics}}}}
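
As a minimal illustration of the repeated-measurement estimate of uncertainty described above, the following Python sketch (the readings are invented for the example) reports a sample mean together with its standard error, the kind of 'probable deviation' that accompanies a measured quantity. Extrapolating from such a sample to a claim about the true value is precisely the inductive statistical generalisation that then has to be justified and scrutinised.

<syntaxhighlight lang="python">
import statistics

# Seven repeated measurements of the same quantity; invented values,
# for illustration only.
readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.84]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)            # sample standard deviation
std_error = stdev / len(readings) ** 0.5      # standard error of the mean

# The ± term is the 'probable deviation' a study would report; generalising
# from these readings to the underlying quantity is an inductive step.
print(f"estimate: {mean:.3f} ± {std_error:.3f}")
</syntaxhighlight>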
More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context.<ref name="Gauch Jr 2002 p30/ch4">{{harvp|Gauch Jr|2002|loc=Quotes from p. 30, expanded on in ch. 4}}: Gauch gives two simplified statements on what he calls a "rational-knowledge claim". It is either "I hold belief X for reasons R with level of confidence C, where inquiry into X is within the domain of competence of method M that accesses the relevant aspects of reality" (inductive reasoning) or "I hold belief X because of presuppositions P." (deductive reasoning)</ref> Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology. Lack of familiarity with statistical methodologies can result in erroneous conclusions.

Setting aside simpler examples,{{efn|For instance, extrapolating from a single scientific observation, such as "This experiment yielded these results, so it should apply broadly," exemplifies inductive wishful thinking. [[inductive reasoning#statistical generalization|Statistical generalisation]] is a form of inductive reasoning. Conversely, assuming that a specific outcome will occur based on general trends observed across multiple experiments, as in "Most experiments have shown this pattern, so it will likely occur in this case as well," illustrates faulty [[Deductive reasoning#Probability logic|deductive probability logic]].}} it is where multiple probabilities interact that, for example, medical professionals<!--justification: medical professional = authoritative science communicator--><ref name="Gigerenzer 2015">{{cite book | last=Gigerenzer | first=Gerd | title=Risk Savvy | publisher=Penguin | publication-place=New York, New York | date=2015-03-31 | isbn=978-0-14-312710-9 | page=}} leads: (n=1000) only 21% of [[gynaecologist]]s got an example question on [[Bayes' theorem]] right. Book, including the assertion, introduced in {{cite web | last=Kremer | first=William | title=Do doctors understand test results? | website=BBC News | date=6 July 2014 | url=https://www.bbc.com/news/magazine-28166019 | access-date=24 April 2024}}</ref> have shown a lack of proper understanding. [[Bayes' theorem]] is the mathematical principle that lays out how standing probabilities are adjusted given new information; the [[boy or girl paradox]] is a common example. In knowledge representation, [[mutual information#Bayesian estimation of mutual information|Bayesian estimation of mutual information]] between [[random variable]]s is a way to measure dependence, independence, or interdependence of the information under scrutiny.<ref name= prml >{{cite book |first1=Christopher M. |last1=Bishop |url=https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf |date=2006 |title=Pattern Recognition and Machine Learning |pages=21, 30, 55, 152, 161, 277, 360, 448, 580 |publisher=Springer Science+Business Media |via=Microsoft }}</ref> Beyond the survey methodology commonly associated with [[field research]], the concept, together with [[probabilistic reasoning]], is used to advance fields of science in which research objects have no definitive states of being, for example in [[statistical mechanics]].
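
The kind of screening-test question the surveyed physicians struggled with can be worked through with Bayes' theorem directly. The numbers in the following Python sketch are hypothetical, not the figures from the cited study: a 1% base rate, a 90% true-positive rate, and a 9% false-positive rate are assumed purely to show how an apparently accurate test still yields a modest posterior probability when the condition is rare.

<syntaxhighlight lang="python">
# Bayes' theorem: update a prior probability with a positive test result.
# All numbers are hypothetical, chosen only for illustration.
prior = 0.01                 # base rate of the condition in the population
p_pos_given_cond = 0.90      # sensitivity (true-positive rate)
p_pos_given_no_cond = 0.09   # false-positive rate

# Total probability of a positive result, then Bayes' theorem.
p_pos = p_pos_given_cond * prior + p_pos_given_no_cond * (1 - prior)
posterior = p_pos_given_cond * prior / p_pos

print(f"P(condition | positive test) = {posterior:.1%}")  # about 9%, not 90%
</syntaxhighlight>

The counterintuitive result, that most positive results are false positives when the condition is rare, is the standing probability being adjusted by new information, as described above.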