Design of experiments
==Fisher's principles==
A methodology for designing experiments was proposed by [[Ronald Fisher]] in his innovative books ''The Arrangement of Field Experiments'' (1926) and ''[[The Design of Experiments]]'' (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the [[lady tasting tea]] [[hypothesis]] that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.<ref name="Miller00">[[Geoffrey Miller (psychologist)|Miller, Geoffrey]] (2000). ''The Mating Mind: how sexual choice shaped the evolution of human nature'', London: Heineman, {{ISBN|0-434-00741-2}} (also Doubleday, {{ISBN|0-385-49516-1}}) "To biologists, he was an architect of the 'modern synthesis' that used mathematical models to integrate Mendelian genetics with Darwin's selection theories. To psychologists, Fisher was the inventor of various statistical tests that are still supposed to be used whenever possible in psychology journals. To farmers, Fisher was the founder of experimental agricultural research, saving millions from starvation through rational crop breeding programs." p. 54.</ref>

;Comparison
:In some fields of study it is not possible to obtain independent measurements traceable to a [[Standard (metrology)|metrology standard]]. Comparisons between treatments are then much more valuable and are usually preferable, often made against a [[scientific control]] or traditional treatment that acts as a baseline.

;[[Randomization]]
:Random assignment is the process of assigning individuals at random to treatment groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study.
:The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment".<ref>Creswell, J.W. (2008), ''Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd edition)'', Upper Saddle River, NJ: Prentice Hall, p. 300. {{ISBN|0-13-613550-1}}</ref> There is an extensive body of mathematical theory exploring the consequences of allocating units to treatments by means of some random mechanism (such as tables of random numbers, or randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate [[confounding]], which can make effects due to factors other than the treatment appear to result from the treatment.
:The risks associated with random allocation (such as a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger [[statistical population]] of units only if the experimental units are a [[Sampling (statistics)|random sample]] from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
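The completely randomized allocation described above can be sketched as follows (a minimal illustration, not part of the article; the unit and treatment names are hypothetical):

```python
import random

def randomize(units, treatments, seed=None):
    """Assign experimental units to treatments completely at random,
    keeping the group sizes as balanced as possible."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)  # the random mechanism (stands in for cards/dice)
    # Deal the shuffled units round-robin into the treatment groups.
    return {t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)}

# Twelve units split at random into two equal groups.
groups = randomize(range(12), ["control", "treatment"], seed=42)
```

Because every permutation of units is equally likely, any pre-existing characteristic is equally likely to land in either group, which is what mitigates confounding on average.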
;[[Replication (statistics)|Statistical replication]]
:Measurements are usually subject to variation and [[measurement uncertainty]]; thus measurements are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic.<ref>{{cite web|last=Dr. Hani|title=Replication study|url=http://www.experiment-resources.com/replication-study.html|access-date=27 October 2011|year=2009|archive-url=https://web.archive.org/web/20120602061136/http://www.experiment-resources.com/replication-study.html|archive-date=2 June 2012|url-status=dead}}</ref> However, certain conditions must be met before replication of an experiment is commenced: the original research question has been published in a [[peer-review]]ed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.<ref>{{citation|last=Burman|first=Leonard E.|title=A call for replication studies|url=http://pfr.sagepub.com|journal=[[Public Finance Review]]|volume=38|issue=6|access-date=27 October 2011|author2=Robert W. Reed|author3=James Alm|pages=787–793|doi=10.1177/1091142110385210|year=2010|s2cid=27838472|url-access=subscription}}</ref>

;[[Blocking (statistics)|Blocking]]
:[[File:No block vs block chart.jpg|thumb|150x150px|Blocking (right)]]Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
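Blocking is typically combined with randomization: units are first grouped by the known nuisance factor, then treatments are assigned at random within each block. A minimal sketch, assuming a hypothetical set of field plots blocked by soil type:

```python
import random
from collections import defaultdict

def block_and_randomize(units, block_key, treatments, seed=None):
    """Group units into blocks of similar units (non-random), then
    assign treatments at random within each block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for u in units:
        blocks[block_key(u)].append(u)   # blocking: similar units together
    assignment = {}
    for block in blocks.values():
        rng.shuffle(block)               # randomization within the block
        for i, u in enumerate(block):
            assignment[u] = treatments[i % len(treatments)]
    return assignment

# Hypothetical example: 8 plots, blocked by soil type "A" or "B".
plots = [("A", 1), ("A", 2), ("A", 3), ("A", 4),
         ("B", 1), ("B", 2), ("B", 3), ("B", 4)]
assignment = block_and_randomize(plots, lambda p: p[0],
                                 ["fertilized", "unfertilized"], seed=1)
```

Since every block receives every treatment equally often, soil-type differences cancel out of the treatment comparison instead of inflating its error.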
;[[Orthogonality#Statistics.2C econometrics.2C and economics|Orthogonality]]
:[[File:Factorial Design.svg|thumb|Example of orthogonal factorial design]]Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal contrast provides information different from that of the others. If there are ''T'' treatments and ''T'' − 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.

;Multifactorial experiments
:Multifactorial experiments are used instead of the one-factor-at-a-time method; they are efficient at evaluating the effects and possible [[Interaction (statistics)|interactions]] of several factors (independent variables). The analysis of the design of [[experiment]]s is built on the foundation of the [[analysis of variance]], a collection of models that partition the observed variance into components according to the factors the experiment must estimate or test.
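The contrast vectors mentioned above can be checked numerically. A minimal sketch for ''T'' = 3 equally replicated treatments (the particular contrast choice is one common textbook example, not taken from the article):

```python
def dot(u, v):
    """Dot product of two contrast vectors."""
    return sum(a * b for a, b in zip(u, v))

# For T = 3 treatments there are T - 1 = 2 orthogonal contrasts, e.g.:
c1 = [1, -1, 0]    # compares treatment 1 with treatment 2
c2 = [1, 1, -2]    # compares the average of treatments 1 and 2 with treatment 3
```

Each contrast's coefficients sum to zero (so it compares treatment means rather than their overall level), and the zero dot product between them is what makes the two comparisons carry separate, non-overlapping pieces of the between-treatment information.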