{{Short description|Experiment conducted outside the laboratory}}
{{More citations needed|date=June 2022}}
{{Research}}
'''Field experiments''' are [[experiment]]s carried out outside of [[laboratory]] settings. They [[randomization|randomly]] assign subjects (or other sampling units) to either treatment or control groups to test claims of [[causality|causal]] relationships. Random assignment helps establish the comparability of the treatment and control groups, so that any differences between them that emerge after the treatment has been administered plausibly reflect the influence of the treatment rather than pre-existing differences between the groups. The distinguishing characteristics of field experiments are that they are conducted in real-world settings, often unobtrusively, and that they control not only the subject pool but also selection and overtness, as characterized by researchers such as [[John A. List]]. This is in contrast to laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory.

Field experiments also differ from naturally occurring experiments and quasi-experiments.<ref>{{cite journal|last1=Meyer| first1= B. D.| year= 1995| title= Natural and quasi-experiments in economics| journal= Journal of Business & Economic Statistics| volume= 13| issue=2| pages= 151–161| doi= 10.2307/1392369|jstor = 1392369| url= http://www.nber.org/papers/t0170.pdf}}</ref> While naturally occurring experiments rely on an external force (e.g. a government or nonprofit) controlling the [[random assignment|randomization]] of treatment assignment and implementation, field experiments require researchers to retain control over randomization and implementation. Quasi-experiments occur when treatments are administered as-if randomly (e.g. U.S. Congressional districts where candidates win with slim margins,<ref>{{cite journal|last1=Lee|first1= D. S.| last2= Moretti| first2= E.| last3= Butler| first3= M. J.| year= 2004| title= Do voters affect or elect policies? Evidence from the US House| journal= The Quarterly Journal of Economics| volume= 119| issue=3| pages= 807–859|jstor = 25098703|doi= 10.1162/0033553041502153}}</ref> weather patterns, natural disasters, etc.).

Field experiments encompass a broad array of experimental designs, each with varying degrees of generality. Some criteria of generality (e.g. authenticity of treatments, participants, contexts, and outcome measures) refer to the contextual similarities between the subjects in the experimental sample and the rest of the population. Field experiments are increasingly used in the social sciences to study the effects of policy-related interventions in domains such as health, education, crime, social welfare, and politics.
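As a toy illustration of the random assignment described above, the following sketch (a hypothetical example, not drawn from any cited study) splits a subject pool in half at random and checks that a pre-existing covariate is balanced across the two groups:

<syntaxhighlight lang="python">
# Minimal sketch of complete random assignment; all names and numbers
# are hypothetical and chosen only for illustration.
import random

random.seed(42)

# Hypothetical subject pool, each with a pre-existing covariate (age).
subjects = [{"id": i, "age": random.randint(18, 80)} for i in range(1000)]

# Complete random assignment: exactly half the pool is treated.
random.shuffle(subjects)
treatment, control = subjects[:500], subjects[500:]

def mean_age(group):
    return sum(s["age"] for s in group) / len(group)

# With a large pool the two means are close in expectation, so later
# differences in outcomes plausibly reflect the treatment, not age.
print(f"mean age, treatment: {mean_age(treatment):.1f}")
print(f"mean age, control:   {mean_age(control):.1f}")
</syntaxhighlight>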
==Characteristics==
Under random assignment, outcomes of field experiments are reflective of the real world because subjects are assigned to groups based on non-deterministic probabilities.<ref>{{cite journal|doi=10.1198/016214504000001880|title=Causal Inference Using Potential Outcomes|journal=Journal of the American Statistical Association|volume=100|issue=469|pages=322–331|year=2005|last1=Rubin|first1=Donald B.|s2cid=842793}}</ref> Two other core assumptions underlie the ability of the researcher to collect unbiased potential outcomes: excludability and non-interference.<ref>{{cite journal|doi=10.1016/j.electstud.2016.12.002|title=Door-to-door canvassing in the European elections: Evidence from a Swedish field experiment|journal=Electoral Studies|volume=45|pages=110–118|year=2017|last1=Nyman|first1=Pär|url=https://zenodo.org/record/891052}}</ref><ref>{{cite journal|doi=10.1017/pan.2017.27|title=The Design of Field Experiments with Survey Outcomes: A Framework for Selecting More Efficient, Robust, and Ethical Designs|journal=Political Analysis|volume=25|issue=4|pages=435–464|year=2017|last1=Broockman|first1=David E.|last2=Kalla|first2=Joshua L.|last3=Sekhon|first3=Jasjeet S.|s2cid=233321039|url=https://escholarship.org/uc/item/7kt5d1p2}}</ref> The excludability assumption holds that the only relevant causal agent is receipt of the treatment itself; asymmetries in the assignment, administration, or measurement of the treatment and control groups violate this assumption. The non-interference assumption, or [[Rubin causal model|Stable Unit Treatment Value Assumption]] (SUTVA), holds that a subject's outcome depends only on whether that subject is assigned to the treatment, not on whether other subjects are. When these three core assumptions are met, researchers are more likely to obtain unbiased estimates from field experiments.

After designing the field experiment and gathering the data, researchers can use [[statistical inference]] tests to determine the size and strength of the intervention's effect on the subjects. Field experiments also allow researchers to collect diverse types and quantities of data. For example, a researcher could design an experiment that uses pre- and post-trial information in an appropriate statistical inference method to see whether an intervention has an effect on subject-level changes in outcomes, as in the sketch below.
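The pre-/post-measurement idea can be illustrated with a difference-in-differences style calculation. The sketch below is hypothetical (the effect size, time trend, and noise levels are invented) and is not taken from any cited study:

<syntaxhighlight lang="python">
# Hypothetical pre-/post-trial data: treated subjects receive a +3 effect
# on top of a common time trend shared with the control group.
import random

random.seed(0)
n = 500

pre_t = [random.gauss(50, 10) for _ in range(n)]   # treatment group, pre-trial
pre_c = [random.gauss(50, 10) for _ in range(n)]   # control group, pre-trial

post_t = [y + random.gauss(1, 2) + 3 for y in pre_t]  # common trend + effect
post_c = [y + random.gauss(1, 2) for y in pre_c]      # common trend only

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: the change in the treated group minus the
# change in the control group; recovers roughly the +3 treatment effect.
did = (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))
print(f"estimated treatment effect: {did:.2f}")
</syntaxhighlight>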
==Practical uses==
Field experiments offer researchers a way to test theories and answer questions with higher [[external validity]] because they simulate real-world occurrences.<ref>{{cite report|url=http://econ-www.mit.edu/files/800|first= Esther|last= Duflo| title= Field Experiments in Development Economics|year=2006|publisher=Massachusetts Institute of Technology}}</ref> Compared to surveys and lab experiments, one strength of field experiments is that they can test people without their being aware that they are in a study, an awareness that could otherwise influence their responses (the "[[Hawthorne effect|Hawthorne effect]]"). For example, researchers posted different types of employment ads to test people's preferences for stable versus exciting jobs, as a way to check the validity of people's responses to survey measures.<ref>{{Cite journal |last=Harati |first=Hamidreza |last2=Talhelm |first2=Thomas |date=2023-07-01 |title=Cultures in Water-Scarce Environments Are More Long-Term Oriented |url=https://journals.sagepub.com/doi/full/10.1177/09567976231172500 |journal=Psychological Science |language=EN |volume=34 |issue=7 |pages=754–770 |doi=10.1177/09567976231172500 |issn=0956-7976}}</ref>

Some researchers argue that field experiments are a better guard against potential [[bias]] and [[bias of an estimator|biased estimators]]. Field experiments can act as benchmarks for comparing observational data to experimental results. Using field experiments as benchmarks can help determine levels of bias in observational studies, and, since researchers often develop a hypothesis from an [[wikt:a priori|a priori]] judgment, benchmarks can help to add credibility to a study.<ref>{{cite journal|last1=Harrison| first1= G. W.| last2= List| first2= J. A.| year= 2004| title= Field experiments| journal= Journal of Economic Literature| volume= 42| issue=4| pages= 1009–1055|jstor = 3594915| doi= 10.1257/0022051043004577}}</ref> While some argue that covariate adjustment or matching designs might work just as well in eliminating bias, field experiments can increase certainty<ref>{{cite journal| last1=LaLonde| first1= R. J.| year=1986| title= Evaluating the econometric evaluations of training programs with experimental data| journal= The American Economic Review|volume=76| issue=4| pages= 604–620|jstor = 1806062}}</ref> by removing omitted-variable bias, because randomization balances both observed and unobserved factors across groups.<ref>{{cite journal|journal=Marketing Science|doi=10.2139/ssrn.3033144|title=A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook|year=2017|last1=Gordon|first1=Brett R.|last2=Zettelmeyer|first2=Florian|last3=Bhargava|first3=Neha|last4=Chapsky|first4=Dan|s2cid=197733986}}</ref>

Researchers can utilize machine learning methods to simulate, reweight, and generalize experimental data.<ref>{{cite journal|doi=10.1073/pnas.1510489113|pmid=27382149|pmc=4941430|title=Recursive partitioning for heterogeneous causal effects|journal=Proceedings of the National Academy of Sciences|volume=113|issue=27|pages=7353–7360|year=2016|last1=Athey|first1=Susan|author-link1=Susan Athey|last2=Imbens|first2=Guido|author-link2=Guido Imbens|doi-access=free}}</ref> This increases the speed and efficiency of gathering experimental results and reduces the costs of implementing the experiment. Another cutting-edge technique in field experiments is the [[multi-armed bandit]] design,<ref>{{cite journal|doi=10.1002/asmb.874|title=A modern Bayesian look at the multi-armed bandit|journal=Applied Stochastic Models in Business and Industry|volume=26|issue=6|pages=639–658|year=2010|last1=Scott|first1=Steven L.}}</ref> along with similar adaptive designs for experiments with variable outcomes and variable treatments over time.<ref>{{cite arXiv |last1=Raj| first1= V.| last2= Kalyani| first2= S.| year=2017| title= Taming non-stationary bandits: A Bayesian approach |eprint = 1707.09727| class= stat.ML}}</ref>
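One common way to implement a multi-armed bandit design is Thompson sampling, a Bayesian approach of the kind discussed by Scott (2010). The following sketch is hypothetical: the three treatment arms and their success rates are invented for illustration.

<syntaxhighlight lang="python">
# Thompson sampling over three hypothetical treatment arms: each new
# subject is adaptively assigned to the arm most likely to be best
# given the outcomes observed so far.
import random

random.seed(1)
true_rates = [0.05, 0.08, 0.12]   # unknown success rate of each arm
successes = [1, 1, 1]             # Beta(1, 1) prior for each arm
failures = [1, 1, 1]

for _ in range(5000):
    # Draw a plausible rate for each arm from its Beta posterior and
    # assign the next subject to the arm with the highest draw.
    draws = [random.betavariate(successes[a], failures[a]) for a in range(3)]
    arm = draws.index(max(draws))
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

for a in range(3):
    n = successes[a] + failures[a] - 2          # subjects assigned to arm a
    rate = (successes[a] - 1) / n if n else 0.0
    print(f"arm {a}: {n} subjects, observed success rate {rate:.3f}")
</syntaxhighlight>

Over time most subjects are routed to the best-performing arm, which is what makes adaptive designs attractive when treatments must be chosen while the experiment is still running.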
==Limitations==
There are limitations of and arguments against using field experiments in place of other research designs (e.g. lab experiments, survey experiments, observational studies, etc.). Because field experiments necessarily take place in a specific geographic and political setting, there is a concern about extrapolating their outcomes to formulate a general theory about the population of interest. However, researchers have begun to find strategies to effectively generalize causal effects beyond the sample: comparing the environments of the treated population and the external population, drawing on information from larger samples, and modeling treatment effect heterogeneity within the sample.<ref>{{cite report|last1=Dehejia| first1= R.| last2= Pop-Eleches| first2= C.| last3= Samii| first3= C.| year=2015| title= From local to global: External validity in a fertility natural experiment|docket=w21459| publisher= National Bureau of Economic Research|url=https://www.nber.org/papers/w21459.pdf}}</ref> Others have used covariate blocking techniques to generalize from field experiment populations to external populations.<ref>{{cite web| url= https://scholar.princeton.edu/sites/default/files/negami/files/covselect.pdf| last1= Egami| first1= Naoki| first2= Erin| last2= Hartman| date= 19 July 2018| title= Covariate Selection for Generalizing Experimental Results| website= Princeton.edu| access-date= 31 December 2018| archive-date= 10 July 2020| archive-url= https://web.archive.org/web/20200710231307/https://scholar.princeton.edu/sites/default/files/negami/files/covselect.pdf| url-status= dead}}</ref>

Noncompliance (both one-sided and two-sided)<ref>{{cite journal|doi=10.1080/01621459.2016.1246363|title=Instrumental Variable Methods for Conditional Effects and Causal Interaction in Voter Mobilization Experiments|journal=Journal of the American Statistical Association|volume=112|issue=518|pages=590–599|year=2017|last1=Blackwell|first1=Matthew|s2cid=55878137|url=https://figshare.com/articles/journal_contribution/4052172 }}</ref><ref name="Aronow 2013"/> occurs when subjects who are assigned to a certain group never receive their assigned intervention. Another data-collection problem is attrition, where treated subjects fail to provide outcome data, which under certain conditions will bias the collected data. These problems can lead to imprecise data analysis; however, researchers who use field experiments can apply statistical methods to recover useful information even when these difficulties occur (one such method is sketched below).<ref name="Aronow 2013">{{cite journal|doi=10.1093/pan/mpt013|title=Beyond LATE: Estimation of the Average Treatment Effect with an Instrumental Variable|journal=Political Analysis|volume=21|issue=4|pages=492–506|year=2013|last1=Aronow|first1=Peter M.|last2=Carnegie|first2=Allison}}</ref>

Using field experiments can also lead to concerns over interference<ref>{{cite journal| last1=Aronow| first1= P. M.| last2= Samii| first2= C.| year=2017| title= Estimating average causal effects under general interference, with application to a social network experiment| journal= The Annals of Applied Statistics| volume= 11| issue=4| pages= 1912–1947| doi=10.1214/16-AOAS1005| arxiv= 1305.6156| s2cid= 26963450}}</ref> between subjects. When a treated subject or group affects the outcomes of an untreated group (through displacement, communication, contagion, etc.), the untreated group's measured outcome may not be its true untreated outcome. A subset of interference is the spillover effect, which occurs when the treatment of treated groups affects neighboring untreated groups.
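For noncompliance, one standard correction treats random assignment as an instrument and estimates the complier average causal effect (LATE) as the intention-to-treat effect divided by the take-up rate, in the spirit of the instrumental-variable methods cited above. The sketch below is a hypothetical illustration; the take-up rate and effect size are invented:

<syntaxhighlight lang="python">
# One-sided noncompliance: some subjects assigned to treatment never
# take it up, but no unassigned subject can access the treatment.
import random

random.seed(2)
n = 10000
data = []
for _ in range(n):
    assigned = random.random() < 0.5        # randomized assignment (instrument)
    complier = random.random() < 0.6        # hypothetical 60% take-up
    treated = assigned and complier
    y = random.gauss(10, 3) + (4 if treated else 0)  # hypothetical +4 effect
    data.append((assigned, treated, y))

def mean(xs):
    return sum(xs) / len(xs)

itt = mean([y for a, t, y in data if a]) - mean([y for a, t, y in data if not a])
take_up = mean([t for a, t, y in data if a])   # share of assigned who comply
late = itt / take_up                           # Wald estimator, roughly 4
print(f"ITT = {itt:.2f}, take-up = {take_up:.2f}, LATE = {late:.2f}")
</syntaxhighlight>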
Field experiments can be expensive, time-consuming to conduct, difficult to replicate, and plagued with ethical pitfalls. Subjects or populations might undermine the implementation process if there is a perception of unfairness in treatment selection (e.g. in '[[negative income tax]]' experiments, communities may lobby for their community to get a cash transfer so that the assignment is not purely random). There are also practical limits to collecting consent forms from all subjects, and partners administering interventions or collecting data could contaminate the randomization scheme. The resulting [[data]] could therefore be more varied, with a larger [[standard deviation]] and less [[precision and accuracy]], which leads to the use of larger [[sample size]]s for field testing. However, others argue that, even though replicability is difficult, if the results of an experiment are important there is a larger chance that the experiment will be replicated. In addition, field experiments can adopt a "[[stepped-wedge trial|stepped-wedge]]" design that eventually gives the entire sample access to the intervention on staggered timing schedules.<ref>{{cite journal|last1=Woertman| first1= W.| last2= de Hoop| first2= E.| last3= Moerbeek| first3= M.| last4= Zuidema| first4= S. U.| last5= Gerritsen| first5= D. L.| last6= Teerenstra| first6= S.| year=2013| title= Stepped wedge designs could reduce the required sample size in cluster randomized trials| journal= Journal of Clinical Epidemiology| volume= 66| issue=7| pages= 752–758| pmid= 23523551| doi= 10.1016/j.jclinepi.2013.01.009| doi-access= free| hdl= 2066/117688| hdl-access= free}}</ref> Researchers can also design a [[blinded experiment|blinded]] field experiment to remove possibilities of manipulation.
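A stepped-wedge schedule can be pictured as a cluster-by-period matrix in which clusters cross over from control (0) to treatment (1) in randomized order. The cluster names and period count below are hypothetical:

<syntaxhighlight lang="python">
# Hypothetical stepped-wedge schedule: every cluster starts in control
# and crosses over to treatment at a randomized step, so the whole
# sample eventually receives the intervention.
import random

random.seed(3)
clusters = ["A", "B", "C", "D"]
random.shuffle(clusters)             # randomize the crossover order
periods = len(clusters) + 1          # one baseline period, then one step each

for step, cluster in enumerate(clusters, start=1):
    # 0 = control, 1 = treated; once treated, a cluster stays treated.
    row = [1 if period >= step else 0 for period in range(periods)]
    print(f"cluster {cluster}: {row}")
</syntaxhighlight>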
==Examples==
The [[history of experiments]] in the lab and the field has left longstanding impacts in the physical, natural, and life sciences. The modern use of field experiments has roots in the 1700s, when [[James Lind]] utilized a controlled field experiment to identify a treatment for [[scurvy]].<ref>{{cite journal|url= http://www.jameslindlibrary.org/articles/james-lind-and-scurvy-1747-to-1795/|last1=Tröhler| first1= U.| year=2005| title= Lind and scurvy: 1747 to 1795| journal= Journal of the Royal Society of Medicine| volume= 98| issue=11| pages= 519–522|doi=10.1177/014107680509801120|pmid=16260808| pmc= 1276007}}</ref> Other examples of disciplines that use field experiments include:
* [[Economists]] have used field experiments to analyze [[discrimination]] (e.g., in the labor market,<ref>{{cite journal|last1=Bertrand| first1= Marianne| last2=Mullainathan| first2= Sendhil| year= 2004| title= Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination| volume= 94| issue=4| pages= 991–1013| journal= American Economic Review| doi= 10.1257/0002828042002561| url= http://s3.amazonaws.com/fieldexperiments-papers2/papers/00216.pdf}}</ref><ref>{{cite journal|last1=Gneezy| first1= Uri| last2=List| first2= John A| year= 2006| title= Putting behavioral economics to work: Testing for gift exchange in labor markets using field experiments| volume= 74| issue=5| pages= 1365–1384| journal= Econometrica| doi= 10.1111/j.1468-0262.2006.00707.x| url= http://www.nber.org/papers/w12063.pdf}}</ref> in housing,<ref>{{cite journal|last1=Ahmed| first1= Ali M| last2=Hammarstedt| first2= Mats| year= 2008| title= Discrimination in the rental housing market: A field experiment on the Internet| volume= 64| issue=2| pages= 362–372| journal= Journal of Urban Economics| doi= 10.1016/j.jue.2008.02.004}}</ref> in the sharing economy,<ref>{{cite journal|last1=Edelman| first1=Benjamin| last2=Luca| first2= Michael| last3=Svirsky| first3= Dan| year= 2017| title=Racial discrimination in the sharing economy: Evidence from a field experiment| volume= 9| issue=2| pages= 1–22| journal= American Economic Journal: Applied Economics| doi=10.1257/app.20160213| doi-access=free}}</ref> in the credit market,<ref>{{cite journal|last1=Pager| first1=Devah| last2=Shepherd| first2= Hana| year= 2008| title=The sociology of discrimination: Racial discrimination in employment, housing, credit, and consumer markets| volume= 34| pages= 181–209| journal= Annual Review of Sociology| doi=10.1146/annurev.soc.33.040406.131740| pmid=20689680| pmc=2915460}}</ref> or in integration<ref>{{Cite journal|last1=Nesseler|first1=Cornel|last2=Gomez-Gonzalez|first2=Carlos|last3=Dietl|first3=Helmut|date=2019|title=What's in a name? Measuring access to social activities with a field experiment|journal=Palgrave Communications|volume=5|pages=1–7|doi=10.1057/s41599-019-0372-0|doi-access=free|hdl=11250/2635691|hdl-access=free}}</ref>), [[health care]] programs,<ref>{{cite journal|last1=Ashraf| first1=Nava| last2=Berry| first2= James| last3=Shapiro| first3= Jesse M| year= 2010| title=Can higher prices stimulate product use? Evidence from a field experiment in Zambia| volume= 100| issue=5| pages= 2383–2413| journal= American Economic Review| doi=10.1257/aer.100.5.2383| s2cid=6392533| url=http://www.nber.org/papers/w13247.pdf}}</ref> [[charitable fundraising]],<ref>{{cite journal|last1=Karlan| first1= Dean| last2=List| first2= John A| year= 2007| title= Does price matter in charitable giving? Evidence from a large-scale natural field experiment| volume= 97| issue=5| pages= 1774–1793| journal= American Economic Review| doi= 10.1257/aer.97.5.1774| s2cid= 10041821| url= http://www.nber.org/papers/w12338.pdf}}</ref> [[education]],<ref>{{cite journal|last1=Fryer Jr| first1= Roland G| year= 2014| title= Injecting charter school best practices into traditional public schools: Evidence from field experiments| volume= 129| issue=3| pages= 1355–1407| journal= The Quarterly Journal of Economics| doi= 10.1093/qje/qju011}}</ref> information aggregation in markets, and [[microfinance]] programs.<ref>{{cite journal|last1=Field| first1= Erica| last2=Pande| first2= Rohini| year= 2008| title= Repayment frequency and default in microfinance: evidence from India| volume= 6| issue=2–3| pages= 501–509| journal= Journal of the European Economic Association| doi= 10.1162/JEEA.2008.6.2-3.501}}</ref>
* [[Engineer]]s often conduct field tests of [[prototype]] products to validate earlier laboratory tests and to obtain broader feedback.
* Researchers in [[social psychology]] often use field experiments, such as [[Philip Zimbardo]]'s [[Stanford prison experiment|Stanford Prison Experiment]] and [[Robert Cialdini]]'s door-in-the-face study.<ref>{{Cite journal |last=Cialdini |first=Robert B. |last2=Vincent |first2=Joyce E. |last3=Lewis |first3=Stephen K. |last4=Catalan |first4=Jose |last5=Wheeler |first5=Diane |last6=Darby |first6=Betty Lee |date=February 1975 |title=Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique |url=https://doi.apa.org/doi/10.1037/h0076284 |journal=Journal of Personality and Social Psychology |language=en |volume=31 |issue=2 |pages=206–215 |doi=10.1037/h0076284 |issn=1939-1315|url-access=subscription }}</ref>
* [[Agricultural science]] researcher [[Ronald Fisher|R. A. Fisher]] analyzed data from randomized experiments on actual crop fields.<ref>{{cite book|first1=R.A.|last1= Fisher| title= The Design of Experiments| year= 1937| url= http://krishikosh.egranth.ac.in/bitstream/1/2040342/1/TNV-65.pdf|publisher=Oliver and Boyd Ltd.}}</ref>
* [[Political science]] researcher Harold Gosnell conducted an early field experiment on voter participation in 1924 and 1925.<ref>{{cite journal|doi=10.1017/S0003055400110524|title=An Experiment in the Stimulation of Voting|journal=American Political Science Review|volume=20|issue=4|pages=869–874|year=1926|last1=Gosnell|first1=Harold F.|doi-access=free}}</ref>
* In [[ecology]], [[Joseph H. Connell]]'s field experiments on rocky seashores helped build an experimental research tradition.<ref>{{Cite journal|last1=Grodwohl|first1=Jean-Baptiste|last2=Porto|first2=Franco|last3=El-Hani|first3=Charbel N.|date=2018-07-31|title=The instability of field experiments: building an experimental research tradition on the rocky seashores (1950–1985)|url=https://doi.org/10.1007/s40656-018-0209-y|journal=History and Philosophy of the Life Sciences|language=en|volume=40|issue=3|pages=45|doi=10.1007/s40656-018-0209-y|pmid=30066110|s2cid=51889466|issn=1742-6316|url-access=subscription}}</ref>

==See also==
* [[Field research]]

==References==
{{Reflist}}

{{DEFAULTSORT:Field Experiment}}
[[Category:Design of experiments]]
[[Category:Tests]]
[[Category:Causal inference]]
[[Category:Mathematical and quantitative methods (economics)]]
[[Category:Field research]]