== Implications for autonomous vehicles ==
Variants of the original Trolley Driver dilemma arise in the design of software to control [[autonomous car]]s.<ref name=":0" /> Situations are anticipated where a potentially fatal collision appears to be unavoidable, but in which choices made by the car's [[software]], such as into whom or what to crash, can affect the particulars of the deadly outcome. For example, should the software value the safety of the car's occupants more, or less, than that of potential victims outside the car?<ref>{{cite magazine|url=https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/|title=The Ethics of Autonomous Cars|date=October 8, 2013|magazine=The Atlantic|author=Patrick Lin}}</ref><ref>{{cite web|url=http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/|title=Why Self-Driving Cars Must Be Programmed to Kill|date=October 22, 2015|publisher=MIT Technology Review|author=Emerging Technology From the arXiv|access-date=October 24, 2015|archive-date=January 26, 2016|archive-url=https://web.archive.org/web/20160126085929/http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/|url-status=dead}}</ref><ref>{{Cite journal|last1=Bonnefon|first1=Jean-François|last2=Shariff|first2=Azim|last3=Rahwan|first3=Iyad|year=2016|title=The social dilemma of autonomous vehicles|journal=Science|volume=352|issue=6293|pages=1573–1576|doi=10.1126/science.aaf2654|pmid=27339987|arxiv=1510.03346|bibcode=2016Sci...352.1573B|s2cid=35400794}}</ref>
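As a purely illustrative sketch of how such a valuation could enter collision-handling software, the question can be framed as the choice of a single weighting parameter in a harm-minimizing trajectory selector. The function names, risk numbers, and single-weight model below are invented for this example and do not describe any real vehicle's software:

<syntaxhighlight lang="python">
# Hypothetical sketch: an "occupant vs. outsider" valuation expressed as
# one tunable weight in a harm-minimizing trajectory chooser. All names
# and numbers are invented for illustration.

def expected_harm(trajectory, occupant_weight):
    """Score a candidate trajectory; lower is better.

    occupant_weight > 1 favors the car's occupants;
    occupant_weight < 1 favors people outside the car.
    """
    return (occupant_weight * trajectory["risk_to_occupants"]
            + trajectory["risk_to_outsiders"])

def choose_trajectory(candidates, occupant_weight=1.0):
    """Pick the candidate with the lowest weighted expected harm."""
    return min(candidates, key=lambda t: expected_harm(t, occupant_weight))

# Two unavoidable-collision options: swerving endangers the occupants,
# braking endangers a pedestrian. The chosen maneuver flips with the weight.
candidates = [
    {"name": "swerve", "risk_to_occupants": 0.8, "risk_to_outsiders": 0.1},
    {"name": "brake",  "risk_to_occupants": 0.2, "risk_to_outsiders": 0.6},
]
print(choose_trajectory(candidates, occupant_weight=1.0)["name"])  # brake
print(choose_trajectory(candidates, occupant_weight=0.2)["name"])  # swerve
</syntaxhighlight>

Under this toy model, the entire ethical controversy reduces to the value of <code>occupant_weight</code>, which is exactly what makes the question contentious.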
A platform called [[Moral Machine]]<ref name="Moral Machine">{{Cite web|url=http://moralmachine.mit.edu/|title=Moral Machine|website=Moral Machine|access-date=2019-01-31}}</ref> was created by [[MIT Media Lab]] to allow the public to express their opinions on what decisions autonomous vehicles should make in scenarios that use the trolley problem paradigm. Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries.<ref name="Awad2018">{{cite journal |last1=Awad |first1=Edmond |last2=Dsouza |first2=Sohan |last3=Kim |first3=Richard |last4=Schulz |first4=Jonathan |last5=Henrich |first5=Joseph |last6=Shariff |first6=Azim |last7=Bonnefon |first7=Jean-François |last8=Rahwan |first8=Iyad |title=The Moral Machine experiment |journal=Nature |date=October 24, 2018 |volume=563 |issue=7729 |pages=59–64 |doi=10.1038/s41586-018-0637-6 |pmid=30356211 |hdl=10871/39187 |hdl-access=free |bibcode=2018Natur.563...59A |s2cid=53029241}}</ref> Other approaches make use of virtual reality to assess human behavior in experimental settings.<ref>{{cite journal |last1=Sütfeld |first1=Leon R. |last2=Gast |first2=Richard |last3=König |first3=Peter |last4=Pipa |first4=Gordon |year=2017 |title=Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure |journal=Frontiers in Behavioral Neuroscience |volume=11 |pages=122 |doi=10.3389/fnbeh.2017.00122 |pmid=28725188 |pmc=5496958 |doi-access=free}}</ref><ref>{{cite journal |last1=Skulmowski |first1=Alexander |last2=Bunge |first2=Andreas |last3=Kaspar |first3=Kai |last4=Pipa |first4=Gordon |date=December 16, 2014 |title=Forced-choice decision-making in modified trolley dilemma situations: a virtual reality and eye tracking study |journal=Frontiers in Behavioral Neuroscience |volume=8 |pages=426 |doi=10.3389/fnbeh.2014.00426 |pmid=25565997 |pmc=4267265 |doi-access=free}}</ref><ref>{{Cite journal|last1=Francis|first1=Kathryn B.|last2=Howard|first2=Charles|last3=Howard|first3=Ian S.|last4=Gummerum|first4=Michaela|last5=Ganis|first5=Giorgio|last6=Anderson|first6=Grace|last7=Terbeck|first7=Sylvia|date=October 10, 2016|title=Virtual Morality: Transitioning from Moral Judgment to Moral Action?|journal=PLOS ONE|volume=11|issue=10|pages=e0164374|doi=10.1371/journal.pone.0164374|pmid=27723826|issn=1932-6203|bibcode=2016PLoSO..1164374F|pmc=5056714|doi-access=free}}</ref><ref>{{Cite journal|last1=Patil|first1=Indrajeet|last2=Cogoni|first2=Carlotta|last3=Zangrando|first3=Nicola|last4=Chittaro|first4=Luca|last5=Silani|first5=Giorgia|date=January 2, 2014|title=Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas|journal=Social Neuroscience|volume=9|issue=1|pages=94–107|doi=10.1080/17470919.2013.870091|issn=1747-0919|pmid=24359489|s2cid=706534|url=http://psyarxiv.com/ry3ap/}}</ref>

However, some argue that investigating trolley-type cases is not necessary to address the ethical problem of driverless cars, because such cases have a serious practical limitation: they would need to be translated into a top-down plan in order to fit current approaches to handling emergencies in [[artificial intelligence]].<ref>{{Cite journal|last=Himmelreich|first=Johannes|date=June 1, 2018|title=Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations|journal=Ethical Theory and Moral Practice|language=en|volume=21|issue=3|pages=669–684|doi=10.1007/s10677-018-9896-4|s2cid=150184601|issn=1572-8447}}</ref> A question also remains of whether the law should dictate the ethical standards that all autonomous vehicles must use, or whether individual owners or drivers should determine their car's ethical values, such as favoring the safety of the owner or the owner's family over that of others.<ref name=":0" /> Although most people would not be willing to use an automated car that might sacrifice them in a life-or-death dilemma, some{{Who|date=August 2019}} make the somewhat counterintuitive claim that mandatory ethics settings would nevertheless be in their best interest. According to Gogoll and Müller, "the reason is, simply put, that [personalized ethics settings]<!-- original text: "a PES regime" --> would most likely result in a [[prisoner’s dilemma]]."<ref>{{Cite journal|last1=Gogoll|first1=Jan|last2=Müller|first2=Julian F.|date=June 1, 2017|title=Autonomous Cars: In Favor of a Mandatory Ethics Setting|journal=Science and Engineering Ethics|language=en|volume=23|issue=3|pages=681–700|doi=10.1007/s11948-016-9806-x|pmid=27417644|s2cid=3632738|issn=1471-5546}}</ref>
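A minimal sketch of the game-theoretic structure behind this claim follows; the payoff numbers are hypothetical, chosen only to exhibit the dilemma, and are not taken from Gogoll and Müller's paper:

<syntaxhighlight lang="python">
# Illustrative payoff model for two drivers each choosing an ethics setting.
# "selfish":   the car always protects its own occupants.
# "impartial": the car minimizes total harm, even at its occupants' expense.
# Payoffs are hypothetical expected-safety scores (higher is better),
# arranged to exhibit the prisoner's-dilemma structure.
PAYOFFS = {
    # (my setting, other's setting): (my payoff, other's payoff)
    ("impartial", "impartial"): (3, 3),
    ("impartial", "selfish"):   (0, 5),
    ("selfish",   "impartial"): (5, 0),
    ("selfish",   "selfish"):   (1, 1),
}

def best_response(other_setting):
    """Return the setting that maximizes my payoff given the other's choice."""
    return max(("impartial", "selfish"),
               key=lambda mine: PAYOFFS[(mine, other_setting)][0])

# "selfish" is the best response to either choice (a dominant strategy) ...
assert best_response("impartial") == "selfish"
assert best_response("selfish") == "selfish"
# ... yet mutual selfishness (1, 1) is worse for both drivers than mutual
# impartiality (3, 3): the signature of a prisoner's dilemma.
assert PAYOFFS[("selfish", "selfish")] < PAYOFFS[("impartial", "impartial")]
</syntaxhighlight>

Because "selfish" dominates for each individual driver while mutual selfishness leaves everyone worse off, a mandatory impartial setting would, on this argument, enforce the cooperative outcome that individual choice would unravel.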
In 2016, the German government appointed a commission to study the ethical implications of autonomous driving.<ref>{{cite web|url=http://www.bmvi.de/SharedDocs/DE/Publikationen/G/bericht-der-ethik-kommission.html|title=Bericht der Ethik-Kommission Automatisiertes und vernetztes Fahren|trans-title=Report of the Ethics Commission on Automated and Connected Driving|language=de|author=BMVI Commission|date=June 20, 2016|publisher=Federal Ministry of Transport and Digital Infrastructure (German: Bundesministerium für Verkehr und digitale Infrastruktur)|archive-url=https://web.archive.org/web/20171115224017/http://www.bmvi.de/SharedDocs/DE/Publikationen/G/bericht-der-ethik-kommission.html|archive-date=November 15, 2017|url-status=dead}}</ref><ref name=BMVI-rules>{{cite web|url=https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html|title=Ethics Commission's complete report on automated and connected driving|author=BMVI Commission|date=August 28, 2017|publisher=Federal Ministry of Transport and Digital Infrastructure (German: Bundesministerium für Verkehr und digitale Infrastruktur)|access-date=January 20, 2021|archive-url=https://web.archive.org/web/20170915110611/http://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html|archive-date=September 15, 2017}}</ref> The commission adopted 20 rules to be implemented in the laws that will govern the ethical choices that autonomous vehicles will make.<ref name=BMVI-rules />{{rp|10–13}} Relevant to the trolley dilemma is this rule:

<blockquote>8. Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation, incorporating “unpredictable” behaviour by parties affected. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable. Technological systems must be designed to avoid accidents. However, they cannot be standardized to a complex or intuitive assessment of the impacts of an accident in such a way that they can replace or anticipate the decision of a responsible driver with the moral capacity to make correct judgements. It is true that a human driver would be acting unlawfully if he killed a person in an emergency to save the lives of one or more other persons, but he would not necessarily be acting culpably. Such legal judgements, made in retrospect and taking special circumstances into account, cannot readily be transformed into abstract/general ex ante appraisals and thus also not into corresponding programming activities. …<ref name=BMVI-rules />{{rp|11}}</blockquote>