==Applications to future technology==
{{See also|Philosophy of artificial intelligence|Ethics of artificial intelligence|Friendly artificial intelligence}}
[[File:HONDA ASIMO.jpg|thumb|[[ASIMO]] was an advanced [[humanoid robot]] developed by [[Honda]]. Shown here at [[Expo 2005]].]]
Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in, and devise a means to do so. Robots already exist (for example, a [[Roomba]]) that are too simple to recognize when they are causing pain or injury and to know to stop. Some are constructed with physical safeguards such as bumpers, warning beepers, safety cages, or restricted-access zones to prevent accidents. Even the most complex robots currently produced are incapable of understanding and applying the Three Laws; significant advances in artificial intelligence would be needed to do so, and even if AI could reach human-level intelligence, the inherent ethical complexity and the cultural and contextual dependency of the laws make them a poor candidate for formulating robotics design constraints.<ref>{{cite journal | url=http://www.inf.ufrgs.br/~prestes/Courses/Robotics/beyond%20asimov.pdf | title=Beyond Asimov: The Three Laws of Responsible Robotics | journal=IEEE Intelligent Systems | date=July 2009 | issue=4 | volume=24 | pages=14–20 | doi=10.1109/mis.2009.69 | last1=Murphy | first1=Robin | last2=Woods | first2=David D. | s2cid=3165389 | access-date=2014-07-30 | archive-date=2023-04-09 | archive-url=https://web.archive.org/web/20230409123207/https://www.inf.ufrgs.br/~prestes/Courses/Robotics/beyond%20asimov.pdf | url-status=live }}</ref> However, as the complexity of robots has increased, so has interest in developing guidelines and safeguards for their operation.<ref name="moravec">[[Hans Moravec|Moravec, Hans]].
"The Age of Robots", ''Extro 1, Proceedings of the First [[Extropy Institute]] Conference on TransHumanist Thought'' (1994) pp. 84–100. [http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1993/Robot93.html June 1993 version] {{Webarchive|url=https://web.archive.org/web/20060615055406/http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1993/Robot93.html |date=2006-06-15 }} available online.</ref><ref>{{cite journal| url=https://www.newscientist.com/channel/mech-tech/robots/mg18925445.600-rules-for-the-modern-robot.html| title=Rules for the modern robot| journal=New Scientist| date=27 March 2006| access-date=2006-06-12| issue=2544| page=27| archive-date=2024-09-25| archive-url=https://web.archive.org/web/20240925011045/https://www.newscientist.com/article/mg18925445-600-rules-for-the-modern-robot/| url-status=live}}</ref>

In a 2007 guest editorial in the journal ''[[Science (journal)|Science]]'' on the topic of "Robot Ethics", SF author [[Robert J. Sawyer]] argues that since the [[United States Armed Forces|U.S. military]] is a major source of funding for robotic research (and already uses armed [[unmanned aerial vehicles]] to kill enemies) it is unlikely such laws would be built into their designs.<ref name="SAWSCI">{{cite journal| last=Sawyer| first=Robert J.| url=http://sfwriter.com/science.htm| doi=10.1126/science.1151606| journal=Science| date=16 November 2007| title=Guest Editorial: Robot Ethics| access-date=2010-10-10| volume=318| issue=5853| page=1037| pmid=18006710| doi-access=free| archive-date=2024-09-25| archive-url=https://web.archive.org/web/20240925011027/https://sfwriter.com/science.htm| url-status=live| url-access=subscription}}</ref> In a separate essay, Sawyer generalizes this argument to cover other industries, stating:

<blockquote>The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones.
(A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)<ref name="SAWON3LAWS">{{cite web| last=Sawyer| first=Robert J.| url=http://www.sfwriter.com/rmasilaw.htm| title=On Asimov's Three Laws of Robotics| year=1991| access-date=2006-06-12| archive-date=2006-06-23| archive-url=https://web.archive.org/web/20060623171959/http://www.sfwriter.com/rmasilaw.htm| url-status=live}}</ref></blockquote>

[[David Langford]] has suggested<ref>Originally in a speech entitled "[https://ansible.uk/writing/crystal.html A Load of Crystal Balls] {{Webarchive|url=https://web.archive.org/web/20240925011517/https://ansible.uk/writing/crystal.html |date=2024-09-25 }}" at the Novacon SF convention in 1985; published 1986 in the fanzine Prevert #15; collected in ''Platen Stories'' (1987) and the 2015 ebook version of ''The Silence of the Langford''</ref> a tongue-in-cheek set of laws:
# A robot will not harm authorized Government personnel but will [[Terminate with extreme prejudice|terminate intruders with extreme prejudice]].
# A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
# A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Roger Clarke (aka Rodger Clarke) wrote a pair of papers analyzing the complications in implementing these laws in the event that systems were someday capable of employing them. He argued "Asimov's Laws of Robotics have been a successful literary device.
Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disprove the contention that he began with: It is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules."<ref>Clarke, Roger. ''Asimov's laws of robotics: Implications for information technology''. [http://csdl.computer.org/comp/mags/co/1993/12/rz053abs.htm Part 1: IEEE Computer, December 1993, p53–61.] {{Webarchive|url=https://web.archive.org/web/20050410043033/http://csdl.computer.org/comp/mags/co/1993/12/rz053abs.htm|date=2005-04-10}} [http://csdl.computer.org/comp/mags/co/1994/01/r1057abs.htm Part 2: IEEE Computer, Jan 1994, p57–66.] {{Webarchive|url=https://web.archive.org/web/20170311084522/https://csdl.computer.org/comp/mags/co/1994/01/r1057abs.htm|date=2017-03-11}} Both parts are available without fee at [http://www.rogerclarke.com/SOS/Asimov.html] {{Webarchive|url=https://web.archive.org/web/20111007042040/http://www.rogerclarke.com/SOS/Asimov.html|date=2011-10-07}}. Under "Enhancements to codes of ethics".</ref> On the other hand, Asimov's later novels ''[[The Robots of Dawn]]'', ''[[Robots and Empire]]'' and ''[[Foundation and Earth]]'' imply that the robots inflicted their worst long-term harm by obeying the Three Laws perfectly well, thereby depriving humanity of inventive or risk-taking behaviour.

In March 2007 the [[South Korea]]n government announced that later in the year it would issue a "Robot Ethics Charter" setting standards for both users and manufacturers.
According to Park Hye-Young of the Ministry of Information and Communication the Charter may reflect Asimov's Three Laws, attempting to set ground rules for the future development of robotics.<ref>{{cite news | title=Robotic age poses ethical dilemma | url=http://news.bbc.co.uk/2/hi/technology/6425927.stm | work=BBC News | date=2007-03-07 | access-date=2007-03-07 | archive-date=2024-09-25 | archive-url=https://web.archive.org/web/20240925012029/http://news.bbc.co.uk/2/hi/technology/6425927.stm | url-status=live }}</ref>

The futurist [[Hans Moravec]] (a prominent figure in the [[transhumanism|transhumanist]] movement) proposed that the Laws of Robotics should be adapted to "corporate intelligences" — the [[corporation]]s driven by AI and robotic manufacturing power which Moravec believes will arise in the near future.<ref name="moravec"/> In contrast, the [[David Brin]] novel ''[[Foundation's Triumph]]'' (1999) suggests that the Three Laws may decay into obsolescence: Robots use the Zeroth Law to rationalize away the First Law and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays [[R. Daneel Olivaw]] worrying that, should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap and [[natural selection]] would sweep the Laws away — Asimov's careful foundation undone by [[evolutionary computation]]. Although the robots would be evolving through ''design'' rather than ''mutation'' (because the robots would have to follow the Three Laws while designing, and the prevalence of the laws would thus be ensured),<ref>{{cite book | last = Brin | first = David | author-link = David Brin | title = Foundation's Triumph | year = 1999 | publisher = HarperCollins | isbn = 978-0-06-105241-5 | url = https://archive.org/details/foundationstrium00brin_0 }}</ref> design flaws or construction errors could functionally take the place of biological mutation.
In the July/August 2009 issue of ''IEEE Intelligent Systems'', Robin Murphy (Raytheon Professor of Computer Science and Engineering at Texas A&M) and David D. Woods (director of the Cognitive Systems Engineering Laboratory at Ohio State) proposed "The Three Laws of Responsible Robotics" as a way to stimulate discussion about the role of responsibility and authority when designing not only a single robotic platform but the larger system in which the platform operates. The laws are as follows:
# A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
# A robot must respond to humans as appropriate for their roles.
# A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.<ref name="IEEEMWTTLRR">{{cite web |url=http://researchnews.osu.edu/archive/roblaw.htm |title=Want Responsible Robotics? Start With Responsible Humans |publisher=Researchnews.osu.edu |access-date=2015-03-28 |url-status=dead |archive-url=https://web.archive.org/web/20160215190449/http://researchnews.osu.edu/archive/roblaw.htm |archive-date=2016-02-15 }}</ref>

Woods said, "Our laws are a little more realistic, and therefore a little more boring" and that "The philosophy has been, 'sure, people make mistakes, but robots will be better – a perfect version of ourselves'. We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways."<ref name="IEEEMWTTLRR"/>

In early 2011, the UK published what is now considered the first national-level AI soft law, which consisted largely of a revised set of five laws, the first three of which updated Asimov's.
These laws were published, with commentary, by the EPSRC/AHRC working group in 2010:<ref>{{cite web |url=https://webarchive.nationalarchives.gov.uk/ukgwa/20210701125353/https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ |title=Principles of robotics – EPSRC website |publisher=Epsrc.ac.uk |access-date=2022-11-17 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925011653/https://webarchive.nationalarchives.gov.uk/ukgwa/20210701125353/https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/##skipnav |url-status=live }}</ref><ref>{{cite web |author=Alan Winfield |url=http://alanwinfield.blogspot.co.uk/2013/10/ethical-robots-some-technical-and.html |title=Alan Winfield's Web Log: Ethical Robots: some technical and ethical challenges |publisher=Alanwinfield.blogspot.co.uk |date=2013-10-30 |access-date=2015-03-28 |archive-date=2015-04-02 |archive-url=https://web.archive.org/web/20150402173617/http://alanwinfield.blogspot.co.uk/2013/10/ethical-robots-some-technical-and.html |url-status=live }}</ref>
# Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
# Humans, not robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
# Robots are products. They should be designed using processes which assure their safety and security.
# Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
# The person with legal responsibility for a robot should be attributed.