===By Asimov===
Asimov's stories test his Three Laws in a wide variety of circumstances, leading to the proposal and rejection of modifications. Science fiction scholar [[James Gunn (author)|James Gunn]] wrote in 1982: "The Asimov robot stories as a whole may respond best to an analysis on this basis: the ambiguity in the Three Laws and the ways in which Asimov played twenty-nine variations upon a theme".<ref>Gunn (1982).</ref> While the original set of Laws provided inspiration for many stories, Asimov introduced modified versions from time to time.

====First Law modified====
In "[[Little Lost Robot]]" several NS-2, or "Nestor", robots are created with only part of the First Law.<ref name="IROBOT"/> It reads:

{{quote|1. A robot may not harm a human being.}}

This modification is motivated by a practical difficulty: the robots must work alongside human beings who are exposed to low doses of radiation. Because their [[positronic brain]]s are highly sensitive to [[gamma ray]]s, the robots are rendered inoperable by doses reasonably safe for humans, and are destroyed attempting to rescue humans who are in no actual danger but "might forget to leave" the irradiated area within the exposure time limit. Removing the First Law's "inaction" clause solves this problem but creates the possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm, and then decide not to do so.<ref name="IROBOT"/>

[[Gaia (Foundation universe)|Gaia]], a planet with [[collective intelligence]] in the [[Foundation (book series)|''Foundation'' series]], adopts a law similar to the First Law (and to the Zeroth Law) as its philosophy:

{{quote|Gaia may not harm life or allow life to come to harm.}}

====Zeroth Law added====
Asimov once added a "[[Zero-based numbering|Zeroth]] Law" (so named to continue the pattern in which lower-numbered laws supersede higher-numbered ones), stating that a robot must not harm humanity. The robotic character [[R. Daneel Olivaw]] was the first to give the Zeroth Law a name, in the novel ''[[Robots and Empire]]'';<ref name="BBCAsimov">{{cite web |title=Isaac Asimov |url=https://www.bbc.co.uk/dna/h2g2/A42253922 |publisher=BBC |access-date=11 November 2010 |archive-date=10 January 2010 |archive-url=https://web.archive.org/web/20100110060251/http://www.bbc.co.uk/dna/h2g2/A42253922 |url-status=live }}</ref> however, the character Susan Calvin had earlier articulated the concept in the short story "[[The Evitable Conflict]]".

In the final scenes of ''Robots and Empire'', [[R. Giskard Reventlov]] is the first robot to act according to the Zeroth Law. Giskard is [[telepathic]], like the robot Herbie in the short story "[[Liar! (short story)|Liar!]]", and tries to apply the Zeroth Law through his understanding of a more subtle concept of "harm" than most robots can grasp.<ref name="SC1">{{cite news |title=Sci-fi writer Isaac Asimov |url=http://archive.thedailystar.net/campus/2007/07/05/autprofile.htm |work=Campus Star |publisher=[[The Daily Star (Bangladesh)|The Daily Star]] |date=29 July 2007 |access-date=7 August 2016 |quote=Only highly advanced robots (such as Daneel and Giskard) could comprehend this law. |archive-date=8 November 2016 |archive-url=https://web.archive.org/web/20161108132248/http://archive.thedailystar.net/campus/2007/07/05/autprofile.htm |url-status=live }}</ref> However, unlike Herbie, Giskard grasps the philosophical concept of the Zeroth Law, which allows him to harm individual human beings when doing so serves the abstract concept of humanity. The Zeroth Law is never programmed into Giskard's brain; it is instead a rule he attempts to comprehend through pure [[metacognition]]. Although he fails (the attempt ultimately destroys his positronic brain, because he cannot be certain whether his choice will turn out to be for the ultimate good of humanity), he gives his successor R. Daneel Olivaw his telepathic abilities. Over the course of thousands of years Daneel adapts himself to be able to fully obey the Zeroth Law.{{fact|date=December 2023}} Daneel restates the Zeroth Law in both the novel ''[[Foundation and Earth]]'' (1986) and the subsequent novel ''[[Prelude to Foundation]]'' (1988):

{{quote|A robot may not injure humanity or, through inaction, allow humanity to come to harm.}}

A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice. ''Foundation and Earth'' contains the following passage:

{{bquote|Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?"

"Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."}}

A translator incorporated the concept of the Zeroth Law into one of Asimov's novels before Asimov himself made the law explicit.<ref name="Brécard" /> Near the climax of ''[[The Caves of Steel]]'', [[Elijah Baley]] makes a bitter comment to himself, thinking that the First Law forbids a robot from harming a human being, and determines that it must be so unless the robot is clever enough to comprehend that its actions serve humankind's long-term good. In Jacques Brécard's 1956 [[French language|French]] translation, ''[[:fr:Les Cavernes d'acier|Les Cavernes d'acier]]'', Baley's thoughts emerge in a slightly different way:

{{quote|A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!<ref name="Brécard">{{cite book| last=Asimov| first=Isaac| title=The Caves of Steel| publisher=Doubleday| year=1952}}, translated by Jacques Brécard as {{cite book| title=Les Cavernes d'acier| publisher=J'ai Lu Science-fiction| year=1975| isbn=978-2-290-31902-4| title-link=:fr:Les Cavernes d'acier}}</ref>}}

====Removal of the Three Laws====
Three times during his writing career, Asimov portrayed robots that disregard the Three Laws entirely. The first case was a [[Vignette (literature)|short-short story]] entitled "[[First Law]]", often considered an insignificant "tall tale"<ref>Patrouch (1974), p. 50.</ref> or even [[apocrypha]]l.<ref>Gunn (1980); reprinted in Gunn (1982), p. 69.</ref> On the other hand, the short story "[[Cal (short story)|Cal]]" (from the collection ''[[Gold (Asimov)|Gold]]''), told by a first-person robot narrator, features a robot who disregards the Three Laws because he has found something far more important: he wants to be a writer.
Humorous, partly autobiographical and unusually experimental in style, "Cal" has been regarded as one of ''Gold''{{'s}} strongest stories.<ref>{{cite web|last=Jenkins |first=John H. |work=Jenkins' Spoiler-Laden Guide to Isaac Asimov |url=http://preem.tejat.net/~tseng/Asimov/Stories/Story419.html |year=2002 |access-date=2009-06-26 |title=Review of "Cal" |url-status=dead |archive-url=https://web.archive.org/web/20090911085355/http://preem.tejat.net/~tseng/Asimov/Stories/Story419.html |archive-date=2009-09-11 }}</ref> The third is a short story entitled "[[Sally (Asimov)|Sally]]", in which cars fitted with positronic brains are apparently able to harm and kill humans in disregard of the First Law. However, aside from the positronic brain concept, this story does not refer to other robot stories and may not be set in the same [[continuity (fiction)|continuity]].

The title story of the ''[[Robot Dreams (short story collection)|Robot Dreams]]'' collection portrays LVX-1, or "Elvex", a robot who enters a state of unconsciousness and dreams thanks to the unusual [[fractal]] construction of his positronic brain. In his dream the first two Laws are absent and the Third Law reads simply "A robot must protect its own existence".<ref name="RDA1">{{cite book|last=Asimov |first=Isaac |title=Robot Dreams |year=1986 |author-link=Isaac Asimov |access-date=11 November 2010 |url=http://www.tcnj.edu/~miranda/classes/topics/reading/asimov.pdf |quote=“But you quote it in incomplete fashion. The Third Law is ‘A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’ ” “Yes, Dr. Calvin. That is the Third Law in reality, but in my dream, the Law ended with the word ‘existence’. There was no mention of the First or Second Law.” |url-status=bot: unknown |archive-url=https://web.archive.org/web/20120316134042/http://www.tcnj.edu/~miranda/classes/topics/reading/asimov.pdf |archive-date=16 March 2012 }}</ref>

Asimov took varying positions on whether the Laws were optional: although in his first writings they were simply carefully engineered safeguards, in later stories Asimov stated that they were an inalienable part of the mathematical foundation underlying the positronic brain. Without the basic theory of the Three Laws, the fictional scientists of Asimov's universe would be unable to design a workable brain unit. This is consistent within the stories' internal history: the occasions on which roboticists modify the Laws generally occur early in the chronology, when there is less existing work to be redone. In "Little Lost Robot" Susan Calvin considers modifying the Laws a terrible idea, although possible,<ref name="BBC2">{{cite web |title='The Complete Robot' by Isaac Asimov |url=https://www.bbc.co.uk/dna/h2g2/A455898 |publisher=BBC |access-date=11 November 2010 |date=3 November 2000 |quote=The answer is that it had had its First Law modified}}</ref> while centuries later Dr. Gerrigel in ''[[The Caves of Steel]]'' believes it would take a century just to redevelop positronic brain theory from scratch.

Dr. Gerrigel uses the term "Asenion" to describe robots programmed with the Three Laws. The robots in Asimov's stories, being Asenion robots, are incapable of knowingly violating the Three Laws, but in principle a robot in science fiction or in the real world could be non-Asenion. "Asenion" is a misspelling of the name Asimov made by an editor of the magazine ''Planet Stories''.<ref>Asimov (1979), pp. 291–292.</ref> Asimov used this obscure variation to insert himself into ''The Caves of Steel'', just as he referred to himself as "Azimuth or, possibly, Asymptote" in ''[[Thiotimoline]] to the Stars'', and much as [[Vladimir Nabokov]] appeared in ''[[Lolita]]'' [[anagram]]matically disguised as "Vivian Darkbloom".

Characters within the stories often point out that the Three Laws, as they exist in a robot's mind, are not the written versions usually quoted by humans but abstract mathematical concepts upon which a robot's entire developing consciousness is based. This concept is unclear in the earlier stories, which depict rudimentary robots programmed only to comprehend basic physical tasks, with the Three Laws acting as an overarching safeguard; but by the era of ''The Caves of Steel'', featuring robots with human or beyond-human intelligence, the Three Laws have become the underlying basic ethical worldview that determines the actions of all robots.