====Zeroth Law added====
Asimov once added a "[[Zero-based numbering|Zeroth]] Law"—so named to continue the pattern in which lower-numbered laws supersede higher-numbered ones—stating that a robot must not harm humanity. The robotic character [[R. Daneel Olivaw]] was the first to give the Zeroth Law a name, in the novel ''[[Robots and Empire]]'';<ref name="BBCAsimov">{{cite web |title=Isaac Asimov |url=https://www.bbc.co.uk/dna/h2g2/A42253922 |publisher=BBC |access-date=11 November 2010 |archive-date=10 January 2010 |archive-url=https://web.archive.org/web/20100110060251/http://www.bbc.co.uk/dna/h2g2/A42253922 |url-status=live }}</ref> however, the character Susan Calvin had already articulated the concept in the short story "[[The Evitable Conflict]]".

In the final scenes of ''Robots and Empire'', [[R. Giskard Reventlov]] becomes the first robot to act according to the Zeroth Law. Giskard is [[telepathic]], like the robot Herbie in the short story "[[Liar! (short story)|Liar!]]", and tries to apply the Zeroth Law through his understanding of a subtler concept of "harm" than most robots can grasp.<ref name="SC1">{{cite news |title=Sci-fi writer Isaac Asimov |url=http://archive.thedailystar.net/campus/2007/07/05/autprofile.htm |work=Campus Star |publisher=[[The Daily Star (Bangladesh)|The Daily Star]] |date=29 July 2007 |access-date=7 August 2016 |quote=Only highly advanced robots (such as Daneel and Giskard) could comprehend this law. |archive-date=8 November 2016 |archive-url=https://web.archive.org/web/20161108132248/http://archive.thedailystar.net/campus/2007/07/05/autprofile.htm |url-status=live }}</ref> Unlike Herbie, however, Giskard grasps the philosophical concept of the Zeroth Law, which allows him to harm individual human beings when doing so serves the abstract concept of humanity. The Zeroth Law is never programmed into Giskard's brain; instead, it is a rule he attempts to comprehend through pure [[metacognition]].
Although he fails – the attempt destroys his positronic brain, as he cannot be certain whether his choice will turn out to be for the ultimate good of humanity – he gives his successor, R. Daneel Olivaw, his telepathic abilities. Over the course of thousands of years, Daneel adapts himself to be able to fully obey the Zeroth Law.{{fact|date=December 2023}} Daneel formulates the Zeroth Law in both the novel ''[[Foundation and Earth]]'' (1986) and the subsequent novel ''[[Prelude to Foundation]]'' (1988):

{{quote|A robot may not injure humanity or, through inaction, allow humanity to come to harm.}}

A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice. ''Foundation and Earth'' contains the following passage:

{{bquote|Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?"

"Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."}}

A translator incorporated the concept of the Zeroth Law into one of Asimov's novels before Asimov himself made the law explicit.<ref name="Brécard" /> Near the climax of ''[[The Caves of Steel]]'', [[Elijah Baley]] bitterly reflects that the First Law forbids a robot from harming a human being, then reasons that it must be so unless the robot is clever enough to comprehend that its actions serve humankind's long-term good.
In Jacques Brécard's 1956 [[French language|French]] translation, ''[[:fr:Les Cavernes d'acier|Les Cavernes d'acier]]'', Baley's thoughts emerge in a slightly different form: {{quote|A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!<ref name="Brécard">{{cite book| last=Asimov| first=Isaac| title=The Caves of Steel| publisher=Doubleday| year=1952}}, translated by Jacques Brécard as {{cite book| title=Les Cavernes d'acier| publisher=J'ai Lu Science-fiction| year=1975| isbn=978-2-290-31902-4| title-link=:fr:Les Cavernes d'acier}}</ref>}}