Friendly artificial intelligence
== Criticism == {{See also|Technological singularity#Criticisms}} Some critics believe that both human-level AI and superintelligence are unlikely and that, therefore, friendly AI is unlikely. Writing in ''[[The Guardian]]'', Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|access-date=17 September 2014|work=[[The Guardian]]|date=9 August 2014|archive-date=17 September 2014|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|url-status=live}}</ref> Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and [[Nick Bostrom]]'s proposal to create friendly AIs appears to be bleak.
This is because Muehlhauser and Bostrom seem to hold the idea that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had.<ref name=think13 /> In an article in ''[[AI & Society]]'', Boyles and Joaquin maintain that such AIs would not be that friendly, for the following reasons: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine; the difficulty of cashing out the set of moral values, that is, those that are more ideal than the ones human beings possess at present; and the apparent disconnect between the counterfactual antecedents and the ideal value consequents.<ref name=boyles2019 /> Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.<ref>{{cite journal | last=Kornai | first=András | title=Bounding the impact of AGI | journal=Journal of Experimental & Theoretical Artificial Intelligence | publisher=Informa UK Limited | volume=26 | issue=3 | date=2014-05-15 | issn=0952-813X | doi=10.1080/0952813x.2014.895109 | pages=417–438 | s2cid=7067517 |quote=...the essence of AGIs is their reasoning facilities, and it is the very logic of their being that will compel them to behave in a moral fashion... The real nightmare scenario (is one where) humans find it advantageous to strongly couple themselves to AGIs, with no guarantees against self-deception.}}</ref> Other critics question whether artificial intelligence can be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal ''[[The New Atlantis (journal)|The New Atlantis]]'', say that it will be impossible ever to guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power.
They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes but certainty and consensus on how one values the different outcomes."<ref>{{cite magazine |url=http://www.thenewatlantis.com/publications/the-problem-with-friendly-artificial-intelligence |first1=Adam |last1=Keiper |first2=Ari N. |last2=Schulman |title=The Problem with 'Friendly' Artificial Intelligence |magazine=The New Atlantis |number=32 |date=Summer 2011 |pages=80–89 |access-date=2012-01-16 |archive-date=2012-01-15 |archive-url=https://web.archive.org/web/20120115062805/http://www.thenewatlantis.com/publications/the-problem-with-friendly-artificial-intelligence |url-status=live }}</ref> The inner workings of advanced AI systems may be complex and difficult to interpret, leading to concerns about transparency and accountability.<ref>{{Cite book |last=Norvig |first=Peter |title=Artificial Intelligence: A Modern Approach |last2=Russell |first2=Stuart |publisher=Pearson |year=2010 |isbn=978-0136042594 |edition=3rd}}</ref>