===Uncertainty and risk===
{{Further|Existential risk from artificial general intelligence}}
The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.<ref name="positive-and-negative">{{Citation |last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks |page=303 |year=2008 |editor-last=Bostrom |editor-first=Nick |url=http://singinst.org/AIRisk.pdf |archive-url=https://web.archive.org/web/20080807132337/http://www.singinst.org/AIRisk.pdf |archive-date=2008-08-07 |url-status=dead |publisher=Oxford University Press |bibcode=2008gcr..book..303Y |isbn=978-0-19-857050-9 |editor2-last=Cirkovic |editor2-first=Milan}}.</ref><ref name="theuncertainfuture"/> It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an [[Existential risk|existential threat]].<ref name="sandberg-bostrom2008"/><ref name="bostrom-risks"/> Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the [[Future of Humanity Institute]] (until 2024), the [[Machine Intelligence Research Institute]],<ref name="positive-and-negative"/> the [[Center for Human-Compatible Artificial Intelligence]], and the [[Future of Life Institute]].

Physicist [[Stephen Hawking]] said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."<ref name=hawking_2014/> Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."<ref name=hawking_2014/> Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:<ref name="hawking_2014">{{cite web |last=Hawking |first=Stephen |author-link=Stephen Hawking |date=1 May 2014 |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |url-status=live |archive-url=https://web.archive.org/web/20150925153716/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |archive-date=25 September 2015 |access-date=5 May 2014 |work=[[The Independent]]}}</ref>

{{blockquote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here{{snd}}we'll leave the lights on"? Probably not{{snd}}but this is more or less what is happening with AI.}}

{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for an AI to be friendly to humans.
Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"] {{Webarchive|url=https://web.archive.org/web/20181008090224/http://www.nickbostrom.com/ethics/ai.html|date=2018-10-08}}, in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17.</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf|date=2012-06-11}}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file).</ref><ref name="singinst9">{{Cite web |url=http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ |title=The Stamp Collecting Device |first=Nick |last=Hay |access-date=2010-08-21 |archive-date=2012-06-17 |archive-url=https://web.archive.org/web/20120617191319/http://singinst.org/blog/2007/06/11/the-stamp-collecting-device/ |url-status=dead |date=June 11, 2007 |publisher=Singularity Institute |work=SIAI Blog}}</ref> [[Anders Sandberg]] has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">{{Cite web |title=Why we should fear the Paperclipper |date=February 14, 2011 |first=Anders |last=Sandberg |work=Andart |url=http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html |access-date=2023-06-14}}</ref> AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],<ref name="selfawaresystems.com" /><ref name="selfawaresystems"/> and humans would be powerless to stop them.<ref name="forbes">{{Cite web |last=de Garis |first=Hugo |title=The Coming Artilect War |url=https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html |website=Forbes |date=June 22, 2009 |access-date=2023-06-14 |language=en}}</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />

{{Harvtxt|Bostrom|2002}} discusses human extinction scenarios, and lists superintelligence as a possible cause:

{{blockquote|When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.}}

According to [[Eliezer Yudkowsky]], a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI.
While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |url=http://singinst.org/upload/CEV.html |title=Coherent Extrapolated Volition |first=Eliezer S. |last=Yudkowsky |date=May 2004 |archive-url=https://web.archive.org/web/20100815055725/http://singinst.org/upload/CEV.html |archive-date=2010-08-15 |url-status=dead}}</ref>

{{harvtxt|Bill Hibbard|2014}} proposes an AI design that avoids several dangers, including self-delusion,<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}</ref> unintended instrumental actions,<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf Avoiding Unintended AI Behaviors.] {{Webarchive|url=https://web.archive.org/web/20130629072904/http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf |date=2013-06-29 }} Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper] {{Webarchive|url=https://web.archive.org/web/20210215095130/https://intelligence.org/2012/12/19/december-2012-newsletter/ |date=2021-02-15 }}.</ref> and corruption of the reward generator.<ref name="AGI-12a"/> He also discusses the social impacts of AI<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.| access-date=2013-01-05| archive-date=2021-02-15| archive-url=https://web.archive.org/web/20210215095140/http://jetpress.org/v17/hibbard.htm| url-status=live}}</ref> and the testing of AI.<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf Decision Support for Safe AI Design.] {{Webarchive|url=https://web.archive.org/web/20210215095047/http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf |date=2021-02-15 }} Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref> His 2001 book ''[[Super-Intelligent Machines]]'' advocates the need for public education about AI and public control over AI. It also proposes a simple design that was vulnerable to corruption of the reward generator.