== Algorithm improvements ==

Some intelligence technologies, like "seed AI",<ref name="Yampolskiy, Roman V 2015"/><ref name="ReferenceA"/> may also have the potential to not just make themselves faster, but also more efficient, by modifying their [[source code]]. These improvements would make further improvements possible, which would make further improvements possible, and so on.

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.{{citation needed|date=July 2017}} An AI rewriting its own source code could do so while contained in an [[AI box]]. Second, as with [[Vernor Vinge]]'s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. [[Eliezer Yudkowsky]] compares it to the changes that human intelligence brought: humans changed the world thousands of times quicker than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.<ref name="yudkowsky">{{cite web |last=Yudkowsky |first=Eliezer S. |title=Power of Intelligence |url=http://yudkowsky.net/singularity/power |url-status=live |archive-url=https://web.archive.org/web/20181003033529/http://yudkowsky.net/singularity/power |archive-date=2018-10-03 |access-date=2011-09-09 |publisher=Yudkowsky}}</ref>

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended.<ref name="selfawaresystems">{{Cite web |last=Omohundro |first=Stephen M. |date=30 November 2007 |editor-last=Wang |editor-first=Pei |editor2-last=Goertzel |editor2-first=Ben |editor3-last=Franklin |editor3-first=Stan |title="The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, Vol. 171. |url=http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ |url-status=live |archive-url=https://web.archive.org/web/20180917003322/https://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ |archive-date=2018-09-17 |access-date=2010-08-20 |publisher=IOS |place=Amsterdam, Netherlands}}</ref><ref name="kurzweilai">{{cite web |url=http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time |title=Artificial General Intelligence: Now Is the Time |publisher=KurzweilAI |access-date=2011-09-09 |archive-date=2011-12-04 |archive-url=https://web.archive.org/web/20111204070036/http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time |url-status=live }}</ref> Secondly, AIs could compete for the same scarce resources humankind uses to survive.<ref name="selfawaresystems.com">{{Cite web |url=http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/ |title=Omohundro, Stephen M., "The Nature of Self-Improving Artificial Intelligence." Self-Aware Systems. 21 Jan. 2008. Web. 07 Jan. 2010. |date=6 October 2007 |access-date=2010-08-20 |archive-date=2018-06-12 |archive-url=https://web.archive.org/web/20180612163100/https://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/ |url-status=live }}</ref><ref>{{cite book|last1=Barrat|first1=James|title=Our Final Invention|year=2013|publisher=St. Martin's Press|location=New York|isbn=978-0312622374|pages=78–98|edition=First|chapter=6, "Four Basic Drives"|title-link=Our Final Invention}}</ref> While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans.<ref name="kurzweilai.net">{{cite web |url=http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2 |title=Max More and Ray Kurzweil on the Singularity |publisher=KurzweilAI |access-date=2011-09-09 |archive-date=2018-11-21 |archive-url=https://web.archive.org/web/20181121213047/http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2 |url-status=live }}</ref><ref name="ReferenceB">{{cite web |url=http://singinst.org/riskintro/index.html |title=Concise Summary | Singularity Institute for Artificial Intelligence |publisher=Singinst.org |access-date=2011-09-09 |archive-date=2011-06-21 |archive-url=https://web.archive.org/web/20110621172641/http://singinst.org/riskintro/index.html |url-status=dead }}</ref><ref name="nickbostrom7">{{Cite web |url=http://www.nickbostrom.com/fut/evolution.html |last=Bostrom |first=Nick |title=The Future of Human Evolution |year=2004 |access-date=2010-08-20 |archive-date=2018-08-28 |archive-url=https://web.archive.org/web/20180828203258/https://nickbostrom.com/fut/evolution.html |url-status=live}}<!-- Published in Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy (Ria University Press: Palo Alto, California, 2004): pp. 339-371. --></ref>

[[Carl Shulman]] and [[Anders Sandberg]] suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.<ref name="ShulmanSandberg2010">{{cite journal |last1=Shulman |first1=Carl |last2=Sandberg |first2=Anders |year=2010 |editor1-last=Mainzer |editor1-first=Klaus |title=Implications of a Software-Limited Singularity |url=http://intelligence.org/files/SoftwareLimited.pdf |url-status=live |journal=ECAP10: VIII European Conference on Computing and Philosophy |archive-url=https://web.archive.org/web/20190430061928/https://intelligence.org/files/SoftwareLimited.pdf |archive-date=30 April 2019 |access-date=17 May 2014}}</ref> An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang".<ref name="MuehlhauserSalamon2012">{{cite book |last1=Muehlhauser |first1=Luke |title=Singularity Hypotheses: A Scientific and Philosophical Assessment |last2=Salamon |first2=Anna |publisher=Springer |year=2012 |editor=Eden |editor-first=Amnon |chapter=Intelligence Explosion: Evidence and Import |access-date=2018-08-28 |editor2=Søraker |editor-first2=Johnny |editor3=Moor |editor-first3=James H. |editor4=Steinhart |editor-first4=Eric |chapter-url=http://intelligence.org/files/IE-EI.pdf |archive-url=https://web.archive.org/web/20141026105011/http://intelligence.org/files/IE-EI.pdf |archive-date=2014-10-26 |url-status=live}}</ref>