==Hard or soft takeoff==
[[File:Recursive self-improvement.svg|thumb|upright=1.6|In this sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, for example, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, the time it requires to complete a redesign halves with each generation, and it progresses through all 30 feasible generations in six years (right).<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>]]
In a hard takeoff scenario, an artificial superintelligence rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the agent's goals. In a soft takeoff scenario, the AI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AI's development.<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>

[[Ramez Naam]] argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. [[Intel]], for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of [[Moore's law]].<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|access-date=16 May 2014|year=2014|archive-date=17 May 2014|archive-url=https://web.archive.org/web/20140517114905/http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|url-status=live}}</ref> Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably ''more'' than twice as hard as creating a mind of intelligence 1."<ref name="Naam2014Ascend">{{cite web |last=Naam |first=Ramez |year=2014 |title=Why AIs Won't Ascend in the Blink of an Eye – Some Math |url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html |url-status=live |archive-url=https://web.archive.org/web/20140517115830/http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html |archive-date=17 May 2014 |access-date=16 May 2014}}</ref>

[[J. Storrs Hall]] believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular{{snd}}they seem to assume hyperhuman capabilities at the ''starting point'' of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|access-date=16 May 2014|archive-date=1 December 2014|archive-url=https://web.archive.org/web/20141201201504/http://www.agiri.org/takeoff_hall.pdf|url-status=live}}</ref>

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He refers to this scenario as a "semihard takeoff".<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence – Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|access-date=25 October 2014|agency=h+ Magazine|date=26 Sep 2014|archive-date=25 October 2014|archive-url=https://web.archive.org/web/20141025053847/http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|url-status=live}}</ref>

[[Max More]] disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|access-date=10 November 2014|archive-date=28 August 2009|archive-url=https://web.archive.org/web/20090828023928/http://hanson.gmu.edu/vc.html#more|url-status=live}}</ref>
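The two timelines in the figure above follow directly from the caption's illustrative assumptions (a three-year redesign cycle and a limit of 30 feasible generations); the values are a worked example, not empirical estimates. With the cycle time held fixed, 30 generations take <math>30 \times 3 = 90</math> years, whereas if each completed redesign halves the time required for the next one, the total time is the geometric series
<math display="block">\sum_{k=0}^{29} 3 \cdot 2^{-k} = 3\left(2 - 2^{-29}\right) \approx 6 \text{ years}.</math>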