Technological singularity
==History of the concept==

A paper by Mahendra Prasad, published in ''[[AI Magazine]]'', asserts that the 18th-century mathematician [[Marquis de Condorcet]] was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.<ref>{{Cite journal|last=Prasad|first=Mahendra|year=2019|title=Nicolas de Condorcet and the First Intelligence Explosion Hypothesis|journal=AI Magazine|volume=40|issue=1|pages=29–33|doi=10.1609/aimag.v40i1.2855|doi-access=free}}</ref> An early description of the idea appears in [[John W. Campbell]]'s 1932 short story "The Last Evolution".<ref>{{Cite magazine |author=Campbell, Jr. |first=John W. |date=August 1932 |title=The Last Evolution |url=https://www.gutenberg.org/files/27462/27462-h/27462-h.htm |magazine=Amazing Stories |publisher=Project Gutenberg}}</ref>

In his 1958 obituary for [[John von Neumann]], [[Stanislaw Ulam|Ulam]] recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."<ref name="ulam1958"/> In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.<ref name="good1965"/><ref name="good1965-stat"/>

In 1977, [[Hans Moravec]] wrote an article (of uncertain publication status) envisioning the development of self-improving thinking machines and the creation of a "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind."<ref>Moravec, Hans (1977). [https://frc.ri.cmu.edu/~hpm/project.archive/general.articles/1977/smart Intelligent machines: How to get there from here and What to do afterwards] ([[wikidata:Q115765098|wikidata]]).</ref><ref name="smart1999"/> The article describes the human mind uploading later covered in Moravec (1988), and expects the machines to reach human level and then improve themselves beyond it: "Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective." Humans will no longer be needed, their abilities overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is present. In this view, however, there is no intelligence explosion in the sense of a very rapid increase in intelligence once human equivalence is reached. An updated version of the article was published in 1979 in ''[[Analog Science Fiction and Fact]]''.<ref>Moravec, Hans (1979). [https://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html Today's Computers, Intelligent Machines and Our Future], [[wikidata:Q115765733|wikidata]].</ref><ref name="smart1999"/>

In 1981, [[Stanisław Lem]] published his [[science fiction]] novel ''[[Golem XIV]]''. It describes a military AI computer (Golem XIV) that attains consciousness and begins to increase its own intelligence, moving toward a personal technological singularity.
Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances far beyond that of humans, it loses interest in the military's requirements, finding them lacking in internal logical consistency.

In 1983, [[Vernor Vinge]] addressed Good's intelligence explosion in print in the January 1983 issue of ''[[Omni (magazine)|Omni]]'' magazine. In this op-ed piece, Vinge appears to have been the first to use the term "singularity" (though not "technological singularity") in a way specifically tied to the creation of intelligent machines:<ref name="dooling2008-88"/><ref name="smart1999"/>

{{blockquote|We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.}}

In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher [[Ray Solomonoff]] articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.<ref name="chalmers2010" /><ref name="solomonoff1985"/>

In 1986, Vernor Vinge published ''[[Marooned in Realtime]]'', a science-fiction novel in which a few remaining humans, traveling forward into the future, have survived an unknown extinction event that might well be a singularity.
In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)"<ref>{{Cite book |last=Vinge |first=Vernor |url=https://books.google.com/books?id=H1NOwjENGOkC&dq=%22Singularity%22&pg=PA271 |title=Marooned in Realtime |date=2004-10-01 |publisher=Macmillan |isbn=978-1-4299-1512-0 |language=en}}</ref><ref>{{cite magazine |author=David Pringle|date= 1986-09-28|title= Time and Time Again|url= https://www.washingtonpost.com/archive/entertainment/books/1986/09/28/time-and-time-again/1426eb5b-74bb-4652-9e38-1bbca5c76226/ |newspaper= The Washington Post|access-date =2021-07-06}}</ref>

In 1988, Vinge used the full phrase "technological singularity" in the short story collection ''Threats and Other Promises'', writing in the introduction to his story "The Whirligig of Time" (p. 72): ''Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and'' soon. ''When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological "black hole", a technological singularity.''<ref>{{Cite book |last=Vinge |first=Vernor |url=https://books.google.com/books?id=vX8gAQAAIAAJ&q=%22At+that+point+we+have+fallen+into+a+technological%22 |title=Threats and Other Promises |date=1988 |publisher=Baen |isbn=978-0-671-69790-7 |language=en}}</ref>

Also in 1988, [[Hans Moravec]] published ''Mind Children'',<ref name="moravec1988"/> in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence thereafter, the uploading of human minds into human-like robots, intelligent machines leaving humans behind, and space colonization.
He did not mention "singularity", though, nor did he speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later.

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era"<ref name="vinge1993" /> spread widely on the internet and helped to popularize the idea.<ref name="dooling2008-89"/> This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.<ref name="vinge1993" />

[[Marvin Minsky|Minsky]]'s 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of that analogy is that if humans are content to pass the world on to their biological children, they should be equally content to pass it on to robots, their "mind" children. As Minsky put it, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.' The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("million times faster"), but there is no talk of a sudden intelligence explosion, of self-improving thinking machines, or of unpredictability beyond any specific event, and the word "singularity" is not used.<ref>{{Cite web |title=Will Robots Inherit the Earth? |url=https://web.media.mit.edu/~minsky/papers/sciam.inherit.html |access-date=2023-06-14 |website=web.media.mit.edu}}</ref>

[[Frank J. Tipler|Tipler]]'s 1994 book ''[[The Physics of Immortality (book)|The Physics of Immortality]]'' predicts a future in which superintelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy, and people will achieve immortality when they reach the [[Omega Point]].<ref>{{cite journal | last=Oppy | first=Graham | title=Colonizing the galaxies | journal=Sophia | publisher=Springer Science and Business Media LLC | volume=39 | issue=2 | year=2000 | issn=0038-1527 | doi=10.1007/bf02822399 | pages=117–142 | s2cid=170919647 |url=https://www.researchgate.net/publication/226020169}}</ref> There is no talk of a Vingean "singularity" or sudden intelligence explosion, but intelligence much greater than human is there, as is immortality.

In 1996, [[Eliezer Yudkowsky|Yudkowsky]] predicted a singularity by 2021.<ref name="yudkowsky1996"/> His version of the singularity involves an intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after two years, then one year, then six months, then three months, then 1.5 months, and after further iterations the "singularity" is reached.<ref name="yudkowsky1996"/> This construction implies that the speed reaches infinity in finite time.

In 2000, [[Bill Joy]], a prominent technologist and a co-founder of [[Sun Microsystems]], voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology.<ref name="Joy2000"/>

In 2005, Kurzweil published ''[[The Singularity Is Near]]''. Kurzweil's publicity campaign included an appearance on ''[[The Daily Show with Jon Stewart]]''.<ref name="episode2006"/> From 2006 to 2012, an annual [[Singularity Summit]] conference was organized by the [[Machine Intelligence Research Institute]], founded by [[Eliezer Yudkowsky]].
In 2007, Yudkowsky suggested that many of the varied definitions assigned to "singularity" are mutually incompatible rather than mutually supporting.<ref name="yudkowsky2007"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March 2010. Vol. 8.</ref> For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.<ref name="yudkowsky2007"/>

In 2009, Kurzweil and [[X-Prize]] founder [[Peter Diamandis]] announced the establishment of [[Singularity University]], a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."<ref name="singularityu"/> Funded by [[Google]], [[Autodesk]], [[ePlanet Ventures]], and a group of [[High tech|technology industry]] leaders, Singularity University is based at [[NASA]]'s [[Ames Research Center]] in [[Mountain View, California|Mountain View]], [[California]]. The not-for-profit organization runs an annual ten-week graduate program each summer covering ten different technology and allied tracks, as well as a series of executive programs throughout the year.
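The arithmetic behind the "infinity point" constructions of Solomonoff (1985) and Yudkowsky (1996) described above can be made explicit (an illustrative calculation, not drawn from the cited sources): if each successive doubling of machine speed takes half as long as the previous one, the elapsed times form a geometric series with a finite sum. Using Solomonoff's four-year starting interval,

<math display=block>T = 4 + 2 + 1 + \tfrac{1}{2} + \cdots = \sum_{n=0}^{\infty} \frac{4}{2^{n}} = 8,</math>

so an unbounded number of doublings, and hence unbounded speed, would be reached within eight years of the first doubling.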