==Potential impacts==

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the [[Paleolithic]] era until the [[Neolithic Revolution]]. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.<ref name="Hanson">{{Citation |last=Hanson |first=Robin |title=Economics Of The Singularity |date=1 June 2008 |work=IEEE Spectrum Special Report: The Singularity |url=https://www.spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity |access-date=2009-07-25 |archive-url=https://web.archive.org/web/20110811005825/http://spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity |archive-date=2011-08-11 |url-status=dead}} & [http://hanson.gmu.edu/longgrow.pdf Long-Term Growth As A Sequence of Exponential Modes] {{Webarchive|url=https://web.archive.org/web/20190527020444/https://spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity|date=2019-05-27}}.</ref>
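The quarterly-to-weekly projection follows from applying the historical speed-up factors to the present fifteen-year doubling time. The short sketch below makes that arithmetic explicit; it is an illustrative calculation based on the figures quoted above, not code from Hanson's cited analysis.

<syntaxhighlight lang="python">
# Illustrative check of the doubling-time arithmetic quoted above.
# The growth-era figures are the ones stated in the text; the "apply a
# similar speed-up again" step is the assumption behind Hanson's projection.

doubling_years = {
    "hunter-gatherer": 250_000,
    "agricultural": 900,
    "industrial": 15,
}

# Speed-up factor at each past transition
agricultural_speedup = doubling_years["hunter-gatherer"] / doubling_years["agricultural"]  # ~278x
industrial_speedup = doubling_years["agricultural"] / doubling_years["industrial"]         # 60x

# Applying a comparable speed-up to the current 15-year doubling time:
quarterly = doubling_years["industrial"] / industrial_speedup     # 0.25 years, i.e. one quarter
weekly = doubling_years["industrial"] / agricultural_speedup      # ~0.05 years, i.e. a few weeks

print(f"past speed-ups: {agricultural_speedup:.0f}x and {industrial_speedup:.0f}x")
print(f"projected doubling time: {quarterly * 12:.1f} months to {weekly * 52:.1f} weeks")
</syntaxhighlight>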
===Uncertainty and risk===
{{Further|Existential risk from artificial general intelligence}}

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.<ref name="positive-and-negative">{{Citation |last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks |page=303 |year=2008 |editor-last=Bostrom |editor-first=Nick |url=http://singinst.org/AIRisk.pdf |archive-url=https://web.archive.org/web/20080807132337/http://www.singinst.org/AIRisk.pdf |archive-date=2008-08-07 |url-status=dead |publisher=Oxford University Press |bibcode=2008gcr..book..303Y |isbn=978-0-19-857050-9 |editor2-last=Cirkovic |editor2-first=Milan}}.</ref><ref name="theuncertainfuture"/> It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an [[Existential risk|existential threat]].<ref name="sandberg-bostrom2008"/><ref name="bostrom-risks"/> Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the [[Future of Humanity Institute]] (until 2024), the [[Machine Intelligence Research Institute]],<ref name="positive-and-negative"/> the [[Center for Human-Compatible Artificial Intelligence]], and the [[Future of Life Institute]].

Physicist [[Stephen Hawking]] said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."<ref name=hawking_2014/> Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."<ref name=hawking_2014/> Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:<ref name="hawking_2014">{{cite web |last=Hawking |first=Stephen |author-link=Stephen Hawking |date=1 May 2014 |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |url-status=live |archive-url=https://web.archive.org/web/20150925153716/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |archive-date=25 September 2015 |access-date=May 5, 2014 |work=[[The Independent]]}}</ref>

{{blockquote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here{{snd}}we'll leave the lights on"? Probably not{{snd}}but this is more or less what is happening with AI.}}

{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"] {{Webarchive|url=https://web.archive.org/web/20181008090224/http://www.nickbostrom.com/ethics/ai.html|date=2018-10-08}}, in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17.</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf|date=2012-06-11}}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file).</ref><ref name="singinst9">{{Cite web |url=http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ |title=The Stamp Collecting Device |first=Nick |last=Hay |access-date=2010-08-21 |archive-date=2012-06-17 |archive-url=https://web.archive.org/web/20120617191319/http://singinst.org/blog/2007/06/11/the-stamp-collecting-device/ |url-status=dead |date=June 11, 2007 |publisher=Singularity Institute |work=SIAI Blog}}</ref> [[Anders Sandberg]] has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">{{Cite web |title=Why we should fear the Paperclipper |date=February 14, 2011 |first=Anders |last=Sandberg |work=Andart |url=http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html |access-date=2023-06-14}}</ref> AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],<ref name="selfawaresystems.com" /><ref name="selfawaresystems"/> and humans would be powerless to stop them.<ref name="forbes">{{Cite web |last=de Garis |first=Hugo |title=The Coming Artilect War |url=https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html |website=Forbes |date=June 22, 2009 |access-date=2023-06-14 |language=en}}</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />

{{Harvtxt|Bostrom|2002}} discusses human extinction scenarios and lists superintelligence as a possible cause:

{{blockquote|When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.}}

According to [[Eliezer Yudkowsky]], a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |url=http://singinst.org/upload/CEV.html |title=Coherent Extrapolated Volition |first=Eliezer S. |last=Yudkowsky |date=May 2004 |archive-url=https://web.archive.org/web/20100815055725/http://singinst.org/upload/CEV.html |archive-date=2010-08-15 |url-status=dead}}</ref>

{{harvtxt|Bill Hibbard|2014}} proposes an AI design that avoids several dangers including self-delusion,<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}</ref> unintended instrumental actions,<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf Avoiding Unintended AI Behaviors.] {{Webarchive|url=https://web.archive.org/web/20130629072904/http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf |date=2013-06-29 }} Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper] {{Webarchive|url=https://web.archive.org/web/20210215095130/https://intelligence.org/2012/12/19/december-2012-newsletter/ |date=2021-02-15 }}.</ref> and corruption of the reward generator.<ref name="AGI-12a"/> He also discusses social impacts of AI<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.| access-date=2013-01-05| archive-date=2021-02-15| archive-url=https://web.archive.org/web/20210215095140/http://jetpress.org/v17/hibbard.htm| url-status=live}}</ref> and testing AI.<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf Decision Support for Safe AI Design.] {{Webarchive|url=https://web.archive.org/web/20210215095047/http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf |date=2021-02-15 }} Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref> His 2001 book ''[[Super-Intelligent Machines]]'' advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

===Next step of sociobiological evolution===
{{Further|Sociocultural evolution}}
{{off topic|date=October 2021}}
[[File:Major Evolutionary Transitions digital.jpg|thumb|upright=1.7|Schematic Timeline of Information and Replicators in the Biosphere: Gillings et al.'s "[[The Major Transitions in Evolution|major evolutionary transitions]]" in information processing.<ref name="InfoBiosphere2016" />]]
[[File:Biological vs. digital information.jpg|thumb|Amount of digital information worldwide (5{{e|21}} bytes) versus human genome information worldwide (10<sup>19</sup> bytes) in 2014<ref name="InfoBiosphere2016" />]]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.{{citation needed|date=April 2018}} In addition, some argue that we are already in the midst of a [[The Major Transitions in Evolution|major evolutionary transition]] that merges technology, biology, and society.
Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in ''[[Trends in Ecology & Evolution]]'' argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust [[artificial intelligence]] with our lives through [[Anti-lock braking system|antilock braking in cars]] and [[autopilot]]s in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article further argues that from the perspective of [[evolution]], several previous [[The Major Transitions in Evolution|Major Transitions in Evolution]] have transformed life through innovations in information storage and replication ([[RNA]], [[DNA]], [[multicellularity]], and [[culture]] and [[language]]). In the current stage of life's evolution, the carbon-based biosphere has generated a system (humans) capable of creating technology that will result in a comparable [[The Major Transitions in Evolution|evolutionary transition]].

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 [[zettabyte]]s in 2014 (5{{e|21}} bytes).<ref>{{cite web |last=Hilbert |first=Martin |title=Information Quantity |url=http://www.martinhilbert.net/wp-content/uploads/2018/07/Hilbert2017_ReferenceWorkEntry_InformationQuantity.pdf}}</ref> In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1{{e|19}} bytes. The digital realm stored 500 times more information than this in 2014 (see figure). The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3{{e|37}} base pairs, equivalent to 1.325{{e|37}} bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth,<ref name="HilbertLopez2011" /> it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.<ref name="InfoBiosphere2016">{{Cite journal|url=http://escholarship.org/uc/item/38f4b791|doi=10.1016/j.tree.2015.12.013|pmid=26777788|title=Information in the Biosphere: Biological and Digital Worlds|journal=Trends in Ecology & Evolution|volume=31|issue=3|pages=180–189|year=2016|last1=Kemp|first1=D. J.|last2=Hilbert|first2=M.|last3=Gillings|first3=M. R.|bibcode=2016TEcoE..31..180G |s2cid=3561873 |access-date=2016-05-24|archive-date=2016-06-04|archive-url=https://web.archive.org/web/20160604174011/http://escholarship.org/uc/item/38f4b791|url-status=live}}</ref>
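The comparison rests on order-of-magnitude arithmetic. The sketch below reproduces the key figures; it is an illustrative calculation using the numbers quoted above rather than code from the cited paper.

<syntaxhighlight lang="python">
import math

# Back-of-the-envelope check of the biosphere information comparison above,
# using the figures quoted in the text.

humans = 7.2e9          # world population
genome_units = 6.2e9    # nucleotides per human genome
units_per_byte = 4      # one byte encodes four of these units (2 bits each)

# ~1.1e19 bytes; the text rounds this to 1e19, which gives the quoted 500x ratio
human_genome_bytes = humans * genome_units / units_per_byte
digital_2014_bytes = 5e21   # ~5 zettabytes of digital storage in 2014

print(f"all human genomes: {human_genome_bytes:.1e} bytes")
print(f"digital-to-genomic ratio in 2014: roughly {digital_2014_bytes / human_genome_bytes:.0f}x")

# Years for digital storage, growing 30-38% per year, to reach the information
# content of all DNA on Earth (5.3e37 base pairs, i.e. ~1.325e37 bytes).
all_dna_bytes = 5.3e37 / 4
for growth in (0.30, 0.38):
    years = math.log(all_dna_bytes / digital_2014_bytes) / math.log(1 + growth)
    print(f"at {growth:.0%} annual growth: about {years:.0f} years")
</syntaxhighlight>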
===Implications for human society===
{{further|Artificial intelligence in fiction}}

In February 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.<ref name="nytimes july09" /> Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some [[computer virus]]es can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.<ref name="nytimes july09">{{cite news|last=Markoff|first=John|url=https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper|title=Scientists Worry Machines May Outsmart Man|work=The New York Times|date=26 July 2009|archive-url=https://web.archive.org/web/20170701084625/http://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper|archive-date=2017-07-01}}</ref>

Frank S. Robinson predicts that once humans achieve a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.<ref name=":0">{{cite magazine |last=Robinson |first=Frank S. |title=The Human Future: Upgrade or Replacement? |magazine=[[The Humanist]] |date=27 June 2013 |url=https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement |access-date=1 May 2020 |archive-date=15 February 2021 |archive-url=https://web.archive.org/web/20210215095131/https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement |url-status=live }}</ref> Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion.