===Implications for human society===
{{further|Artificial intelligence in fiction}}
In February 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers, and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.<ref name="nytimes july09" />

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some [[computer virus]]es can evade elimination and, according to the scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.<ref name="nytimes july09">{{cite news|last=Markoff|first=John|url=https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper|title=Scientists Worry Machines May Outsmart Man|work=The New York Times|date=26 July 2009|archive-url=https://web.archive.org/web/20170701084625/http://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper|archive-date=2017-07-01}}</ref>

Frank S. Robinson predicts that once humanity achieves a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems can share data more directly than humans can, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.<ref name=":0">{{cite magazine |last=Robinson |first=Frank S. |title=The Human Future: Upgrade or Replacement? |magazine=[[The Humanist]] |date=27 June 2013 |url=https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement |access-date=1 May 2020 |archive-date=15 February 2021 |archive-url=https://web.archive.org/web/20210215095131/https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement |url-status=live}}</ref> Robinson also discusses how vastly different the future could look after such an intelligence explosion.