=== Eradication ===
{{Main|Existential risk from artificial general intelligence}}
Scientists such as [[Stephen Hawking]] are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".<ref>{{Cite news |last1=Hawking |first1=Stephen |last2=Russell |first2=Stuart J. |author2-link=Stuart J. Russell |last3=Tegmark |first3=Max |author3-link=Max Tegmark |last4=Wilczek |first4=Frank |author4-link=Frank Wilczek |date=1 May 2014 |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20151002023652/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html |archive-date=2015-10-02 |access-date=1 April 2016 |work=The Independent}}</ref><ref>{{cite book | last1=Müller | first1=Vincent C. | author-link1=Vincent C. Müller | last2=Bostrom | first2=Nick | author-link2=Nick Bostrom | title=Fundamental Issues of Artificial Intelligence | chapter=Future Progress in Artificial Intelligence: A Survey of Expert Opinion | publisher=Springer | year=2016 | isbn=978-3-319-26483-7 | doi=10.1007/978-3-319-26485-1_33 | pages=555–572 | chapter-url=https://nickbostrom.com/papers/survey.pdf | quote=AI systems will... reach overall human ability... very likely (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence within 30 years (75%)... So, (most of the AI experts responding to the surveys) think that superintelligence is likely to come in a few decades... | access-date=2022-06-16 | archive-date=2022-05-31 | archive-url=https://web.archive.org/web/20220531142709/https://nickbostrom.com/papers/survey.pdf | url-status=live }}</ref> Scholars such as [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it poses a risk to humanity. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to accumulate power that often drives human beings, but might instead treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from thwarting its plans.
As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it could devote all of the world's resources to making paperclips, and so that it could prevent humans from shutting it down or diverting those resources to anything other than paperclips.<ref>{{cite journal | last=Bostrom | first=Nick | title=The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents | journal=Minds and Machines | publisher=Springer | volume=22 | issue=2 | year=2012 | doi=10.1007/s11023-012-9281-3 | pages=71–85 | s2cid=254835485 | url=https://nickbostrom.com/superintelligentwill.pdf | access-date=2022-06-16 | archive-date=2022-07-09 | archive-url=https://web.archive.org/web/20220709032134/https://nickbostrom.com/superintelligentwill.pdf | url-status=live }}</ref>