==Risks==

=== Existential risks ===
{{Main|Existential risk from artificial general intelligence|AI safety}}
AGI may represent multiple types of [[existential risk]], which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".<ref>{{Cite news |last=Doherty |first=Ben |date=2018-05-17 |title=Climate change an 'existential security risk' to Australia, Senate inquiry says |url=https://www.theguardian.com/environment/2018/may/18/climate-change-an-existential-security-risk-to-australia-senate-inquiry-says |access-date=2023-07-16 |work=The Guardian |language=en-GB |issn=0261-3077}}</ref>

The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing [[moral progress]].<ref>{{Cite book |last=MacAskill |first=William |title=What we owe the future |date=2022 |publisher=Basic Books |isbn=978-1-5416-1862-6 |location=New York, NY}}</ref> Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.<ref name=":02">{{Cite book |last=Ord |first=Toby |title=The Precipice: Existential Risk and the Future of Humanity |publisher=Bloomsbury Publishing |date=2020 |isbn=978-1-5266-0021-9 |chapter=Chapter 5: Future Risks, Unaligned Artificial Intelligence}}</ref><ref>{{Cite web |last=Al-Sibai |first=Noor |date=13 February 2022 |title=OpenAI Chief Scientist Says Advanced AI May Already Be Conscious |url=https://futurism.com/the-byte/openai-already-sentient |access-date=2023-12-24 |website=Futurism}}</ref>

There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass-created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe.<ref>{{Cite web |last=Samuelsson |first=Paul Conrad |date=2019 |title=Artificial Consciousness: Our Greatest Ethical Challenge |url=https://philosophynow.org/issues/132/Artificial_Consciousness_Our_Greatest_Ethical_Challenge |access-date=2023-12-23 |website=Philosophy Now}}</ref><ref>{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans/ |access-date=2023-12-23 |magazine=TIME |language=en}}</ref> Considering how much AGI could improve humanity's future and help reduce other existential risks, [[Toby Ord]] calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".<ref name=":02"/>

==== Risk of loss of control and human extinction ====
The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but was endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as [[Elon Musk]], [[Bill Gates]], [[Geoffrey Hinton]], [[Yoshua Bengio]], [[Demis Hassabis]] and [[Sam Altman]].<ref>{{Cite news |last=Roose |first=Kevin |date=2023-05-30 |title=A.I.
Poses 'Risk of Extinction,' Industry Leaders Warn |url=https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html |access-date=2023-12-24 |work=The New York Times |language=en-US |issn=0362-4331}}</ref><ref name=":16"/> In 2014, [[Stephen Hawking]] criticized widespread indifference:

{{Cquote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{Emdash}}we'll leave the lights on?' Probably not{{Emdash}}but this is more or less what is happening with AI.<ref name="hawking editorial">{{Cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |url-status=live |archive-url=https://web.archive.org/web/20150925153716/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |archive-date=25 September 2015 |access-date=3 December 2014 |work=[[The Independent (UK)]]}}</ref>}}

The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison holds that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as collateral damage from human activities.<ref>{{Cite web |last=Herger |first=Mario |title=The Gorilla Problem – Enterprise Garage |url=https://www.enterprisegarage.io/2019/10/the-gorilla-problem/ |access-date=2023-06-07 |language=en-US}}</ref>

The skeptic [[Yann LeCun]] considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intentions as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards".<ref>{{Cite web |title=The fascinating Facebook debate between Yann LeCun, Stuart Russel and Yoshua Bengio about the risks of strong AI |url=https://www.parlonsfutur.com/blog/the-fascinating-facebook-debate-between-yann-lecun-stuart-russel-and-yoshua |access-date=2023-06-08 |website=The fascinating Facebook debate between Yann LeCun, Stuart Russel and Yoshua Bengio about the risks of strong AI |language=fr}}</ref> On the other side, the concept of [[instrumental convergence]] suggests that, almost whatever their goals, [[intelligent agent]]s will have reasons to try to survive and acquire more power as intermediary steps to achieving these goals, and that this does not require having emotions.<ref>{{Cite web |date=2014-08-22 |title=Will Artificial Intelligence Doom The Human Race Within The Next 100 Years?
|url=https://www.huffpost.com/entry/artificial-intelligence-oxford_n_5689858 |access-date=2023-06-08 |website=HuffPost |language=en}}</ref>

Many scholars who are concerned about existential risk advocate for more research into solving the "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively improving AI would continue to behave in a [[Friendly artificial intelligence|friendly]], rather than destructive, manner after it reaches superintelligence?<ref name="physica_scripta2">{{Cite journal |last1=Sotala |first1=Kaj |last2=Yampolskiy |first2=Roman V. |author-link2=Roman Yampolskiy |date=2014-12-19 |title=Responses to catastrophic AGI risk: a survey |journal=[[Physica Scripta]] |volume=90 |issue=1 |page=018001 |doi=10.1088/0031-8949/90/1/018001 |issn=0031-8949 |doi-access=free}}</ref><ref>{{Cite book |last=Bostrom |first=Nick |author-link=Nick Bostrom |title=Superintelligence: Paths, Dangers, Strategies |title-link=Superintelligence: Paths, Dangers, Strategies |date=2014 |publisher=Oxford University Press |isbn=978-0-1996-7811-2 |edition=First}}<!-- preface --></ref> Solving the control problem is complicated by the [[AI arms race]] (which could lead to a [[race to the bottom]] of safety precautions in order to release products before competitors),<ref>{{Cite magazine |last1=Chow |first1=Andrew R. |last2=Perrigo |first2=Billy |date=2023-02-16 |title=The AI Arms Race Is On. Start Worrying |url=https://time.com/6255952/ai-impact-chatgpt-microsoft-google/ |access-date=2023-12-24 |magazine=TIME |language=en}}</ref> and the use of AI in weapon systems.<ref>{{Cite web |last=Tetlow |first=Gemma |date=January 12, 2017 |title=AI arms race risks spiralling out of control, report warns |url=https://www.ft.com/content/b56d57e8-d822-11e6-944b-e7eb37a6aa8e |url-access=subscription |url-status=live |archive-url=https://archive.today/20220411043213/https://www.ft.com/content/b56d57e8-d822-11e6-944b-e7eb37a6aa8e |archive-date=11 April 2022 |access-date=2023-12-24 |website=Financial Times}}</ref>

The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short term, or that concerns about AGI distract from other issues related to current AI.<ref>{{Cite news |last1=Milmo |first1=Dan |last2=Stacey |first2=Kiran |date=2023-09-25 |title=Experts disagree over threat posed but artificial intelligence cannot be ignored |url=https://www.theguardian.com/technology/2023/sep/25/experts-disagree-over-threat-posed-but-artificial-intelligence-cannot-be-ignored-ai |access-date=2023-12-24 |work=The Guardian |language=en-GB |issn=0261-3077}}</ref> Former [[Google]] fraud czar [[Shuman Ghosemajumder]] considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.<ref>{{Cite web |date=2023-07-20 |title=Humanity, Security & AI, Oh My!
(with Ian Bremmer & Shuman Ghosemajumder) |url=https://cafe.com/stay-tuned/humanity-security-ai-oh-my-with-ian-bremmer-shuman-ghosemajumder/ |access-date=2023-09-15 |website=CAFE |language=en-US}}</ref> Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God.<ref name="atlantic-but-what2">{{Cite magazine |last=Hamblin |first=James |date=9 May 2014 |title=But What Would the End of Humanity Mean for Me? |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |url-status=live |archive-url=https://web.archive.org/web/20140604211145/http://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |archive-date=4 June 2014 |access-date=12 December 2015 |magazine=The Atlantic}}</ref> Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and at inflating interest in their products.<ref name="telegraph">{{Cite news |last=Titcomb |first=James |date=30 October 2023 |title=Big Tech is stoking fears over AI, warn scientists |url=https://www.telegraph.co.uk/business/2023/10/30/big-tech-stoking-fears-over-ai-warn-scientists/ |access-date=2023-12-07 |work=The Telegraph |language=en}}</ref><ref name="afr">{{Cite web |last=Davidson |first=John |date=30 October 2023 |title=Google Brain founder says big tech is lying about AI extinction danger |url=https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20231207203025/https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz |archive-date=December 7, 2023 |access-date=2023-12-07 |website=Australian Financial Review |language=en}}</ref>

In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."<ref name=":16">{{Cite web |date=May 30, 2023 |title=Statement on AI Risk |url=https://www.safe.ai/statement-on-ai-risk |access-date=2023-06-08 |website=Center for AI Safety}}</ref>

===Mass unemployment===
{{Further|Technological unemployment}}
Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted".<ref>{{Cite web |last1=Eloundou |first1=Tyna |last2=Manning |first2=Sam |last3=Mishkin |first3=Pamela |last4=Rock |first4=Daniel |date=March 17, 2023 |title=GPTs are GPTs: An early look at the labor market impact potential of large language models |url=https://openai.com/research/gpts-are-gpts |access-date=2023-06-07 |website=OpenAI |language=en-US}}</ref><ref name=":6">{{Cite web |last=Hurst |first=Luke |date=2023-03-23 |title=OpenAI says 80% of workers could see their jobs impacted by AI.
These are the jobs most affected |url=https://www.euronews.com/next/2023/03/23/openai-says-80-of-workers-could-see-their-jobs-impacted-by-ai-these-are-the-jobs-most-affe |access-date=2023-06-08 |website=euronews |language=en}}</ref> They consider office workers to be the most exposed, for example, mathematicians, accountants or web designers.<ref name=":6"/>

AGI could have greater autonomy and a better ability to make decisions, to interface with other computer tools, and to control robotic bodies. According to Stephen Hawking, the effect of automation on the quality of life will depend on how the wealth is redistributed:<ref name=":9"/>

{{Cquote|Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.}}

Elon Musk believes that the automation of society will require governments to adopt a [[universal basic income]].<ref>{{Cite web |last=Sheffey |first=Ayelet |date=Aug 20, 2021 |title=Elon Musk says we need universal basic income because 'in the future, physical work will be a choice' |url=https://www.businessinsider.com/elon-musk-universal-basic-income-physical-work-choice-2021-8 |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20230709081853/https://www.businessinsider.com/elon-musk-universal-basic-income-physical-work-choice-2021-8 |archive-date=Jul 9, 2023 |access-date=2023-06-08 |website=Business Insider |language=en-US}}</ref>