== Feasibility of artificial superintelligence ==
[[File:Test scores of AI systems on various capabilities relative to human performance - Our World in Data.png|thumb|upright=1.7|Artificial intelligence, especially [[Foundation model|foundation models]], has made rapid progress, surpassing human capabilities in various [[Benchmarks for artificial intelligence|benchmarks]].]]
The creation of '''artificial superintelligence''' ('''ASI''') has been a topic of increasing discussion in recent years, particularly with the rapid advancement of [[artificial intelligence]] (AI) technologies.<ref>{{Cite news |date=2025-01-06 |title='Superintelligence' is the next big thing for OpenAI: Sam Altman |url=https://economictimes.indiatimes.com/tech/artificial-intelligence/superintelligence-is-the-next-big-thing-for-openai-sam-altman/articleshow/116993208.cms?from=mdr |access-date=2025-02-01 |work=The Economic Times |issn=0013-0389}}</ref><ref>{{Cite web |date=2024-06-20 |title=OpenAI co-founder Sutskever sets up new AI company devoted to 'safe superintelligence' |url=https://apnews.com/article/openai-sutskever-altman-artificial-intelligence-safety-c6b48a3675fb3fb459859dece2b45499 |access-date=2025-02-01 |website=AP News |language=en}}</ref>

=== Progress in AI and claims of AGI ===
Recent developments in AI, particularly in [[large language model]]s (LLMs) based on the [[Transformer (machine learning model)|transformer]] architecture, have led to significant improvements across a wide range of tasks. Models such as [[GPT-3]], [[GPT-4]], and [[Claude 3.5]] have demonstrated capabilities that some researchers argue approach or even exhibit aspects of [[artificial general intelligence]] (AGI).<ref>{{Cite web |title=Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI |url=https://futurism.com/gpt-4-sparks-of-agi |access-date=2023-12-13 |website=Futurism |date=23 March 2023 }}</ref> However, the claim that current LLMs constitute AGI is controversial.
Critics argue that these models, while impressive, still lack true understanding and remain primarily sophisticated pattern-matching systems.<ref>{{Cite arXiv |last1=Marcus |first1=Gary |last2=Davis |first2=Ernest |title=GPT-4 and Beyond: The Future of Artificial Intelligence |eprint=2303.10130 |year=2023|class=econ.GN }}</ref>

=== Pathways to superintelligence ===
Philosopher [[David Chalmers]] argues that AGI is a likely path to ASI. He posits that AI can achieve equivalence to [[human intelligence]], be extended to surpass it, and then be amplified to dominate humans across arbitrary tasks.{{sfn|Chalmers|2010|p=7}}

More recent research has explored several potential pathways to superintelligence:
# Scaling current AI systems – Some researchers argue that continued scaling of existing AI architectures, particularly transformer-based models, could lead to AGI and potentially ASI.<ref>{{Cite arXiv |last1=Kaplan |first1=Jared |last2=McCandlish |first2=Sam |last3=Henighan |first3=Tom |last4=Brown |first4=Tom B. |last5=Chess |first5=Benjamin |last6=Child |first6=Rewon |last7=Gray |first7=Scott |last8=Radford |first8=Alec |last9=Wu |first9=Jeffrey |last10=Amodei |first10=Dario |title=Scaling Laws for Neural Language Models |year=2020|class=cs.LG |eprint=2001.08361 }}</ref>
# Novel architectures – Others suggest that new AI architectures, potentially inspired by neuroscience, may be necessary to achieve AGI and ASI.<ref>{{Cite journal |last1=Hassabis |first1=Demis |last2=Kumaran |first2=Dharshan |last3=Summerfield |first3=Christopher |last4=Botvinick |first4=Matthew |title=Neuroscience-Inspired Artificial Intelligence |journal=Neuron |volume=95 |issue=2 |year=2017 |pages=245–258 |doi=10.1016/j.neuron.2017.06.011|pmid=28728020 }}</ref>
# Hybrid systems – Combining different AI approaches, including symbolic AI and neural networks, could lead to more robust and capable systems.<ref>{{Cite arXiv |last1=Garcez |first1=Artur d'Avila |last2=Lamb |first2=Luis C.
|title=Neurosymbolic AI: The 3rd Wave |year=2020|class=cs.AI |eprint=2012.05876 }}</ref>

=== Computational advantages ===
Artificial systems have several potential advantages over biological intelligence:
# Speed – Computer components operate much faster than biological neurons. Modern microprocessors (~2 GHz) are seven orders of magnitude faster than neurons (~200 Hz).{{sfn|Bostrom|2014|p=59}}
# Scalability – AI systems can potentially be scaled up in size and computational capacity more easily than biological brains.
# Modularity – Different components of AI systems can be improved or replaced independently.
# Memory – AI systems can have perfect recall and vast knowledge bases, and are far less constrained than humans in working memory.{{sfn|Bostrom|2014|p=59}}
# Multitasking – AI can perform multiple tasks simultaneously in ways not possible for biological entities.

=== Potential path through transformer models ===
Recent advances in transformer-based models have led some researchers to speculate that the path to ASI might lie in scaling up and improving these architectures.
This view suggests that continued improvements in transformer models or similar architectures could lead directly to ASI.<ref>{{Cite journal |last1=Sutskever |first1=Ilya |title=A Brief History of Scaling |journal=ACM Queue |volume=21 |issue=4 |year=2023 |pages=31–43 |doi=10.1145/3595878.3605016|doi-broken-date=1 November 2024 }}</ref> Some experts even argue that current large language models such as GPT-4 may already exhibit early signs of AGI or ASI capabilities.<ref>{{Cite arXiv |last1=Bubeck |first1=Sébastien |last2=Chandrasekaran |first2=Varun |last3=Eldan |first3=Ronen |last4=Gehrke |first4=Johannes |last5=Horvitz |first5=Eric |last6=Kamar |first6=Ece |last7=Lee |first7=Peter |last8=Lee |first8=Yin Tat |last9=Li |first9=Yuanzhi |last10=Lundberg |first10=Scott |last11=Nori |first11=Harsha |last12=Palangi |first12=Hamid |last13=Precup |first13=Doina |last14=Sountsov |first14=Pavel |last15=Srivastava |first15=Sanjana |last16=Tessler |first16=Catherine |last17=Tian |first17=Jianfeng |last18=Zaheer |first18=Manzil |title=Sparks of Artificial General Intelligence: Early experiments with GPT-4 |date=22 March 2023|class=cs.CL |eprint=2303.12712 }}</ref> On this view, the transition from current AI to ASI might be more continuous and rapid than previously thought, blurring the lines between narrow AI, AGI, and ASI.

However, this view remains controversial. Critics argue that current models, while impressive, still lack crucial aspects of general intelligence such as true understanding, reasoning, and adaptability across diverse domains.<ref>{{Cite arXiv |last1=Marcus |first1=Gary |title=The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence |year=2020|class=cs.AI |eprint=2002.06177 }}</ref> The debate over whether the path to ASI will involve a distinct AGI phase or a more direct scaling of current technologies remains ongoing, with significant implications for AI development strategies and safety considerations.
=== Challenges and uncertainties ===
Despite these potential advantages, significant challenges and uncertainties remain in achieving ASI:
# [[Ethics of artificial intelligence|Ethical]] and [[AI safety|safety]] concerns – The development of ASI raises numerous ethical questions and potential risks that need to be addressed.{{sfn|Russell|2019}}
# Computational requirements – The computational resources required for ASI might be far beyond current capabilities.
# Fundamental limitations – There may be fundamental limits to intelligence that apply to both artificial and biological systems.
# Unpredictability – The path to ASI and its consequences are highly uncertain and difficult to predict.

As research in AI continues to advance rapidly, the feasibility of ASI remains a topic of intense debate and study in the scientific community.