Artificial general intelligence
== Philosophical perspective ==
{{See also|Philosophy of artificial intelligence|Turing test}}

=== "Strong AI" as defined in philosophy ===
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He proposed a distinction between two hypotheses about artificial intelligence:{{Efn|As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."{{Sfn|Russell|Norvig|2003}}}}
* '''Strong AI hypothesis''': An artificial intelligence system can have "a mind" and "consciousness".
* '''Weak AI hypothesis''': An artificial intelligence system can (only) ''act like'' it thinks and has a mind and consciousness.
He called the first hypothesis "strong" because it makes a ''stronger'' claim: it assumes something special has happened to the machine that goes beyond the abilities we can test. The behaviour of a "weak AI" machine would be precisely identical to that of a "strong AI" machine, but the latter would also have subjective conscious experience.
This usage is also common in academic AI research and textbooks.<ref>For example:
* {{Harvnb|Russell|Norvig|2003}}
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html|date=3 December 2007}} (quoted in "Encyclopedia.com")
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html|date=19 July 2008}} (quoted in "AITopics")
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm|date=13 May 2008}} Anthony Tongen</ref>

In contrast to Searle and mainstream AI, some futurists such as [[Ray Kurzweil]] use the term "strong AI" to mean "human level artificial general intelligence".<ref name="K"/> This is not the same as Searle's [[Chinese room#Strong AI|strong AI]], unless one assumes that [[consciousness]] is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out of scope.{{Sfn|Russell|Norvig|2003|p=947}}

Mainstream AI is most interested in how a program ''behaves''.<ref>Though see [[Explainable artificial intelligence]] for the field's interest in why a program behaves the way it does.</ref> According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "as long as the program works, they don't care if you call it real or a simulation."{{Sfn|Russell|Norvig|2003|p=947}} If the program can behave ''as if'' it has a mind, then there is no need to know whether it ''actually'' has one – indeed, there would be no way to tell.
For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{Sfn|Russell|Norvig|2003|p=947}} For academic AI research, then, "strong AI" and "AGI" are two different things.

=== Consciousness ===
{{Main|Artificial consciousness}}
Consciousness can have various meanings, and some aspects play significant roles in science fiction and the [[ethics of artificial intelligence]]:
* '''[[Sentience]]''' (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to ''reason'' about perceptions. Some philosophers, such as [[David Chalmers]], use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience.<ref>{{Cite news |last=Chalmers |first=David J. |date=August 9, 2023 |title=Could a Large Language Model Be Conscious? |url=https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/ |work=Boston Review}}</ref> Determining why and how subjective experience arises is known as the [[hard problem of consciousness]].<ref>{{Cite web |last=Seth |first=Anil |title=Consciousness |url=https://www.newscientist.com/definition/consciousness/ |access-date=2024-09-05 |website=New Scientist |language=en-US}}</ref> [[Thomas Nagel]] explained in 1974 that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "[[What Is It Like to Be a Bat?|what does it feel like to be a bat?]]" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not.{{Sfn|Nagel|1974}} In 2022, a Google engineer claimed that the company's AI chatbot, [[LaMDA]], had achieved sentience, though this claim was widely disputed by other experts.<ref>{{Cite news |date=11 June 2022 |title=The Google engineer who thinks the company's AI has come to life |url=https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ |access-date=2023-06-12 |newspaper=The Washington Post}}</ref>
* '''[[Self-awareness]]''': To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought": an operating system or debugger can be "aware of itself" (that is, represent itself in the same way it represents everything else), but this is not what people typically mean when they use the term "self-awareness".{{Efn|[[Alan Turing]] made this point in 1950.{{Sfn|Turing|1950}}}} In some advanced AI models, systems construct internal representations of their own cognitive processes and feedback patterns, occasionally referring to themselves using second-person constructs such as "you" within self-modeling frameworks.{{Citation needed|date=April 2025}}

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, as it does with animals.<ref>{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans/ |access-date=2024-09-05 |magazine=TIME |language=en}}</ref> Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights.<ref>{{Cite web |last=Nosta |first=John |date=December 18, 2023 |title=Should Artificial Intelligence Have Rights? |url=https://www.psychologytoday.com/us/blog/the-digital-self/202312/should-artificial-intelligence-have-rights |access-date=2024-09-05 |website=Psychology Today |language=en-US}}</ref> Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.<ref>{{Cite news |last=Akst |first=Daniel |date=April 10, 2023 |title=Should Robots With Artificial Intelligence Have Moral or Legal Rights? |url=https://www.wsj.com/articles/robots-ai-legal-rights-3c47ef40 |work=The Wall Street Journal}}</ref>