=== Consciousness ===
{{Main|Artificial consciousness}}
Consciousness can have various meanings, and some aspects play significant roles in science fiction and the [[ethics of artificial intelligence]]:
* '''[[Sentience]]''' (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to ''reason'' about perceptions. Some philosophers, such as [[David Chalmers]], use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience.<ref>{{Cite news |last=Chalmers |first=David J. |date=August 9, 2023 |title=Could a Large Language Model Be Conscious? |url=https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/ |work=Boston Review}}</ref> Determining why and how subjective experience arises is known as the [[hard problem of consciousness]].<ref>{{Cite web |last=Seth |first=Anil |title=Consciousness |url=https://www.newscientist.com/definition/consciousness/ |access-date=2024-09-05 |website=New Scientist |language=en-US}}</ref> [[Thomas Nagel]] argued in 1974 that it "feels like" something to be conscious; if something is not conscious, then it does not feel like anything. Nagel uses the example of a bat: we can sensibly ask "[[What Is It Like to Be a Bat?|what does it feel like to be a bat?]]", but we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not.{{Sfn|Nagel|1974}} In 2022, a Google engineer claimed that the company's AI chatbot, [[LaMDA]], had achieved sentience, though this claim was widely disputed by other experts.<ref>{{Cite news |date=11 June 2022 |title=The Google engineer who thinks the company's AI has come to life |url=https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ |access-date=2023-06-12 |newspaper=The Washington Post}}</ref>
* '''[[Self-awareness]]''': To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought": an operating system or debugger can "be aware of itself" (that is, represent itself in the same way it represents everything else), but this is not what people typically mean by "self-awareness".{{Efn|[[Alan Turing]] made this point in 1950.{{Sfn|Turing|1950}}}} Some advanced AI systems construct internal representations of their own cognitive processes and feedback patterns, occasionally referring to themselves with second-person constructs such as "you" within self-modeling frameworks.{{Citation needed|date=April 2025}}

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similar to those afforded to animals.<ref>{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans/ |access-date=2024-09-05 |magazine=TIME |language=en}}</ref> Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights.<ref>{{Cite web |last=Nosta |first=John |date=December 18, 2023 |title=Should Artificial Intelligence Have Rights? |url=https://www.psychologytoday.com/us/blog/the-digital-self/202312/should-artificial-intelligence-have-rights |access-date=2024-09-05 |website=Psychology Today |language=en-US}}</ref> Determining how to integrate advanced AI with existing legal and social frameworks is an emergent issue.<ref>{{Cite news |last=Akst |first=Daniel |date=April 10, 2023 |title=Should Robots With Artificial Intelligence Have Moral or Legal Rights? |url=https://www.wsj.com/articles/robots-ai-legal-rights-3c47ef40 |work=The Wall Street Journal}}</ref>