=== Machine consciousness, sentience, and mind ===
{{Main|Philosophy of artificial intelligence|Artificial consciousness}}

There is no settled consensus in [[philosophy of mind]] on whether a machine can have a [[mind]], [[consciousness]] and [[philosophy of mind|mental states]] in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers the issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."{{Sfnp|Russell|Norvig|2021|p=986}} However, the question has become central to the philosophy of mind. It is also typically the central question at issue in [[artificial intelligence in fiction]].

==== Consciousness ====
{{Main|Hard problem of consciousness|Theory of mind}}

[[David Chalmers]] identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.{{Sfnp|Chalmers|1995}} The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this ''feels'' or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human [[Information processing (psychology)|information processing]] is easy to explain, human [[subjective experience]] is difficult to explain.
For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to ''know what red looks like''.{{Sfnp|Dennett|1991}}

==== Computationalism and functionalism ====
{{Main|Computational theory of mind|Functionalism (philosophy of mind)}}

Computationalism is the position in the [[philosophy of mind]] that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may be a solution to the [[mind–body problem]]. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers [[Jerry Fodor]] and [[Hilary Putnam]].{{Sfnp|Horst|2005}}

Philosopher [[John Searle]] characterized this position as "[[Strong AI hypothesis|strong AI]]": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."{{Efn|name="Searle's strong AI"|Searle presented this definition of "strong AI" in 1999.{{Sfnp|Searle|1999}} Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."{{Sfnp|Searle|1980|p=1}} Strong AI is defined similarly by [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]]: "Strong AI – the assertion that machines that do so are ''actually'' thinking (as opposed to ''simulating'' thinking)."{{Sfnp|Russell|Norvig|2021|p=981}}}} Searle challenges this claim with his [[Chinese room]] argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.<ref>Searle's [[Chinese room]] argument: {{Harvtxt|Searle|1980}}, Searle's original presentation of the thought experiment; {{Harvtxt|Searle|1999}}. Discussion: {{Harvtxt|Russell|Norvig|2021|p=985}}, {{Harvtxt|McCorduck|2004|pp=443–445}}, {{Harvtxt|Crevier|1993|pp=269–271}}</ref>

==== AI welfare and rights ====
It is difficult or impossible to reliably evaluate whether an advanced [[Sentient AI|AI is sentient]] (has the ability to feel), and if so, to what degree.<ref>{{Cite web |last=Leith |first=Sam |date=2022-07-07 |title=Nick Bostrom: How can we be certain a machine isn't conscious? |url=https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious |access-date=2024-02-23 |website=The Spectator |archive-date=26 September 2024 |archive-url=https://web.archive.org/web/20240926155639/https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious/ |url-status=live }}</ref> But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.<ref name="Thomson-2022">{{Cite web |last=Thomson |first=Jonny |date=2022-10-31 |title=Why don't robots have rights? |url=https://bigthink.com/thinking/why-dont-robots-have-rights |access-date=2024-02-23 |website=Big Think |archive-date=13 September 2024 |archive-url=https://web.archive.org/web/20240913055336/https://bigthink.com/thinking/why-dont-robots-have-rights/ |url-status=live }}</ref><ref name="Kateman-2023">{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans |access-date=2024-02-23 |magazine=Time |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925041601/https://time.com/6296234/ai-should-be-terrified-of-humans/ |url-status=live }}</ref> [[Sapience]] (a set of capacities related to high intelligence, such as discernment or [[self-awareness]]) may provide another moral basis for AI rights.<ref name="Thomson-2022"/> [[Robot rights]] are also sometimes proposed as a practical way to integrate autonomous agents into society.<ref>{{Cite news |last=Wong |first=Jeff |date=July 10, 2023 |title=What leaders need to know about robot rights |url=https://www.fastcompany.com/90920769/what-leaders-need-to-know-about-robot-rights |work=Fast Company |ref=none}}</ref> In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems.
Similar to the legal status of companies, it would have conferred rights but also responsibilities.<ref>{{Cite news |last=Hern |first=Alex |date=2017-01-12 |title=Give robots 'personhood' status, EU committee argues |url=https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues |access-date=2024-02-23 |work=The Guardian |issn=0261-3077 |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005171222/https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues |url-status=live }}</ref> Critics argued in 2018 that granting rights to AI systems would downplay the importance of [[human rights]], and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.<ref>{{Cite web |last=Dovey |first=Dana |date=2018-04-14 |title=Experts Don't Think Robots Should Have Rights |url=https://www.newsweek.com/robots-human-rights-electronic-persons-humans-versus-machines-886075 |access-date=2024-02-23 |website=Newsweek |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005171333/https://www.newsweek.com/robots-human-rights-electronic-persons-humans-versus-machines-886075 |url-status=live }}</ref><ref>{{Cite web |last=Cuddy |first=Alice |date=2018-04-13 |title=Robot rights violate human rights, experts warn EU |url=https://www.euronews.com/2018/04/13/robot-rights-violate-human-rights-experts-warn-eu |access-date=2024-02-23 |website=euronews |archive-date=19 September 2024 |archive-url=https://web.archive.org/web/20240919022327/https://www.euronews.com/2018/04/13/robot-rights-violate-human-rights-experts-warn-eu |url-status=live }}</ref> Progress in AI has increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny.
They warn that this may be a [[Moral blindness|moral blind spot]] analogous to [[slavery]] or [[factory farming]], which could lead to [[Suffering risks|large-scale suffering]] if sentient AI is created and carelessly exploited.<ref name="Kateman-2023"/><ref name="Thomson-2022"/>