==== Computationalism and functionalism ====
{{Main|Computational theory of mind|Functionalism (philosophy of mind)}}
Computationalism is the position in the [[philosophy of mind]] that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may be a solution to the [[mind–body problem]]. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers [[Jerry Fodor]] and [[Hilary Putnam]].{{Sfnp|Horst|2005}} Philosopher [[John Searle]] characterized this position as "[[Strong AI hypothesis|strong AI]]": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."{{Efn|name="Searle's strong AI"|Searle presented this definition of "strong AI" in 1999.{{Sfnp|Searle|1999}} Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."{{Sfnp|Searle|1980|p=1}} Strong AI is defined similarly by [[Stuart J. Russell|Russell]] and [[Norvig]]: "Strong AI – the assertion that machines that do so are ''actually'' thinking (as opposed to ''simulating'' thinking)."{{Sfnp|Russell|Norvig|2021|p=9817}}}} Searle challenges this claim with his [[Chinese room]] argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.<ref>Searle's [[Chinese room]] argument: {{Harvtxt|Searle|1980}}, Searle's original presentation of the thought experiment; {{Harvtxt|Searle|1999}}. Discussion: {{Harvtxt|Russell|Norvig|2021|p=985}}, {{Harvtxt|McCorduck|2004|pp=443–445}}, {{Harvtxt|Crevier|1993|pp=269–271}}</ref>