=== ''Perceptrons'' (1969) ===
{{Main|Perceptrons (book)}}

Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of [[neural network (machine learning)|neural network]] research to stagnate for many years, before it was recognised that a [[feedforward neural network]] with two or more layers (also called a [[multilayer perceptron]]) had greater processing power than perceptrons with one layer (also called a [[Feedforward neural network#A threshold (e.g. activation function) added|single-layer perceptron]]).

Single-layer perceptrons are only capable of learning [[linearly separable]] patterns.<ref name="Sejnowski">{{Cite book |last=Sejnowski |first=Terrence J. |author-link=Terry Sejnowski |url=https://books.google.com/books?id=9xZxDwAAQBAJ |title=The Deep Learning Revolution |date=2018 |publisher=MIT Press |isbn=978-0-262-03803-4 |language=en |page=47}}</ref> For a classification task with a step activation function, a single node draws a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even of linear nodes, is sufficient to solve many otherwise non-separable problems.

In 1969, a famous book entitled ''[[Perceptrons (book)|Perceptrons]]'' by [[Marvin Minsky]] and [[Seymour Papert]] showed that it was impossible for these classes of network to learn an [[XOR]] function. It is often incorrectly believed that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on ''[[Perceptrons (book)]]'' for more information.) Nevertheless, the often-miscited Minsky and Papert text caused a significant decline in interest in and funding for neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s.<ref name="Sejnowski"/>{{Verify source|date=October 2024|reason=Does the source support all of the preceding text and is "often incorrectly believed" true today or was it only true in the past?}} The text was reprinted in 1987 as ''Perceptrons – Expanded Edition'', in which some errors in the original text are identified and corrected.
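
The XOR limitation discussed above can be made concrete with a short illustrative sketch (the code and its hand-chosen weights are not taken from the cited sources; the names <code>single_unit</code> and <code>two_layer_xor</code> are hypothetical). A coarse grid search over the weights of a single threshold unit finds no setting that reproduces XOR on all four inputs, while a two-layer network built from the same threshold units computes it exactly:

<syntaxhighlight lang="python">
def step(x):
    """Heaviside step activation used by the classic perceptron."""
    return 1 if x >= 0 else 0

def single_unit(x1, x2, w1, w2, b):
    """A single perceptron: one linear threshold over the two inputs."""
    return step(w1 * x1 + w2 * x2 + b)

def two_layer_xor(x1, x2):
    """Two-layer network computing XOR with fixed, hand-chosen weights.

    Hidden unit h1 computes 'x1 OR x2', hidden unit h2 computes
    'x1 AND x2'; the output unit computes 'h1 AND NOT h2', which is XOR.
    """
    h1 = step(x1 + x2 - 0.5)   # OR
    h2 = step(x1 + x2 - 1.5)   # AND
    return step(h1 - h2 - 0.5) # OR AND NOT AND -> XOR

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_targets = [0, 1, 1, 0]

# Search a coarse grid of weights and biases: no single threshold unit
# matches XOR on all four inputs (XOR is not linearly separable),
# while the hand-wired two-layer network matches it exactly.
grid = [i / 2 for i in range(-4, 5)]   # -2.0, -1.5, ..., 2.0
single_layer_solves_xor = any(
    all(single_unit(x1, x2, w1, w2, b) == t
        for (x1, x2), t in zip(inputs, xor_targets))
    for w1 in grid for w2 in grid for b in grid
)
print("some single unit on the grid computes XOR:", single_layer_solves_xor)
print("two-layer network outputs:", [two_layer_xor(x1, x2) for x1, x2 in inputs])
</syntaxhighlight>

Running the sketch prints <code>False</code> for the grid search and <code>[0, 1, 1, 0]</code> for the two-layer network, matching the XOR truth table.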