=== Early successes ===
Several applications of neural networks trained via backpropagation contributed to the technique's acceptance, some achieving popularity outside research circles. In 1987, [[NETtalk (artificial neural network)|NETtalk]] learned to convert English text into pronunciation. Sejnowski tried training it with both backpropagation and a Boltzmann machine, but found backpropagation significantly faster, so he used it for the final NETtalk.<ref name=":1" />{{rp|p=324}} The NETtalk program became a popular success, appearing on the [[Today (American TV program)|''Today'' show]].<ref name=":02">{{Cite book |last=Sejnowski |first=Terrence J. |title=The deep learning revolution |date=2018 |publisher=The MIT Press |isbn=978-0-262-03803-4 |location=Cambridge, Massachusetts London, England}}</ref> In 1989, Dean A. Pomerleau published ALVINN, a neural network trained to [[Vehicular automation|drive autonomously]] using backpropagation.<ref>{{Cite journal |last=Pomerleau |first=Dean A. |date=1988 |title=ALVINN: An Autonomous Land Vehicle in a Neural Network |url=https://proceedings.neurips.cc/paper/1988/hash/812b4ba287f5ee0bc9d43bbf5bbe87fb-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Morgan-Kaufmann |volume=1}}</ref> [[LeNet]], published in 1989, recognized handwritten ZIP codes. In 1992, [[TD-Gammon]] achieved top human-level play in backgammon; it was a reinforcement learning agent with a two-layer neural network trained by backpropagation.<ref>{{cite book |last1=Sutton |first1=Richard S. |last2=Barto |first2=Andrew G. |title=Reinforcement Learning: An Introduction |edition=2nd |publisher=MIT Press |place=Cambridge, MA |year=2018 |chapter=11.1 TD-Gammon |chapter-url=http://www.incompleteideas.net/book/11/node2.html}}</ref> In 1993, Eric Wan won an international pattern recognition contest using backpropagation.<ref name="schmidhuber2015">{{cite journal |last=Schmidhuber |first=Jürgen |author-link=Jürgen Schmidhuber |year=2015 |title=Deep learning in neural networks: An overview |journal=Neural Networks |volume=61 |pages=85–117 |arxiv=1404.7828 |doi=10.1016/j.neunet.2014.09.003 |pmid=25462637 |s2cid=11715509}}</ref><ref>{{cite book |last=Wan |first=Eric A. |title=Time Series Prediction : Forecasting the Future and Understanding the Past |publisher=Addison-Wesley |year=1994 |isbn=0-201-62601-2 |editor-last=Weigend |editor-first=Andreas S. |editor-link=Andreas Weigend |series=Proceedings of the NATO Advanced Research Workshop on Comparative Time Series Analysis |volume=15 |location=Reading |pages=195–217 |chapter=Time Series Prediction by Using a Connectionist Network with Internal Delay Lines |editor2-last=Gershenfeld |editor2-first=Neil A. |editor2-link=Neil Gershenfeld |s2cid=12652643}}</ref>