==History==
The first artificial neuron was the Threshold Logic Unit (TLU), or Linear Threshold Unit,<ref name="Anthony2001">{{cite book|author=Martin Anthony|title=Discrete Mathematics of Neural Networks: Selected Topics|url=https://books.google.com/books?id=qOy4yLBqhFcC&pg=PA3|date=January 2001|publisher=SIAM|isbn=978-0-89871-480-7|pages=3–}}</ref> first proposed by [[Warren McCulloch]] and [[Walter Pitts]] in 1943 in ''[[A Logical Calculus of the Ideas Immanent in Nervous Activity|A logical calculus of the ideas immanent in nervous activity]]''. The model was specifically targeted as a computational model of the "nerve net" in the brain.<ref name="Aggarwal2014">{{cite book|author=Charu C. Aggarwal|title=Data Classification: Algorithms and Applications|url=https://books.google.com/books?id=gJhBBAAAQBAJ&pg=PA209|date=25 July 2014|publisher=CRC Press|isbn=978-1-4665-8674-1|pages=209–}}</ref> As an activation function, it employed a threshold, equivalent to using the [[Heaviside step function]].

Initially, only a simple model was considered, with binary inputs and outputs, some restrictions on the possible weights, and a more flexible threshold value. It was noticed from the beginning that any [[Boolean function]] could be implemented by networks of such devices, which is easily seen from the fact that one can implement the AND and OR functions and combine them in the [[disjunctive normal form|disjunctive]] or the [[conjunctive normal form]]. Researchers also soon realized that cyclic networks, with [[feedback]] loops through neurons, could define dynamical systems with memory, but most research has concentrated (and still does) on strictly [[feed-forward network]]s, which present fewer difficulties.

One important and pioneering artificial neural network that used the linear threshold function was the [[perceptron]], developed by [[Frank Rosenblatt]]. This model already allowed more flexible weight values in the neurons and was used in machines with adaptive capabilities. The representation of the threshold values as a bias term was introduced by [[Bernard Widrow]] in 1960 – see [[ADALINE]].

In the late 1980s, when research on neural networks regained strength, neurons with continuous activation functions began to be considered. The possibility of differentiating the activation function allows the direct use of [[gradient descent]] and other optimization algorithms for adjusting the weights. Neural networks also began to be used as a general [[function approximation]] model. The best-known training algorithm, called [[backpropagation]], has been rediscovered several times, but its first development goes back to the work of [[Paul Werbos]].<ref>[[Paul Werbos]], ''Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences''. PhD thesis, Harvard University, 1974.</ref><ref>{{cite journal | last=Werbos | first=P.J. | author-link=Paul Werbos | title=Backpropagation through time: what it does and how to do it | journal=Proceedings of the IEEE | volume=78 | issue=10 | year=1990 | issn=0018-9219 | doi=10.1109/5.58337 | pages=1550–1560 | s2cid=18470994 | url=https://zenodo.org/record/1262035 }}</ref>
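The realizability of Boolean functions by threshold units described above is easy to verify directly. The following is a minimal sketch, not code from the McCulloch–Pitts paper: the function names and the particular weight and threshold values are illustrative choices, and any values satisfying the corresponding threshold inequalities would work equally well.

<syntaxhighlight lang="python">
def tlu(inputs, weights, threshold):
    """Threshold logic unit: fire (output 1) iff the weighted input
    sum reaches the threshold, i.e. apply the Heaviside step function
    to sum(w * x) - threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs must be active, so weights 1,1 and threshold 2.
def and_gate(x1, x2):
    return tlu([x1, x2], [1, 1], threshold=2)

# OR: one active input suffices, so weights 1,1 and threshold 1.
def or_gate(x1, x2):
    return tlu([x1, x2], [1, 1], threshold=1)

# NOT: a single inhibitory (negative) weight with threshold 0.
def not_gate(x):
    return tlu([x], [-1], threshold=0)

# Any Boolean function can then be built in disjunctive normal form,
# e.g. XOR = (x1 AND NOT x2) OR (NOT x1 AND x2).
def xor_gate(x1, x2):
    return or_gate(and_gate(x1, not_gate(x2)),
                   and_gate(not_gate(x1), x2))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b),
              "OR:", or_gate(a, b), "XOR:", xor_gate(a, b))
</syntaxhighlight>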
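Similarly, the advantage of a differentiable activation function mentioned above can be illustrated with a single sigmoid neuron fitted by gradient descent. This is a minimal sketch under assumed choices (squared-error loss, learning rate 0.5, 2000 epochs, OR as the target function); it is not the historical ADALINE or perceptron training procedure.

<syntaxhighlight lang="python">
import math

def sigmoid(z):
    # Differentiable replacement for the Heaviside step function.
    return 1.0 / (1.0 + math.exp(-z))

# Training data for the OR function (an illustrative choice).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias term (cf. ADALINE)
lr = 0.5                    # assumed learning rate

for epoch in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error 0.5 * (y - target)**2 through
        # the sigmoid, whose derivative is y * (1 - y); a step function
        # would give zero gradient almost everywhere, blocking descent.
        grad = (y - target) * y * (1 - y)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

for (x1, x2), target in data:
    y = sigmoid(w1 * x1 + w2 * x2 + b)
    print((x1, x2), "output:", round(y, 3), "target:", target)
</syntaxhighlight>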