==Neural networks==

An [[artificial neural network]] (ANN or NN) is a computational device that consists of many simple connected units or neurons. The connections between the units are usually weighted by real-valued weights. These weights are the primary means of learning in neural networks, and a learning algorithm is usually used to adjust them. Structurally, a neural network has three different classes of units: input units, hidden units, and output units. An activation pattern is presented at the input units and then spreads in a forward direction from the input units through one or more layers of hidden units to the output units. The activation coming into one unit from another unit is multiplied by the weight of the link over which it spreads. All incoming activation is then added together, and the unit becomes activated only if the result is above the unit's threshold.

In summary, the basic components of a neural network are the units, the connections between the units, the weights, and the thresholds. So, in order to fully simulate an artificial neural network, one must somehow encode these components in a linear chromosome and then be able to express them in a meaningful way.

In GEP neural networks (GEP-NN or GEP nets), the network architecture is encoded in the usual structure of a head/tail domain.<ref>{{cite web|last=Ferreira|first=C.|year=2006|title=Designing Neural Networks Using Gene Expression Programming|url=http://www.gene-expression-programming.com/webpapers/Ferreira-ASCT2006.pdf|publisher=In A. Abraham, B. de Baets, M. Köppen, and B. Nickolay, eds., Applied Soft Computing Technologies: The Challenge of Complexity, pages 517–536, Springer-Verlag}}</ref> The head contains special functions/neurons that activate the hidden and output units (in the GEP context, all these units are more appropriately called functional units) and terminals that represent the input units. The tail, as usual, contains only terminals/input units.

Besides the head and the tail, these neural network genes contain two additional domains, Dw and Dt, for encoding the weights and thresholds of the neural network. Structurally, the Dw comes after the tail, and its length ''d<sub>w</sub>'' depends on the head size ''h'' and maximum arity ''n''<sub>max</sub> and is evaluated by the formula:

:<math>d_w = h n_{\max}</math>

The Dt comes after Dw and has a length ''d<sub>t</sub>'' equal to the tail length ''t''. Both domains are composed of symbols representing the weights and thresholds of the neural network.

For each NN-gene, the weights and thresholds are created at the beginning of each run, but their circulation and adaptation are guaranteed by the usual genetic operators of [[gene expression programming#Mutation|mutation]], [[gene expression programming#Transposition|transposition]], [[gene expression programming#Inversion|inversion]], and [[gene expression programming#Recombination|recombination]]. In addition, special operators are also used to allow a constant flow of genetic variation in the set of weights and thresholds.
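Under these rules, the layout of a complete NN-gene follows directly from ''h'' and ''n''<sub>max</sub>. The following Python sketch is an illustration only (the gene string and function name are made up, not from the GEP literature); it slices a linear NN-gene into its four domains, using the general GEP rule ''t'' = ''h''(''n''<sub>max</sub> − 1) + 1 for the tail:

<syntaxhighlight lang="python">
# Illustrative sketch: split a linear NN-gene into head, tail, Dw and Dt.
# The gene string below is invented for the example; only the domain sizes
# follow the rules given in the text.

def split_nn_gene(gene, h, n_max):
    t = h * (n_max - 1) + 1          # tail size (general GEP rule)
    d_w = h * n_max                  # weight domain:    d_w = h * n_max
    d_t = t                          # threshold domain: d_t = t
    assert len(gene) == h + t + d_w + d_t, "unexpected gene length"
    head = gene[:h]
    tail = gene[h:h + t]
    dw = gene[h + t:h + t + d_w]
    dt = gene[h + t + d_w:]
    return head, tail, dw, dt

# For h = 3 and n_max = 2: t = 4, d_w = 6, d_t = 4 (17 symbols in total).
print(split_nn_gene("DDDabab3932574801", 3, 2))
# -> ('DDD', 'abab', '393257', '4801')
</syntaxhighlight>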
For example, below is shown a neural network with two input units (''i''<sub>1</sub> and ''i''<sub>2</sub>), two hidden units (''h''<sub>1</sub> and ''h''<sub>2</sub>), and one output unit (''o''<sub>1</sub>). It has a total of six connections with six corresponding weights represented by the numerals 1–6 (for simplicity, the thresholds are all equal to 1 and are omitted):

{| align="center" border="0" cellpadding="4" cellspacing="0"
| [[File:Neural network with 5 units.png]]
|}

This representation is the canonical neural network representation, but neural networks can also be represented by a tree, which, in this case, corresponds to:

{| align="center" border="0" cellpadding="4" cellspacing="0"
| [[File:GEP neural network with 7 nodes.png]]
|}

where "a" and "b" represent the two inputs ''i''<sub>1</sub> and ''i''<sub>2</sub> and "D" represents a function with connectivity two. This function adds all its weighted arguments and then thresholds this activation in order to determine the forwarded output. This output (zero or one in this simple case) depends on the threshold of each unit, that is, if the total incoming activation is equal to or greater than the threshold, then the output is one, zero otherwise.

The above NN-tree can be linearized as follows:

:<code><nowiki>0123456789012</nowiki></code>
:<code><nowiki>DDDabab654321</nowiki></code>

where the structure in positions 7–12 (Dw) encodes the weights. The values of each weight are kept in an array and retrieved as necessary for expression.

As a more concrete example, below is shown a neural net gene for the [[exclusive or|exclusive-or]] problem. It has a head size of 3 and a Dw of size 6:

:<code><nowiki>0123456789012</nowiki></code>
:<code><nowiki>DDDabab393257</nowiki></code>

Its expression results in the following neural network:

{| align="center" border="0" cellpadding="4" cellspacing="0"
| [[File:Expression of a GEP neural network for the exclusive-or.png]]
|}

which, for the following set of weights:

: ''W'' = {−1.978, 0.514, −0.465, 1.22, −1.686, −1.797, 0.197, 1.606, 0, 1.753}

gives:

{| align="center" border="0" cellpadding="4" cellspacing="0"
| [[File:GEP neural network solution for the exclusive-or.png]]
|}

which is a perfect solution to the exclusive-or function (a short sketch verifying this solution is given at the end of this section).

Besides simple Boolean functions with binary inputs and binary outputs, the GEP-nets algorithm can handle all kinds of functions or neurons (linear neuron, tanh neuron, atan neuron, logistic neuron, limit neuron, radial basis and triangular basis neurons, all kinds of step neurons, and so on). Also interesting is that the GEP-nets algorithm can use all these neurons together and let evolution decide which ones work best to solve the problem at hand. So, GEP-nets can be used not only in Boolean problems but also in [[logistic regression]], [[Statistical classification|classification]], and [[Regression analysis|regression]]. In all cases, GEP-nets can be implemented not only with [[gene expression programming#Multigenic chromosomes|multigenic systems]] but also with [[gene expression programming#Cells and code reuse|cellular systems]], both unicellular and multicellular. Furthermore, multinomial classification problems can also be tackled in one go by GEP-nets, both with multigenic systems and multicellular systems.
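The exclusive-or expression above can be checked directly. The following Python sketch is an illustrative reconstruction, not Ferreira's reference implementation: the function names are invented, and the convention that Dw symbols map to connections in breadth-first order is an assumption, though it does reproduce the perfect solution shown in the figures:

<syntaxhighlight lang="python">
# Illustrative sketch: express the NN-gene "DDDabab393257" and check that,
# with the weight array W above, it computes exclusive-or.
# Assumptions: "D" adds its two weighted inputs and fires (outputs 1) when
# the sum reaches its threshold (all thresholds are 1 here), and the Dw
# symbols are assigned to connections in breadth-first order.

W = [-1.978, 0.514, -0.465, 1.22, -1.686, -1.797, 0.197, 1.606, 0, 1.753]

def build_tree(karva):
    """Breadth-first expression of the head/tail part of the gene."""
    symbols = iter(karva)
    root = {"sym": next(symbols), "children": []}
    queue = [root]
    while queue:
        node = queue.pop(0)
        if node["sym"] == "D":               # functional unit of arity 2
            for _ in range(2):
                child = {"sym": next(symbols), "children": []}
                node["children"].append(child)
                queue.append(child)
    return root

def assign_weights(root, dw):
    """Give each connection its weight, consuming Dw symbols breadth-first."""
    indices = iter(dw)
    queue = [root]
    while queue:
        node = queue.pop(0)
        for child in node["children"]:
            child["weight"] = W[int(next(indices))]
            queue.append(child)

def activate(node, inputs, threshold=1.0):
    """Thresholded weighted sum, applied recursively down the tree."""
    if not node["children"]:                 # terminal: an input unit
        return inputs[node["sym"]]
    total = sum(c["weight"] * activate(c, inputs) for c in node["children"])
    return 1 if total >= threshold else 0

net = build_tree("DDDabab")                  # head + tail of "DDDabab393257"
assign_weights(net, "393257")                # Dw domain
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", activate(net, {"a": a, "b": b}))  # equals a XOR b
</syntaxhighlight>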