Evolution strategy
{{Short description|Algorithm in computer science}} {{Evolutionary algorithms}} {{For|a specific strategy or a set of strategies in game theory|Evolutionary strategy}} In computer science, an '''evolution strategy (ES)''' is a subclass of [[evolutionary algorithms]] that serves as an [[optimization (mathematics)|optimization]] technique.<ref name=overview>{{cite journal |last1=Slowik |first1=Adam |last2=Kwasnicka |first2=Halina |title=Evolutionary algorithms and their applications to engineering problems |journal=Neural Computing and Applications |date=1 August 2020 |volume=32 |issue=16 |pages=12363–12379 |doi=10.1007/s00521-020-04832-8 |language=en |issn=1433-3058|doi-access=free }}</ref> It uses the major [[genetic operators]] of [[mutation (evolutionary algorithm)|mutation]], [[recombination (evolutionary algorithm)|recombination]] and [[selection (evolutionary algorithm)|selection of parents]].<ref name=def>{{cite journal |last1=Alrashdi |first1=Zaid |last2=Sayyafzadeh |first2=Mohammad |title=(μ+λ) Evolution strategy algorithm in well placement, trajectory, control and joint optimisation |journal=Journal of Petroleum Science and Engineering |date=1 June 2019 |volume=177 |pages=1042–1058 |doi=10.1016/j.petrol.2019.02.047 |bibcode=2019JPSE..177.1042A |url=https://www.sciencedirect.com/science/article/pii/S0920410519301846 |issn=0920-4105|url-access=subscription }}</ref> ==History== The 'evolution strategy' optimization technique was created in the early 1960s and developed further in the 1970s and later by [[Ingo Rechenberg]], [[Hans-Paul Schwefel]] and their co-workers.<ref name=overview/> {| class="wikitable sortable" |+ Timeline of ES - selected algorithms<ref name=overview/> |- ! Year !! Description !! Reference |- | 1973 || ES introduced with mutation and selection || <ref>{{cite journal |last1=Vent |first1=W. |title=Rechenberg, Ingo, Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. 170 S. mit 36 Abb. 
Frommann-Holzboog-Verlag. Stuttgart 1973. Broschiert |journal=Feddes Repertorium |date=January 1975 |volume=86 |issue=5 |pages=337 |doi=10.1002/fedr.19750860506}}</ref> |- | 1994 || Derandomized self-adaptation ES - a derandomized scheme of mutative step-size control is used || <ref>{{cite journal |last1=Ostermeier |first1=Andreas |last2=Gawelczyk |first2=Andreas |last3=Hansen |first3=Nikolaus |title=A Derandomized Approach to Self-Adaptation of Evolution Strategies |journal=Evolutionary Computation |date=December 1994 |volume=2 |issue=4 |pages=369–380 |doi=10.1162/evco.1994.2.4.369}}</ref> |- | 1994 || CSA-ES - uses selection information from earlier generations || <ref>{{cite book |last1=Ostermeier |first1=Andreas |last2=Gawelczyk |first2=Andreas |last3=Hansen |first3=Nikolaus |chapter=Step-size adaptation based on non-local use of selection information |title=Parallel Problem Solving from Nature — PPSN III |series=Lecture Notes in Computer Science |date=1994 |volume=866 |pages=189–198 |doi=10.1007/3-540-58484-6_263 |chapter-url=https://link.springer.com/chapter/10.1007/3-540-58484-6_263 |publisher=Springer |isbn=978-3-540-58484-1 |language=en}}</ref> |- | 2001 || [[CMA-ES]] || <ref>{{cite journal |last1=Hansen |first1=Nikolaus |last2=Ostermeier |first2=Andreas |title=Completely Derandomized Self-Adaptation in Evolution Strategies |journal=Evolutionary Computation |date=June 2001 |volume=9 |issue=2 |pages=159–195 |doi=10.1162/106365601750190398|pmid=11382355 }}</ref> |- | 2006 || Weighted multi-recombination ES - uses weighted recombination || <ref>{{cite journal |last1=Arnold |first1=Dirk V. 
|title=Weighted multirecombination evolution strategies |journal=Theoretical Computer Science |date=28 August 2006 |volume=361 |issue=1 |pages=18–37 |doi=10.1016/j.tcs.2006.04.003 |url=https://www.sciencedirect.com/science/article/pii/S0304397506003008 |issn=0304-3975}}</ref> |- | 2007 || Meta-ES - incremental aggregation of partial semantic structures || <ref>{{cite book |last1=Jung |first1=Jason J. |last2=Jo |first2=Geun-Sik |last3=Yeo |first3=Seong-Won |chapter=Meta-evolution Strategy to Focused Crawling on Semantic Web |title=Artificial Neural Networks – ICANN 2007 |series=Lecture Notes in Computer Science |date=2007 |volume=4669 |pages=399–407 |doi=10.1007/978-3-540-74695-9_41 |chapter-url=https://link.springer.com/chapter/10.1007/978-3-540-74695-9_41 |publisher=Springer |isbn=978-3-540-74693-5 |language=en}}</ref> |- | 2008 || [[Natural evolution strategy|Natural ES]] - usage of natural gradient || <ref>{{cite journal |last1=Wierstra |first1=Daan |last2=Schaul |first2=Tom |last3=Glasmachers |first3=Tobias |last4=Sun |first4=Yi |last5=Peters |first5=Jan |last6=Schmidhuber |first6=Jürgen |title=Natural evolution strategies |journal=J. Mach. Learn. Res. 
|date=1 January 2014 |volume=15 |issue=1 |pages=949–980 |url=https://dl.acm.org/doi/abs/10.5555/2627435.2638566 |issn=1532-4435}}</ref> |- | 2010 || Exponential natural ES - a simpler version of natural ES || <ref>{{cite book |last1=Glasmachers |first1=Tobias |last2=Schaul |first2=Tom |last3=Yi |first3=Sun |last4=Wierstra |first4=Daan |last5=Schmidhuber |first5=Jürgen |chapter=Exponential natural evolution strategies |title=Proceedings of the 12th annual conference on Genetic and evolutionary computation |date=7 July 2010 |pages=393–400 |doi=10.1145/1830483.1830557 |chapter-url=https://dl.acm.org/doi/abs/10.1145/1830483.1830557 |publisher=Association for Computing Machinery|isbn=978-1-4503-0072-8 |url=https://infoscience.epfl.ch/record/163869/files/exponentialnaturalevolutionstrategies.pdf }}</ref> |- | 2014 || Limited memory CMA-ES - time–memory complexity reduction by covariance matrix decomposition || <ref>{{cite book |last1=Loshchilov |first1=Ilya |chapter=A computationally efficient limited memory CMA-ES for large scale optimization |title=Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation |date=12 July 2014 |pages=397–404 |doi=10.1145/2576768.2598294 |publisher=Association for Computing Machinery|arxiv=1404.5520 |isbn=978-1-4503-2662-9 }}</ref> |- | 2016 || Fitness inheritance CMA-ES - fitness evaluation computational cost reduction using fitness inheritance || <ref>{{cite book |last1=Liaw |first1=Rung-Tzuo |last2=Ting |first2=Chuan-Kang |chapter=Enhancing covariance matrix adaptation evolution strategy through fitness inheritance |title=2016 IEEE Congress on Evolutionary Computation (CEC) |date=July 2016 |pages=1956–1963 |doi=10.1109/CEC.2016.7744027 |isbn=978-1-5090-0623-6 }}</ref> |- | 2017 || RS-CMSA ES - usage of subpopulations || <ref>{{cite journal |last1=Ahrari |first1=Ali |last2=Deb |first2=Kalyanmoy |last3=Preuss |first3=Mike |title=Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with 
Repelling Subpopulations |journal=Evolutionary Computation |date=September 2017 |volume=25 |issue=3 |pages=439–471 |doi=10.1162/evco_a_00182|pmid=27070282 |hdl=1887/76707 |hdl-access=free }}</ref> |- | 2017 || MA-ES - the covariance matrix update and the computation of its square root are not needed || <ref>{{cite journal |last1=Beyer |first1=Hans-Georg |last2=Sendhoff |first2=Bernhard |title=Simplify Your Covariance Matrix Adaptation Evolution Strategy |journal=IEEE Transactions on Evolutionary Computation |date=October 2017 |volume=21 |issue=5 |pages=746–759 |doi=10.1109/TEVC.2017.2680320}}</ref> |- | 2018 || Weighted ES - weighted recombination analyzed on general convex quadratic functions || <ref>{{cite journal |last1=Akimoto |first1=Youhei |last2=Auger |first2=Anne |last3=Hansen |first3=Nikolaus |title=Quality gain analysis of the weighted recombination evolution strategy on general convex quadratic functions |journal=Theoretical Computer Science |date=6 September 2020 |volume=832 |pages=42–67 |doi=10.1016/j.tcs.2018.05.015 |url=https://www.sciencedirect.com/science/article/pii/S0304397518303359 |issn=0304-3975|arxiv=1608.04813 }}</ref> |} ==Methods== Evolution strategies use natural problem-dependent representations, so [[Genetic representation|problem space]] and [[Genetic representation|search space]] are identical. In common with [[evolutionary algorithms]], the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met. The special feature of the ES is the self-adaptation of mutation step sizes and the [[coevolution]] associated with it. 
The ES is briefly presented using the standard form,<ref>{{Cite book |last=Schwefel |first=Hans-Paul |title=Evolution and Optimum Seeking |date=1995 |publisher=Wiley |isbn=978-0-471-57148-3 |edition= |series=Sixth-generation computer technology series |location=New York |url=https://www.researchgate.net/publication/220690578}}</ref><ref name=":0">{{Cite journal |last1=Bäck |first1=Thomas |last2=Schwefel |first2=Hans-Paul |date=1993 |title=An Overview of Evolutionary Algorithms for Parameter Optimization |url=https://direct.mit.edu/evco/article/1/1/1-23/1092 |journal=Evolutionary Computation |language=en |volume=1 |issue=1 |pages=1–23 |doi=10.1162/evco.1993.1.1.1 |issn=1063-6560|url-access=subscription }}</ref><ref>{{Citation |last1=Schwefel |first1=Hans-Paul |last2=Rudolph |first2=Günter |last3=Bäck |first3=Thomas |title=Contemporary Evolution Strategies |date=1995 |url=https://www.researchgate.net/publication/2606149 |work=Proceedings of the Third European Conference on Advances in Artificial Life |pages=893–907 |place=Berlin, Heidelberg |publisher=Springer |isbn=978-3-540-59496-3 |doi=10.1007/3-540-59496-5_351 |editor-last1=Morán |editor-first1=Frederico |editor-last2=Moreno |editor-first2=Alvaro |editor-last3=Merelo |editor-first3=J.J. |editor-last4=Chacón |editor-first4=Pablo }}</ref> pointing out that there are many variants.<ref>{{Citation |last1=Bäck |first1=Thomas |title=A Survey of Evolution Strategies |date=1991 |work=Proceedings of the Fourth International Conference on Genetic Algorithms (ICGA) |editor-last1=Belew |editor-first1=Richard K. |location=San Mateo, Calif |publisher=Morgan Kaufmann |isbn=978-1-55860-208-3 |last2=Hoffmeister |first2=Frank |last3=Schwefel |first3=Hans-Paul |editor-last2=Booker |editor-first2=Lashon B.}}</ref><ref>{{Cite journal |last1=Coelho |first1=V. N. |last2=Coelho |first2=I. M. |last3=Souza |first3=M. J. F. |last4=Oliveira |first4=T. A. |last5=Cota |first5=L. P. |last6=Haddad |first6=M. N. |last7=Mladenovic |first7=N. 
|last8=Silva |first8=R. C. P. |last9=Guimarães |first9=F. G. |date=2016 |title=Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems |url=https://direct.mit.edu/evco/article/24/4/637-666/1033 |journal=Evolutionary Computation |language=en |volume=24 |issue=4 |pages=637–666 |doi=10.1162/EVCO_a_00187 |pmid=27258842 |issn=1063-6560|url-access=subscription }}</ref><ref name=":1">{{Cite journal |last1=Hansen |first1=Nikolaus |last2=Ostermeier |first2=Andreas |date=2001 |title=Completely Derandomized Self-Adaptation in Evolution Strategies |url=https://direct.mit.edu/evco/article/9/2/159-195/892 |journal=Evolutionary Computation |language=en |volume=9 |issue=2 |pages=159–195 |doi=10.1162/106365601750190398 |pmid=11382355 |issn=1063-6560|url-access=subscription }}</ref><ref>{{Citation |last1=Hansen |first1=Nikolaus |title=Evaluating the CMA Evolution Strategy on Multimodal Test Functions |date=2004 |url=http://link.springer.com/10.1007/978-3-540-30217-9_29 |work=Parallel Problem Solving from Nature - PPSN VIII |volume=3242 |pages=282–291 |editor-last=Yao |editor-first=Xin |access-date= |place=Berlin, Heidelberg |publisher=Springer |doi=10.1007/978-3-540-30217-9_29 |isbn=978-3-540-23092-2 |last2=Kern |first2=Stefan |editor2-last=Burke |editor2-first=Edmund K. |editor3-last=Lozano |editor3-first=José A. |editor4-last=Smith |editor4-first=Jim|url-access=subscription }}</ref> The real-valued chromosome contains, in addition to the <math>n</math> decision variables, <math>n'</math> mutation step sizes <math>{\sigma}_j</math>, where: <math>1\leq j\leq n'\leq n</math>. Often one mutation step size is used for all decision variables or each has its own step size. Mate selection to produce <math>\lambda</math> offspring is random, i.e. independent of fitness. 
First, new mutation step sizes are generated per mating by [[Crossover (genetic algorithm)#Intermediate recombination|intermediate recombination]] of the parental <math>{\sigma }_{j}</math> with subsequent mutation as follows: : <math> {\sigma}'_j = \sigma_j \cdot e^{(\mathcal{N}(0,1)-\mathcal{N}_j(0,1))} </math> where <math>\mathcal{N}(0,1)</math> is a [[Normal distribution|normally distributed]] random variable with mean <math>0</math> and standard deviation <math>1</math>. <math>\mathcal{N}(0,1)</math> applies to all <math>{\sigma}'_j</math>, while <math>\mathcal{N}_j(0,1)</math> is drawn anew for each <math>{\sigma}'_j</math>. Next, [[Crossover (genetic algorithm)#Discrete recombination|discrete recombination]] of the decision variables is followed by a [[Mutation (genetic algorithm)#Mutation without consideration of restrictions|mutation]] using the new mutation step sizes as standard deviations of the normal distribution. The new decision variables <math> x_j' </math> are calculated as follows: :<math>x_j'=x_j+\mathcal{N}_j(0,{\sigma}_j')</math> This results in an evolutionary search on two levels: first, at the level of the problem itself, and second, at the level of the mutation step sizes. This ensures that the ES can approach its optimum in ever finer steps. However, it also carries the risk that the search can cross larger invalid regions of the search space only with difficulty. ==Variants== The ES uses two variants of best selection for generating the next parent population (<math>\mu</math> - number of parents, <math>\lambda</math> - number of offspring):<ref name=def/> * <math>(\mu,\lambda )</math>: The <math>\mu</math> best offspring are used for the next generation (usually <math>\mu=\frac{\lambda}{2}</math>). * <math>(\mu +\lambda )</math>: The best are selected from the union of the <math>\mu</math> parents and <math>\lambda</math> offspring. 
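The two-level mutation described in the Methods section can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not a library implementation: recombination is omitted, and the learning-rate factors that many ES variants add to the exponent are left out, matching the formula given above.

```python
import numpy as np

rng = np.random.default_rng(42)

def mutate(x, sigma):
    """Two-level ES mutation sketch: log-normally mutate the step
    sizes first, then use the new step sizes as standard deviations
    for mutating the decision variables. (Recombination omitted;
    many variants also scale the exponent with learning rates.)"""
    n = len(x)
    z_global = rng.standard_normal()    # N(0,1), shared by all sigma_j
    z_local = rng.standard_normal(n)    # N_j(0,1), drawn anew per sigma_j
    sigma_new = sigma * np.exp(z_global - z_local)
    x_new = x + sigma_new * rng.standard_normal(n)
    return x_new, sigma_new

# One offspring from a 3-dimensional parent with unit step sizes
x1, s1 = mutate(np.zeros(3), np.ones(3))
```

Because the step sizes are multiplied by an exponential, they stay strictly positive, which is what allows them to be used directly as standard deviations in the second level.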
Bäck and Schwefel recommend that the value of <math>\lambda</math> should be approximately seven times <math>\mu</math>,<ref name=":0" /> although <math>\mu</math> must not be chosen too small because of the strong selection pressure. Suitable values for <math>\mu</math> are application-dependent and must be determined experimentally. The selection of the next generation in evolution strategies is deterministic and based only on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function. The simplest and oldest<ref name=overview/> evolution strategy, <math>\mathit{(1+1)}</math>, operates on a population of size two: the current point (parent) and the result of its mutation. The mutant becomes the parent of the next generation only if its fitness is at least as good as that of the parent; otherwise, the mutant is discarded. More generally, <math>\lambda</math> mutants can be generated and compete with the parent, which is called ''<math>(1+\lambda )</math>''. In <math>(1,\lambda )</math> the best mutant becomes the parent of the next generation while the current parent is always discarded.
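The <math>(1+1)</math> scheme above can be illustrated with a short sketch that minimizes the sphere function. The function name, the fixed step size, and the generation budget are illustrative assumptions; practical (1+1)-ES implementations adapt the step size during the run (for example with the 1/5 success rule).

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, generations=2000, seed=1):
    """Minimal (1+1)-ES sketch: one parent and one mutant per
    generation; the mutant replaces the parent only if its fitness
    is at least as good (minimization). The step size is kept
    fixed here, whereas practical variants adapt it."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(generations):
        y = x + sigma * rng.standard_normal(len(x))  # mutate the parent
        fy = f(y)
        if fy <= fx:                                 # plus-selection: keep the better point
            x, fx = y, fy
    return x, fx

sphere = lambda v: float(np.sum(v ** 2))
best_x, best_f = one_plus_one_es(sphere, [5.0, 5.0])
```

Since selection here compares only which fitness is better, replacing ''sphere'' with any strictly increasing transformation of it leaves the search trajectory unchanged, which demonstrates the rank-based invariance noted above.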
For some of these variants, proofs of [[Rate of convergence|linear convergence]] (in a [[stochastic]] sense) have been derived on unimodal objective functions.<ref>{{cite journal |last1=Auger |first1=Anne |title=Convergence results for the ( 1 , λ ) -SA-ES using the theory of ϕ -irreducible Markov chains |journal=Theoretical Computer Science |date=April 2005 |volume=334 |issue=1–3 |pages=35–69 |doi=10.1016/j.tcs.2004.11.017}}</ref><ref>{{cite journal |last1=Jägersküpper |first1=Jens |title=How the (1+1) ES using isotropic mutations minimizes positive definite quadratic forms |journal=Theoretical Computer Science |date=August 2006 |volume=361 |issue=1 |pages=38–56 |doi=10.1016/j.tcs.2006.04.004}}</ref> Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying [[covariance matrix]], are controlled in practice either by self-adaptation or by covariance matrix adaptation ([[CMA-ES]]).<ref name=":1" /> When the mutation step is drawn from a [[multivariate normal distribution]] using an evolving [[covariance matrix]], it has been hypothesized that this adapted matrix approximates the inverse [[Hessian matrix|Hessian]] of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation.<ref>{{cite journal |last1=Shir |first1=Ofer M. 
|last2=Yehudayoff |first2=Amir |title=On the covariance-Hessian relation in evolution strategies |journal=Theoretical Computer Science |date=January 2020 |volume=801 |pages=157–174 |doi=10.1016/j.tcs.2019.09.002|arxiv=1806.03674 }}</ref> In 2025, Chen et al.<ref>{{Cite journal |last1=Chen |first1=Tai-You |last2=Chen |first2=Wei-Neng |last3=Hao |first3=Jin-Kao |last4=Wang |first4=Yang |last5=Zhang |first5=Jun |date=2025 |title=Multi-Agent Evolution Strategy With Cooperative and Cumulative Step Adaptation for Black-Box Distributed Optimization |url=https://ieeexplore.ieee.org/document/10824905 |journal=IEEE Transactions on Evolutionary Computation |pages=1 |doi=10.1109/TEVC.2025.3525713 |issn=1941-0026|url-access=subscription }}</ref> proposed a multi-agent evolution strategy for consensus-based distributed optimization, in which a novel step adaptation method helps multiple agents control the step size cooperatively. ==See also== * [[CMA-ES|Covariance matrix adaptation evolution strategy (CMA-ES)]] * [[Derivative-free optimization]] * [[Evolutionary computation]] * [[Genetic algorithm]] * [[Natural evolution strategy]] * [[Evolutionary game theory]] ==References== {{Reflist}} ==Bibliography== * [[Ingo Rechenberg]] (1971): ''Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution'' (PhD thesis). Reprinted by Frommann-Holzboog (1973). {{ISBN|3-7728-1642-8}} * [[Hans-Paul Schwefel]] (1974): ''Numerische Optimierung von Computer-Modellen'' (PhD thesis). Reprinted by Birkhäuser (1977). * [[Hans-Paul Schwefel]]: ''[https://www.researchgate.net/publication/220690578_Evolution_and_Optimum_Seeking Evolution and Optimum Seeking]''. New York: Wiley & Sons 1995. {{ISBN|0-471-57148-2}} * H.-G. Beyer and H.-P. Schwefel. ''[https://journals.openedition.org/trivium/3664 Evolution Strategies: A Comprehensive Introduction]''. Journal Natural Computing, 1(1):3–52, 2002. * Hans-Georg Beyer: ''The Theory of Evolution Strategies''. 
Springer, April 27, 2001. {{ISBN|3-540-67297-4}} * Ingo Rechenberg: ''Evolutionsstrategie '94''. Stuttgart: Frommann-Holzboog 1994. {{ISBN|3-7728-1642-8}} * J. Klockgether and H. P. Schwefel (1970). [https://www.researchgate.net/profile/Hans-Paul_Schwefel/publication/236373493_TWO-PHASE_NOZZLE_AND_HOLLOW_CORE_JET_EXPERIMENTS/links/544bd4db0cf2d6347f43a164.pdf ''Two-Phase Nozzle And Hollow Core Jet Experiments'']. AEG-Forschungsinstitut. MDH Staustrahlrohr Project Group. Berlin, Federal Republic of Germany. Proceedings of the 11th Symposium on Engineering Aspects of Magneto-Hydrodynamics, Caltech, Pasadena, Cal., 24.–26.3. 1970. * M. Emmerich, O.M. Shir, and H. Wang: [https://link.springer.com/referenceworkentry/10.1007/978-3-319-07153-4_13-1 ''Evolution Strategies'']. In: Handbook of Heuristics, 1-31. Springer International Publishing (2018). ==Research centers== * [https://web.archive.org/web/20180425010001/http://www.bionik.tu-berlin.de/institut/xstart.htm Bionics & Evolutiontechnique at Technische Universität Berlin] * [http://ls11-www.cs.uni-dortmund.de/ Chair of Algorithm Engineering (Ls11) – TU Dortmund University] * [https://archive.today/20130106090846/http://sfbci.cs.uni-dortmund.de/ Collaborative Research Center 531 – TU Dortmund University] {{Evolutionary computation}} {{DEFAULTSORT:Evolution Strategy}} [[Category:Evolution strategy| ]]