Evolution strategy
==Variants==
The ES knows two variants of best selection for the generation of the next parent population (<math>\mu</math> – number of parents, <math>\lambda</math> – number of offspring):<ref name=def/>
* <math>(\mu,\lambda )</math>: The <math>\mu</math> best offspring are used for the next generation (usually <math>\mu=\frac{\lambda}{2}</math>).
* <math>(\mu +\lambda )</math>: The <math>\mu</math> best individuals are selected from the union of the <math>\mu</math> parents and <math>\lambda</math> offspring.

Bäck and Schwefel recommend that the value of <math>\lambda</math> should be approximately seven times <math>\mu</math>,<ref name=":0" /> whereby <math>\mu</math> must not be chosen too small because of the strong selection pressure. Suitable values for <math>\mu</math> are application-dependent and must be determined experimentally.

The selection of the next generation in evolution strategies is deterministic and based only on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function.

The simplest and oldest<ref name=overview/> evolution strategy, <math>\mathit{(1+1)}</math>, operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as that of the parent does it become the parent of the next generation; otherwise the mutant is disregarded. More generally, <math>\lambda</math> mutants can be generated and compete with the parent, which is called <math>(1+\lambda )</math>. In <math>(1,\lambda )</math> the best mutant becomes the parent of the next generation while the current parent is always disregarded.
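The comma and plus selection schemes above can be sketched as follows. This is a minimal illustration, not a reference implementation: the sphere objective, the fixed mutation step size, and all numeric settings are assumptions chosen only to make the example self-contained.

```python
import random

def sphere(x):
    # Illustrative objective: minimum 0 at the origin.
    return sum(v * v for v in x)

def es_step(parents, mu, lam, sigma, plus=False):
    """One generation: create lam offspring by Gaussian mutation of
    randomly chosen parents, then deterministically keep the mu best
    by fitness rank only (no fitness values enter the decision)."""
    offspring = []
    for _ in range(lam):
        p = random.choice(parents)
        offspring.append([v + random.gauss(0, sigma) for v in p])
    # (mu+lambda): select from parents and offspring together.
    # (mu,lambda): select from the offspring alone.
    pool = parents + offspring if plus else offspring
    pool.sort(key=sphere)  # rank-based, deterministic selection
    return pool[:mu]

random.seed(1)
mu, lam = 2, 14  # lambda roughly seven times mu, as recommended
parents = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(mu)]
for _ in range(200):
    parents = es_step(parents, mu, lam, sigma=0.3, plus=True)
best = min(sphere(p) for p in parents)
```

Because selection uses only the fitness ranking, replacing `sphere` with any strictly increasing transformation of it (e.g. its square root) leaves the run unchanged, which illustrates the invariance property stated above.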
For some of these variants, proofs of [[Rate of convergence|linear convergence]] (in a [[stochastic]] sense) have been derived on unimodal objective functions.<ref>{{cite journal |last1=Auger |first1=Anne |title=Convergence results for the ( 1 , λ ) -SA-ES using the theory of ϕ -irreducible Markov chains |journal=Theoretical Computer Science |date=April 2005 |volume=334 |issue=1–3 |pages=35–69 |doi=10.1016/j.tcs.2004.11.017}}</ref><ref>{{cite journal |last1=Jägersküpper |first1=Jens |title=How the (1+1) ES using isotropic mutations minimizes positive definite quadratic forms |journal=Theoretical Computer Science |date=August 2006 |volume=361 |issue=1 |pages=38–56 |doi=10.1016/j.tcs.2006.04.004}}</ref> Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying [[covariance matrix]], are controlled in practice either by self-adaptation or by covariance matrix adaptation ([[CMA-ES]]).<ref name=":1" /> When the mutation step is drawn from a [[multivariate normal distribution]] using an evolving [[covariance matrix]], it has been hypothesized that this adapted matrix approximates the inverse [[Hessian matrix|Hessian]] of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation.<ref>{{cite journal |last1=Shir |first1=Ofer M. 
|last2=Yehudayoff |first2=Amir |title=On the covariance-Hessian relation in evolution strategies |journal=Theoretical Computer Science |date=January 2020 |volume=801 |pages=157–174 |doi=10.1016/j.tcs.2019.09.002|arxiv=1806.03674 }}</ref> In 2025, Chen et al.<ref>{{Cite journal |last1=Chen |first1=Tai-You |last2=Chen |first2=Wei-Neng |last3=Hao |first3=Jin-Kao |last4=Wang |first4=Yang |last5=Zhang |first5=Jun |date=2025 |title=Multi-Agent Evolution Strategy With Cooperative and Cumulative Step Adaptation for Black-Box Distributed Optimization |url=https://ieeexplore.ieee.org/document/10824905 |journal=IEEE Transactions on Evolutionary Computation |pages=1 |doi=10.1109/TEVC.2025.3525713 |issn=1941-0026|url-access=subscription }}</ref> proposed a multi-agent evolution strategy for consensus-based distributed optimization, in which a novel step adaptation method helps multiple agents control the step size cooperatively.
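Self-adaptation of individual step sizes, mentioned above as one way to control coordinate-wise mutation strengths, can be sketched as follows. The objective, the learning rates, and all settings are illustrative assumptions (the learning rates follow the commonly cited <math>1/\sqrt{2n}</math> and <math>1/\sqrt{2\sqrt{n}}</math> heuristics), not a prescription from the cited works.

```python
import math
import random

def ellipse(x):
    # Badly scaled objective, so different coordinates
    # benefit from different step sizes.
    return sum((10 ** i) * v * v for i, v in enumerate(x))

n, lam = 3, 20
tau = 1.0 / math.sqrt(2 * n)               # global learning rate
tau_i = 1.0 / math.sqrt(2 * math.sqrt(n))  # coordinate-wise learning rate

random.seed(2)
x = [random.uniform(-1, 1) for _ in range(n)]
sigma = [0.5] * n  # one step size per coordinate, evolved with the solution

for _ in range(300):
    candidates = []
    for _ in range(lam):
        g = random.gauss(0, 1)  # shared factor couples all step sizes
        s = [si * math.exp(tau * g + tau_i * random.gauss(0, 1))
             for si in sigma]
        y = [xi + si * random.gauss(0, 1) for xi, si in zip(x, s)]
        candidates.append((ellipse(y), y, s))
    # (1,lambda) comma selection: the best offspring replaces the parent,
    # step sizes included, so useful sigmas hitchhike with good solutions.
    _, x, sigma = min(candidates, key=lambda c: c[0])
```

The step sizes are mutated log-normally and inherited only through selection on the objective, so no explicit rule links them to the landscape; coordinates with larger curvature simply tend to survive with smaller sigmas. Full covariance adaptation as in CMA-ES additionally learns correlations between coordinates, which this per-coordinate sketch cannot represent.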