=== No free lunch theorem === The [[no free lunch theorem]] of optimization states that, averaged over the set of all optimization problems, all optimization strategies are equally effective. Under the same condition, no evolutionary algorithm is fundamentally better than another; one EA can outperform another only if the set of problems considered is restricted, which is exactly what inevitably happens in practice. An EA can therefore be improved only by exploiting problem knowledge in some form (e.g. by choosing a certain mutation strength or a [[Genetic representation|problem-adapted coding]]), and when two EAs are compared, this restriction is implied. In addition, an EA can use problem-specific knowledge by, for example, not generating the entire start population randomly, but creating some individuals through [[Heuristic (computer science)|heuristics]] or other procedures.<ref name="Davis91">{{Cite book |last=Davis |first=Lawrence |url=https://www.worldcat.org/oclc/23081440 |title=Handbook of genetic algorithms |date=1991 |publisher=Van Nostrand Reinhold |isbn=0-442-00173-8 |location=New York |oclc=23081440}}</ref><ref>{{Citation |last1=Lienig |first1=Jens |title=An evolutionary algorithm for the routing of multi-chip modules |date=1994 |url=http://link.springer.com/10.1007/3-540-58484-6_301 |work=Parallel Problem Solving from Nature – PPSN III |volume=866 |pages=588–597 |editor-last=Davidor |editor-first=Yuval |place=Berlin, Heidelberg |publisher=Springer |doi=10.1007/3-540-58484-6_301 |isbn=978-3-540-58484-1 |access-date=2022-10-18 |last2=Brandt |first2=Holger |editor2-last=Schwefel |editor2-first=Hans-Paul |editor3-last=Männer |editor3-first=Reinhard|url-access=subscription }}</ref> Another way to tailor an EA to a given problem domain is to involve suitable heuristics, [[Local search (optimization)|local search procedures]] or other problem-related procedures in the process of generating the offspring.
This form of extension of an EA is also known as a [[memetic algorithm]]. Both extensions play a major role in practical applications, as they can speed up the search process and make it more robust.<ref name="Davis91" /><ref>{{Cite book |url=http://link.springer.com/10.1007/978-3-642-23247-3 |title=Handbook of Memetic Algorithms |date=2012 |publisher=Springer Berlin Heidelberg |isbn=978-3-642-23246-6 |editor-last=Neri |editor-first=Ferrante |series=Studies in Computational Intelligence |volume=379 |location=Berlin, Heidelberg |language=en |doi=10.1007/978-3-642-23247-3 |editor-last2=Cotta |editor-first2=Carlos |editor-last3=Moscato |editor-first3=Pablo}}</ref>
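One way of incorporating problem knowledge mentioned above is to seed the start population with a few heuristically constructed individuals instead of generating all of them randomly. The following is a minimal sketch of this idea in Python, assuming a toy 0/1 knapsack instance and a simple value-per-weight greedy heuristic as the seeding procedure; the instance data, function names, and the choice of heuristic are illustrative, not part of any particular EA framework:

```python
import random

# Toy 0/1 knapsack instance (illustrative data only).
values = [10, 40, 30, 50, 35, 25]
weights = [5, 12, 8, 14, 10, 7]
capacity = 30

def random_individual():
    """A uniformly random bit string, one bit per item."""
    return [random.randint(0, 1) for _ in values]

def greedy_individual():
    """Heuristic seed: pack items in descending value/weight order."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    bits, load = [0] * len(values), 0
    for i in order:
        if load + weights[i] <= capacity:
            bits[i] = 1
            load += weights[i]
    return bits

def init_population(size, n_seeded):
    """Start population: a few heuristic individuals, the rest random."""
    seeded = [greedy_individual() for _ in range(n_seeded)]
    return seeded + [random_individual() for _ in range(size - n_seeded)]

population = init_population(size=20, n_seeded=2)
```

The seeded individuals give the search a feasible, reasonably good starting point, while the random remainder preserves diversity; keeping the number of seeded individuals small limits the risk of premature convergence toward the heuristic's solution.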