==Related methods==
* [[Metropolis–Hastings algorithm|Interacting Metropolis–Hastings algorithms]] (a.k.a. [[sequential Monte Carlo]]<ref name=":3">{{Cite journal|title = Sequential Monte Carlo samplers| doi=10.1111/j.1467-9868.2006.00553.x|volume=68| issue=3|journal=Journal of the Royal Statistical Society, Series B|pages=411–436|arxiv=cond-mat/0212648|year = 2006|last1 = Del Moral|first1 = Pierre| last2=Doucet| first2=Arnaud| last3=Jasra| first3=Ajay| s2cid=12074789}}</ref>) combine simulated annealing moves with an acceptance–rejection of the best-fitted individuals equipped with an interacting recycling mechanism.
* [[Quantum annealing]] uses "quantum fluctuations" instead of thermal fluctuations to get through high but thin barriers in the target function.
* [[Stochastic tunneling]] attempts to overcome the increasing difficulty simulated annealing runs have in escaping from local minima as the temperature decreases, by "tunneling" through barriers.
* [[Tabu search]] normally moves to neighbouring states of lower energy, but will take uphill moves when it finds itself stuck in a local minimum; it avoids cycles by keeping a "taboo list" of solutions already seen.
* [[Dual-phase evolution]] is a family of algorithms and processes (to which simulated annealing belongs) that mediate between local and global search by exploiting phase changes in the search space.
* [[LIONsolver|Reactive search optimization]] focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution.
* [[Genetic algorithms]] maintain a pool of solutions rather than just one. New candidate solutions are generated not only by "mutation" (as in SA), but also by "recombination" of two solutions from the pool. Probabilistic criteria, similar to those used in SA, are used to select the candidates for mutation or combination, and for discarding excess solutions from the pool.
* [[Memetic algorithms]] search for solutions by employing a set of agents that both cooperate and compete in the process; sometimes the agents' strategies involve simulated annealing procedures for obtaining high-quality solutions before recombining them.<ref>{{cite journal |last1=Moscato |first1=Pablo |title=An introduction to population approaches for optimization and hierarchical objective functions: A discussion on the role of tabu search |journal=Annals of Operations Research |date=June 1993 |volume=41 |issue=2 |pages=85–121 |doi=10.1007/BF02022564 |s2cid=35382644 }}</ref> Annealing has also been suggested as a mechanism for increasing the diversity of the search.<ref name=martial_arts>{{cite journal|last=Moscato|first=P.|year=1989|title=On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms|journal=Caltech Concurrent Computation Program|issue=report 826}}</ref>
* [[Graduated optimization]] progressively "smooths" the target function while optimizing.
* [[Ant colony optimization]] (ACO) uses many ants (or agents) to traverse the solution space and find locally productive areas.
* The [[cross-entropy method]] (CE) generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration (see the sketch after this list).
* [[Harmony search]] mimics musicians in improvisation, where each musician plays a note to find the best harmony together.
* [[Stochastic optimization]] is an umbrella set of methods that includes simulated annealing and numerous other approaches.
* [[Particle swarm optimization]] is an algorithm modeled on swarm intelligence that finds a solution to an optimization problem in a search space, or models and predicts social behavior in the presence of objectives.
* The [[runner-root algorithm]] (RRA) is a [[Metaheuristic|meta-heuristic]] optimization algorithm for solving unimodal and multimodal problems, inspired by the runners and roots of plants in nature.
* The [[Intelligent water drops algorithm|intelligent water drops algorithm]] (IWD) mimics the behavior of natural water drops to solve optimization problems.
* [[Parallel tempering]] is a simulation of model copies at different temperatures (or [[Hamiltonian (quantum mechanics)|Hamiltonian]]s) to overcome the potential barriers.
* Multi-objective simulated annealing algorithms have been used in [[multi-objective optimization]].<ref>{{cite journal |last1=Deb |first1=Bandyopadhyay |title=A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA |journal=IEEE Transactions on Evolutionary Computation |date=June 2008 |volume=12 |issue=3 |pages=269–283 |doi=10.1109/TEVC.2007.900837 |s2cid=12107321 }}</ref>
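As a concrete illustration of the cross-entropy bullet above, the following is a minimal sketch (not taken from the cited sources) that assumes a Gaussian sampling distribution over a continuous search space; the function name <code>cross_entropy_minimize</code>, the elite fraction, and all parameter values are illustrative choices, not a standard API.

<syntaxhighlight lang="python">
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=100, elite_frac=0.2, n_iters=50):
    # Minimize f over R^d by repeatedly sampling candidates from a Gaussian,
    # keeping the best-scoring ("elite") samples, and refitting the Gaussian to them.
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = np.random.normal(mu, sigma, size=(n_samples, mu.size))  # candidate solutions
        scores = np.apply_along_axis(f, 1, samples)                       # evaluate the target
        elite = samples[np.argsort(scores)[:n_elite]]                     # lowest scores win
        # For a Gaussian family, the cross-entropy parameter update reduces to
        # the sample mean and standard deviation of the elite set.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Example: minimize a 2-D sphere function shifted to (3, 3).
best = cross_entropy_minimize(lambda x: float(np.sum((x - 3.0) ** 2)),
                              mu=[0.0, 0.0], sigma=[5.0, 5.0])
print(best)  # should be close to [3. 3.]
</syntaxhighlight>

With a Gaussian family the update has the closed form shown in the comments; other parameterized distributions would require their own maximum-likelihood fit to the elite samples.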