=== Convergence ===
In relation to PSO the word ''convergence'' typically refers to two different definitions:
* Convergence of the sequence of solutions (also known as stability analysis, [[convergent sequence|converging]]), in which all particles have converged to a point in the search-space, which may or may not be the optimum,
* Convergence to a local optimum, where all personal bests '''p''' or, alternatively, the swarm's best known position '''g''', approach a local optimum of the problem, regardless of how the swarm behaves.

Convergence of the sequence of solutions has been investigated for PSO.<ref name=bergh01thesis/><ref name=clerc02explosion/><ref name=trelea03particle/> These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen<ref name=pedersen08simplifying/> for being oversimplified, as they assume the swarm has only one particle, that it does not use stochastic variables, and that the points of attraction, that is, the particle's best known position '''p''' and the swarm's best known position '''g''', remain constant throughout the optimization process. However, it was shown<ref>{{cite book|last1=Cleghorn|first1=Christopher W|title=Swarm Intelligence |chapter=Particle Swarm Convergence: Standardized Analysis and Topological Influence |volume=8667|pages=134–145|date=2014|doi=10.1007/978-3-319-09952-1_12|series=Lecture Notes in Computer Science|isbn=978-3-319-09951-4}}</ref> that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent.
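As an illustration, the simplified (deterministic, single-particle, fixed-attractor) analyses cited above yield parameter regions of this kind; a commonly quoted condition is that the inertia weight ''w'' and acceleration coefficients ''c''<sub>1</sub>, ''c''<sub>2</sub> satisfy |''w''| < 1 and 0 < ''c''<sub>1</sub> + ''c''<sub>2</sub> < 2(1 + ''w''). The exact region differs between studies, so the sketch below is an assumption-laden illustration, not a definitive criterion:

```python
# Sketch: test PSO parameters against a simplified deterministic
# convergence region (single particle, fixed attractors, no randomness),
# as assumed in the analyses criticized by Pedersen. The specific
# inequality used here is one commonly cited condition; other studies
# derive slightly different boundaries.

def is_convergent(w: float, c1: float, c2: float) -> bool:
    """Return True if (w, c1, c2) lies inside the simplified region
    |w| < 1 and 0 < c1 + c2 < 2 * (1 + w)."""
    c = c1 + c2
    return abs(w) < 1 and 0 < c < 2 * (1 + w)

# Widely used "constricted" parameters fall inside this region:
print(is_convergent(0.7298, 1.49618, 1.49618))  # True
# A divergent setting (inertia weight above 1) falls outside:
print(is_convergent(1.1, 2.0, 2.0))  # False
```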
Considerable effort has been made in recent years to weaken the modeling assumptions used during the stability analysis of PSO,<ref name=Liu2015/> with the most recent generalized result applying to numerous PSO variants and using what were shown to be the minimal necessary modeling assumptions.<ref name=Cleghorn2018/>

Convergence to a local optimum has been analyzed for PSO in<ref>{{cite journal|last1=Van den Bergh|first1=F|title=A convergence proof for the particle swarm optimizer |journal=Fundamenta Informaticae|url=https://repository.up.ac.za/bitstream/handle/2263/17262/VanDenBergh_Convergence(2010).pdf?sequence=1}}</ref> and<ref name=Bonyadi2014/>. It has been proven that PSO needs some modification to guarantee finding a local optimum. This means that determining the convergence capabilities of different PSO algorithms and parameters still depends on [[empirical]] results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for improved use of the information already existing in the relationship between '''p''' and '''g''', so as to form a leading converging exemplar and to be effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness.<ref name=zhan10OLPSO/> However, such studies do not provide theoretical evidence to actually prove their claims.