== Inner workings ==
There are several [[schools of thought]] as to why and how the PSO algorithm can perform optimization.

A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO.<ref name=shi98modified/><ref name=kennedy97particle/><ref name=shi98parameter/><ref name=clerc02explosion/> It contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation, to avoid [[premature convergence]] to a [[local optimum]] yet still ensure a good rate of [[Convergent sequence|convergence]] to the optimum. This belief is the precursor of many PSO variants; see [[#Variants|below]].

Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to e.g. exploration and exploitation. Such studies have led to the simplification of the PSO algorithm; see [[#Simplifications|below]].

=== Convergence ===
In relation to PSO the word ''convergence'' typically refers to two different definitions:
* Convergence of the sequence of solutions (also known as stability analysis, [[convergent sequence|converging]]), in which all particles have converged to a point in the search-space, which may or may not be the optimum,
* Convergence to a local optimum, where all personal bests '''p''' or, alternatively, the swarm's best known position '''g''', approach a local optimum of the problem, regardless of how the swarm behaves.

Convergence of the sequence of solutions has been investigated for PSO.<ref name=bergh01thesis/><ref name=clerc02explosion/><ref name=trelea03particle/> These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen<ref name=pedersen08simplifying/> for being oversimplified, as they assume the swarm has only one particle, that it does not use stochastic variables, and that the points of attraction, that is, the particle's best known position '''p''' and the swarm's best known position '''g''', remain constant throughout the optimization process. However, it was shown<ref>{{cite book|last1=Cleghorn|first1=Christopher W|title=Swarm Intelligence|chapter=Particle Swarm Convergence: Standardized Analysis and Topological Influence|volume=8667|pages=134–145|date=2014|doi=10.1007/978-3-319-09952-1_12|series=Lecture Notes in Computer Science|isbn=978-3-319-09951-4}}</ref> that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent.
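For concreteness, the following is a minimal NumPy sketch (not part of the cited analyses) of the canonical inertia-weight update, together with one commonly quoted form of the parameter region derived from the simplified deterministic, fixed-attractor model discussed above. The function names and the default values w ≈ 0.729 and c1 = c2 ≈ 1.494 are illustrative choices only.

<syntaxhighlight lang="python">
import numpy as np

def pso_step(x, v, p, g, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One canonical inertia-weight PSO update.

    x, v, p : arrays of shape (n_particles, n_dims) holding current
              positions, velocities and personal-best positions;
    g       : array of shape (n_dims,), the swarm's best known position.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # fresh random factors per particle and dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
    return x + v, v

def in_convergent_region(w, c1, c2):
    """Commonly quoted parameter region from the simplified
    (deterministic, single-particle, fixed-attractor) analyses:
    |w| < 1 and 0 < c1 + c2 < 2 * (1 + w)."""
    return abs(w) < 1.0 and 0.0 < c1 + c2 < 2.0 * (1.0 + w)
</syntaxhighlight>

For example, <code>in_convergent_region(0.729, 1.494, 1.494)</code> is true, whereas <code>in_convergent_region(1.0, 2.0, 2.0)</code> is not.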
Considerable effort has been made in recent years to weaken the modeling assumptions utilized during the stability analysis of PSO,<ref name=Liu2015/> with the most recent generalized result applying to numerous PSO variants and utilizing what was shown to be the minimal necessary modeling assumptions.<ref name=Cleghorn2018/>

Convergence to a local optimum has been analyzed for PSO.<ref>{{cite journal|last1=Van den Bergh|first1=F|title=A convergence proof for the particle swarm optimizer|journal=Fundamenta Informaticae|url=https://repository.up.ac.za/bitstream/handle/2263/17262/VanDenBergh_Convergence(2010).pdf?sequence=1}}</ref><ref name=Bonyadi2014/> It has been proven that PSO needs some modification to guarantee finding a local optimum. This means that determining the convergence capabilities of different PSO algorithms and parameters still depends on [[empirical]] results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship between '''p''' and '''g''', so as to form a leading converging exemplar and to be effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness.<ref name=zhan10OLPSO/> However, such studies do not provide theoretical evidence to actually prove their claims.

=== Adaptive mechanisms ===
Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO)<ref name=zhan09adaptive/> features better search efficiency than standard PSO. APSO can perform a global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, thereby improving search effectiveness and efficiency at the same time. APSO can also act on the globally best particle to make it jump out of likely local optima. Although APSO introduces new algorithm parameters, it does not add design or implementation complexity. In addition, through the use of a scale-adaptive fitness evaluation mechanism, PSO can efficiently address computationally expensive optimization problems.<ref>{{cite journal |last1=Wang |first1=Ye-Qun |last2=Li |first2=Jian-Yu |last3=Chen |first3=Chun-Hua |last4=Zhang |first4=Jun |last5=Zhan |first5=Zhi-Hui |title=Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning |journal=CAAI Transactions on Intelligence Technology |date=September 2023 |volume=8 |issue=3 |pages=849–862 |doi=10.1049/cit2.12106 |doi-access=free }}</ref>
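As a rough illustration of run-time parameter control, and only that, the sketch below decreases the inertia weight over the course of the run and plugs it into the update function from the previous sketch. This is a fixed schedule, not the state-driven adaptation of the cited APSO, which adjusts the inertia weight and acceleration coefficients from an estimate of the swarm's current state; all names, bounds, and default values here are illustrative assumptions.

<syntaxhighlight lang="python">
def linearly_decreasing_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Fixed schedule: move the inertia weight from w_start (exploration)
    toward w_end (exploitation) over t_max iterations."""
    return w_start - (w_start - w_end) * min(t, t_max) / t_max

def optimize(f, lower, upper, n_particles=30, t_max=200, rng=None):
    """Minimal PSO loop with run-time inertia-weight control, reusing
    pso_step() and the numpy import from the sketch above.  f is
    minimized over the box [lower, upper] (arrays of length n_dims)."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(lower, upper, (n_particles, len(lower)))
    v = np.zeros_like(x)
    p, p_val = x.copy(), np.apply_along_axis(f, 1, x)   # personal bests
    g = p[p_val.argmin()].copy()                         # swarm best
    for t in range(t_max):
        w = linearly_decreasing_inertia(t, t_max)        # parameter changed each iteration
        x, v = pso_step(x, v, p, g, w=w, rng=rng)
        val = np.apply_along_axis(f, 1, x)
        better = val < p_val                             # update personal bests
        p[better], p_val[better] = x[better], val[better]
        g = p[p_val.argmin()].copy()                     # update swarm best
    return g, p_val.min()
</syntaxhighlight>

For instance, <code>optimize(lambda z: np.sum(z**2), np.full(5, -10.0), np.full(5, 10.0))</code> would minimize the five-dimensional sphere function under this schedule.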