== Variants ==
<!-- Please provide complete references to journal / conference papers, tech reports, msc/phd theses, etc. when adding PSO variants. Please only include major research contributions and keep the descriptions short - there are probably hundreds of PSO variants in existence and Wikipedia is not the proper place to list them all. -->
Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. starting with zero velocities instead), to dampen the velocity, or to update '''p'''<sub>i</sub> and '''g''' only after the entire swarm has been updated. Some of these choices and their possible performance impact have been discussed in the literature.<ref name=carlisle01offtheshelf/>
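The following is a minimal, illustrative sketch (in Python) of a basic ''gbest'' PSO showing two of the variant choices just mentioned: zero-velocity initialization, and updating '''p'''<sub>i</sub> and '''g''' only after the entire swarm has moved (the synchronous variant). The function name, parameter values, and clipping to the search box are assumptions made for illustration, not part of any standard implementation.

<syntaxhighlight lang="python">
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Basic PSO over a box; illustrative parameter defaults."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T         # bounds: one (lo, hi) pair per dimension
    x = rng.uniform(lo, hi, (n_particles, lo.size))    # random initial positions
    v = np.zeros_like(x)                               # variant choice: zero initial velocities
    p, p_val = x.copy(), np.apply_along_axis(f, 1, x)  # personal bests
    g = p[p_val.argmin()].copy()                       # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)             # evaluate the whole swarm first ...
        better = val < p_val                           # ... then update bests (synchronous variant)
        p[better], p_val[better] = x[better], val[better]
        g = p[p_val.argmin()].copy()
    return g, p_val.min()

# Example: 5-dimensional sphere function
best_x, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)), [(-5, 5)] * 5)
</syntaxhighlight>

Changing the marked lines (e.g. random initial velocities, or refreshing '''g''' inside the particle loop) yields the other variant choices discussed above.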
A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances."<ref name=bratton2007/> The latest is Standard PSO 2011 (SPSO-2011).<ref name=Zambrano-Bigiarini2013/>

In addition, some PSO variants have been developed to solve large-scale global optimization (LSGO) problems with more than 1000 dimensions. Representative variants include the competitive swarm optimizer (CSO) and the level-based learning swarm optimizer (LLSO).<ref name=llso/> PSO has also been extended to solve multi-agent consensus-based distributed optimization problems, e.g., multi-agent consensus-based PSO with adaptive internal and external learning (MASOIE).<ref name=masoie/>

=== Hybridization ===
New and more sophisticated PSO variants are continually being introduced in an attempt to improve optimization performance. One trend in that research is to make a hybrid optimization method using PSO combined with other optimizers,<ref name=lovbjerg02lifecycle/><ref name=niknam10efficient/><ref name=zx03depso/> e.g., combining PSO with biogeography-based optimization,<ref>{{cite journal|last1=Zhang|first1=Y.|last2=Wang|first2=S.|title=Pathological Brain Detection in Magnetic Resonance Imaging Scanning by Wavelet Entropy and Hybridization of Biogeography-based Optimization and Particle Swarm Optimization|journal=Progress in Electromagnetics Research|date=2015|volume=152|pages=41–58|doi=10.2528/pier15040602|doi-access=free}}</ref> or incorporating an effective learning method.<ref name=zhan10OLPSO/>

=== Alleviating premature convergence ===
Another research trend is to alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles.<ref name=evers09thesis/><ref name=lovbjerg02extending/><ref name=xinchao10perturbed/><ref name=xzy02dpso/> A further approach to dealing with premature convergence is the use of multiple swarms<ref>{{cite journal | last1 = Cheung | first1 = N. J. | last2 = Ding | first2 = X.-M. | last3 = Shen | first3 = H.-B. | year = 2013 | title = OptiFel: A Convergent Heterogeneous Particle Swarm Optimization Algorithm for Takagi-Sugeno Fuzzy Modeling | journal = IEEE Transactions on Fuzzy Systems | volume = 22 | issue = 4 | pages = 919–933 | doi = 10.1109/TFUZZ.2013.2278972 | s2cid = 27974467 }}</ref> ([[multi-swarm optimization]]). The multi-swarm approach can also be used to implement multi-objective optimization.<ref name=nobile2012 /> Finally, there are developments in adapting the behavioural parameters of PSO during optimization.<ref name=zhan09adaptive/><ref name=nobile2017/>

=== Simplifications ===
Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as [[Occam's razor]]. Simplifying PSO was originally suggested by Kennedy<ref name=kennedy97particle/> and has been studied more extensively,<ref name=bratton08simplified/><ref name=pedersen08thesis/><ref name=pedersen08simplifying/><ref name=yang08nature/> where it appeared that optimization performance was improved, the parameters were easier to tune, and they performed more consistently across different optimization problems.

Another argument in favour of simplifying PSO is that [[metaheuristic]]s can only have their efficacy demonstrated [[empirical]]ly by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be [[Program correctness|proven correct]], which increases the risk of making errors in its description and implementation. A good example of this risk is a study<ref name=tu04robust/> that presented a promising variant of a [[genetic algorithm]] (another popular metaheuristic), which was later found to be defective: it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. The bias was due to a programming error and has since been fixed.<ref name=tu04corrections/>

==== Bare Bones PSO ====
Initialization of velocities may require extra inputs. The Bare Bones PSO variant<ref>{{Cite book|last=Kennedy|first=James|title=Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706) |chapter=Bare bones particle swarms |date=2003|pages=80–87|doi=10.1109/SIS.2003.1202251|isbn=0-7803-7914-4|s2cid=37185749}}</ref> was proposed in 2003 by James Kennedy, and does not need to use velocity at all. In this variant of PSO one dispenses with the velocity of the particles and instead updates the positions of the particles using the following simple rule,

:<math>
\vec x_i = G\left(\frac{\vec p_i+\vec g}{2},||\vec p_i-\vec g||\right) \,,
</math>

where <math>\vec x_i</math> and <math>\vec p_i</math> are the position and the best position of particle <math>i</math>; <math>\vec g</math> is the global best position; <math>G(\vec x,\sigma)</math> is the [[normal distribution]] with mean <math>\vec x</math> and standard deviation <math>\sigma</math>; and <math>||\dots||</math> signifies the norm of a vector.
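The rule above translates almost directly into code. The sketch below (same conventions and illustrative helper names as the earlier sketch) resamples every particle from a Gaussian centred midway between its personal best and the global best, with the scalar spread <math>||\vec p_i-\vec g||</math> exactly as displayed; the clipping to the search box is an added assumption.

<syntaxhighlight lang="python">
import numpy as np

def bare_bones_pso(f, bounds, n_particles=30, n_iter=200, seed=0):
    """Bare Bones PSO sketch: no velocities are kept at all."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    p, p_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = p[p_val.argmin()].copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(p - g, axis=1, keepdims=True)    # ||p_i - g|| per particle
        x = np.clip(rng.normal((p + g) / 2.0, sigma), lo, hi)   # x_i = G((p_i + g)/2, ||p_i - g||)
        val = np.apply_along_axis(f, 1, x)
        better = val < p_val
        p[better], p_val[better] = x[better], val[better]
        g = p[p_val.argmin()].copy()
    return g, p_val.min()
</syntaxhighlight>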
==== Accelerated Particle Swarm Optimization ====
Another simpler variant is the accelerated particle swarm optimization (APSO),<ref>X. S. Yang, S. Deb and S. Fong, [https://arxiv.org/abs/1203.6577 Accelerated particle swarm optimization and support vector machine for business optimization and applications], NDT 2011, Springer CCIS 136, pp. 53-66 (2011).</ref> which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.<ref>{{Cite web | url=http://www.mathworks.com/matlabcentral/fileexchange/?term=APSO | title=Search Results: APSO - File Exchange - MATLAB Central}}</ref> In this variant of PSO one dispenses with both the particle's velocity and the particle's best position. The particle position is updated according to the following rule,

:<math>
\vec x_i \leftarrow (1-\beta)\vec x_i + \beta \vec g + \alpha L \vec u \,,
</math>

where <math>\vec u</math> is a random uniformly distributed vector, <math>L</math> is the typical length of the problem at hand, and <math>\beta\sim 0.1-0.7</math> and <math>\alpha\sim 0.1-0.5</math> are the parameters of the method. As a refinement of the method, one can decrease <math>\alpha</math> with each iteration, <math>\alpha_n=\alpha_0\gamma^n</math>, where <math>n</math> is the iteration number and <math>0 < \gamma < 1</math> is the decrease control parameter.
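A sketch of APSO under the same conventions as the earlier sketches follows. Since the description above does not specify the range of the uniformly distributed vector <math>\vec u</math>, the zero-centred range used here is an assumption, as are the concrete parameter values.

<syntaxhighlight lang="python">
import numpy as np

def apso_minimize(f, bounds, n_particles=30, n_iter=200,
                  alpha0=0.3, beta=0.5, gamma=0.95, seed=0):
    """APSO sketch: no velocities and no personal bests are kept;
    only the global best g drives the swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    L = hi - lo                                      # typical length of the problem per dimension
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    val = np.apply_along_axis(f, 1, x)
    g, g_val = x[val.argmin()].copy(), val.min()
    for n in range(n_iter):
        alpha = alpha0 * gamma ** n                  # refinement: alpha_n = alpha_0 * gamma^n
        u = rng.uniform(-1.0, 1.0, x.shape)          # uniform random vector (range is an assumption)
        x = np.clip((1 - beta) * x + beta * g + alpha * L * u, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        if val.min() < g_val:
            g, g_val = x[val.argmin()].copy(), val.min()
    return g, g_val
</syntaxhighlight>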
=== Multi-objective optimization ===
PSO has also been applied to [[multi-objective optimization|multi-objective problems]],<ref name=parsopoulos02particle/><ref name=coellocoello02MOPSO/><ref name=MASON2017188/> in which the objective function comparison takes [[Pareto efficiency|Pareto dominance]] into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.

=== Binary, discrete, and combinatorial ===
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, apply a classical PSO, and then demap the result. Such a mapping can be very simple (for example by just using rounded values; see the sketch at the end of this section) or more sophisticated.<ref>Roy, R., Dehuri, S., & Cho, S. B. (2012). [http://sclab.yonsei.ac.kr/publications/Papers/IJ/A%20Novel%20Particle%20Swarm%20Optimization%20Algorithm%20for%20Multi-Objective%20Combinatorial%20Optimization%20Problem.pdf A Novel Particle Swarm Optimization Algorithm for Multi-Objective Combinatorial Optimization Problem]. ''International Journal of Applied Metaheuristic Computing (IJAMC)'', 2(4), 41-57.</ref>

However, it can be noted that the equations of movement make use of operators that perform four actions:
*computing the difference of two positions; the result is a velocity (more precisely a displacement)
*multiplying a velocity by a numerical coefficient
*adding two velocities
*applying a velocity to a position

Usually a position and a velocity are represented by ''n'' real numbers, and these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones.<ref>Kennedy, J. & Eberhart, R. C. (1997). [http://ahmetcevahircinar.com.tr/wp-content/uploads/2017/02/A_discrete_binary_version_of_the_particle_swarm_algorithm.pdf A discrete binary version of the particle swarm algorithm], Conference on Systems, Man, and Cybernetics, Piscataway, NJ: IEEE Service Center, pp. 4104-4109</ref><ref>Clerc, M. (2004). [https://link.springer.com/chapter/10.1007/978-3-540-39930-8_8 Discrete Particle Swarm Optimization, illustrated by the Traveling Salesman Problem], New Optimization Techniques in Engineering, Springer, pp. 219-239</ref><ref>Clerc, M. (2005). Binary Particle Swarm Optimisers: toolbox, derivations, and mathematical insights, [http://hal.archives-ouvertes.fr/hal-00122809/en/ Open Archive HAL]</ref><ref>{{cite journal|doi=10.1016/j.amc.2007.04.096|title=A combinatorial particle swarm optimization for solving multi-mode resource-constrained project scheduling problems|journal=Applied Mathematics and Computation|volume=195|pages=299–308|year=2008|last1=Jarboui|first1=B.|last2=Damak|first2=N.|last3=Siarry|first3=P.|last4=Rebai|first4=A.}}</ref> One approach is to redefine the operators based on sets.<ref name=Chen10SPSO />
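As an illustration of the simple rounding mapping mentioned above, the following sketch wraps an integer-valued objective for the continuous pso_minimize sketch given at the top of this section (itself illustrative), rounding each candidate before evaluation and demapping the final result; the helper name is hypothetical.

<syntaxhighlight lang="python">
import numpy as np

def discrete_via_rounding(f_discrete, bounds, **kwargs):
    """Round-and-demap sketch: solve an integer-valued problem
    with a continuous PSO by rounding each candidate point."""
    f_relaxed = lambda z: f_discrete(np.rint(z).astype(int))  # map: round to nearest integers
    x, fx = pso_minimize(f_relaxed, bounds, **kwargs)         # classical PSO on the relaxation
    return np.rint(x).astype(int), fx                         # demap the continuous optimum

# Example: minimize a separable integer function over {-5,...,5}^4
best, val = discrete_via_rounding(lambda k: int(np.sum((k - 2) ** 2)), [(-5, 5)] * 4)
</syntaxhighlight>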