{{Use American English|date=January 2019}} {{Short description|Iterative simulation method}}
[[File:ParticleSwarmArrowsAnimation.gif|thumb|A particle swarm searching for the [[global minimum]] of a function]]
In [[computational science]], '''particle swarm optimization''' ('''PSO''')<ref name=bonyadi16survey/> is a computational method that [[Mathematical optimization|optimizes]] a problem by [[iterative method|iteratively]] trying to improve a [[candidate solution]] with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed [[Point particle|particle]]s, and moving these particles around in the [[Optimization (mathematics)#Concepts and notation|search-space]] according to simple [[formula|mathematical formulae]] over the particle's [[Position (vector)|position]] and [[velocity]]. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.

PSO is originally attributed to [[James Kennedy (social psychologist)|Kennedy]], [[Russell C. Eberhart|Eberhart]] and [[Yuhui Shi|Shi]]<ref name=kennedy95particle/><ref name=shi98modified/> and was first intended for [[computer simulation|simulating]] [[social behaviour]],<ref name=kennedy97particle/> as a stylized representation of the movement of organisms in a bird [[Flocking (behavior)|flock]] or [[fish school]]. The algorithm was later simplified, and it was observed to be performing optimization. The book by Kennedy and Eberhart<ref name=kennedy01swarm/> describes many philosophical aspects of PSO and [[swarm intelligence]]. An extensive survey of PSO applications was made by [[Riccardo Poli|Poli]].<ref name=poli07analysis/><ref name=poli08analysis/> In 2017, a comprehensive review of theoretical and experimental works on PSO was published by Bonyadi and Michalewicz.<ref name=bonyadi16survey/>

PSO is a [[metaheuristic]], as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. Also, PSO does not use the [[gradient]] of the problem being optimized, which means PSO does not require that the optimization problem be [[Differentiable function|differentiable]], as is required by classic optimization methods such as [[gradient descent]] and [[quasi-newton methods|quasi-Newton methods]]. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found.

== Algorithm ==
A basic variant of the PSO algorithm works by having a population (called a swarm) of [[candidate solution]]s (called particles). These particles are moved around in the search-space according to a few simple formulae.<ref>{{cite journal|last1=Zhang|first1=Y.|title=A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications|journal=Mathematical Problems in Engineering|date=2015|volume=2015|page=931256|url=http://www.hindawi.com/journals/mpe/2015/931256}}</ref> The movements of the particles are guided by their own best-known position in the search-space as well as the entire swarm's best-known position. As improved positions are discovered, these come to guide the movements of the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.

Formally, let ''f'': ℝ<sup>''n''</sup> → ℝ be the cost function which must be minimized.
The function takes a candidate solution as an argument in the form of a [[Row vector|vector]] of [[real number]]s and produces a real number as output which indicates the objective function value of the given candidate solution. The [[gradient]] of ''f'' is not known. The goal is to find a solution '''a''' for which ''f''('''a''') ≤ ''f''('''b''') for all '''b''' in the search-space, which would mean '''a''' is the global minimum.

Let ''S'' be the number of particles in the swarm, each having a position '''x'''<sub>i</sub> ∈ ℝ<sup>''n''</sup> in the search-space and a velocity '''v'''<sub>i</sub> ∈ ℝ<sup>''n''</sup>. Let '''p'''<sub>i</sub> be the best known position of particle ''i'' and let '''g''' be the best known position of the entire swarm. A basic PSO algorithm to minimize the cost function is then:<ref name=clerc12spso/> <!-- Please see discussion page why this particular PSO variant was chosen. -->

 '''for''' each particle ''i'' = 1, ..., ''S'' '''do'''
     Initialize the particle's position with a [[Uniform distribution (continuous)|uniformly distributed]] random vector: '''x'''<sub>i</sub> ~ ''U''('''b<sub>lo</sub>''', '''b<sub>up</sub>''')
     Initialize the particle's best known position to its initial position: '''p'''<sub>i</sub> ← '''x'''<sub>i</sub>
     '''if''' ''f''('''p'''<sub>i</sub>) < ''f''('''g''') '''then'''
         update the swarm's best known position: '''g''' ← '''p'''<sub>i</sub>
     Initialize the particle's velocity: '''v'''<sub>i</sub> ~ ''U''(-|'''b<sub>up</sub>'''-'''b<sub>lo</sub>'''|, |'''b<sub>up</sub>'''-'''b<sub>lo</sub>'''|)
 '''while''' a termination criterion is not met '''do''':
     '''for''' each particle ''i'' = 1, ..., ''S'' '''do'''
         '''for''' each dimension ''d'' = 1, ..., ''n'' '''do'''
             Pick random numbers: ''r''<sub>p</sub>, ''r''<sub>g</sub> ~ ''U''(0,1)
             Update the particle's velocity: '''v'''<sub>i,d</sub> ← w '''v'''<sub>i,d</sub> + φ<sub>p</sub> ''r''<sub>p</sub> ('''p'''<sub>i,d</sub>-'''x'''<sub>i,d</sub>) + φ<sub>g</sub> ''r''<sub>g</sub> ('''g'''<sub>d</sub>-'''x'''<sub>i,d</sub>)
         Update the particle's position: '''x'''<sub>i</sub> ← '''x'''<sub>i</sub> + '''v'''<sub>i</sub>
         '''if''' ''f''('''x'''<sub>i</sub>) < ''f''('''p'''<sub>i</sub>) '''then'''
             Update the particle's best known position: '''p'''<sub>i</sub> ← '''x'''<sub>i</sub>
             '''if''' ''f''('''p'''<sub>i</sub>) < ''f''('''g''') '''then'''
                 Update the swarm's best known position: '''g''' ← '''p'''<sub>i</sub>

The values '''b<sub>lo</sub>''' and '''b<sub>up</sub>''' represent the lower and upper boundaries of the search-space respectively. The w parameter is the inertia weight. The parameters φ<sub>p</sub> and φ<sub>g</sub> are often called the cognitive coefficient and the social coefficient. The termination criterion can be the number of iterations performed, or the discovery of a solution with an adequate objective function value.<ref name="bratton2007" /> The parameters w, φ<sub>p</sub>, and φ<sub>g</sub> are selected by the practitioner and control the behaviour and efficacy of the PSO method ([[#Parameter selection|below]]).
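For illustration only, the pseudocode above translates into the following vectorized Python sketch. The swarm size, iteration budget, and the parameter values w = 0.8 and φ<sub>p</sub> = φ<sub>g</sub> = 2.0 are arbitrary example choices, and the swarm's best position is updated once per iteration rather than per particle:

<syntaxhighlight lang="python">
import numpy as np

def pso(f, b_lo, b_up, num_particles=40, max_iter=100,
        w=0.8, phi_p=2.0, phi_g=2.0):
    """Minimize f over the box [b_lo, b_up] with the basic PSO above."""
    rng = np.random.default_rng()
    b_lo, b_up = np.asarray(b_lo, float), np.asarray(b_up, float)
    n = b_lo.size
    # Initialize positions uniformly in the search-space and velocities
    # in [-|b_up - b_lo|, |b_up - b_lo|], as in the pseudocode.
    x = rng.uniform(b_lo, b_up, (num_particles, n))
    span = np.abs(b_up - b_lo)
    v = rng.uniform(-span, span, (num_particles, n))
    p = x.copy()                            # best known positions p_i
    p_val = np.apply_along_axis(f, 1, x)    # f(p_i)
    g = p[np.argmin(p_val)].copy()          # swarm's best known position
    g_val = p_val.min()
    for _ in range(max_iter):               # termination: iteration budget
        r_p = rng.random((num_particles, n))
        r_g = rng.random((num_particles, n))
        v = w * v + phi_p * r_p * (p - x) + phi_g * r_g * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < p_val               # update personal bests
        p[better], p_val[better] = x[better], vals[better]
        if p_val.min() < g_val:             # update swarm best
            g, g_val = p[np.argmin(p_val)].copy(), p_val.min()
    return g, g_val

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), [-5.0]*5, [5.0]*5)
</syntaxhighlight>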
== Parameter selection ==
[[File:PSO Meta-Fitness Landscape (12 benchmark problems).JPG|thumb|Performance landscape showing how a simple PSO variant performs in aggregate on several benchmark problems when varying two PSO parameters.]]
The choice of PSO parameters can have a large impact on optimization performance. Selecting PSO parameters that yield good performance has therefore been the subject of much research.<ref name=taherkhani2016inertia/><ref name=shi98parameter/><ref name=eberhart00comparing/><ref name=carlisle01offtheshelf/><ref name=bergh01thesis/><ref name=clerc02explosion/><ref name=trelea03particle/><ref name=bratton08simplified/><ref name=evers09thesis/>

To prevent divergence ("explosion"), the inertia weight must be smaller than 1. The two other parameters can then be derived via the constriction approach,<ref name="clerc02explosion" /> or freely selected, but the analyses suggest convergence domains to constrain them. Typical values are in <math>[1,3]</math>.

The PSO parameters can also be tuned by using another overlaying optimizer, a concept known as [[meta-optimization]],<ref name="meissner06optimized" /><ref name="pedersen08thesis" /><ref name="pedersen08simplifying" /><ref name="mason2017meta" /> or even fine-tuned during the optimization, e.g., by means of fuzzy logic.<ref name="nobile2017" /><ref name="nobile2015" /> Parameters have also been tuned for various optimization scenarios.<ref name=cazzaniga2015/><ref name=pedersen10good-pso/>

==Neighbourhoods and topologies==
The topology of the swarm defines the subset of particles with which each particle can exchange information.<ref name=kennedy2002population/> The basic version of the algorithm uses the global topology as the swarm communication structure.<ref name=bratton2007/> This topology allows all particles to communicate with all the other particles, so the whole swarm shares the same best position '''g''' from a single particle. However, this approach might lead the swarm to be trapped in a local minimum,<ref>Mendes, R. (2004). [https://pdfs.semanticscholar.org/d224/80b09d1f0759fb20e0fb0bd2de205457c8bc.pdf Population Topologies and Their Influence in Particle Swarm Performance] (PhD thesis). Universidade do Minho.</ref> so different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles.<ref name=bratton2007/> This subset can be a geometrical one<ref>Suganthan, Ponnuthurai N. "[https://ieeexplore.ieee.org/abstract/document/785514/ Particle swarm optimiser with neighbourhood operator]." Evolutionary Computation, 1999. CEC 99. Proceedings of the 1999 Congress on. Vol. 3. IEEE, 1999.</ref>, for example "the ''m'' nearest particles", or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).

A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others.<ref name=bratton2007/> The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles,<ref name=oliveira2016communication/> some efforts have been made to create adaptive topologies (SPSO,<ref>SPSO [http://www.particleswarm.info Particle Swarm Central]</ref> APSO,<ref>Almasi, O. N. and Khooban, M. H. (2017). A parsimonious SVM model selection criterion for classification of real-world data sets via an adaptive population-based algorithm. Neural Computing and Applications, 1-9. [https://link.springer.com/article/10.1007/s00521-017-2930-y https://doi.org/10.1007/s00521-017-2930-y]</ref> stochastic star,<ref>Miranda, V., Keko, H. and Duque, Á. J. (2008).
[https://repositorio.inesctec.pt/bitstream/123456789/1561/1/PS-05818.pdf Stochastic Star Communication Topology in Evolutionary Particle Swarms (EPSO)]. International Journal of Computational Intelligence Research (IJCIR), Volume 4, Number 2, pp. 105-116</ref> TRIBES,<ref>Clerc, M. (2006). Particle Swarm Optimization. ISTE (International Scientific and Technical Encyclopedia), 2006</ref> Cyber Swarm,<ref>Yin, P., Glover, F., Laguna, M., & Zhu, J. (2011). [http://leeds-faculty.colorado.edu/glover/fred%20pubs/428%20-%20A_complementary_cyber_swarm_algorithm_pub%20version%20w%20pen%20et%20al.pdf A Complementary Cyber Swarm Algorithm]. International Journal of Swarm Intelligence Research (IJSIR), 2(2), 22-41</ref> and C-PSO<ref name=elshamy07sis/>).

By using the ring topology, PSO can attain generation-level parallelism, significantly enhancing the evolutionary speed.<ref>{{cite journal |last1=Jian-Yu |first1=Li |title=Generation-Level Parallelism for Evolutionary Computation: A Pipeline-Based Parallel Particle Swarm Optimization |journal=IEEE Transactions on Cybernetics |date=2021 |volume=51 |issue=10 |pages=4848–4859 |doi=10.1109/TCYB.2020.3028070 |pmid=33147159 |url=https://ieeexplore.ieee.org/document/9248594|url-access=subscription }}</ref>
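For illustration, the sketch below computes each particle's neighbourhood best under a ring topology (each particle sees itself and its two neighbours); the helper name and array layout are choices of the example, not part of any standard implementation:

<syntaxhighlight lang="python">
import numpy as np

def ring_local_best(p, p_val):
    """For each particle i, return the best personal-best position among
    its ring neighbourhood {i-1, i, i+1} (indices wrap around).
    p: (S, n) personal-best positions; p_val: (S,) their objective values.
    The result replaces the global g, per particle, in the velocity update."""
    S = len(p_val)
    local = np.empty_like(p)
    for i in range(S):
        neigh = [(i - 1) % S, i, (i + 1) % S]
        local[i] = p[neigh[np.argmin(p_val[neigh])]]
    return local
</syntaxhighlight>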
== Inner workings ==
There are several [[schools of thought]] as to why and how the PSO algorithm can perform optimization.

A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO.<ref name=shi98modified/><ref name=kennedy97particle/><ref name=shi98parameter/><ref name=clerc02explosion/> It contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoid [[premature convergence]] to a [[local optimum]] yet still ensure a good rate of [[Convergent sequence|convergence]] to the optimum. This belief is the precursor of many PSO variants, see [[#Variants|below]].

Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to e.g. exploration and exploitation. Such studies have led to the simplification of the PSO algorithm, see [[#Simplifications|below]].

=== Convergence ===
In relation to PSO the word ''convergence'' typically refers to two different definitions:
* Convergence of the sequence of solutions (also known as stability analysis, [[convergent sequence|converging]]), in which all particles have converged to a point in the search-space, which may or may not be the optimum,
* Convergence to a local optimum, where all personal bests '''p''' or, alternatively, the swarm's best known position '''g''' approach a local optimum of the problem, regardless of how the swarm behaves.

Convergence of the sequence of solutions has been investigated for PSO.<ref name=bergh01thesis/><ref name=clerc02explosion/><ref name=trelea03particle/> These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen<ref name=pedersen08simplifying/> for being oversimplified, as they assume the swarm has only one particle, that it does not use stochastic variables, and that the points of attraction, that is, the particle's best known position '''p''' and the swarm's best known position '''g''', remain constant throughout the optimization process. However, it was shown<ref>{{cite book|last1=Cleghorn|first1=Christopher W|title=Swarm Intelligence |chapter=Particle Swarm Convergence: Standardized Analysis and Topological Influence |volume=8667|pages=134–145|date=2014|doi=10.1007/978-3-319-09952-1_12|series=Lecture Notes in Computer Science|isbn=978-3-319-09951-4}}</ref> that these simplifications do not affect the boundaries found by these studies for the parameters where the swarm is convergent. Considerable effort has been made in recent years to weaken the modeling assumptions utilized during the stability analysis of PSO,<ref name=Liu2015/> with the most recent generalized result applying to numerous PSO variants and utilizing what was shown to be the minimal necessary modeling assumptions.<ref name=Cleghorn2018/>

Convergence to a local optimum has been analyzed for PSO.<ref>{{cite journal|last1=Van den Bergh|first1=F|title=A convergence proof for the particle swarm optimizer |journal=Fundamenta Informaticae|url=https://repository.up.ac.za/bitstream/handle/2263/17262/VanDenBergh_Convergence(2010).pdf?sequence=1}}</ref><ref name=Bonyadi2014/> It has been proven that PSO needs some modification to guarantee finding a local optimum. This means that determining the convergence capabilities of different PSO algorithms and parameters still depends on [[empirical]] results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship between '''p''' and '''g''', so as to form a leading converging exemplar and to be effective with any PSO topology. The aim is to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness.<ref name=zhan10OLPSO/> However, such studies do not provide theoretical evidence to actually prove their claims.

=== Adaptive mechanisms ===
Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO)<ref name=zhan09adaptive/> features better search efficiency than standard PSO. APSO can perform a global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, thereby improving the search effectiveness and efficiency at the same time. APSO can also act on the globally best particle to jump out of likely local optima. However, while APSO introduces new algorithm parameters, it does not introduce additional design or implementation complexity.
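The simplest form of run-time parameter control is a fixed schedule rather than a state-dependent rule; a classic baseline is a linearly decreasing inertia weight. The sketch below shows that baseline with the commonly cited 0.9 to 0.4 range; it is not the APSO mechanism itself, which adapts parameters from the observed swarm state:

<syntaxhighlight lang="python">
def inertia_weight(t, max_iter, w_start=0.9, w_end=0.4):
    """Linearly decrease the inertia weight over the run, shifting the
    swarm from exploration (large w) toward exploitation (small w).
    A fixed-schedule baseline only; APSO instead adapts parameters
    from the run-time state of the swarm."""
    return w_start - (w_start - w_end) * t / max_iter
</syntaxhighlight>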
In addition, through a scale-adaptive fitness evaluation mechanism, PSO can efficiently address computationally expensive optimization problems.<ref>{{cite journal |last1=Wang |first1=Ye-Qun |last2=Li |first2=Jian-Yu |last3=Chen |first3=Chun-Hua |last4=Zhang |first4=Jun |last5=Zhan |first5=Zhi-Hui |title=Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning |journal=CAAI Transactions on Intelligence Technology |date=September 2023 |volume=8 |issue=3 |pages=849–862 |doi=10.1049/cit2.12106 |doi-access=free }}</ref>

== Variants ==
<!-- Please provide complete references to journal / conference papers, tech reports, msc/phd theses, etc. when adding PSO variants. Please only include major research contributions and keep the descriptions short - there are probably hundreds of PSO variants in existence and Wikipedia is not the proper place to list them all. -->
Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. starting with zero velocities instead), to dampen the velocity, or to update '''p'''<sub>i</sub> and '''g''' only after the entire swarm has been updated. Some of these choices and their possible performance impact have been discussed in the literature.<ref name=carlisle01offtheshelf/>

A series of standard implementations has been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances."<ref name=bratton2007/> The latest is Standard PSO 2011 (SPSO-2011).<ref name=Zambrano-Bigiarini2013/>

In addition, some PSO variants have been developed to solve large-scale global optimization (LSGO) problems with more than 1000 dimensions. Representative variants include the competitive swarm optimizer (CSO) and the level-based learning swarm optimizer (LLSO).<ref name=llso/> Recently, PSO has also been extended to solve multi-agent consensus-based distributed optimization problems, e.g., multi-agent consensus-based PSO with adaptive internal and external learning (MASOIE).<ref name=masoie/>

=== Hybridization ===
New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers,<ref name=lovbjerg02lifecycle/><ref name=niknam10efficient/><ref name=zx03depso/> e.g., combining PSO with biogeography-based optimization,<ref>{{cite journal|last1=Zhang|first1=Y.|last2=Wang|first2=S.|title=Pathological Brain Detection in Magnetic Resonance Imaging Scanning by Wavelet Entropy and Hybridization of Biogeography-based Optimization and Particle Swarm Optimization|journal=Progress in Electromagnetics Research|date=2015|volume=152|pages=41–58|doi=10.2528/pier15040602|doi-access=free}}</ref> and the incorporation of an effective learning method.<ref name=zhan10OLPSO/>

=== Alleviate premature convergence ===
Another research trend is to try to alleviate premature convergence (that is, optimization stagnation), e.g.
by reversing or perturbing the movement of the PSO particles.<ref name=evers09thesis/><ref name=lovbjerg02extending/><ref name=xinchao10perturbed/><ref name=xzy02dpso/> Another approach to dealing with premature convergence is the use of multiple swarms<ref>{{cite journal | last1 = Cheung | first1 = N. J. | last2 = Ding | first2 = X.-M. | last3 = Shen | first3 = H.-B. | year = 2013 | title = OptiFel: A Convergent Heterogeneous Particle Swarm Optimization Algorithm for Takagi-Sugeno Fuzzy Modeling | journal = IEEE Transactions on Fuzzy Systems | volume = 22| issue = 4| pages = 919–933 | doi = 10.1109/TFUZZ.2013.2278972 | s2cid = 27974467 }}</ref> ([[multi-swarm optimization]]). The multi-swarm approach can also be used to implement multi-objective optimization.<ref name=nobile2012 /> Finally, there are developments in adapting the behavioural parameters of PSO during optimization.<ref name=zhan09adaptive/><ref name=nobile2017/>

=== Simplifications ===
Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as [[Occam's razor]]. Simplifying PSO was originally suggested by Kennedy<ref name=kennedy97particle/> and has been studied more extensively,<ref name=bratton08simplified/><ref name=pedersen08thesis/><ref name=pedersen08simplifying/><ref name=yang08nature/> where it appeared that optimization performance was improved, and the parameters were easier to tune and performed more consistently across different optimization problems.

Another argument in favour of simplifying PSO is that [[metaheuristic]]s can only have their efficacy demonstrated [[empirical]]ly by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be [[Program correctness|proven correct]], and this increases the risk of making errors in its description and implementation. A cautionary example<ref name=tu04robust/> presented a promising variant of a [[genetic algorithm]] (another popular metaheuristic), but it was later found to be defective, as it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. This bias was caused by a programming error and has since been fixed.<ref name=tu04corrections/>

==== Bare Bones PSO ====
Initialization of velocities may require extra inputs. The Bare Bones PSO variant<ref>{{Cite book|last=Kennedy|first=James|title=Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706) |chapter=Bare bones particle swarms |date=2003|pages=80–87|doi=10.1109/SIS.2003.1202251|isbn=0-7803-7914-4|s2cid=37185749}}</ref> was proposed in 2003 by James Kennedy, and does not need to use velocity at all. In this variant of PSO one dispenses with the velocity of the particles and instead updates the positions of the particles using the following simple rule,

:<math> \vec x_i = G\left(\frac{\vec p_i+\vec g}{2},||\vec p_i-\vec g||\right) \,, </math>

where <math>\vec x_i</math>, <math>\vec p_i</math> are the position and the best position of the particle <math>i</math>; <math>\vec g</math> is the global best position; <math>G(\vec x,\sigma)</math> is the [[normal distribution]] with the mean <math>\vec x</math> and standard deviation <math>\sigma</math>; and where <math>||\dots||</math> signifies the norm of a vector.
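A sketch of one such update step, following the formula above (the array shapes are assumptions of the example):

<syntaxhighlight lang="python">
import numpy as np

def bare_bones_step(p, g, rng=None):
    """One Bare Bones PSO update: resample every position from a Gaussian
    centred midway between the personal best p[i] and the global best g,
    with standard deviation ||p[i] - g||.  p: (S, n); g: (n,)."""
    if rng is None:
        rng = np.random.default_rng()
    mean = (p + g) / 2.0
    sigma = np.linalg.norm(p - g, axis=1, keepdims=True)  # (S, 1)
    return rng.normal(mean, sigma)  # broadcasts the spread over dimensions
</syntaxhighlight>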
==== Accelerated Particle Swarm Optimization ====
Another simpler variant is the accelerated particle swarm optimization (APSO),<ref>X. S. Yang, S. Deb and S. Fong, [https://arxiv.org/abs/1203.6577 Accelerated particle swarm optimization and support vector machine for business optimization and applications], NDT 2011, Springer CCIS 136, pp. 53-66 (2011).</ref> which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.<ref>{{Cite web | url=http://www.mathworks.com/matlabcentral/fileexchange/?term=APSO | title=Search Results: APSO - File Exchange - MATLAB Central}}</ref> In this variant of PSO one dispenses with both the particle's velocity and the particle's best position. The particle position is updated according to the following rule,

:<math> \vec x_i \leftarrow (1-\beta)\vec x_i + \beta \vec g + \alpha L \vec u \,, </math>

where <math>\vec u</math> is a random uniformly distributed vector, <math>L</math> is the typical length of the problem at hand, and <math>\beta\sim 0.1-0.7</math> and <math>\alpha\sim 0.1-0.5</math> are the parameters of the method. As a refinement of the method one can decrease <math>\alpha</math> with each iteration, <math>\alpha_n=\alpha_0\gamma^n</math>, where <math>n</math> is the number of the iteration and <math>0 < \gamma < 1</math> is the decrease control parameter.
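A minimal sketch of this update rule follows. The parameter values lie in the ranges given above, and drawing <math>\vec u</math> from ''U''(-0.5, 0.5) is an assumption of the example, chosen so the perturbation has zero mean:

<syntaxhighlight lang="python">
import numpy as np

def apso_step(x, g, beta=0.3, alpha=0.2, L=1.0, rng=None):
    """One accelerated-PSO update: pull all positions toward the global
    best g, plus a uniform random perturbation of scale alpha * L."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(-0.5, 0.5, x.shape)  # zero-mean choice for u
    return (1 - beta) * x + beta * g + alpha * L * u
</syntaxhighlight>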
===Multi-objective optimization===
PSO has also been applied to [[multi-objective optimization|multi-objective problems]],<ref name=parsopoulos02particle/><ref name=coellocoello02MOPSO/><ref name=MASON2017188/> in which the objective function comparison takes [[Pareto efficiency|Pareto dominance]] into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.

===Binary, discrete, and combinatorial===
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, to apply a classical PSO, and then to demap the result. Such a mapping can be very simple (for example by just using rounded values, as in the sketch below) or more sophisticated.<ref>Roy, R., Dehuri, S., & Cho, S. B. (2012). [http://sclab.yonsei.ac.kr/publications/Papers/IJ/A%20Novel%20Particle%20Swarm%20Optimization%20Algorithm%20for%20Multi-Objective%20Combinatorial%20Optimization%20Problem.pdf A Novel Particle Swarm Optimization Algorithm for Multi-Objective Combinatorial Optimization Problem]. 'International Journal of Applied Metaheuristic Computing (IJAMC)', 2(4), 41-57</ref>

However, it can be noted that the equations of movement make use of operators that perform four actions:
*computing the difference of two positions; the result is a velocity (more precisely a displacement),
*multiplying a velocity by a numerical coefficient,
*adding two velocities,
*applying a velocity to a position.

Usually a position and a velocity are represented by ''n'' real numbers, and these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones.<ref>Kennedy, J. & Eberhart, R. C. (1997). [http://ahmetcevahircinar.com.tr/wp-content/uploads/2017/02/A_discrete_binary_version_of_the_particle_swarm_algorithm.pdf A discrete binary version of the particle swarm algorithm], Conference on Systems, Man, and Cybernetics, Piscataway, NJ: IEEE Service Center, pp. 4104-4109</ref><ref>Clerc, M. (2004). [https://link.springer.com/chapter/10.1007/978-3-540-39930-8_8 Discrete Particle Swarm Optimization, illustrated by the Traveling Salesman Problem], New Optimization Techniques in Engineering, Springer, pp. 219-239</ref><ref>Clerc, M. (2005). Binary Particle Swarm Optimisers: toolbox, derivations, and mathematical insights, [http://hal.archives-ouvertes.fr/hal-00122809/en/ Open Archive HAL]</ref><ref>{{cite journal|doi=10.1016/j.amc.2007.04.096|title=A combinatorial particle swarm optimization for solving multi-mode resource-constrained project scheduling problems|journal=Applied Mathematics and Computation|volume=195|pages=299–308|year=2008|last1=Jarboui|first1=B.|last2=Damak|first2=N.|last3=Siarry|first3=P.|last4=Rebai|first4=A.}}</ref> One approach is to redefine the operators based on sets.<ref name=Chen10SPSO />
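For the simple rounding-based mapping mentioned above, a minimal sketch (the wrapper name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def rounded(f):
    """Wrap a discrete objective f so a continuous PSO can optimize it:
    particle positions stay real-valued, but each candidate is rounded
    to the nearest integer vector before evaluation (the 'demap' step)."""
    return lambda x: f(np.rint(x).astype(int))
</syntaxhighlight>

The wrapped function can then be passed unchanged to a continuous PSO such as the sketch in the [[#Algorithm|algorithm section]].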
== See also ==
<!-- Please only add optimizers that are conceptually related to PSO. The navbox at the bottom contains the most popular optimizers in this field, and the categories lists all optimizers. Spam will be removed. -->
* [[Artificial bee colony algorithm]]
* [[Bees algorithm]]
* [[Derivative-free optimization]]
* [[Multi-swarm optimization]]
* [[Particle filter]]
* [[Swarm intelligence]]
* [[Fish School Search]]
* [[Dispersive flies optimisation]]
* [[Consensus based optimization]]

== References ==
<!-- Please provide complete references to journal / conference papers, tech reports, msc/phd theses, etc. and make sure the reference format is correct. Only include major research contributions - there are hundreds of PSO variants in existence and Wikipedia is not the proper place to list them all. Only add a reference when you use it in a concise description in the main article. -->
{{Reflist|refs=
<ref name="lovbjerg02lifecycle">{{cite conference | first1=M. | last1=Lovbjerg | title=The LifeCycle Model: combining particle swarm optimisation, genetic algorithms and hillclimbers | last2=Krink | first2=T. | book-title=Proceedings of Parallel Problem Solving from Nature VII (PPSN) | year=2002 | pages=621–630|url=http://www.lovbjerghome.dk/Morten/EvaLife/TK_PPSN2002_LifeCycle_PSO_HC_GA.pdf}}</ref>
<ref name="bonyadi16survey">{{cite journal | first1=M. R. | last1=Bonyadi | title=Particle swarm optimization for single objective continuous space problems: a review | last2=Michalewicz | first2=Z. | journal=Evolutionary Computation | year=2017 | volume=25 | issue=1 | pages=1–54 | doi=10.1162/EVCO_r_00180| pmid=26953883 | s2cid=8783143 }}</ref>
<ref name="niknam10efficient">{{cite journal | first1=T. | last1=Niknam | title=An efficient hybrid approach based on PSO, ACO and k-means for cluster analysis | last2=Amiri | first2=B. | journal=Applied Soft Computing | year=2010 | volume=10 | issue=1 | pages=183–197 | doi=10.1016/j.asoc.2009.07.001}}</ref>
<ref name="lovbjerg02extending">{{cite conference | first1=M. | last1=Lovbjerg | title=Extending Particle Swarm Optimisers with Self-Organized Criticality | last2=Krink | first2=T. | book-title=Proceedings of the Fourth Congress on Evolutionary Computation (CEC) | year=2002 | volume=2 | pages=1588–1593|url=http://www.lovbjerghome.dk/Morten/EvaLife/ML_CEC2002_SOCPSO.pdf}}</ref>
<ref name="xinchao10perturbed">{{cite journal | title=A perturbed particle swarm algorithm for numerical optimization | last=Xinchao | first=Z. | journal=Applied Soft Computing | year=2010 | volume=10 | issue=1 | pages=119–124 | doi=10.1016/j.asoc.2009.06.010}}</ref>
<ref name="zhan09adaptive">{{cite journal | first1=Z-H. | last1=Zhan | title=Adaptive Particle Swarm Optimization | last2=Zhang | first2=J. | last3=Li | first3=Y | last4=Chung | first4=H.S-H. | journal=IEEE Transactions on Systems, Man, and Cybernetics | year=2009 | volume=39 | issue=6 | pages=1362–1381 | doi=10.1109/TSMCB.2009.2015956 | pmid=19362911 | s2cid=11191625 | url=http://eprints.gla.ac.uk/7645/1/7645.pdf }}</ref>
<ref name="zhan10OLPSO">{{cite journal | first1=Z-H. | last1=Zhan | title=Orthogonal Learning Particle Swarm Optimization | last2=Zhang | first2=J. | last3=Li | first3=Y | last4=Shi | first4=Y-H. | journal=IEEE Transactions on Evolutionary Computation | year=2011 | volume=15 | issue=6 | pages=832–847| doi=10.1109/TEVC.2010.2052054 | url=http://eprints.gla.ac.uk/44801/1/44801.pdf }}</ref>
<ref name="Bonyadi2014">{{cite journal | first1=Mohammad reza. | last1=Bonyadi | title=A locally convergent rotationally invariant particle swarm optimization algorithm | last2=Michalewicz | first2=Z. | journal=Swarm Intelligence | year=2014 | volume=8 | issue=3 | pages=159–198 | doi=10.1007/s11721-014-0095-1| s2cid=2261683 | url=https://espace.library.uq.edu.au/view/UQ:396054/ERAUQ396054.pdf }}</ref>
<!-- not used <ref name="Bonyadi2014Analysis">{{cite journal | first1=Mohammad reza. | last1=Bonyadi | title=An analysis of the velocity updating rule of the particle swarm optimization algorithm | last2=Michalewicz | first2=Z. | journal=Journal of Heuristics | year=2014 | volume=20 | issue=4 | pages=417–452 | doi=10.1007/s10732-014-9245-2}}</ref> -->
<ref name="Cleghorn2018">{{cite journal | first1=Christopher W. | last1=Cleghorn | title=Particle Swarm Stability: A Theoretical Extension using the Non-Stagnate Distribution Assumption | last2=Engelbrecht | first2=Andries. | journal=Swarm Intelligence | year=2018 | volume=12 | issue=1 | pages=1–22 | doi=10.1007/s11721-017-0141-x| hdl=2263/62934 | s2cid=9778346 | hdl-access=free }}</ref>
<ref name="Liu2015">{{cite journal | first1=Q | last1=Liu | title=Order-2 stability analysis of particle swarm optimization | journal=Evolutionary Computation | year=2015 | volume=23 | issue=2 | pages=187–216 | doi=10.1162/EVCO_a_00129| pmid=24738856 | s2cid=25471827 }}</ref>
<ref name="kennedy95particle">{{cite conference | first1=J. | last1=Kennedy | title=Particle Swarm Optimization | last2=Eberhart | first2=R. | book-title=Proceedings of IEEE International Conference on Neural Networks | year=1995 | volume=IV | pages=1942–1948 | doi=10.1109/ICNN.1995.488968}}</ref>
<ref name="kennedy97particle">{{cite conference | first1=J. | last1=Kennedy | title=The particle swarm: social adaptation of knowledge | book-title=Proceedings of IEEE International Conference on Evolutionary Computation | year=1997 | pages=303–308| doi=10.1109/ICEC.1997.592326 }}</ref>
<ref name="shi98modified">{{cite conference | first1=Y. | last1=Shi | title=A modified particle swarm optimizer | last2=Eberhart | first2=R.C. | book-title=Proceedings of IEEE International Conference on Evolutionary Computation | year=1998 | pages=69–73| doi=10.1109/ICEC.1998.699146 }}</ref>
<ref name="shi98parameter">{{cite conference | first1=Y. | last1=Shi | title=Parameter selection in particle swarm optimization | last2=Eberhart | first2=R.C. | book-title=Proceedings of Evolutionary Programming VII (EP98) | year=1998 | pages=591–600}}</ref>
<ref name="eberhart00comparing">{{cite conference | first1=R.C. | last1=Eberhart | title=Comparing inertia weights and constriction factors in particle swarm optimization | last2=Shi | first2=Y.
| book-title=Proceedings of the Congress on Evolutionary Computation | year=2000 | volume=1 | pages=84–88|url=https://www.researchgate.net/publication/3865142}}</ref>
<ref name="carlisle01offtheshelf">{{cite conference | archive-date=2003-05-03 | archive-url=https://web.archive.org/web/20030503203304/http://antho.huntingdon.edu/publications/Off-The-Shelf_PSO.pdf | first1=A. | last1=Carlisle | url=http://antho.huntingdon.edu/publications/Off-The-Shelf_PSO.pdf | title=An Off-The-Shelf PSO | last2=Dozier | first2=G. | book-title=Proceedings of the Particle Swarm Optimization Workshop | url-status=dead | year=2001 | pages=1–6}}</ref>
<ref name="zx03depso">Zhang, Wen-Jun; Xie, Xiao-Feng (2003). [http://www.wiomax.com/team/xie/paper/SMCC03.pdf DEPSO: hybrid particle swarm with differential evolution operator]. ''IEEE International Conference on Systems, Man, and Cybernetics'' (SMCC), Washington, DC, USA: 3816-3821.</ref>
<ref name="bergh01thesis">{{cite book | title=An Analysis of Particle Swarm Optimizers | publisher=University of Pretoria, Faculty of Natural and Agricultural Science | last=van den Bergh | first=F. | year=2001 | type=PhD thesis|url=http://bee22.com/resources/Bergh%202006.pdf}}</ref>
<ref name="trelea03particle">{{cite journal | title=The Particle Swarm Optimization Algorithm: convergence analysis and parameter selection | last=Trelea | first=I.C. | journal=Information Processing Letters | year=2003 | volume=85 | issue=6 | pages=317–325 | doi=10.1016/S0020-0190(02)00447-7}}</ref>
<ref name="clerc02explosion">{{cite journal | first1=M. | last1=Clerc | title=The particle swarm - explosion, stability, and convergence in a multidimensional complex space | last2=Kennedy | first2=J. | journal=IEEE Transactions on Evolutionary Computation | year=2002 | volume=6 | issue=1 | pages=58–73 | doi=10.1109/4235.985692| citeseerx=10.1.1.460.6608 }}</ref>
<ref name="xzy02dpso">Xie, Xiao-Feng; Zhang, Wen-Jun; Yang, Zhi-Lian (2002). [http://www.wiomax.com/team/xie/paper/CEC02.pdf A dissipative particle swarm optimization]. ''Congress on Evolutionary Computation'' (CEC), Honolulu, HI, USA: 1456-1461.</ref>
<ref name="bratton08simplified">{{cite journal | first1=D. | last1=Bratton | url=http://downloads.hindawi.com/archive/2008/654184.pdf | title=A Simplified Recombinant PSO | last2=Blackwell | first2=T. | journal=Journal of Artificial Evolution and Applications | volume=2008 | pages=1–10 | year=2008| doi=10.1155/2008/654184 | doi-access=free }}</ref>
<ref name="kennedy01swarm">{{cite book | first1=J. | last1=Kennedy | title=Swarm Intelligence | publisher=Morgan Kaufmann | last2=Eberhart | first2=R.C. | year=2001 | isbn=978-1-55860-595-4}}</ref>
<ref name="pedersen08thesis">{{cite book | url=https://pdfs.semanticscholar.org/a5d2/8c26a2e2824170d67b69f14120cf67cabe26.pdf | archive-url=https://web.archive.org/web/20200213130101/https://pdfs.semanticscholar.org/a5d2/8c26a2e2824170d67b69f14120cf67cabe26.pdf | url-status=dead | archive-date=2020-02-13 | title=Tuning & Simplifying Heuristical Optimization | publisher=University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group | last=Pedersen | first=M.E.H. | year=2010 | s2cid=107805461 | format=PhD thesis}}</ref>
<ref name="pedersen08simplifying">{{cite journal | title=Simplifying particle swarm optimization | last=Pedersen | first=M.E.H. | author2=Chipperfield, A.J.
| journal=Applied Soft Computing | year=2010 | volume=10 | issue=2 | pages=618–628 | doi=10.1016/j.asoc.2009.08.029| citeseerx=10.1.1.149.8300 }}</ref>
<ref name="mason2017meta">{{cite journal | title=A Meta Optimisation Analysis of Particle Swarm Optimisation Velocity Update Equations for Watershed Management Learning | last=Mason | first=Karl | author2=Duggan, Jim | author3=Howley, Enda | journal=Applied Soft Computing | year=2018 | volume=62 | pages=148–161 | doi=10.1016/j.asoc.2017.10.018}}</ref>
<ref name="pedersen10good-pso">{{cite journal | title=Good parameters for particle swarm optimization | last=Pedersen | first=M.E.H. | journal=Technical Report HL1001 | year=2010|citeseerx = 10.1.1.298.4359}}</ref>
<ref name="poli07analysis">{{cite journal | url=http://cswww.essex.ac.uk/technical-reports/2007/tr-csm469.pdf | title=An analysis of publications on particle swarm optimisation applications | last=Poli | first=R. | journal=Technical Report CSM-469 | year=2007 | access-date=2010-05-03 | archive-url=https://web.archive.org/web/20110716231935/http://cswww.essex.ac.uk/technical-reports/2007/tr-csm469.pdf | archive-date=2011-07-16 | url-status=dead }}</ref>
<ref name="poli08analysis">{{cite journal | url=http://downloads.hindawi.com/archive/2008/685175.pdf | title=Analysis of the publications on the applications of particle swarm optimisation | last=Poli | first=R. | journal=Journal of Artificial Evolution and Applications | year=2008 | volume=2008 | pages=1–10 | doi=10.1155/2008/685175| doi-access=free }}</ref>
<ref name="evers09thesis">{{cite book | url=http://www.georgeevers.org/publications.htm | title=An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization | publisher=The University of Texas - Pan American, Department of Electrical Engineering | last=Evers | first=G. | year=2009 | format=Master's thesis | access-date=2010-05-05 | archive-date=2011-05-18 | archive-url=https://web.archive.org/web/20110518164430/http://www.georgeevers.org/publications.htm | url-status=dead }}</ref>
<ref name="tu04robust">{{cite journal | first1=Z. | last1=Tu | title=A robust stochastic genetic algorithm (StGA) for global numerical optimization | last2=Lu | first2=Y. | journal=IEEE Transactions on Evolutionary Computation | year=2004 | volume=8 | issue=5 | pages=456–470 | doi=10.1109/TEVC.2004.831258| s2cid=22382958 }}</ref>
<ref name="tu04corrections">{{cite journal | first1=Z. | last1=Tu | title=Corrections to "A Robust Stochastic Genetic Algorithm (StGA) for Global Numerical Optimization" | last2=Lu | first2=Y. | journal=IEEE Transactions on Evolutionary Computation | year=2008 | volume=12 | issue=6 | pages=781 | doi=10.1109/TEVC.2008.926734| s2cid=2864886 }}</ref>
<ref name="meissner06optimized">{{cite journal | first1=M. | last1=Meissner | title=Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training | last2=Schmuker | first2=M. | last3=Schneider | first3=G. | journal=BMC Bioinformatics | pmc=1464136 | year=2006 | volume=7 | issue=1 | pages=125 | doi=10.1186/1471-2105-7-125 | pmid=16529661 | doi-access=free }}</ref>
<ref name="yang08nature">{{cite book | first1=X.S. | last1=Yang | title=Nature-Inspired Metaheuristic Algorithms | publisher=Luniver Press | year=2008 | isbn=978-1-905986-10-1}}</ref>
<ref name="parsopoulos02particle">{{cite conference | first1=K. | last1=Parsopoulos | title=Particle swarm optimization method in multiobjective problems | last2=Vrahatis | first2=M.
| book-title=Proceedings of the ACM Symposium on Applied Computing (SAC) | year=2002 | pages=603–607| doi=10.1145/508791.508907 }}</ref>
<ref name="MASON2017188">{{cite journal | title=Multi-objective dynamic economic emission dispatch using particle swarm optimisation variants | last=Mason | first=Karl | author2=Duggan, Jim | author3=Howley, Enda | journal=Neurocomputing | year=2017 | volume=270 | pages=188–197 | doi=10.1016/j.neucom.2017.03.086}}</ref>
<!-- not used <ref name="BonyadiSPSO2014">{{cite journal | first1=Mohammad reza | last1=Bonyadi | title=SPSO 2011 analysis of stability; local convergence; and rotation sensitivity. | last2=Michalewicz | first2=Z. | journal=GECCO2014 (the best paper award in the track ACSI) | year=2014 | pages=9–16}}</ref> -->
<ref name="coellocoello02MOPSO">{{cite conference | first1=C. | last1=Coello Coello | url=http://portal.acm.org/citation.cfm?id=1252327 | title=MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization | last2=Salazar Lechuga | first2=M. | book-title=Congress on Evolutionary Computation (CEC'2002) | year=2002 | pages=1051–1056}}</ref>
<ref name="Chen10SPSO">{{cite journal | first1=Wei-neng | last1=Chen | title=A novel set-based particle swarm optimization method for discrete optimization problem | last2=Zhang | first2=Jun | journal=IEEE Transactions on Evolutionary Computation | year=2010 | volume=14 | issue=2 | pages=278–300 | doi=10.1109/tevc.2009.2030331| citeseerx=10.1.1.224.5378 | s2cid=17984726 }}</ref>
<ref name="elshamy07sis">{{cite conference | first1=W. | last1=Elshamy | url=http://people.cis.ksu.edu/~welshamy/pubs/ieee_sis07.pdf | title=Clubs-based Particle Swarm Optimization | last2=Rashad | first2=H. | last3=Bahgat | first3=A. | book-title=IEEE Swarm Intelligence Symposium 2007 (SIS2007) | year=2007 | location=Honolulu, HI | pages=289–296 | access-date=2012-04-27 | archive-url=https://web.archive.org/web/20131023025232/http://people.cis.ksu.edu/~welshamy/pubs/ieee_sis07.pdf | archive-date=2013-10-23 | url-status=dead }}</ref>
<ref name="nobile2012">{{cite conference | first1=M. | last1=Nobile | title=A GPU-Based Multi-Swarm PSO Method for Parameter Estimation in Stochastic Biological Systems Exploiting Discrete-Time Target Series | last2=Besozzi | first2=D. | last3=Cazzaniga | first3=P. | last4=Mauri | first4=G. | last5=Pescini | first5=D. | book-title=Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics. Lecture Notes in Computer Science. | year=2012 | volume=7264 | pages=74–85| doi=10.1007/978-3-642-29066-4_7 }}</ref>
<ref name="clerc12spso">{{cite journal | url=http://hal.archives-ouvertes.fr/docs/00/76/49/96/PDF/SPSO_descriptions.pdf | title=Standard Particle Swarm Optimisation | last=Clerc | first=M. | journal=HAL Open Access Archive | year=2012}}</ref>
<ref name="taherkhani2016inertia">{{cite journal | first1=M. | last1=Taherkhani | title=A novel stability-based adaptive inertia weight for particle swarm optimization | last2=Safabakhsh | first2=R.
| journal=Applied Soft Computing | year=2016 | volume=38 | pages=281–295 | doi=10.1016/j.asoc.2015.10.004}}</ref>
<ref name="bratton2007">{{cite book | first1=Daniel | last1=Bratton | url=http://www.cil.pku.edu.cn/resources/pso_paper/src/2007SPSO.pdf | last2=Kennedy | first2=James | title=2007 IEEE Swarm Intelligence Symposium | chapter=Defining a Standard for Particle Swarm Optimization | pages=120–127 | year=2007| doi=10.1109/SIS.2007.368035 | isbn=978-1-4244-0708-8 | s2cid=6217309 }}</ref>
<ref name="Zambrano-Bigiarini2013">{{cite book | first1=M. | last1=Zambrano-Bigiarini | last2=Clerc | first2=M. | last3=Rojas | first3=R. | title=2013 IEEE Congress on Evolutionary Computation | chapter=Standard Particle Swarm Optimisation 2011 at CEC-2013: A baseline for future PSO improvements | publisher=Evolutionary Computation (CEC), 2013 IEEE Congress on | pages=2337–2344 | year=2013| doi=10.1109/CEC.2013.6557848 | isbn=978-1-4799-0454-9 | s2cid=206553432 }}</ref>
<ref name="kennedy2002population">{{cite book | first1=J. | last1=Kennedy | last2=Mendes | first2=R. | date=2002 | title=Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600) | chapter=Population structure and particle swarm performance | volume=2 | pages=1671–1676 vol.2 | doi=10.1109/CEC.2002.1004493| isbn=978-0-7803-7282-5 | citeseerx=10.1.1.114.7988 | s2cid=14364974 }}</ref>
<ref name="oliveira2016communication">{{cite book | first1=M. | last1=Oliveira | last2=Pinheiro | first2=D. | last3=Andrade | first3=B. | last4=Bastos-Filho | first4=C. | last5=Menezes | first5=R. | title=Swarm Intelligence | chapter=Communication Diversity in Particle Swarm Optimizers | volume=9882 | year=2016 | pages=77–88 | doi=10.1007/978-3-319-44427-7_7| series=Lecture Notes in Computer Science | isbn=978-3-319-44426-0 | s2cid=37588745 }}</ref>
<ref name="cazzaniga2015">{{cite conference | first1=P. | last1=Cazzaniga | title=The impact of particles initialization in PSO: parameter estimation as a case in point, (Canada) | last2=Nobile | first2=M.S. | last3=Besozzi | first3=D. | book-title=Proceedings of IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology | year=2015| doi=10.1109/CIBCB.2015.7300288 }}</ref>
<ref name="nobile2017">{{cite journal | first1=M.S | last1=Nobile | title=Fuzzy Self-Tuning PSO: a settings-free algorithm for global optimization | last2=Cazzaniga | first2=P. | last3=Besozzi | first3=D. | last4=Colombo | first4=R. | last5=Mauri | first5=G. | last6=Pasi | first6=G. | journal=Swarm and Evolutionary Computation | volume=39 | pages=70–85 | year=2018 | doi=10.1016/j.swevo.2017.09.001| hdl=10446/106467 | hdl-access=free }}</ref>
<ref name="nobile2015">{{cite conference | first1=M.S | last1=Nobile | title=Proactive particles in swarm optimization: a self-tuning algorithm based on fuzzy logic | last2=Pasi | first2=G. | last3=Cazzaniga | first3=P. | last4=Besozzi | first4=D. | last5=Colombo | first5=R. | last6=Mauri | first6=G. | book-title=Proceedings of the 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2015), Istanbul (Turkey) | year=2015 | pages=1–8| doi=10.1109/FUZZ-IEEE.2015.7337957 }}</ref>
<ref name=llso>{{cite journal |last1=Yang |first1=Q. |last2=Chen |first2=W-N. |last3=Deng |first3=J-D. |last4=Li |first4=Y. |last5=Gu |first5=T. |last6=Zhang |first6=J.
|title=A Level-based Learning Swarm Optimizer for Large Scale Optimization |journal=IEEE Transactions on Evolutionary Computation |year=2018 |volume=22 |issue=4 |pages=578–594 |doi=10.1109/TEVC.2017.2743016 |url=https://ieeexplore.ieee.org/document/8025724 }}</ref>
<ref name=masoie>{{cite journal |last1=Chen |first1=T-Y. |last2=Chen |first2=W-N. |last3=Wei |first3=F-F. |last4=Hu |first4=X-M. |last5=Zhang |first5=J. |title=Multi-Agent Swarm Optimization With Adaptive Internal and External Learning for Complex Consensus-Based Distributed Optimization |year=2024 |journal=IEEE Transactions on Evolutionary Computation |page=1 |doi=10.1109/TEVC.2024.3380436 |url=https://ieeexplore.ieee.org/document/10477458 |url-access=subscription }}</ref>
}}

==External links==
{{Commonscat|Particle swarm optimization}}
<!-- Please see discussion page before adding / removing links. -->
*[http://www.particleswarm.info Particle Swarm Central] is a repository for information on PSO. Several source codes are freely available.
*[http://vimeo.com/17407010 A brief video] of particle swarms optimizing three benchmark functions.
*[http://www.mathworks.com/matlabcentral/fileexchange/11559-particle-swarm-optimization-simulation Simulation of PSO convergence in a two-dimensional space (Matlab).]
*[http://www.vocal.com/particle-swarm-optimization/ Applications] of PSO.
*{{cite journal|doi=10.1016/j.eswa.2008.10.086|title=Automatic calibration of a rainfall–runoff model using a fast and elitist multi-objective particle swarm algorithm|journal=Expert Systems with Applications|volume=36|issue=5|pages=9533–9538|year=2009|last1=Liu|first1=Yang}}
*[http://www.adaptivebox.net/research/bookmark/psocodes_link.html Links to PSO source code]

{{Major subfields of optimization}}
{{swarming}}

{{DEFAULTSORT:Particle swarm optimization}}
[[Category:Nature-inspired metaheuristics]]
[[Category:Optimization algorithms and methods]]
[[Category:Multi-agent systems]]