=== Simplifications ===
Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as [[Occam's razor]]. Simplifying PSO was originally suggested by Kennedy<ref name=kennedy97particle/> and has been studied more extensively,<ref name=bratton08simplified/><ref name=pedersen08thesis/><ref name=pedersen08simplifying/><ref name=yang08nature/> where it appeared that optimization performance was improved, the parameters were easier to tune, and they performed more consistently across different optimization problems.

Another argument in favour of simplifying PSO is that [[metaheuristic]]s can only have their efficacy demonstrated [[empirical]]ly by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be [[Program correctness|proven correct]], which increases the risk of making errors in its description and implementation. A good example of this is a study<ref name=tu04robust/> that presented a promising variant of a [[genetic algorithm]] (another popular metaheuristic), which was later found to be defective: it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. The bias was caused by a programming error and has since been fixed.<ref name=tu04corrections/>

==== Bare Bones PSO ====
Initialization of velocities may require extra inputs. The Bare Bones PSO variant<ref>{{Cite book|last=Kennedy|first=James|title=Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706) |chapter=Bare bones particle swarms |date=2003|pages=80–87|doi=10.1109/SIS.2003.1202251|isbn=0-7803-7914-4|s2cid=37185749}}</ref> was proposed in 2003 by James Kennedy and does not need to use velocity at all. In this variant one dispenses with the velocity of the particles and instead updates their positions using the following simple rule,

:<math>
\vec x_i = G\left(\frac{\vec p_i+\vec g}{2},||\vec p_i-\vec g||\right) \,,
</math>

where <math>\vec x_i</math> and <math>\vec p_i</math> are the position and the best position of particle <math>i</math>; <math>\vec g</math> is the global best position; <math>G(\vec x,\sigma)</math> is the [[normal distribution]] with mean <math>\vec x</math> and standard deviation <math>\sigma</math>; and <math>||\dots||</math> signifies the norm of a vector.
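The following is a minimal [[NumPy]] sketch of this update, written for minimisation over a box; it is an illustration rather than code from the cited paper, so the function name, parameter defaults, and stopping rule are choices made here. It uses the scalar standard deviation <math>||\vec p_i-\vec g||</math> from the rule above; a common per-component variant uses <math>|p_{id}-g_{id}|</math> instead.

<syntaxhighlight lang="python">
import numpy as np

def bare_bones_pso(f, lower, upper, n_particles=30, n_iter=200, seed=None):
    """Minimise f over the box [lower, upper]; illustrative sketch."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper, size=(n_particles, lower.size))  # positions
    p = x.copy()                                 # personal-best positions
    fbest = np.apply_along_axis(f, 1, p)         # personal-best values
    g = p[np.argmin(fbest)].copy()               # global-best position
    for _ in range(n_iter):
        # One scalar standard deviation per particle, as in the rule above.
        sigma = np.linalg.norm(p - g, axis=1, keepdims=True)
        x = rng.normal((p + g) / 2.0, sigma)     # sample new positions
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fbest                    # update personal bests
        p[improved], fbest[improved] = x[improved], fx[improved]
        g = p[np.argmin(fbest)].copy()           # update global best
    return g, fbest.min()
</syntaxhighlight>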
==== Accelerated Particle Swarm Optimization ====
Another simpler variant is the accelerated particle swarm optimization (APSO),<ref>X. S. Yang, S. Deb and S. Fong, [https://arxiv.org/abs/1203.6577 Accelerated particle swarm optimization and support vector machine for business optimization and applications], NDT 2011, Springer CCIS 136, pp. 53-66 (2011).</ref> which also does not need to use velocity and can speed up convergence in many applications. A simple demo code of APSO is available.<ref>{{Cite web | url=http://www.mathworks.com/matlabcentral/fileexchange/?term=APSO | title=Search Results: APSO - File Exchange - MATLAB Central}}</ref> In this variant one dispenses with both the particle's velocity and the particle's best position. The particle position is updated according to the following rule,

:<math>
\vec x_i \leftarrow (1-\beta)\vec x_i + \beta \vec g + \alpha L \vec u \,,
</math>

where <math>\vec u</math> is a random uniformly distributed vector, <math>L</math> is the typical length scale of the problem at hand, and <math>\beta\sim 0.1-0.7</math> and <math>\alpha\sim 0.1-0.5</math> are the parameters of the method. As a refinement of the method one can decrease <math>\alpha</math> with each iteration, <math>\alpha_n=\alpha_0\gamma^n</math>, where <math>n</math> is the iteration number and <math>0 < \gamma < 1</math> is the decrease control parameter.
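A sketch of this rule in the same style follows, including the <math>\alpha_n=\alpha_0\gamma^n</math> refinement. The zero-mean uniform kick and all default parameter values are assumptions made for illustration; the rule above only requires <math>\vec u</math> to be uniformly distributed.

<syntaxhighlight lang="python">
import numpy as np

def apso(f, lower, upper, n_particles=30, n_iter=200,
         beta=0.5, alpha0=0.3, gamma=0.97, seed=None):
    """Minimise f over the box [lower, upper]; illustrative sketch."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    L = upper - lower                             # typical length scale per dimension
    x = rng.uniform(lower, upper, size=(n_particles, lower.size))
    fx = np.apply_along_axis(f, 1, x)
    g, fg = x[np.argmin(fx)].copy(), fx.min()     # global best so far
    for n in range(n_iter):
        alpha = alpha0 * gamma**n                 # annealed randomness
        u = rng.uniform(-0.5, 0.5, size=x.shape)  # zero-mean uniform kick (assumption)
        x = (1.0 - beta) * x + beta * g + alpha * L * u
        fx = np.apply_along_axis(f, 1, x)
        if fx.min() < fg:                         # update global best
            g, fg = x[np.argmin(fx)].copy(), fx.min()
    return g, fg
</syntaxhighlight>

For example, <code>apso(lambda v: float(np.sum(v**2)), [-5]*10, [5]*10)</code> searches the 10-dimensional sphere function; annealing <math>\alpha</math> shrinks the random kick so that the swarm contracts around <math>\vec g</math> as the run progresses.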