Description
Particle swarm optimization (PSO) is a very popular, randomized, nature-inspired meta-heuristic for solving continuous black-box optimization problems. The main idea is to mimic the behavior of natural swarms, such as bird flocks and fish schools, that find promising regions by sharing information. The movement of a particle is influenced not only by its own experience, but also by the experiences of the other members of its swarm.
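As a rough illustration of these movement rules, the sketch below implements the classical PSO update, in which each particle is attracted towards its own best position and towards the swarm's global best; the parameter values for w, c1 and c2 are common choices assumed here for the example, not values taken from the thesis.

    import random

    def pso_step(positions, velocities, personal_best, global_best, f,
                 w=0.72984, c1=1.496172, c2=1.496172):
        """One iteration of standard PSO (illustrative sketch, not the thesis's exact variant)."""
        for i, (x, v) in enumerate(zip(positions, velocities)):
            for d in range(len(x)):
                r1, r2 = random.random(), random.random()
                # velocity: inertia + attraction to own best + attraction to the swarm's best
                v[d] = (w * v[d]
                        + c1 * r1 * (personal_best[i][d] - x[d])
                        + c2 * r2 * (global_best[d] - x[d]))
                x[d] += v[d]
            # update personal and global best if the new position improves the objective f
            if f(x) < f(personal_best[i]):
                personal_best[i] = x[:]
                if f(x) < f(global_best):
                    global_best[:] = x
        return positions, velocities, personal_best, global_best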
In this thesis, we study the convergence process in detail. In order to measure how close the swarm is to convergence at a given time, we define and analyze the potential of a particle swarm. This potential analysis leads to a proof that, in the 1-dimensional case, the swarm converges with probability 1 towards a local optimum for a comparatively wide class of objective functions. Additionally, we apply drift theory to prove that, for unimodal objective functions, the solution returned by the PSO algorithm agrees with the actual optimum in the first k digits after time O(k).
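For illustration, a per-dimension notion of potential could, for instance, sum each particle's velocity magnitude and its distance to the global best in that dimension; this concrete formula is only an assumption made for the sketch, and the formal definition used in the thesis may differ.

    def dimension_potential(positions, velocities, global_best, d):
        """Illustrative per-dimension potential: the swarm's remaining movement
        capability in dimension d, measured as the sum of |velocity| and
        |offset from the global best| over all particles (assumed formula)."""
        return sum(abs(v[d]) + abs(global_best[d] - x[d])
                   for x, v in zip(positions, velocities))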
In the general D-dimensional case, it turns out that the swarm might not converge towards a local optimum. Instead, it can get stuck in a situation where the potential of some dimensions is orders of magnitude smaller than that of others. Dimensions whose potential is too small lose their influence on the behavior of the algorithm, and therefore the corresponding coordinates are no longer optimized. In the end, the swarm stagnates, i.e., it converges towards a point in the search space that is not even a local optimum. In order to resolve this issue, we propose a slightly modified PSO that again guarantees convergence towards a local optimum.
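A purely hypothetical sketch of such a remedy (not necessarily the modification proposed in the thesis) is to perturb the velocities in any dimension whose potential has become negligible compared with the largest per-dimension potential, so that no coordinate loses its influence on the search entirely.

    import random

    def rebalance_low_potentials(positions, velocities, global_best, delta=1e-7):
        """Hypothetical remedy (illustration only): if a dimension's potential is
        negligible relative to the maximum over all dimensions, add a small random
        velocity in that dimension for every particle. The threshold delta and the
        perturbation rule are assumptions, not the thesis's actual modification."""
        D = len(global_best)
        # per-dimension potential as sketched above: sum of |velocity| and |offset to global best|
        pots = [sum(abs(v[d]) + abs(global_best[d] - x[d])
                    for x, v in zip(positions, velocities))
                for d in range(D)]
        scale = max(pots)
        for d in range(D):
            if pots[d] < delta * scale:
                for v in velocities:
                    v[d] += random.uniform(-delta, delta)
        return velocities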