Tuesday, January 18, 2011

A New Optimizer Using Particle Swarm Theory

Russell Eberhart
Purdue School of Engineering and Technology
Indianapolis, IN 46202-5160
eberhart@engr.iupui.edu

 James Kennedy
Bureau of Labor Statistics
Washington, DC 20212
kennedyj@pol.ocsp.bls.gov

 ABSTRACT

The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed.



1. INTRODUCTION
A new method for optimization of continuous nonlinear functions was recently introduced [6]. This paper reviews the particle swarm optimization concept. Discussed next are two paradigms that implement the concept, one globally oriented (GBEST) and one locally oriented (LBEST), followed by results obtained from applications and tests upon which the paradigms have been shown to perform successfully.

Particle swarm optimization has roots in two main component methodologies. Perhaps more obvious are its ties to artificial life (A-life) in general, and to bird flocking, fish schooling, and swarming theory in particular. It is also related, however, to evolutionary computation, and has ties to both genetic algorithms and evolution strategies [1].

Particle swarm optimization comprises a very simple concept, and paradigms are implemented in a few lines of computer code. It requires only primitive mathematical operators, and is computationally inexpensive in terms of both memory requirements and speed. Early testing has found the implementation to be effective with several kinds of problems [6]. This paper discusses application of the algorithm to the training of artificial neural network weights. Particle swarm optimization has also been demonstrated to perform well on genetic algorithm test functions, and it appears to be a promising approach for robot task learning.

Particle swarm optimization can be used to solve many of the same kinds of problems as genetic algorithms (GAs) [6]. This optimization technique does not suffer, however, from some of the GA's difficulties; interaction in the group enhances rather than detracts from progress toward the solution. Further, a particle swarm system has memory, which the genetic algorithm does not have. Change in genetic populations results in destruction of previous knowledge of the problem, except when elitism is employed, in which case usually one or a small number of individuals retain their "identities." In particle swarm optimization, individuals who fly past optima are tugged to return toward them; knowledge of good solutions is retained by all particles.

2. THE PARTICLE SWARM OPTIMIZATION CONCEPT
Particle swarm optimization is similar to a genetic algorithm [2] in that the system is initialized with a population of random solutions. It is unlike a genetic algorithm, however, in that each potential solution is also assigned a randomized velocity, and the potential solutions, called particles, are then "flown" through hyperspace. Each particle keeps track of its coordinates in hyperspace which are associated with the best solution (fitness) it has achieved so far. (The value of that fitness is also stored.) This value is called pbest. Another "best" value is also tracked. The "global" version of the particle swarm optimizer keeps track of the overall best value, and its location, obtained thus far by any particle in the population; this is called gbest.

The particle swarm optimization concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its pbest and gbest locations (global version). Acceleration is weighted by a random term, with separate random numbers being generated for acceleration toward pbest and gbest.

This paper introduces a "local" version of the optimizer in which, in addition to pbest, each particle keeps track of the best solution, called lbest, attained within a local topological neighborhood of particles. Both the global and local versions are described in more detail below. The only variable that must be determined by the user is the maximum velocity to which the particles are limited. An acceleration constant is also specified, but in the experience of the authors, it is not usually varied among applications.
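The update just described fits in a few lines of code, consistent with the paper's claim of simplicity. The sketch below is an illustrative Python rendering of the global (GBEST) version, not the authors' original code: the acceleration constant value (2.0), the VMAX value, the sphere test function, and the names update_particle, pbests, and gbest are assumptions made for this example.

    import random

    ACC_CONST = 2.0   # acceleration constant (assumed value for illustration)
    VMAX = 4.0        # user-specified maximum velocity (assumed value)

    def update_particle(x, v, pbest_x, gbest_x):
        # One time step of the global-version update for a single particle:
        # accelerate toward pbest and gbest, each weighted by its own random
        # number, clamp the velocity to VMAX, then "fly" to the new position.
        new_x, new_v = [], []
        for xi, vi, pi, gi in zip(x, v, pbest_x, gbest_x):
            vi = (vi
                  + ACC_CONST * random.random() * (pi - xi)
                  + ACC_CONST * random.random() * (gi - xi))
            vi = max(-VMAX, min(VMAX, vi))
            new_v.append(vi)
            new_x.append(xi + vi)
        return new_x, new_v

    def sphere(x):
        # Toy fitness function (minimum at the origin), used only for this sketch.
        return sum(xi * xi for xi in x)

    # Toy run on a 2-D sphere function (illustrative only).
    dim, n_particles = 2, 20
    xs = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[random.uniform(-VMAX, VMAX) for _ in range(dim)] for _ in range(n_particles)]
    pbests = [list(x) for x in xs]
    pbest_vals = [sphere(x) for x in xs]
    gbest = min(pbests, key=sphere)

    for step in range(100):
        for i in range(n_particles):
            xs[i], vs[i] = update_particle(xs[i], vs[i], pbests[i], gbest)
            val = sphere(xs[i])
            if val < pbest_vals[i]:                  # best position seen by this particle
                pbests[i], pbest_vals[i] = list(xs[i]), val
        gbest = min(pbests, key=sphere)              # best position found by any particle
    print(gbest, sphere(gbest))

Run as written, the toy loop drives the swarm toward the sphere function's minimum at the origin; swapping in a different fitness function and a problem-appropriate VMAX is all that is needed to reuse the same skeleton.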

3. CONCLUSIONS

This paper introduces a new form of the particle swarm optimizer, and examines how changes in the paradigm affect the number of iterations required to meet an error criterion and the frequency with which models cycle interminably around a nonglobal optimum. Three versions were tested: the "GBEST" model, in which every agent has information about the group's best evaluation, and two variations of the "LBEST" version, one with a neighborhood of six and one with a neighborhood of two. It appears that the original GBEST version performs best in terms of median number of iterations to convergence, while the LBEST version with a neighborhood of two is most resistant to local minima.

Particle swarm optimization is an extremely simple algorithm that seems to be effective for optimizing a wide range of functions. We view it as a mid-level form of A-life or biologically derived algorithm, occupying the space in nature between evolutionary search, which requires eons, and neural processing, which occurs on the order of milliseconds. Social optimization occurs in the time frame of ordinary experience; in fact, it is ordinary experience. In addition to its ties with A-life, particle swarm optimization has obvious ties with evolutionary computation. Conceptually, it seems to lie somewhere between genetic algorithms and evolutionary programming. It is highly dependent on stochastic processes, like evolutionary programming. The adjustment toward pbest and gbest by the particle swarm optimizer is conceptually similar to the crossover operation utilized by genetic algorithms. It uses the concept of fitness, as do all evolutionary computation paradigms.

Unique to the concept of particle swarm optimization is flying potential solutions through hyperspace, accelerating toward "better" solutions. Other evolutionary computation schemes operate directly on potential solutions, which are represented as locations in hyperspace. Much of the success of particle swarms seems to lie in the agents' tendency to hurtle past their target. Holland's chapter on the "optimum allocation of trials" [5] reveals the delicate balance between conservative testing of known regions versus risky exploration of the unknown. It appears that the current version of the paradigm allocates trials nearly optimally. The stochastic factors allow thorough search of spaces between regions that have been found to be relatively good, and the momentum effect caused by modifying the extant velocities rather than replacing them results in overshooting, or exploration of unknown regions of the problem domain.

Much further research remains to be conducted on this simple new concept and paradigm. The goals in developing it have been to keep it simple and robust, and we seem to have succeeded at that. The algorithm is written in a very few lines of code, and requires only specification of the problem and a few parameters in order to solve it.
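The LBEST variants compared above replace gbest with the best position found within each particle's own neighborhood. The snippet below is a minimal sketch of how a ring-shaped neighborhood of a given size might be indexed; the ring wiring and the name lbest_index are assumptions made for illustration, since the excerpt does not spell out the topology.

    def lbest_index(pbest_vals, i, k=2):
        # Return the index of the best pbest value within particle i's ring
        # neighborhood of size k (the k nearest indices, wrapping around).
        # With k = 2 each particle consults only its immediate neighbor on
        # either side plus itself; this wiring is an assumption of the sketch.
        n = len(pbest_vals)
        half = k // 2
        neighbors = [(i + d) % n for d in range(-half, half + 1)]
        return min(neighbors, key=lambda j: pbest_vals[j])

In the earlier global-version loop, substituting pbests[lbest_index(pbest_vals, i)] for gbest would turn that sketch into a local version of the kind compared here.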
