Information Sciences

Volume 181, Issue 18, 15 September 2011, Pages 3749-3765

An improved differential evolution algorithm with fitness-based adaptation of the control parameters

https://doi.org/10.1016/j.ins.2011.03.010

Abstract

Differential Evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through the same computational steps as employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used, which makes the scheme self-organizing in this respect. The Scale Factor (F) and Crossover Rate (Cr) are two very important control parameters of DE, since the former regulates the step size taken while mutating a population member and the latter controls the number of search variables inherited by an offspring from its parent during recombination. This article describes a very simple yet effective adaptation technique for tuning both F and Cr, on the run, without any user intervention. The adaptation strategy is based on the objective function values of the individuals in the DE population. Comparison with some of the best-known and computationally expensive variants of DE over fourteen well-known numerical benchmarks and one real-life engineering problem reflects the superiority of the proposed parameter tuning scheme in terms of accuracy, convergence speed, and robustness.

Introduction

The Differential Evolution (DE) [21], [29], [27], [28] algorithm emerged as a very competitive form of evolutionary computing more than a decade ago. The first written article on DE appeared as a technical report by Storn and Price in 1995 [27]. One year later, the success of DE was demonstrated at the First International Contest on Evolutionary Optimization in May 1996, which was held in conjunction with the 1996 IEEE International Conference on Evolutionary Computation [28] (the first two places were given to non-evolutionary algorithms, which are not universally applicable but solved the test-problems faster than DE). Comprehensive surveys on the recent research on and with DE can be found in [6], [19].

In the DE community, the individual trial solutions (which constitute a population) are called parameter vectors or genomes. DE operates through the same computational steps as employed by a standard EA. However, unlike traditional EAs, DE employs differences of the parameter vectors to explore the objective function landscape. In this respect, it owes a lot to its two ancestors, namely the Nelder–Mead algorithm [17] and the Controlled Random Search (CRS) algorithm [22], which also relied heavily on difference vectors to perturb the current trial solutions. Like other population-based search techniques, DE generates new points (trial solutions) that are perturbations of existing points, but these deviations are neither reflections like those in the CRS and Nelder–Mead methods, nor samples from a predefined probability density function, like those in Evolutionary Strategies (ES) [25]. Instead, DE perturbs current-generation vectors with the scaled difference of two randomly selected population vectors. In its simplest form, DE adds the scaled, random vector difference to a third randomly selected population vector to create a donor vector corresponding to each population vector (also known as the target vector). Next, the components of the target and donor vectors are mixed through a crossover operation to produce a trial vector. In the selection stage, the trial (or offspring) vector competes against the population vector of the same index, i.e., the parent vector. Once the last trial vector has been tested, the survivors of all the pairwise competitions become parents for the next generation in the evolutionary cycle.
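To make this cycle concrete, the sketch below gives a minimal implementation of the classical DE/rand/1/bin scheme outlined above; the sphere objective, bounds, population size and parameter values are illustrative placeholders, not settings taken from the paper.

```python
import numpy as np

# Minimal sketch of the classical DE/rand/1/bin cycle described above.
# The sphere objective, bounds and all parameter values are illustrative
# placeholders, not settings from the paper.

def sphere(x):
    return float(np.sum(x ** 2))

def de_rand_1_bin(func, dim=10, pop_size=50, F=0.5, Cr=0.9,
                  bounds=(-100.0, 100.0), max_gens=200, seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, size=(pop_size, dim))      # target vectors
    fitness = np.array([func(x) for x in pop])

    for _ in range(max_gens):
        new_pop, new_fit = pop.copy(), fitness.copy()
        for i in range(pop_size):
            # Mutation: donor = base vector + scaled difference of two others,
            # all three chosen randomly, mutually distinct and different from i.
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], size=3, replace=False)
            donor = pop[r1] + F * (pop[r2] - pop[r3])

            # Binomial crossover: mix target and donor components; j_rand
            # guarantees that at least one component comes from the donor.
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < Cr
            mask[j_rand] = True
            trial = np.clip(np.where(mask, donor, pop[i]), low, high)

            # Selection: the trial competes against the parent of the same index;
            # the survivors become the parents of the next generation.
            f_trial = func(trial)
            if f_trial <= fitness[i]:
                new_pop[i], new_fit[i] = trial, f_trial
        pop, fitness = new_pop, new_fit

    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

best_x, best_f = de_rand_1_bin(sphere)
```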

The performance of DE is severely dependent on two of its most important control parameters: The crossover rate (Cr) and scale factor (F) [21]. Over the past decade many claims and counter-claims have been reported regarding the tuning and adaptation strategies of these control parameters. Some objective functions are very sensitive to the proper choice of the parameter settings in DE [13]. Therefore, researchers naturally started to consider some techniques to automatically find an optimal set of control parameters for DE [23], [2], [11], [12], [26], [1]. The most recent trend in this direction is the use of self-adaptive strategies like the ones reported in [23], [2]. However, self-adaptation schemes usually make the programming fairly complex and run the risk of increasing the number of function evaluations. Recently in [15], Mininno and Neri suggested an efficient adaptation scheme for the control parameters of DE. In their scheme, the randomized scale factor and crossover rate values are sampled from truncated Gaussian probability distribution functions. In [32], Weber et al. proposed to regulate the choice of the scale factors for each individual of DE in a self-adaptive manner, based on their success on structured populations.
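As a rough illustration of the kind of randomized parameter control mentioned above, the snippet below draws F and Cr from truncated Gaussian distributions via rejection sampling; the means, standard deviations and the [0, 1] truncation interval are assumptions chosen for illustration and do not reproduce the exact scheme of [15].

```python
import numpy as np

def sample_truncated_gaussian(mean, std, low, high, rng):
    # Rejection sampling: redraw a normal variate until it lies in [low, high].
    while True:
        x = rng.normal(mean, std)
        if low <= x <= high:
            return x

rng = np.random.default_rng(1)
# Assumed means, standard deviations and truncation interval (illustrative only).
F  = sample_truncated_gaussian(0.5, 0.2, 0.0, 1.0, rng)
Cr = sample_truncated_gaussian(0.9, 0.1, 0.0, 1.0, rng)
```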

This article suggests a novel automatic tuning method for the scale factor and crossover rate of population members in DE, based on their individual objective function values. The key idea of this adaptation mechanism is that if a search agent (DE vector) moves close to the optimum, its mutation step size decreases and, during crossover, it passes more genetic information to its offspring (trial vector in DE terminology) in order to favor exploitation. If, however, the agent moves away from the optimum, it is perturbed more strongly and, during DE-type crossover, the offspring inherits less genetic information from the parent, so that the agent may explore alternative regions quickly. Our fitness-based parametric adaptation schemes attempt to achieve a better trade-off between the explorative and exploitative tendencies of DE. From this perspective they are conceptually related to the fitness diversity adaptation used in Memetic Algorithms (MAs); some significant research works that deserve mention in this context are [3], [4], [20], [30]. The fitness-adaptive DE algorithm has been compared with some of the recently developed and best-known DE variants, namely jDE [2], SaDE [23], and JADE [36], over 14 numerical benchmarks taken from [10] and one engineering optimization problem involving spread-spectrum radar polyphase code design [16]. The results of this comparison indicate that DE with this very simple fitness-based adaptation scheme is able to meet or beat its nearest competitors in a statistically meaningful way.
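The exact update rules are defined in Section 3; purely as a qualitative sketch of the idea stated above, the snippet below maps each individual's normalized distance from the current best objective value to a per-individual F and Cr, so that vectors far from the best take larger mutation steps and inherit fewer components from their parents. The affine mapping and the parameter ranges are assumptions for illustration, not the paper's formulas.

```python
import numpy as np

def adapt_F_Cr(fitness, F_min=0.1, F_max=0.9, Cr_min=0.1, Cr_max=0.9, eps=1e-12):
    # Qualitative sketch only; the actual rules are given in Section 3 of the paper.
    # gap = 0 for the best individual, 1 for the worst (minimization assumed).
    f_best, f_worst = fitness.min(), fitness.max()
    gap = (fitness - f_best) / (f_worst - f_best + eps)
    F = F_min + (F_max - F_min) * gap      # far from the best -> larger step size
    Cr = Cr_min + (Cr_max - Cr_min) * gap  # far from the best -> fewer parent components
    return F, Cr

fitness = np.array([3.2, 0.7, 12.5, 5.1])
F_vec, Cr_vec = adapt_F_Cr(fitness)   # one (F_i, Cr_i) pair per population member
```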

The rest of the paper is organized as follows: Section 2 outlines the conventional DE family of algorithms and introduces its control parameters. The new fitness-value-based adaptation strategy is described in sufficient detail in Section 3. Section 4 explains the experimental settings, presents the results of the comparison among the competing DE variants, and discusses their implications. Finally, Section 5 concludes the paper.

Section snippets

Classical DE

DE is a simple real-coded evolutionary algorithm. It works through a simple cycle of stages, which are detailed below.

Fitness-based adaptations in DE

The successful functioning of DE is critically dependent upon its two major control parameters: F and Cr. As can be perceived from the literature, several claims and counter-claims have been reported concerning the choice of suitable values for these parameters such that the resulting DE may enjoy a statistically superior performance over a wide variety of objective functions. Price et al. [21] defined two new terms, jitter and dither, in the context of randomizing F. The practice of
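For reference, the snippet below illustrates the usual meaning of these two terms as introduced by Price et al. [21]: dither uses one random F per donor vector (or per generation), whereas jitter draws an independent random factor for every component of the difference vector, which also slightly changes its orientation. The concrete ranges used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 10
x_r2, x_r3 = rng.uniform(-5, 5, dim), rng.uniform(-5, 5, dim)

# Dither: a single random F per donor vector (or per generation), applied
# uniformly to every component of the difference vector.
F_dither = 0.5 + 0.3 * rng.random()               # illustrative range [0.5, 0.8]
diff_dither = F_dither * (x_r2 - x_r3)

# Jitter: an independent random factor for each component of the difference
# vector, which also perturbs its orientation slightly.
F_jitter = 0.5 * (1.0 + 0.001 * (rng.random(dim) - 0.5))
diff_jitter = F_jitter * (x_r2 - x_r3)
```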

Benchmark functions used

We have used a test bed of fourteen well-known boundary-constrained benchmark functions [10], [23] to evaluate the performance of the new adaptive DE variant. Among these functions, f1–f4 are unimodal and f5–f14 are multimodal. All 14 test functions are scalable with dimension. The functions are briefly described below; a short code sketch of the first two shifted functions follows the list:

  • (1) Shifted Sphere Function: $f_1(\mathbf{X}) = \sum_{i=1}^{D} z_i^2$, $\mathbf{Z} = \mathbf{X} - \mathbf{O}$, $\mathbf{O} = [o_1, o_2, \ldots, o_D]$

  • (2) Shifted Schwefel's Problem 1.2: $f_2(\mathbf{X}) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} z_j \right)^2$, $\mathbf{Z} = \mathbf{X} - \mathbf{O}$, $\mathbf{O} = [o_1, o_2, \ldots, o_D]$

  • (3) Rosenbrock's Function: f3(X
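As referenced above, a minimal sketch of the first two shifted benchmarks is given below; the dimension and the shift vector O are generated here purely for illustration, whereas the actual shift data come from [10].

```python
import numpy as np

D = 30
rng = np.random.default_rng(3)
O = rng.uniform(-80.0, 80.0, D)   # illustrative shift vector; real data come from [10]

def f1_shifted_sphere(x, o=O):
    z = x - o
    return float(np.sum(z ** 2))

def f2_shifted_schwefel_1_2(x, o=O):
    z = x - o
    # sum over i of ( sum of the first i components of z ) squared
    return float(np.sum(np.cumsum(z) ** 2))
```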

Conclusions

Over the past decade, research on and with DE has reached an impressive state. However, the most competitive variants of DE are not as simple as the basic DE family proposed by Storn and Price, and they require considerable programming effort to implement. In a few cases, such DE variants also demand huge storage capacity; for example, on an optimization problem involving 1000 variables, the archive in JADE may become too large to manage with limited memory and computational time. Keeping such

References (36)

  • S. Dasgupta et al.

    On stability and convergence of the population-dynamics in differential evolution

    AI Communications

    (2009)
  • S. García et al.

    A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behavior: a case study on the CEC’2005 special session on real parameter optimization

    Journal of Heuristics

    (2009)
  • J.J. Liang, P.N. Suganthan, K. Deb, Novel composition test functions for numerical global optimization, in: Proceedings...
  • J. Liu et al.

    A fuzzy adaptive differential evolution algorithm

    Soft Computing – A Fusion of Foundations, Methodologies and Applications

    (2005)
  • J. Liu, J. Lampinen, Adaptive parameter control of differential evolution, in: R. Matoušek, P. Ošmera (Eds.),...
  • J. Liu, J. Lampinen, On setting the control parameters of the differential evolution method, in: R. Matoušek, P. Ošmera...
  • E. Mininno et al.

    Estimation distribution differential evolution

    (2010)
  • J.A. Nelder et al.

    A simplex method for function minimization

    Computer Journal

    (1965)