Self-Adaptive Differential Evolution Algorithm in Constrained
Real-Parameter Optimization
Janez Brest, Member, IEEE, Viljem Žumer, Member, IEEE, and Mirjam Sepesy Maučec
Abstract—Differential Evolution (DE) has been shown to be a powerful evolutionary algorithm for global optimization in many real problems. Self-adaptation has been found to be highly beneficial for adjusting control parameters during the evolutionary process, especially when done without any user interaction. In this paper we investigate a self-adaptive differential evolution algorithm where several DE strategies are used and the control parameters $F$ and $CR$ are self-adapted. The performance of the self-adaptive differential evolution algorithm is evaluated on the set of 24 benchmark functions provided for the CEC 2006 special session on constrained real parameter optimization.
I. INTRODUCTION
The general nonlinear programming problem an optimization algorithm is concerned with is to find $\vec{x} = (x_1, x_2, \ldots, x_D)$ so as to optimize $f(\vec{x})$. $D$ is the dimensionality of the function. Domains of the variables are defined by their lower and upper bounds: $x_{j,low} \le x_j \le x_{j,upp}$, $j \in \{1, \ldots, D\}$. The feasible region is defined by a set of $m$ additional constraints ($m \ge 0$):

$$g_i(\vec{x}) \le 0, \quad i = 1, \ldots, q, \qquad h_j(\vec{x}) = 0, \quad j = q + 1, \ldots, m.$$
Differential Evolution (DE) is a floating-point encoding evolutionary algorithm for global optimization over continuous spaces [20], [18], [11], [12], [10], [22]. Although the DE algorithm has been shown to be a simple yet powerful evolutionary algorithm for optimizing continuous functions, users are still faced with the problem of preliminary testing and hand-tuning of the evolutionary parameters prior to commencing the actual optimization process [22]. As a solution, self-adaptation has been found to be highly beneficial in automatically and dynamically adjusting evolutionary parameters such as crossover rates and mutation rates. Self-adaptation allows an evolution strategy to adapt itself to any general class of problems by reconfiguring itself accordingly, and to do this without any user interaction [2], [3], [6]. In the literature, self-adaptation is usually applied to the $F$ and $CR$ control parameters. The efficiency and robustness of the DE algorithm are much more sensitive to the setting of the $F$ and $CR$ control parameters than to the setting of the third DE parameter, the population size $NP$.
Teo in [22] proposed an attempt at self-adapting the population size parameter in addition to self-adapting the crossover and mutation rates.
Janez Brest, Viljem Žumer, and Mirjam Sepesy Maučec are with the Faculty of Electrical Engineering and Computer Science, University of Maribor, Smetanova ul. 17, 2000 Maribor, Slovenia (email: janez.brest, zumer, mirjam.sepesy@uni-mb.si).
The main objective of this paper is a performance evaluation of our self-adaptive jDE-2 [4] algorithm, which uses a self-adapting mechanism on the control parameters $F$ and $CR$. The performance of the algorithm is evaluated on the set of 24 benchmark functions provided for the CEC 2006 special session on constrained real parameter optimization [9].
The article is structured as follows. Section II gives an overview of work related to DE. Section III briefly summarizes differential evolution. Section IV describes constraint handling in differential evolution. In Section V the differential evolution jDE-2 algorithm, which uses self-adaptive adjustment of the control parameters, is described; we conclude that section by proposing some improvements of our self-adaptive jDE algorithm. In Section VI experimental results of our self-adaptive jDE-2 algorithm on the CEC 2006 benchmark functions are presented, and a detailed performance analysis of the algorithm is given. Section VII concludes the paper with some final remarks.
II. WORK RELATED TO THE DIFFERENTIAL EVOLUTION
The DE [20], [19] algorithm was proposed by Storn and Price, and since then it has been used in many practical cases. The original DE was modified, and many new versions have been proposed. Liu and Lampinen [11] reported that the effectiveness, efficiency and robustness of the DE algorithm are sensitive to the settings of the control parameters. The best settings for the control parameters depend on the function and on the requirements for computation time and accuracy. Quite different conclusions have been reported about the rules for choosing the control parameters of DE. In [15] it is stated that the control parameters of DE are not difficult to choose. On the other hand, Gämperle et al. [7] reported that choosing the proper control parameters for DE is more difficult than expected.
Ali and Törn in [1] proposed new versions of the DE algorithm, and also suggested some modifications to the classical DE in order to improve its efficiency and robustness. They introduced an auxiliary population of individuals alongside the original population (in [1] a notation using sets is used: population set-based methods). Next they proposed a rule for calculating the control parameter $F$ automatically.
Sun et al. [21] proposed a combination of the DE algorithm and the estimation of distribution algorithm (EDA), which tries to guide its search towards a promising area by sampling new solutions from a probability model.
Based on experimental results, it has been demonstrated that the DE/EDA algorithm outperforms both the DE and the EDA algorithms.
Qin and Suganthan in [17] proposed the Self-adaptive Differential Evolution algorithm (SaDE), where the choice of learning strategy and the two control parameters $F$ and $CR$ are not required to be pre-defined. During evolution, the suitable learning strategy and parameter settings are gradually self-adapted according to the learning experience.

Brest et al. [4] compared a version of the self-adaptive jDE algorithm with other adaptive and self-adaptive algorithms: FADE [12], DESAP [22], SaDE [17], and jDE [5]. The reported results show that the jDE algorithm performs better than the FADE and DESAP algorithms, and that the self-adaptive jDE-2 algorithm gives results on benchmark functions comparable to the SaDE algorithm, which additionally uses a local search procedure. A comparison of the jDE and jDE-2 algorithms showed that the jDE-2 algorithm, which uses two DE strategies, performed better than the jDE algorithm, which uses only one strategy. The comparative study exposed jDE-2 as a promising algorithm for global optimization.
III. THE DIFFERENTIAL EVOLUTION ALGORITHM

DE creates new candidate solutions by combining the parent individual and several other individuals of the same population. A candidate replaces the parent only if it has a better fitness value. DE has three control parameters: the amplification factor of the difference vector, $F$; the crossover control parameter, $CR$; and the population size, $NP$.
The original DE algorithm keeps all three control parameters fixed during the optimization process. However, there is still a lack of knowledge of how to find reasonably good values for the control parameters of DE for a given function [12].
The population of the original DE algorithm [19], [20], [18] contains $NP$ $D$-dimensional vectors:

$$\vec{x}_{i,G}, \quad i = 1, 2, \ldots, NP,$$

where $G$ denotes the generation. During one generation, for each vector, DE employs the mutation and crossover operations to produce a trial vector:

$$\vec{u}_{i,G}, \quad i = 1, 2, \ldots, NP.$$
Then a selection operation is used to choose vectors for the next generation ($G + 1$).
The initial population is selected uniformly at random between the lower ($x_{j,low}$) and upper ($x_{j,upp}$) bounds defined for each variable. These bounds are specified by the user according to the nature of the problem. After initialization, DE performs several vector transforms (operations), in a process called evolution.
A. Mutation operation

For each population vector $\vec{x}_{i,G}$, mutation creates a mutant vector $\vec{v}_{i,G}$. The mutant vector can be created by using one of the mutation strategies. The most useful strategies are:
"rand/1": $\vec{v}_{i,G} = \vec{x}_{r_1,G} + F \cdot (\vec{x}_{r_2,G} - \vec{x}_{r_3,G})$

"best/1": $\vec{v}_{i,G} = \vec{x}_{best,G} + F \cdot (\vec{x}_{r_1,G} - \vec{x}_{r_2,G})$

"current to best/1": $\vec{v}_{i,G} = \vec{x}_{i,G} + F \cdot (\vec{x}_{best,G} - \vec{x}_{i,G}) + F \cdot (\vec{x}_{r_1,G} - \vec{x}_{r_2,G})$

"best/2": $\vec{v}_{i,G} = \vec{x}_{best,G} + F \cdot (\vec{x}_{r_1,G} - \vec{x}_{r_2,G}) + F \cdot (\vec{x}_{r_3,G} - \vec{x}_{r_4,G})$

"rand/2": $\vec{v}_{i,G} = \vec{x}_{r_1,G} + F \cdot (\vec{x}_{r_2,G} - \vec{x}_{r_3,G}) + F \cdot (\vec{x}_{r_4,G} - \vec{x}_{r_5,G})$
where the indexes $r_1, r_2, r_3, r_4, r_5$ represent random and mutually different integers generated within the range $\{1, 2, \ldots, NP\}$, also different from index $i$. $F$ is a mutation scale factor within the range $[0, 2]$, usually less than $1$. $\vec{x}_{best,G}$ is the best vector in generation $G$.
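As a concrete illustration, the following minimal Python sketch implements the five strategies listed above (the function and parameter names are ours, not from the paper):

```python
import numpy as np

# A sketch of the five mutation strategies above. `pop` is the population
# array of shape (NP, D), `i` the index of the current (target) vector,
# `best` the index of the best vector in the generation, `F` the mutation
# scale factor, and `rng` a numpy random Generator. Assumes NP >= 6.
def mutate(pop, i, best, F, strategy, rng):
    candidates = [k for k in range(len(pop)) if k != i]
    # five random, mutually different indexes, all different from i
    r1, r2, r3, r4, r5 = rng.choice(candidates, size=5, replace=False)
    if strategy == "rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "best/1":
        return pop[best] + F * (pop[r1] - pop[r2])
    if strategy == "current-to-best/1":
        return pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
    if strategy == "best/2":
        return pop[best] + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if strategy == "rand/2":
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    raise ValueError("unknown strategy: " + strategy)
```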
B. Crossover operation

After mutation, a "binary" (binomial) crossover operation forms the final trial vector, according to the $i$-th population vector and its corresponding mutant vector:

$$u_{j,i,G} = \begin{cases} v_{j,i,G} & \text{if } \mathrm{rand}(0,1) \le CR \text{ or } j = j_{rand}, \\ x_{j,i,G} & \text{otherwise,} \end{cases} \qquad j = 1, 2, \ldots, D.$$

$CR$ is a crossover parameter or factor within the range $[0, 1)$ and represents the probability of creating parameters for the trial vector from the mutant vector. The index $j_{rand}$ is a randomly chosen integer within the range $\{1, \ldots, D\}$; it is responsible for the trial vector containing at least one parameter from the mutant vector.
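A matching sketch of this binomial crossover, under the same assumptions and naming as the mutation sketch above:

```python
import numpy as np

# Binomial ("binary") crossover as defined above: each trial parameter is
# taken from the mutant vector v with probability CR; position j_rand is
# always taken from the mutant, so the trial differs from its parent x.
def crossover(x, v, CR, rng):
    D = len(x)
    j_rand = rng.integers(D)          # randomly chosen index in {0, ..., D-1}
    mask = rng.random(D) <= CR        # rand_j(0, 1) <= CR
    mask[j_rand] = True               # at least one parameter from the mutant
    return np.where(mask, v, x)
```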
If the parameters of the trial vector are out of bounds, the solutions proposed in the literature [20], [19], [18], [16] are: reflect them into the bounds, set them on the bounds, or use them as they are (out of bounds).
C. Selection operation

The selection operation chooses, according to the fitness values of the population vector $\vec{x}_{i,G}$ and its corresponding trial vector $\vec{u}_{i,G}$, which vector will survive to be a member of the next generation. For example, for a minimization problem the following selection rule is used:

$$\vec{x}_{i,G+1} = \begin{cases} \vec{u}_{i,G} & \text{if } f(\vec{u}_{i,G}) \le f(\vec{x}_{i,G}), \\ \vec{x}_{i,G} & \text{otherwise.} \end{cases}$$
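Combining the three operations gives the classic DE loop. The following sketch is illustrative only; it reuses the mutate and crossover helpers above and omits the constraint handling and self-adaptation that jDE-2 adds:

```python
import numpy as np

# A bare-bones unconstrained DE loop tying together initialization,
# mutation, crossover and greedy one-to-one selection.
def de_minimize(f, low, upp, NP=50, F=0.5, CR=0.9, gens=1000, seed=1):
    rng = np.random.default_rng(seed)
    D = len(low)
    pop = low + rng.random((NP, D)) * (upp - low)   # uniform random init
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        best = int(np.argmin(fit))
        for i in range(NP):
            v = mutate(pop, i, best, F, "rand/1", rng)
            u = crossover(pop[i], v, CR, rng)
            fu = f(u)
            if fu <= fit[i]:                        # selection rule above
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

For example, de_minimize(lambda x: float(np.sum(x ** 2)), np.full(10, -5.0), np.full(10, 5.0)) minimizes the 10-dimensional sphere function over $[-5, 5]^{10}$.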
IV. CONSTRAINT-HANDLING
During the last few years several methods have been proposed for handling constraints by genetic algorithms for parameter optimization problems. These methods were grouped by Michalewicz et al. [14], [8] into four categories:
• methods based on preserving the feasibility of solutions. The idea behind the method is based on specialized operators which transform feasible individuals into feasible individuals. The method assumes linear constraints only and a feasible starting point or a feasible initial population.
• methods based on penalty functions. Many evolutionary algorithms incorporate a constraint-handling method based on the concept of exterior penalty functions, which penalize infeasible solutions. These methods differ in important details: how the penalty function is designed and applied to infeasible solutions.
• methods which make a clear distinction between feasible and infeasible solutions. There are a few methods which emphasize the distinction between feasible and infeasible solutions in the search space. One of these methods distinguishes between feasible and infeasible individuals: for any feasible individual $\vec{x}$ and any infeasible individual $\vec{y}$, $f(\vec{x}) < f(\vec{y})$, i.e., any feasible solution is better than any infeasible one.
• other hybrid methods. These methods combine evolutionary computation techniques with deterministic procedures for numerical optimization problems.
No special extensions of the DE algorithm are necessary to make it suitable for handling constraints [23]. Most constraint problems can be handled by the penalty method. A measure of the constraint violation is often useful when handling constraints. A solution $\vec{x}$ is regarded as feasible if

$$g_i(\vec{x}) \le 0, \quad i = 1, \ldots, q, \qquad |h_j(\vec{x})| - \epsilon \le 0, \quad j = q + 1, \ldots, m,$$

where equality constraints are transformed into inequalities. In the CEC 2006 [9] special session $\epsilon$ is set to $0.0001$. The mean violation $\bar{v}$ is defined as:

$$\bar{v} = \frac{\sum_{i=1}^{q} \max(0, g_i(\vec{x})) + \sum_{j=q+1}^{m} \max(0, |h_j(\vec{x})| - \epsilon)}{m}.$$

The sum of all constraint violations is zero for feasible solutions and positive when at least one constraint is violated. An obvious application of the constraint violation is to use it to guide the search towards feasible areas of the search space. There was quite a lot of work on such ideas and other constraint techniques in the EA community during the 1990s. A summary of these techniques can be found in Michalewicz's and Fogel's book [13], which also contains information on many other stochastic techniques.
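As an illustration, the violation measure can be sketched as follows; eps = 0.0001 matches the CEC 2006 setting, while the function names and the list-of-callables interface are our assumptions:

```python
# Mean constraint violation in the spirit of the CEC 2006 definition.
# `ineqs` and `eqs` are lists of callables g_i(x) and h_j(x).
def mean_violation(x, ineqs, eqs, eps=1e-4):
    g = [max(0.0, gi(x)) for gi in ineqs]            # amount by which g_i > 0
    h = [max(0.0, abs(hj(x)) - eps) for hj in eqs]   # |h_j| beyond tolerance
    m = len(g) + len(h)
    return (sum(g) + sum(h)) / m if m else 0.0

# A solution is feasible exactly when its mean violation is zero.
def is_feasible(x, ineqs, eqs, eps=1e-4):
    return mean_violation(x, ineqs, eqs, eps) == 0.0
```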
V. THE SELF-ADAPTIVE DIFFERENTIAL EVOLUTION ALGORITHMS

Let us first describe the self-adaptive jDE algorithm [5], [4]. It uses a self-adapting mechanism on the control parameters $F$ and $CR$.
In [5] the self-adapting control parameter mechanism was applied to the "rand/1/bin" strategy. This strategy is the one most often used in practice [20], [21], [7], [11].
In [5] a self-adaptive control mechanism was used to change the control parameters $F$ and $CR$ during the run. The third control parameter, $NP$, was not changed during the run. Each individual in the population was extended with its own parameter values. The control parameters that were adjusted by means of evolution are $F$ and $CR$ (see Figure 1). Both of them were applied at the individual level. Better values of these (encoded) control parameters lead to better individuals which, in turn, are more likely to survive and produce offspring and, hence, propagate these better parameter values.
Fig. 1. Self-adapting: encoding aspect.
The new control parameters $F_{i,G+1}$ and $CR_{i,G+1}$ were calculated as follows:

$$F_{i,G+1} = \begin{cases} F_l + \mathrm{rand}_1 \cdot F_u & \text{if } \mathrm{rand}_2 < \tau_1, \\ F_{i,G} & \text{otherwise,} \end{cases}$$

$$CR_{i,G+1} = \begin{cases} \mathrm{rand}_3 & \text{if } \mathrm{rand}_4 < \tau_2, \\ CR_{i,G} & \text{otherwise,} \end{cases}$$

and they produce control parameters $F$ and $CR$ in a new parent vector. $\mathrm{rand}_j$, $j \in \{1, 2, 3, 4\}$, are uniform random values within the range $[0, 1]$. $\tau_1$ and $\tau_2$ represent the probabilities to adjust the control parameters $F$ and $CR$, respectively. $\tau_1$, $\tau_2$, $F_l$, $F_u$ were taken as fixed values $0.1$, $0.1$, $0.1$, $0.9$, respectively. The new $F$ takes a value from $[0.1, 1.0]$, and the new $CR$ from $[0, 1]$, in a random manner. $F_{i,G+1}$ and $CR_{i,G+1}$ are obtained before the mutation is performed, so they influence the mutation, crossover and selection operations of the new vector $\vec{x}_{i,G+1}$.
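This update rule translates directly into code. The sketch below assumes the fixed values τ1 = τ2 = 0.1, F_l = 0.1 and F_u = 0.9 stated above:

```python
# jDE-style self-adaptation of the encoded F and CR. `rng` is a numpy
# random Generator; the update happens before mutation, so the new values
# already influence the mutation, crossover and selection of this vector.
def self_adapt(F_old, CR_old, rng, tau1=0.1, tau2=0.1, F_l=0.1, F_u=0.9):
    F_new = F_l + rng.random() * F_u if rng.random() < tau1 else F_old
    CR_new = rng.random() if rng.random() < tau2 else CR_old
    return F_new, CR_new    # new F in [0.1, 1.0], new CR in [0, 1]
```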
Fig. 2. Self-adapting: encoding aspect of two strategies.

Some ideas on how to improve the jDE algorithm are reported in [4]. In the rest of this section we will outline them.
To keep solutions of bound-constrained problems feasible, trial parameters that violate boundary constraints are set to bound values by the jDE algorithm [5].
TABLE I
ERROR VALUES ACHIEVED WHEN FES = 5·10^3, 5·10^4, 5·10^5 FOR PROBLEMS 1-6.
(c: number of constraints violated by more than 1.0, 0.01 and 0.0001, respectively; v̄: mean constraint violation.)

FES = 5·10^3    g01            g02            g03            g04            g05            g06
  Best          2.0505(0)      0.4070(0)      0.7406(0)      20.9966(0)     15.1226(7)     60.7180(0)
  Median        5.1047(0)      0.5030(0)      0.9976(0)      63.3410(0)     201.3292(8)    251.4440(0)
  Worst         7.2914(0)      0.5411(0)      0.9932(0)      161.2217(0)    976.1121(9)    831.1240(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0          2,3,3          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000         1.0834         0.0000
  Mean          5.3034         4.9680e-01     9.4358e-01     6.3542e+01     3.0997e+02     2.8087e+02
  Std           1.0670         3.0675e-02     7.4900e-02     2.8478e+01     3.3878e+02     1.7917e+02

FES = 5·10^4
  Best          4.7410e-05(0)  0.0262(0)      0.4738(0)      6.5150e-07(0)  9.0323(0)      2.3646e-11(0)
  Median        0.0001(0)      0.0834(0)      0.7231(0)      4.6348e-06(0)  155.0791(0)    2.3101e-10(0)
  Worst         0.0005(0)      0.1560(0)      0.9155(0)      1.4717e-05(0)  5.4202(2)      1.0268e-09(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0          0,0,0          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000         0.0000         0.0000
  Mean          1.2610e-04     8.5485e-02     7.1313e-01     5.2805e-06     1.8287e+02     2.7077e-10
  Std           9.3563e-05     2.8747e-02     1.3929e-01     3.0574e-06     1.3900e+02     2.0464e-10

FES = 5·10^5
  Best          0(0)           7.7521e-11(0)  0.0890(0)      0(0)           0(0)           1.1823e-11(0)
  Median        0(0)           3.3051e-09(0)  0.3481(0)      0(0)           0(0)           1.1823e-11(0)
  Worst         0(0)           0.0110(0)      0.5117(0)      0(0)           9.0697(0)      1.1823e-11(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0          0,0,0          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000         0.0000         0.0000
  Mean          0.0000         8.8089e-04     3.3808e-01     0.0000         1.0756         1.1823e-11
  Std           0.0000         3.0488e-03     1.0140e-01     0.0000         2.4193         0.0000
TABLE II
ERROR VALUES ACHIEVED WHEN FES = 5·10^3, 5·10^4, 5·10^5 FOR PROBLEMS 7-12.

FES = 5·10^3    g07            g08            g09            g10             g11            g12
  Best          47.0147(0)     6.5370e-08(0)  13.3137(0)     2711.5904(0)    0.0022(0)      2.5178e-05(0)
  Median        92.1652(0)     2.5660e-06(0)  43.1004(0)     7322.9075(0)    0.1346(0)      0.0002(0)
  Worst         170.4361(0)    1.7141e-05(0)  67.6963(0)     13759.8095(0)   0.2501(0)      0.0081(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0           0,0,0          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000          0.0000         0.0000
  Mean          9.2682e+01     4.8370e-06     4.1988e+01     7.8258e+03      1.3221e-01     1.0618e-03
  Std           3.4874e+01     5.1240e-06     1.2917e+01     2.7236e+03      8.8471e-02     2.2902e-03

FES = 5·10^4
  Best          0.0454(0)      4.1633e-17(0)  9.7979e-05(0)  6.6643(0)       0(0)           0(0)
  Median        0.0810(0)      5.5511e-17(0)  0.0002(0)      14.5927(0)      2.6639e-11(0)  0(0)
  Worst         0.1195(0)      5.5511e-17(0)  0.0004(0)      79.4762(0)      0.1004(0)      0(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0           0,0,0          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000          0.0000         0.0000
  Mean          8.0109e-02     4.8850e-17     2.8265e-04     1.7685e+01      4.1403e-03     0.0000
  Std           2.0492e-02     7.0763e-18     9.2069e-05     1.3857e+01      2.0074e-02     0.0000

FES = 5·10^5
  Best          -1.8829e-13(0) 4.1633e-17(0)  1.1368e-13(0)  -1.8189e-12(0)  0(0)           0(0)
  Median        -1.8829e-13(0) 4.1633e-17(0)  2.2737e-13(0)  -9.0949e-13(0)  0(0)           0(0)
  Worst         -1.8474e-13(0) 4.1633e-17(0)  2.2737e-13(0)  4.5274e-07(0)   0.0022(0)      0(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0           0,0,0          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000          0.0000         0.0000
  Mean          -1.8701e-13    4.1633e-17     2.0918e-13     2.1288e-08      9.1079e-05     0.0000
  Std           1.7405e-15     1.2580e-32     4.2538e-14     9.1279e-08      4.5540e-04     0.0000
Rönkkönen, Kukkonen and Price [18] suggest that parameters violating the boundary constraints should instead be reflected back from the bound by the amount of violation:

$$u_{j,i,G+1} = \begin{cases} 2 \cdot x_{j,low} - u_{j,i,G+1} & \text{if } u_{j,i,G+1} < x_{j,low}, \\ 2 \cdot x_{j,upp} - u_{j,i,G+1} & \text{if } u_{j,i,G+1} > x_{j,upp}. \end{cases}$$
The jDE-2 [4] algorithm uses both solutions (setting on the bound and reflecting back) for parameters that violate the boundary constraints, choosing between them with equal probability in a random manner.
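A minimal sketch of this combined rule; the 50/50 choice follows the "equal probability" description above, while the final clip is our own safeguard for reflections that overshoot the opposite bound:

```python
import numpy as np

# jDE-2 boundary handling as described above: a violating trial parameter
# is, with equal probability, either set onto the bound or reflected back
# from it by the amount of violation.
def repair(u, low, upp, rng):
    u = u.copy()
    for j in range(len(u)):
        if u[j] < low[j]:
            u[j] = low[j] if rng.random() < 0.5 else 2.0 * low[j] - u[j]
        elif u[j] > upp[j]:
            u[j] = upp[j] if rng.random() < 0.5 else 2.0 * upp[j] - u[j]
    return np.clip(u, low, upp)
```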
The "rand/1/bin" strategy is used in the jDE algorithm, and the control parameters $F$ and $CR$ are encoded in each individual.
TABLE III
ERROR VALUES ACHIEVED WHEN FES = 5·10^3, 5·10^4, 5·10^5 FOR PROBLEMS 13-18.

FES = 5·10^3    g13            g14            g15            g16            g17            g18
  Best          0.9279(5)      5.7028(6)      1.2410(2)      0.0800(0)      133.6529(11)   0.6759(0)
  Median        0.9472(6)      4.2986(6)      0.3196(4)      0.1233(0)      103.6969(12)   0.8548(4)
  Worst         0.6896(5)      -10.0665(6)    0.2469(4)      0.2275(0)      77.6164(12)    1.1177(13)
  c             0,3,3          0,3,3          0,2,2          0,0,0          4,4,4          0,2,2
  v̄            0.1420         0.1655         0.0431         0.0000         2.5360         0.0236
  Mean          1.0686         2.7371         2.2858         1.3856e-01     1.1166e+02     8.3581e-01
  Std           9.4822e-01     5.4970         2.1405         4.3618e-02     7.3462e+01     1.8863e-01

FES = 5·10^4
  Best          0.8635(0)      0.2289(0)      0.0001(0)      7.6055e-07(0)  30.9149(0)     0.0020(0)
  Median        0.9450(2)      1.0538(0)      0.8793(0)      1.6831e-06(0)  330.3851(1)    0.0043(0)
  Worst         0.9458(4)      2.1280(0)      4.3730(0)      3.6573e-06(0)  27.2941(4)     0.0189(0)
  c             0,0,2          0,0,0          0,0,0          0,0,0          0,0,1          0,0,0
  v̄            0.0006         0.0000         0.0000         0.0000         0.0002         0.0000
  Mean          9.3799e-01     1.1351         1.2074         1.8758e-06     1.0019e+02     5.2779e-03
  Std           6.8257e-02     5.1352e-01     1.2471         6.9733e-07     8.4818e+01     3.2909e-03

FES = 5·10^5
  Best          0.2970(0)      1.4210e-14(0)  0(0)           5.1070e-15(0)  1.2732e-11(0)  3.3306e-16(0)
  Median        0.6800(0)      2.1316e-14(0)  0(0)           5.1070e-15(0)  10.4896(0)     4.4408e-16(0)
  Worst         0.9449(0)      2.1316e-14(0)  0.1100(0)      5.1070e-15(0)  86.7535(0)     4.4408e-16(0)
  c             0,0,0          0,0,0          0,0,0          0,0,0          0,0,0          0,0,0
  v̄            0.0000         0.0000         0.0000         0.0000         0.0000         0.0000
  Mean          6.9148e-01     1.8758e-14     4.4040e-03     5.1070e-15     3.9906e+01     4.3077e-16
  Std           2.2376e-01     3.4809e-15     2.2020e-02     0.0000         3.8319e+01     3.6822e-17
TABLE IV
ERROR VALUES ACHIEVED WHEN FES = 5·10^3, 5·10^4, 5·10^5 FOR PROBLEMS 19-24.

FES = 5·10^3    g19            g20            g21            g22             g23            g24
  Best          197.9016(0)    3.1215(38)     396.3713(8)    8137.0973(57)   400.0551(0)    0.0006(0)
  Median        294.2522(0)    5.1358(40)     431.0758(7)    8756.7611(55)   -249.9385(10)  0.0066(0)
  Worst         537.2860(0)    8.1603(42)     317.5561(9)    19763.5690(51)  -411.0136(16)  0.0142(0)
  c             0,0,0          2,18,20        0,3,4          17,19,19        0,5,5          0,0,0
  v̄            0.0000         2.0017         0.2702         2429365.3684    0.3974         0.0000
  Mean          3.1332e+02     4.3551         4.7242e+02     1.3564e+04      4.4412e+01     6.6979e-03
  Std           7.0448e+01     1.3059         1.5794e+02     5.0437e+03      3.4592e+02     3.3507e-03

FES = 5·10^4
  Best          0.6191(0)      0.0795(21)     0.0498(0)      6300.6825(58)   223.7870(0)    5.5067e-14(0)
  Median        1.1097(0)      0.0792(24)     0.0823(0)      14782.0213(57)  475.3514(0)    5.5067e-14(0)
  Worst         2.6947(0)      0.0491(26)     582.6489(0)    5627.0472(57)   422.4294(1)    6.0396e-14(0)
  c             0,0,0          0,4,20         0,0,0          19,19,19        0,0,0          0,0,0
  v̄            0.0000         0.0203         0.0000         30001.6677      0.0000         0.0000
  Mean          1.2044         7.9528e-02     2.8678e+01     1.0550e+04      4.4242e+02     5.5778e-14
  Std           4.4409e-01     1.7970e-02     1.1840e+02     4.4761e+03      1.0704e+02     1.4950e-15

FES = 5·10^5
  Best          2.8421e-14(0)  0.0506(0)      -1.7053e-13(0) 9151.6956(8)    0(0)           5.5067e-14(0)
  Median        4.2632e-14(0)  0.1082(2)      -2.8421e-14(0) 8033.6537(8)    2.2737e-13(0)  5.5067e-14(0)
  Worst         5.4711e-13(0)  0.1068(5)      130.9783(0)    19337.2039(16)  300.0085(0)    5.5067e-14(0)
  c             0,0,0          0,1,1          0,0,0          2,3,3           0,0,0          0,0,0
  v̄            0.0000         0.0072         0.0000         3.5107          0.0000         0.0000
  Mean          7.5033e-14     1.0591e-01     1.0478e+01     9.6221e+03      1.2102e+01     5.5067e-14
  Std           1.0531e-13     1.1510e-02     3.6266e+01     5.1748e+03      5.9983e+01     1.9323e-29
In [17] the authors proposed the self-adapting SaDE algorithm, which uses two of the five original DE strategies, applied to individuals in the current population.
Figure 2 shows a new solution in which the control parameters of two original DE strategies are encoded in each individual. Each strategy uses its own control parameters. Extending the algorithm to apply more strategies is straightforward.
In the experiments in this paper, the proposed jDE-2 algorithm uses three strategies: "rand/1/bin", "current to best/1/bin" and "rand/2/bin". The first pair of self-adaptive control parameters $F$ and $CR$ belongs to the "rand/1/bin" strategy, the second pair belongs to the "current to best/1/bin" strategy, etc. The population size $NP$ was set to 200.
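Following Figure 2, the per-individual encoding with one (F, CR) pair per strategy might look as follows; this dictionary layout is our illustration, not the paper's data structure:

```python
import numpy as np

# One possible per-individual encoding in the spirit of Fig. 2: every
# individual carries its own (F, CR) pair for each of the strategies used.
STRATEGIES = ("rand/1", "current-to-best/1", "rand/2")

def init_individual(low, upp, rng):
    D = len(low)
    return {
        "x": low + rng.random(D) * (upp - low),           # solution vector
        "F": [0.1 + rng.random() * 0.9 for _ in STRATEGIES],
        "CR": [rng.random() for _ in STRATEGIES],         # one pair per strategy
    }
```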
The jDE-2 algorithm replaces the worst individuals at every $l$-th generation with parameter values distributed uniformly at random within the search space.