Preliminary study on the optimization design method of large-scale engineering based on genetic algorithm


Chapter 7: A Preliminary Study on Optimization Design Methods for Large-Scale Engineering Based on the Genetic Algorithm

§7.1 Overview

The previous chapter studied the single-objective genetic algorithm in more depth, made several improvements and proposed an improved genetic algorithm, and studied the combination of the genetic algorithm with several traditional optimization algorithms (including the improved MDOD method, the constrained complex method, and the constrained variable-metric method), achieving good results.

However, if the problem to be solved is a large-scale optimization problem, these improvements are far from enough. When mathematical programming algorithms are used to solve large-scale optimization problems, the main difficulties are the following. (1) The computational cost is very high. Taking the commonly used central-difference scheme as an example, if the dimension of the problem is n, computing one gradient requires 2n analyses of the model; for complex structural finite-element analyses and CFD computations, a single complete analysis is already time-consuming. As the dimension increases, the required computational cost may exceed our capacity; approximate sensitivity-analysis algorithms based on the structural stiffness equations have therefore appeared, but they are still far from a complete solution. In addition, programming methods usually employ one-dimensional (line) searches; even when an efficient line-search method is used, if for some reason (for example, to maintain accuracy) many iterations are required, the total cost of the line searches is also considerable. (2) As the problem dimension increases, the storage cost of some of the more efficient algorithms also becomes large. Although measures can be taken to save memory, the memory overhead significantly increases the computing time of a conventionally configured computer, because current computers emulate physical memory with paging files on the hard disk; even if the system grants a request for a large amount of memory, a heavy price is paid in speed. (3) As the dimension increases, the computational error also grows, which is a serious problem for optimization algorithms that suffer from error accumulation.
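To make the cost estimate in point (1) concrete, here is a small sketch (not from the original text) of a central-difference gradient; the objective `f` is a placeholder standing in for one complete structural or CFD analysis:

```python
import numpy as np

def central_difference_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x by central differences.

    Each component costs two evaluations of f, so the full gradient
    costs 2 * len(x) analyses of the model.
    """
    n = len(x)
    grad = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad

# For a 200-variable problem (such as the 200-bar truss later in this chapter),
# a single gradient already requires 2 * 200 = 400 full structural analyses.
```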

At present, some new algorithms aimed at large-scale optimization have appeared, and the genetic algorithm is one branch of this work, but research is still at an initial stage. Because the search becomes much more difficult as the dimension increases, the ordinary genetic algorithm does not solve such problems well. This chapter therefore presents some preliminary work on applying the genetic algorithm to large-scale numerical optimization problems.

§7.2 A New Steady-State Genetic Algorithm Based on Real-Number Coding

§7.2.1 Introduction to Real-Number Coding

Binary coding has some advantages in the genetic algorithm:

(1) Binary coding is similar to the composition of biological chromosomes, so the algorithm is easy to explain with genetic theory, and genetic operations such as crossover and mutation are easy to implement;

(2) Binary coding fits the bit-string processing mode of the classical algorithm.

However, if a genetic algorithm with binary encoding is used to solve continuous-variable optimization problems, it has the following shortcomings:

(1) The binary codes of adjacent integers may have a large Hamming distance; for example, 15 and 16 are represented as 01111 and 10000, so to improve a solution from 15 to 16 the algorithm must change all five bits. This defect reduces the convergence efficiency of the genetic operators and is sometimes referred to as the "Hamming cliff" (a small sketch after this list illustrates it);

(2) With binary encoding, the string length is usually determined from a prescribed solution accuracy, and once the accuracy is fixed it is difficult to adjust during the execution of the algorithm, so the algorithm lacks a fine-tuning capability. If a higher precision is chosen at the start, the string becomes very long, which also reduces the efficiency of the algorithm;

(3) When solving high-dimensional optimization problems, the binary coding string becomes very long, which makes the search efficiency of the algorithm very low.
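A small sketch (an illustration added here, not part of the original) of the Hamming cliff between 15 and 16:

```python
def hamming_distance(a: int, b: int, bits: int = 5) -> int:
    """Number of differing bits in the fixed-width binary codes of a and b."""
    return bin((a ^ b) & ((1 << bits) - 1)).count("1")

print(format(15, "05b"), format(16, "05b"))   # 01111 10000
print(hamming_distance(15, 16))               # 5 -- every bit must change
```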

In order to overcome these disadvantages of binary coding, when the variables of the problem are real-valued, real-number encoding can be adopted directly. This also makes it easy to introduce heuristic information related to the problem domain and thereby increase the search capability of the genetic algorithm.

§7.2.2 Some Common Genetic Operators for Real-Number Coding

For real-number coding, in theory all the genetic operators used under binary encoding can also be applied, but in practice other genetic operators are introduced to suit the characteristics of real coding.

Some common crossover operators for real-number coding are introduced below. Assume X = (x1, x2, …, xn) and Y = (y1, y2, …, yn) are two parent solution vectors, and X', Y' are the two offspring obtained by crossover.

(1) Discrete Crossover

Discrete crossover is divided into partial discrete crossover and whole discrete crossover. Partial discrete crossover selects part of the variables (for example, all components after some position k) and exchanges them between the parents, so the two offspring are

X' = (x1, …, xk, y(k+1), …, yn),          (7-1)
Y' = (y1, …, yk, x(k+1), …, xn).          (7-2)

Whole discrete crossover exchanges each pair of corresponding components of X and Y with a certain probability, which is somewhat like uniform crossover under binary encoding. It can also be implemented by generating a 0-1 template: where a template bit is 1, the corresponding components are exchanged; otherwise they are kept.

(2) Arithmetic Crossover (Arithmetical Crossover)

Arithmetic crossover is also divided into partial arithmetic crossover and whole arithmetic crossover.

Partial arithmetic crossover likewise operates on part of the components of the parent solutions, for example all components after the k-th one. For these components, random numbers a_i in [0, 1] are generated, and the two offspring are defined as

x'_i = a_i · x_i + (1 − a_i) · y_i,   i = k+1, …, n,          (7-3)
y'_i = a_i · y_i + (1 − a_i) · x_i,   i = k+1, …, n,          (7-4)

while the first k components are copied unchanged. Of course, a single random number a can also be used for all the exchanged components.

Whole arithmetic crossover generates n random numbers in the interval [0, 1] and combines all the components of the two parents in the same way as partial arithmetic crossover.

From a geometric point of view, the two parents X and Y define a hypercube: the offspring produced by discrete crossover are vertices of this hypercube, while the offspring of arithmetic crossover lie in its interior. From this point of view, the search range of arithmetic crossover is larger than that of discrete crossover. Under real-number coding, however, the role of the crossover operator is far smaller than under binary encoding, so the choice of crossover operator has little effect on the performance of the algorithm.
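As an illustration of the crossover operators above, here is a minimal sketch assuming the parents are NumPy vectors; the function names are chosen for this sketch and are not the author's:

```python
import numpy as np

rng = np.random.default_rng()

def partial_discrete_crossover(x, y, k):
    """Exchange all components after position k, as in (7-1)/(7-2)."""
    x2, y2 = x.copy(), y.copy()
    x2[k:], y2[k:] = y[k:], x[k:]
    return x2, y2

def whole_discrete_crossover(x, y, p=0.5):
    """Exchange each component independently with probability p (template-based)."""
    mask = rng.random(len(x)) < p
    x2, y2 = x.copy(), y.copy()
    x2[mask], y2[mask] = y[mask], x[mask]
    return x2, y2

def whole_arithmetic_crossover(x, y):
    """Combine all components with random weights in [0, 1], as in (7-3)/(7-4)."""
    a = rng.random(len(x))
    return a * x + (1.0 - a) * y, a * y + (1.0 - a) * x
```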

The following describes several mutation operators for real-coded genetic algorithms. Under real-number encoding, the mutation operator no longer merely compensates for the loss of diversity in the population, as it does under binary encoding; it becomes a major search operator. For this reason, many mutation operators have been designed.

(1) Non-Uniform Mutation

In the traditional genetic algorithm, the effect of the mutation operator is not directly related to the generation number. Therefore, after the algorithm has evolved for a certain number of generations, the traditional mutation operator finds it difficult to obtain better solutions because it lacks local search capability. For this reason, Z. Michalewicz first linked the result of the mutation operator to the evolutionary generation, making the mutation range relatively large at the beginning and progressively smaller as evolution advances, so that the operator has a fine-tuning effect on the evolving system. It is described as follows:

Assume X = (x1, …, xn) is a parent solution and the component x_k is selected for mutation, with definition interval [a_k, b_k]. The solution after mutation is X' = (x1, …, x'_k, …, xn), where

x'_k = x_k + Δ(t, b_k − x_k)   if a random binary digit is 0,
x'_k = x_k − Δ(t, x_k − a_k)   if the digit is 1.          (7-5)

Note: in the above formula, t is the current generation and Δ(t, y) is a function that returns a value in [0, y] by means of a uniformly distributed random number. The function Δ can be taken as

Δ(t, y) = y · (1 − r^((1 − t/T)^b)),          (7-6)

where r is a uniformly distributed random number in [0, 1], T is the maximum number of generations, and b is a parameter that determines the degree of non-uniformity and thus adjusts the local search region; it is typically taken as 2 to 5.
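A minimal sketch of the non-uniform mutation of (7-5)/(7-6), assuming the current generation t, the maximum generation T, and the non-uniformity parameter b are supplied by the caller (x, lower, upper are NumPy arrays):

```python
import numpy as np

rng = np.random.default_rng()

def nonuniform_mutation(x, lower, upper, t, T, b=3.0):
    """Mutate one randomly chosen component of x as in eqs. (7-5)/(7-6).

    The perturbation Delta(t, y) = y * (1 - r**((1 - t/T)**b)) shrinks as the
    generation t approaches the maximum generation T, giving a fine-tuning effect.
    """
    def delta(y):
        r = rng.random()
        return y * (1.0 - r ** ((1.0 - t / T) ** b))

    k = rng.integers(len(x))
    x2 = x.copy()
    if rng.random() < 0.5:
        x2[k] = x[k] + delta(upper[k] - x[k])
    else:
        x2[k] = x[k] - delta(x[k] - lower[k])
    return x2
```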

(2) Adaptive Mutation

Although non-uniform mutation enhances the local search capability of the algorithm, the scope of the local search is related only to the evolutionary generation and is independent of the quality of the solution; that is, good and poor solutions are searched over the same range. A more reasonable scheme would let individuals with large fitness values search within a smaller range and individuals with small fitness values search within a larger range. Based on this understanding, reference [37] proposed the idea of adaptive mutation.

First, the mutation temperature of a solution is defined as follows. Assume X is a vector in the solution space, f(X) is its fitness value, and f_max is the maximum fitness value of the problem; the mutation temperature can then be defined as

T = 1 − f(X) / f_max.          (7-7)

For many problems f_max is difficult to determine, so a rough upper bound can be used instead; generally the maximum fitness value in the current population is taken as f_max.

Once the mutation temperature is introduced, the mutation itself proceeds in the same way as the non-uniform mutation operator, but the function Δ is now expressed in terms of the mutation temperature T instead of the generation counter (equation (7-8)). Reference [37] argues that the mutation operator defined in this way protects better solutions by searching them within a smaller range, while poorer solutions are searched over a larger range.

However, there are some problems with this assumption; one of its direct consequences is that the algorithm relies on the idea of a bounded (finite) fitness.
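The following sketch illustrates the mutation-temperature idea of (7-7). Since the exact form of (7-8) is not recoverable from this text, the sketch simply substitutes the mutation temperature for the generation-dependent factor of (7-6); that substitution is an assumption, not the formula of reference [37]:

```python
import numpy as np

rng = np.random.default_rng()

def mutation_temperature(fitness, f_max):
    """Eq. (7-7): T = 1 - f / f_max, with f_max taken as the current population's best."""
    return 1.0 - fitness / f_max

def adaptive_mutation(x, lower, upper, fitness, f_max, b=3.0):
    """Adaptive mutation sketch: high-fitness individuals get a small search range.

    Assumption: Delta(T, y) = y * (1 - r**(T**b)), i.e. the mutation temperature T
    takes the place of (1 - t/T) in the non-uniform operator (7-6).
    Assumes 0 <= fitness <= f_max.
    """
    T = mutation_temperature(fitness, f_max)
    def delta(y):
        return y * (1.0 - rng.random() ** (T ** b))

    k = rng.integers(len(x))
    x2 = x.copy()
    if rng.random() < 0.5:
        x2[k] = x[k] + delta(upper[k] - x[k])
    else:
        x2[k] = x[k] - delta(x[k] - lower[k])
    return x2
```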

(3) Normal (Gaussian) Mutation

Normal mutation operators are also common in real-coded genetic algorithms. They are introduced as follows:

In evolution strategies, an individual of the population consists of a solution vector X and a perturbation vector σ. The perturbation vector controls the mutation of the solution and is itself continually adapted. If (X, σ) is selected for mutation, the new individual (X', σ') is generated by

σ'_i = σ_i · exp(τ' · N(0,1) + τ · N_i(0,1)),          (7-9)
x'_i = x_i + σ'_i · N_i(0,1),                           (7-10)

where τ and τ' are two step-size control parameters, and N(0,1), N_i(0,1) denote independent normally distributed random numbers with mean 0 and variance 1. Another principle for adjusting σ is Rechenberg's 1/5 success rule: the proportion of successful mutations should be about 1/5; if the actual success ratio is greater than 1/5, σ is increased, otherwise it is decreased.
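A sketch of this kind of self-adaptive normal mutation and of the 1/5 rule; the default values of the step-size control parameters and of the adjustment factor are common textbook choices, not values taken from this chapter:

```python
import numpy as np

rng = np.random.default_rng()

def es_normal_mutation(x, sigma, tau=None, tau_prime=None):
    """Log-normal self-adaptation of the step sizes, then Gaussian mutation of x.

    sigma is the perturbation (step-size) vector carried with the individual;
    tau and tau_prime are the two step-size control parameters of (7-9).
    """
    n = len(x)
    if tau is None:
        tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    if tau_prime is None:
        tau_prime = 1.0 / np.sqrt(2.0 * n)
    common = tau_prime * rng.standard_normal()        # one N(0,1) shared by all components
    sigma_new = sigma * np.exp(common + tau * rng.standard_normal(n))
    x_new = x + sigma_new * rng.standard_normal(n)    # eq. (7-10)
    return x_new, sigma_new

def one_fifth_rule(sigma, success_ratio, c=0.85):
    """Rechenberg's 1/5 rule: enlarge sigma if more than 1/5 of mutations succeed."""
    if success_ratio > 0.2:
        return sigma / c
    if success_ratio < 0.2:
        return sigma * c
    return sigma
```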


The flow of the new steady-state genetic algorithm is as follows:

for (k = 0; k < … ; k++)
{
    select two parents by roulette-wheel selection;
    generate a random number R in [0, 1];
    if (R < p_r)    /* p_r is the probability of reproduction (copy) */
    {
        perform reproduction and mutation: insert the two parents unchanged into the new population, then apply a normal mutation to each of them separately, using the step-size control operator described earlier;
    }
    else
    {
        perform crossover and mutation: apply whole arithmetic crossover to the two parents, insert the two offspring into the new population, and then apply mutation in the same way as above;
    }
}
if ( … ) then
{
    update the best individual and the reference population;
}

If the current generation is the first one, a best individual may not be obtained (i.e., all individuals in the search population are infeasible). In this case, a feasible individual is chosen from the reference population as the best individual, and it randomly replaces one individual in the search population.

It can be seen from the algorithm flow that the search population and the reference population are handled by the algorithm in different ways.

Unlike the penalty-function approach, the feasible-solution search method uses only the objective function as the fitness measure, without adding extra terms, and it always returns a feasible solution. It performs the search through feasible points, and the algorithm is designed so that the neighborhood of the reference points has the greatest chance of being "probed".
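To make the overall flow concrete, the following sketch shows one generation of such a steady-state scheme. The helper functions (`fitness`, `is_feasible`, `crossover`, `mutate`) and the fitness-proportional selection details are assumptions based on the description above, not the author's implementation:

```python
import numpy as np

rng = np.random.default_rng()

def one_generation(search_pop, ref_pop, fitness, is_feasible,
                   p_copy, crossover, mutate):
    """One generation of the steady-state real-coded GA sketched above.

    search_pop : list of design vectors (NumPy arrays)
    ref_pop    : list of feasible reference individuals
    fitness    : callable returning a positive fitness value (larger is better)
    is_feasible: callable checking the constraints
    p_copy     : probability of reproduction (copy) instead of crossover
    crossover, mutate : operators such as those sketched earlier in this section
    """
    f = np.array([fitness(ind) for ind in search_pop])
    p = f / f.sum()                       # roulette-wheel (fitness-proportional) probabilities
    new_pop = []
    while len(new_pop) < len(search_pop):
        i, j = rng.choice(len(search_pop), size=2, p=p)
        a, b = search_pop[i], search_pop[j]
        if rng.random() < p_copy:
            # reproduction + mutation: parents enter unchanged, then are mutated
            children = [mutate(a.copy()), mutate(b.copy())]
        else:
            # crossover + mutation: whole arithmetic crossover, then mutation
            c1, c2 = crossover(a, b)
            children = [mutate(c1), mutate(c2)]
        new_pop.extend(children)

    # best individual: if no feasible one exists (e.g. in the first generation),
    # borrow a feasible individual from the reference population and let it
    # randomly replace one member of the search population
    feasible = [ind for ind in new_pop if is_feasible(ind)]
    if feasible:
        best = max(feasible, key=fitness)
    else:
        best = ref_pop[rng.integers(len(ref_pop))].copy()
        new_pop[rng.integers(len(new_pop))] = best
    return new_pop, best
```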

The fitness is calculated as follows:

for each individual X in the search population do
{
    if (X is feasible)
        compute the fitness of X directly;
    else
    {
        randomly select a reference individual R from the reference population;
        randomly generate points between X and R, up to the given upper limit on the number of tests;
        if (a feasible point Z is found)
        {
            take the fitness of Z as the fitness of X;
            Z replaces X in the search population with the replacement probability;
        }
        else
            compute the fitness of X with the penalty function;
    }
}

The control parameters newly introduced by the algorithm are the reference population size, the upper limit on the number of tests, and the replacement probability.
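A sketch of the fitness evaluation with reference-population repair described above. The linear combination used to generate the test points, and the names `max_tests`, `p_replace`, `penalty`, are assumptions standing in for the corresponding control parameters and operations, not the author's code:

```python
import numpy as np

rng = np.random.default_rng()

def evaluate(x, ref_pop, objective, is_feasible, penalty,
             max_tests, p_replace):
    """Fitness of one individual under the feasible-solution search scheme.

    A feasible individual is evaluated directly.  An infeasible one is pulled
    towards a randomly chosen reference (feasible) individual for at most
    max_tests trials; if a feasible point is found, its objective value is used
    and, with probability p_replace, it overwrites x in the search population.
    Otherwise the fitness is computed from a penalty function.
    """
    if is_feasible(x):
        return objective(x)

    r = ref_pop[rng.integers(len(ref_pop))]          # random reference individual
    for _ in range(max_tests):
        a = rng.random()
        z = a * x + (1.0 - a) * r                    # random point on the segment x--r
        if is_feasible(z):
            if rng.random() < p_replace:
                x[:] = z                             # repaired point replaces x in place
            return objective(z)
    return penalty(x)                                # test limit exceeded: penalized fitness
```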

§7.2.5 Numerical Examples

Case 1 is the stress-constrained minimum-weight design of the ten-bar truss used earlier. The purpose of this example is to verify the effectiveness of the algorithm: although the algorithm is mainly intended for large-scale optimization problems, such problems are generally very expensive to compute and are therefore not suitable as initial test examples.

The algorithm parameters were set to: 100, 20, 0.6, 10, 500, 0.2, 0.1, 0.2.

The results of ten random runs are as follows:

Run      Weight 1        Weight 2
1        1769.333258     1758.151105
2        1950.279207     1921.299365
3        1922.325182     1895.743759
4        1665.495266     1651.889819
5        1750.583766     1730.877040
6        1663.430915     1650.183034
7        1636.119745     1625.753858
8        1721.621357     1718.238496
9        1982.158213     1968.391852
10       1825.880554     1823.848546
Average  1788.72275      1774.43769
Best     1636.119745     1625.753858

The reference population size is then reduced to 10, the other parameters are unchanged, and the results are as follows:

Run      Weight 1        Weight 2
1        1611.539457     1609.797957
2        1762.332584     1754.502170
3        1624.950820     1622.669394
4        1702.037140     1695.211785
5        1776.785150     1771.267678
6        1663.548214     1655.143873
7        1729.877825     1722.938619
8        1732.291197     1725.324981
9        1666.098366     1654.351463
10       1654.919162     1648.088944
Average  1692.43799      1685.92969
Best     1611.539457     1609.797957

The computed results are striking and do reflect the superiority of real-number coding. Moreover, it is found that an overly large reference population is not required.

Keeping the reference population size at 10, the maximum number of allowed tests (i.e., the number of attempts to modify the fitness of an infeasible individual) is reduced to 10; the other parameters are unchanged, and the random runs give the following results:

Run      Weight 1        Weight 2
1        1811.546895     1809.474277
2        1744.998829     1737.641204
3        1817.052017     1802.090705
4        1616.249943     1617.108620
5        1672.453826     1668.948821
6        1944.128791     1931.583628
7        1742.587601     1747.803736
8        1911.012015     1900.174607
9        1883.661044     1869.830386
10       1684.019360     1679.932879
Average  1782.77103      1776.45889
Best     1616.249943     1617.108620

It can be seen that reducing the maximum number of allowed tests has an adverse effect on the average performance of the algorithm.

As mentioned earlier, in real-coded genetic algorithms the operator that plays the main role is the mutation operator. For this reason, the probability of reproduction (copy followed by mutation) is increased to 0.9, the other parameters remain unchanged, and the results are as follows:

Run      Weight 1        Weight 2
1        1640.981128     1635.687514
2        1606.744433     1602.386323
3        1755.498511     1750.262869
4        1620.793123     1617.742462
5        1597.355384     1596.167686
6        1836.710158     1834.346607
7        1639.255559     1635.242699
8        1616.698132     1614.543106
9        1829.557425     1821.305559
10       1597.992405     1596.892439
Average  1674.15863      1670.45773
Best     1597.355384     1596.167686

The results are very satisfactory; in particular, the best solution is very close to the result of the mathematical programming algorithm, confirming that the role of the mutation operator in the evolutionary algorithm is much greater than that of the crossover operator.

The author believes that if the value domain of the problem is discrete, the better choice is still a genetic algorithm using binary encoding; if the domain is continuous, the better choice is a real-coded genetic algorithm. For mixed-variable problems (i.e., some variables take discrete values and some take continuous values), one can consider using binary coding for some variables and real-number coding for the others, in which case the genetic operators need to be modified appropriately.

Figure 7-2 shows the convergence process of one random run:

Figure 7-2

It can be seen from the figure that the iterative process is relatively stable, unlike the binary-coded genetic algorithm, which improves dramatically at first and then very slowly in later generations.

Case 2 is the minimum-weight design of the 200-bar truss. The topology of the 200-bar truss and the material parameters are shown in Figure 7.3. The structure has the following five load cases:

(1) A positive load of 1000 at nodes 1, 6, 15, 20, 29, 34, 43, 48, 57, 62, 71;

(2) A negative load of 1000 at nodes 5, 14, 19, 28, 33, 42, 47, 56, 61, 70, 75;

(3) A negative Y-direction load of 10000 at nodes 1, 2, 3, 4, 5, 6, 8, 10, 12, …, 73, 74, 75;

(4) Combination of load cases (1) and (3);

(5) Combination of load cases (2) and (3).

Figure 7.3 Topology of the 200-bar truss

This example has a total of 200 independent design variables, and there are 550 constraints (200 stress constraints, 150 displacement constraints, and 200 lower-bound constraints on the design variables). The treatment here is to compute the element stress responses and nodal displacement responses for the different load cases and to take, for each element or node, the most dangerous result as its response.

Due to the large scale of the problem, the computational cost of sensitivity analysis is high and may introduce large errors. The author tried the improved constrained variable-metric method mentioned earlier without success. Using the author's improved sequential linear programming method together with a backtracking strategy, the following near-optimal result was obtained:

Element No. and design variable value (cross-sectional area) for elements 1-200:

No.   Area        No.   Area        No.   Area         No.   Area
1     0.210111    51    0.332489    101   0.117325     151   0.156433
2     0.100000    52    4.714133    102   10.832509    152   11.396937
3     0.100000    53    0.100000    103   0.132075     153   1.130866
4     0.202961    54    0.267782    104   0.464708     154   0.313048
5     4.625652    55    7.884743    105   7.097006     155   0.307205
6     0.255035    56    0.100000    106   0.406070     156   1.112113
7     0.100000    57    0.100000    107   0.428955     157   10.670323
8     2.517969    58    0.100000    108   6.614382     158   1.887312
9     0.138652    59    0.100000    109   0.446958     159   0.156433
10    0.133123    60    0.100000    110   0.407249     160   11.004767
11    2.678647    61    0.100000    111   7.092706     161   1.364080
12    0.115250    62    0.100000    112   0.461859     162   0.375055
13    0.132279    63    0.100000    113   0.124650     163   8.309459
14    2.522145    64    9.091965    114   10.832217    164   0.384715
15    0.100000    65    0.100000    115   0.879140     165   1.315923
16    0.254432    66    0.348950    116   0.727912     166   10.970353
17    4.630455    67    5.436994    117   0.721261     167   0.156433
18    0.113690    68    0.255224    118   0.871439     168   1.872591
19    0.104213    69    0.381524    119   10.594874    169   10.691177
20    0.104550    70    5.333691    120   1.526672     170   0.156433
21    0.100000    71    0.388906    121   0.117325     171   0.156433
22    0.100000    72    0.260952    122   8.243121     172   0.156433
23    0.100121    73    5.429994    123   0.322219     173   0.156433
24    0.100121    74    0.348614    124   0.759840     174   0.156433
25    0.112353    75    0.100000    125   7.068655     175   0.156433
26    6.549132    76    9.096941    126   0.706100     176   0.156433
27    0.100000    77    0.157768    127   0.323981     177   0.156433
28    0.344073    78    0.100000    128   8.250747     178   11.342563
29    3.579805    79    0.100000    129   0.117325     179   0.168929
30    0.170422    80    0.171449    130   1.508217     180   1.984326
31    0.217004    81    9.889715    131   10.611878    181   11.846449
32    3.846152    82    0.374440    132   0.117325     182   0.420454
33    0.195036    83    0.100000    133   0.117325     183   1.411970
34    0.148202    84    6.509815    134   0.117325     184   9.309460
35    3.585978    85    0.363901    135   0.127500     185   1.361184
36    0.344639    86    0.366837    136   0.127500     186   0.428781
37    0.100000    87    6.012130    137   0.117325     187   11.814208
38    6.552405    88    0.410056    138   0.117325     188   1.965139
39    0.100000    89    0.444205    139   0.117325     189   0.183595
40    0.100000    90    6.506427    140   11.379923    190   11.363489
41    0.100000    91    0.100000    141   0.144894     191   7.857562
42    0.100000    92    0.372654    142   1.624329     192   6.776273
43    7.879161    93    9.889707    143   8.760705     193   6.786799
44    0.269327    94    0.100000    144   0.783204     194   7.862092
45    0.100000    95    0.100000    145   0.354403     195   14.188158
46    4.721721    96    0.100000    146   7.593633     196   13.985725
47    0.323962    97    0.100000    147   0.346010     197   8.520397
48    0.215902    98    0.100000    148   0.729926     198   8.553420
49    4.592184    99    0.100000    149   8.767548     199   13.952599
50    0.217054    100   0.100000    150   1.607218     200   14.195867

The structural weight is 28888.035328.

Reference [26] gives the following results for this problem:

DCOC        Dual        DOC-FSD
28903.41    28890.30    28906.38

It should be pointed out that the algorithms of reference [26] are derived from particular properties of the minimum-weight truss design problem and therefore are not general-purpose. Note: FSD in the table above denotes the fully stressed design method.

Due to the large scale of the problem, the size of the search population and the number of search generations clearly need to be increased. The algorithm parameters were set to: 200, 20, 0.9, 10, 2000, 2.0, 3.0, 0.2, 0.1, 30.0.

Two random runs gave the following results:

Run      Weight 1          Weight 2
1        36763.111904      36783.179274
2        36297.883100      36307.744050

With one of the parameters changed to 28.0 and the maximum number of generations increased to 3000, four random runs gave the following results:

Run      Weight 1          Weight 2
1        32164.046678      32164.046678
2        32205.974839      32181.220107
3        33835.051844      33823.328592
4        32407.606855      32400.431731

§7.3 Conclusion

This chapter proposes a steady-state genetic algorithm based on real-number coding for large-scale problems. The algorithm uses normal mutation, and constraints are handled by a feasible-solution search with an upper limit on the number of repair tests; for individuals that exceed this limit, a penalty function is used to construct their fitness. Different measures are taken for the search population and the reference population to ensure the steady-state behaviour of the algorithm. The examples show that this algorithm is suitable for solving large-scale optimization problems, has practical engineering value, and is worthy of further research.

The author believes that further research could proceed in the following directions: (1) distributing (parallelizing) the algorithm; (2) proposing new and effective mutation operators; (3) proposing more efficient methods for handling nonlinear constraints; (4) considering problems with equality constraints.

Figure 7.4

§7.4 Appendix

The schematic flowchart of the author's improved sequential linear programming (SLP) method is given below:

Flowchart of the improved SLP method: steps 1-4, with a check on whether the constraints are satisfied.
