Intelligent algorithm learning notes


Author: hisky. These are some notes from my own study of intelligent algorithms, posted here for everyone to read. If I have misunderstood anything, please do point it out; my thanks in advance ^_^

In engineering practice we often come into contact with "novel" algorithms or theories, such as simulated annealing, genetic algorithms, tabu search, and neural networks. These algorithms and theories share some common features (for example, simulating natural processes) and are collectively known as "intelligent algorithms". They are very useful for solving complex engineering problems. What are these algorithms like? First, a picture to illustrate local search, simulated annealing, genetic algorithms, and tabu search: to find the highest mountain on Earth, a group of ambitious rabbits sets out in different ways.

1. A rabbit jumps toward ground higher than where it stands. It finds the highest mountain not far away. But that mountain is not necessarily Mount Everest. This is local search: it does not guarantee that the local optimum is the global optimum.

2. A rabbit gets drunk. It jumps randomly for a long time. During this time it may climb higher, or it may stumble onto flat ground. Gradually, however, it sobers up and jumps toward the highest direction. This is simulated annealing.

3. The rabbits take pills that erase their memory and are launched into space, then fall at random onto various places on Earth. They do not know what their mission is. But if you keep killing off the rabbits sitting at lower altitudes, the surviving rabbits will, after a few years, all be found on Mount Everest. This is the genetic algorithm.

4. The rabbits know that the strength of a single rabbit is small. They tell one another which mountains have already been searched, and at each mountain they have searched they leave one rabbit behind as a marker. On this basis they work out a strategy for where to search next. This is tabu search.

Overview of Intelligent Optimization Algorithms

Intelligent optimization algorithms are meant to solve general optimization problems. Optimization problems can be divided into (1) function optimization: solving for the argument that minimizes the value of a given function, and (2) combinatorial optimization: finding, within a discrete solution space, the solution that minimizes the objective function. Typical combinatorial optimization problems include the Traveling Salesman Problem (TSP), scheduling problems, the 0-1 knapsack problem, and the bin packing problem. There are many optimization algorithms. Classical methods include linear programming, dynamic programming, and so on; improved local search algorithms include hill climbing, steepest descent, and the like; simulated annealing, genetic algorithms, and tabu search are called guided search methods; neural networks and chaotic search belong to system dynamic evolution methods. Discussions of optimization often mention the neighborhood function, whose role is to specify how to obtain a new solution from the current one; its concrete implementation must be determined by the specific problem. In general, local search uses the neighborhood function to search on the greedy principle: whenever it finds a solution better than the current one, it takes the new one. But this can only yield a "local optimum": the rabbit may have climbed Mount Tai and found the whole world small, yet it still has not found Everest. Simulated annealing, genetic algorithms, tabu search, neural networks, and the like improve on this from different angles and with different strategies, so as to reach a better "global minimum".
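To make the neighborhood-function idea concrete, here is a minimal sketch of greedy local search (hill climbing) in Python; the toy objective and the step-based neighborhood are illustrative assumptions, not part of the original article.

import math
import random

def f(x):
    # Toy objective (the "altitude" the rabbit wants to maximize).
    return math.sin(x) + math.sin(3 * x) - 0.05 * x * x

def neighbors(x, step=0.1):
    # Neighborhood function: the new solutions reachable from x.
    return [x - step, x + step]

def hill_climb(x0, max_iters=10000):
    x = x0
    for _ in range(max_iters):
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x          # no neighbor improves: a local optimum
        x = best              # greedily accept the better neighbor
    return x

print(hill_climb(random.uniform(-5.0, 5.0)))  # may be a local, not global, peak

Depending on the random starting point, this rabbit ends up on whichever peak is nearest, which is exactly the weakness the guided methods below try to fix.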

Simulated Annealing (SA)

The simulated annealing algorithm is based on the similarity between the annealing process of solid matter and combinatorial optimization problems. When a material is heated, the Brownian motion among its particles strengthens; after a certain intensity is reached, the solid turns into a liquid. If it is then annealed (cooled slowly), the thermal motion weakens, gradually tends toward order, and finally reaches a stable state.

The solution produced by simulated annealing no longer depends on the initial point the way the final result of local search does. It introduces an acceptance probability p. If the objective value f(pn) of the new point (call it pn) is better, then p = 1, meaning the new point is selected; otherwise the acceptance probability p is a function of the current point's objective f(pc), the new point's objective f(pn), and a further control parameter, the "temperature" T. That is, unlike local search, simulated annealing may accept a new point even when its objective value is worse than the current one. As the algorithm runs, the system temperature T gradually decreases and execution finally terminates at some low temperature, at which point the system no longer accepts worse solutions.

Besides accepting improvements of the objective, simulated annealing also accepts deteriorations, within a limit: when T is large, large deteriorations are accepted; as T decreases, only smaller deteriorations are accepted; and when T reaches 0, no deterioration is accepted at all. This feature means that simulated annealing, in contrast to local search, can escape local minima while keeping local search's generality and simplicity. Physically: first heat the material, making the molecules collide with one another and become disordered while the internal energy increases; then cool it slowly, so that in the end the molecules are more ordered and the internal energy is even smaller than before heating. It is just like the drunk rabbit: it may hop right over the fairly high hill in front of it, and although it takes a detour for a while, it is for that very reason more likely to find Everest. It is worth noting that when T is 0, simulated annealing degenerates into a special case of local search.

Pseudocode for simulated annealing:

procedure simulated annealing
begin
  t := 0;
  initialize temperature T;
  select a current string vc at random; evaluate vc;
  repeat
    repeat
      select a new string vn in the neighborhood of vc;                (1)
      if f(vc) < f(vn) then vc := vn
      else if random[0,1] < exp((f(vn) - f(vc)) / T) then vc := vn;    (2)
    until (sampling stability criterion);                              (3)
    T := g(T, t); t := t + 1;                                          (4)
  until (termination criterion);                                       (5)
end;
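As a runnable illustration of this pseudocode, here is a minimal simulated annealing sketch in Python, maximizing the same kind of toy objective as before; the Gaussian move size, the geometric cooling T := 0.95 * T, and the other constants are illustrative assumptions.

import math
import random

def f(x):
    # Toy objective to maximize.
    return math.sin(x) + math.sin(3 * x) - 0.05 * x * x

def simulated_annealing(T0=10.0, T_min=1e-3, alpha=0.95, samples_per_T=100):
    vc = random.uniform(-5.0, 5.0)           # current solution
    best = vc
    T = T0
    while T > T_min:                          # termination criterion (5)
        for _ in range(samples_per_T):        # sampling stability criterion (3)
            vn = vc + random.gauss(0.0, 0.5)  # new state generation (1)
            delta = f(vn) - f(vc)
            # Acceptance (2): always take improvements; take deteriorations
            # with probability exp(delta / T), which shrinks as T falls.
            if delta > 0 or random.random() < math.exp(delta / T):
                vc = vn
            if f(vc) > f(best):
                best = vc
        T *= alpha                            # cooling function (4)
    return best

print(simulated_annealing())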

In the program above, the key links are (1) the new-state generation function, (2) the new-state acceptance function, (3) the sampling stability criterion, (4) the cooling (temperature-update) function, and (5) the annealing termination criterion (together called "three functions, two criteria"); they directly affect the quality of the optimization result. Although experimental results show that the initial value has no effect on the final result, a higher initial temperature gives a higher probability of obtaining a high-quality solution, so one should choose a relatively high initial temperature.

Selection strategies for the key links above:

(1) State generation function: a candidate is determined from the current solution by operations such as swapping, inserting, and reversing segments; a new solution is then chosen according to a probability distribution, which may be the uniform distribution, normal distribution, Gaussian distribution, Cauchy distribution, and so on.

(2) State acceptance function: this is the most critical link, but experiments show that the choice of acceptance function has little effect on the final result; min[1, exp((f(vn) - f(vc)) / T)] is therefore generally chosen.

(3) Sampling stability criterion: commonly, check that the mean of the objective over the samples is stable; or that the change of the objective over several consecutive steps is small; or simply take a fixed number of steps.

(4) Cooling function: if the temperature is required to fall by a fixed ratio, the schedule T := a*T (0 < a < 1) can be used, but the temperature then falls slowly; fast SA generally uses a schedule of the form T(t) = T0 / (1 + t). In current practice T := a*T is often used with a as a varying value.

(5) Annealing termination criterion: commonly, set a terminal temperature; set a number of iterations; stop when the best value found has stayed unchanged for several consecutive steps; or check whether the system entropy is stable.

To ensure a relatively good solution, the algorithm usually adopts slow cooling, abundant sampling, and a rather low "terminal temperature", which leads to a long running time; this is also the biggest shortcoming of simulated annealing. Even a drunk person does not get things done briskly, let alone a rabbit?
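To make the cooling schedules in (4) concrete, here is a short Python sketch printing the first few temperatures of each; the starting temperature and the number of steps shown are arbitrary assumptions.

import itertools

def geometric(T0, alpha=0.95):
    # Classical schedule: the temperature falls by a fixed ratio each step.
    T = T0
    while True:
        yield T
        T *= alpha

def fast_sa(T0):
    # Fast-SA style schedule T(t) = T0 / (1 + t): steep at first, then flat.
    t = 0
    while True:
        yield T0 / (1 + t)
        t += 1

for name, schedule in [("geometric", geometric(10.0)), ("fast SA", fast_sa(10.0))]:
    print(name, [round(T, 3) for T in itertools.islice(schedule, 6)])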

Genetic Algorithm (GA)

"Survival, survival of people" is the basic idea of ​​evolution. Genetic algorithms are what they want to do in nature. Genetic algorithms can be used well for optimization problems. If it looks like a highly ideal simulation of natural processes, it can show its own elegance - although survival competition is cruel. The genetic algorithm is subject to all individuals in a group, and uses a randomized technology to guide a high-efficiency search for a coded parameter space. Among them, the selection, crossing and variation constitute the genetic operation of the genetic algorithm; the parameter coding, the initial group setting, the design of the adaptation function, the genetic operation design, the control parameter setting five elements constitute the core content of the genetic algorithm. As a new global optimization search algorithm, the genetic algorithm is widely used in various fields with its simple and universal, strong and strong, suitable for parallel processing and efficient, practical and other sectors, and has achieved good results, and gradually became an important One of the intelligent algorithms. Pseudo code of genetic algorithm:

procedure genetic algorithm
begin
  initialize a group and evaluate the fitness value;   (1)
  while not convergent do                              (2)
  begin
    select;                                            (3)
    if random[0,1] < pc then crossover;                (4)
    if random[0,1] < pm then mutation;                 (5)
  end;
end;

Here (1) initializes the population and evaluates each individual's fitness, (2) loops until a convergence criterion is met, (3) selects individuals according to fitness, and (4) and (5) apply crossover with probability pc and mutation with probability pm.
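Here is a minimal runnable sketch of that loop in Python, maximizing a toy bit-string fitness (the number of 1-bits); the population size, crossover probability pc, and mutation probability pm are illustrative assumptions.

import random

GENOME_LEN, POP_SIZE, PC, PM = 20, 30, 0.8, 0.02

def fitness(ind):
    # Toy fitness: count of 1-bits (this rabbit's "altitude").
    return sum(ind)

def select(pop):
    # Roulette-wheel selection: fitter individuals are drawn more often.
    weights = [fitness(i) + 1 for i in pop]   # +1 avoids all-zero weights
    return random.choices(pop, weights=weights, k=len(pop))

def crossover(a, b):
    # One-point crossover.
    p = random.randrange(1, GENOME_LEN)
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(ind):
    # Flip each bit independently with probability PM.
    return [bit ^ 1 if random.random() < PM else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(100):                          # crude convergence criterion (2)
    pop = select(pop)                         # selection (3)
    nxt = []
    for a, b in zip(pop[::2], pop[1::2]):
        if random.random() < PC:              # crossover (4)
            a, b = crossover(a, b)
        nxt += [mutate(a), mutate(b)]         # mutation (5)
    pop = nxt
print(max(fitness(i) for i in pop))           # best fitness in final population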

Just as mutation in nature applies to any species, a genetic algorithm, because it works on encoded variables, does not care whether the function itself is differentiable or continuous, so its applicability is very strong; moreover, since it operates on a whole population from the start, it has implicit parallelism and finds the "global optimum" more easily.

Tabu Search (TS)

In order to find "global optimal solution", it should not be attached to a particular area. The disadvantage of local search is that it is too greedy to search for a local area and its neighborhoods, causing an impercure barrier, not seeing Taishan. Taboo search is a partially optimal solution to which it is found, it consciously avoids it (but not completely isolated), thereby obtaining more search ranges. The rabbits found Taishan, and one of them would stay here, and other other places are looking for. In this way, after a large circle, the few mountain peaks found were stood out. When the rabbits are looking for, there will be consciously avoided Taishan because they know that there is a rabbit to look there. This is the meaning of "tabu list" in taboo search. The rabbit who left in Taishan generally won't be there, it will return to the big army to find the highest peak after a certain time, because this time has many new news, Taishan has a good height, Need to be re-considered, this return time is called "Tabu Length" inside the taboo search; if you are searching, the rabbit left-behind Taishan has not returned, but the place found is all the low places such as North China Plains. The rabbits have to consider the choice of Taishan again, that is, when a place with rabbits left behind is too high, it exceeds the state of "Best to Far", you can do this, you can leave this place, Consider coming in, this is called "Aspiration criterion". These three concepts are contraindications and general search guidelines. The optimization of the algorithm is also the key here.

Pseudocode of tabu search:

procedure tabu search
begin
  initialize a string vc at random, clear up the tabu list;
  cur := vc;
  repeat
    select a new string vn in the neighborhood of vc;
    if va > best_to_far then   { va is a string in the tabu list }
    begin
      cur := va;
      let va take place of the oldest string in the tabu list;
      best_to_far := va;
    end
    else
    begin
      cur := vn;
      let vn take place of the oldest string in the tabu list;
    end;
  until (termination condition);
end;
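Below is a minimal runnable tabu search sketch in Python on a toy bit-string problem; the single-bit-flip neighborhood, the tabu length, and the iteration count are illustrative assumptions.

import random
from collections import deque

N, TABU_LEN, ITERS = 12, 5, 200

def fitness(s):
    # Toy objective: reward alternating neighboring bits.
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def moves(s):
    # Neighborhood: all single-bit flips; a move is the flipped index.
    for i in range(N):
        t = list(s)
        t[i] ^= 1
        yield i, tuple(t)

cur = tuple(random.randint(0, 1) for _ in range(N))
best = cur
tabu = deque(maxlen=TABU_LEN)   # the oldest move expires automatically (tabu length)

for _ in range(ITERS):
    candidates = [
        (m, s) for m, s in moves(cur)
        # Aspiration criterion: a tabu move is allowed if it beats best-so-far.
        if m not in tabu or fitness(s) > fitness(best)
    ]
    m, cur = max(candidates, key=lambda ms: fitness(ms[1]))
    tabu.append(m)              # put the move on the tabu list
    if fitness(cur) > fitness(best):
        best = cur

print(best, fitness(best))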

Several points in the program above deserve attention. (1) Tabu object: one can put the current value cur itself on the tabu list, or put every value lying on the same "contour" as cur on the tabu list. (2) To reduce computation, the tabu length and the candidate set should not be too large; but a tabu length that is too small leads to cyclic search, and a candidate set that is too small makes it easy to fall into a merely "locally excellent solution". (3) In the block above, best_to_far is updated by directly assigning the improving candidate to it; sometimes, however, a "deadlock" arises in which no candidate is greater than best_to_far and all candidates are tabu. In that case the best of the candidate solutions should be released from the tabu so that the search can continue. (4) Termination criteria: much as with simulated annealing and genetic algorithms, the common choices are: a given number of iterations; terminating the search when the distance to an estimated optimal solution falls below some bound; and terminating when that distance stays unchanged for several consecutive steps. Tabu search is a simulation of the human thinking process itself: it achieves a wider search by temporarily shelving (tabuing, while still being able to remember) some local optima it has already reached, and thereby attains the goal of jumping out of local search.

Artificial Neural Network (ANN)

As its name suggests, a neural network is modeled on the human brain. Its neuron structure and its manner of composition and operation all imitate the human brain, but the imitation is only rough and far from perfect. Unlike a von Neumann machine, a neural network's computation is non-digital, non-precise, and highly parallel, and it is capable of self-learning.

In the life sciences, nerve cells are generally called neurons; they are the most basic units of the whole nervous system. Each nerve cell is rather like an arm: the part that contains the nucleus, like the palm, is called the cell body; the finger-like parts are called dendrites and are the input channels of information; the arm-like part is called the axon and is the output channel of information. Neurons are intricately interconnected and transmit signals to one another. A transmitted signal can change a neuron's potential, and once the potential rises past a threshold the neuron is excited and sends an electrical signal out along its axon.

To imitate biological nerves, an artificial neural network requires three elements: (1) a definition of the artificial neuron; (2) a connection topology among the artificial neurons, that is, a network structure; (3) a definition of the signal strength between artificial neurons. The first artificial neural network model in history is the M-P model, and it is very simple:

xi(t + 1) = theta( sum_j wij * xj(t) - ui )

where xi(t) is the state of neuron i at time t, with 1 meaning the excited state and 0 the inhibited state; wij is the connection strength between neurons i and j; ui is the threshold of neuron i, which must be exceeded for the neuron to fire; and theta is the unit step function, 1 for a positive argument and 0 otherwise. This is the simplest neuron model, yet it is remarkably powerful: its inventors, McCulloch and Pitts, proved that, leaving aside speed and implementation complexity, it can do any job a present-day digital computer can do. The M-P model above is only a single-layer network. Viewed in terms of dividing the plane, a single-layer M-P network can only split the plane into two half-planes; it cannot pick out a particular bounded region. The remedy is the "multi-layer feedforward network".
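A minimal sketch of an M-P neuron in Python; the weights and threshold shown, which realize a two-input AND gate, are illustrative assumptions.

def mp_neuron(inputs, weights, threshold):
    # M-P neuron: output 1 iff the weighted input sum exceeds the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# With weights (1, 1) and threshold 1.5, the neuron fires only for input (1, 1):
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mp_neuron(x, weights=(1, 1), threshold=1.5))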

Figure 2 (a schematic diagram of a multilayer feedforward network, not reproduced here) shows the structure: the bottom layer is the input layer, the top layer is the output layer, and each intermediate layer receives all the outputs of the layer before it and, after processing, passes its results on to the next layer. There are no connections among the neurons within a layer and no direct connection between the input and output layers; connections run in one direction only, with no feedback. Such a network is called a "multi-layer feedforward network". After data are input, they are weighted and processed layer by layer until the result is finally output. Figure 3 (not reproduced here) illustrates the capability of multilayer networks in terms of separable regions: a single-layer network can only divide the plane into two parts, a two-layer network can separate any convex domain, and a multilayer network can separate arbitrary regions.

For such a network to have appropriate weights, it must be given some stimulus and made to learn and adjust itself. One method is called "error back propagation (Back Propagation, BP)". Its basic idea is to examine the difference between the final output and the ideal output and adjust the weights accordingly, carrying the adjustment backward from the output layer through the intermediate layers until the input layer is reached. It can be seen that a neural network solves problems by learning: learning changes neither the structure nor the working mode of the individual neuron, and there is no direct correspondence between the features of individual neurons and the problem to be solved; the role of learning is to change the strengths of the excitatory and inhibitory connections among the neurons according to their relationships. The information of every sample in the learning set is contained in every weight of the network.

In the BP algorithm there is a process of examining the gap between the actual output and the ideal solution; call this gap w. The purpose of adjusting the weights is to make w as small as possible, which again contains the "minimum-seeking" problem mentioned above. The ordinary BP algorithm uses local search, such as steepest descent or Newton's method; of course, if one wants the global optimum, simulated annealing, genetic algorithms, and the like can also be used. When the simulated annealing algorithm is used as the network's learning method, the network is generally a "Boltzmann machine", which belongs to the class of stochastic neural networks.

In BP learning, a set of known correct values is needed as the ideal output, just as a school student studies under a teacher's supervision. If there is no supervision, how does an artificial neural network learn? Just as a free market introduces competition without macroeconomic regulation, there is a learning method called "unsupervised competitive learning": after several neurons receive the input signals, they compete; only one winning neuron outputs 1 and the others output 0, and the losing neurons are adjusted so that they move toward the winning direction and may eventually win a later competition. Artificial neural networks also include feedback networks such as the Hopfield network, in which signals travel between neurons in both directions. An energy function is introduced, and through the neurons' continual mutual influence the value of the energy function keeps decreasing, so that the network can finally give a solution of relatively low energy. This idea is similar to simulated annealing.
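As a concrete illustration of forward propagation and weight adjustment by back propagation, here is a minimal numpy sketch that trains a tiny two-layer feedforward network on XOR (a problem a single-layer network cannot solve); the architecture, learning rate, and iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, hence out of reach of a single-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input layer  -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden layer -> output layer
lr = 1.0

for _ in range(5000):
    # Forward pass: each layer weights its inputs and applies the activation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the output error is propagated back layer by layer,
    # and each weight is nudged so as to shrink the gap w = out - y.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]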

When an artificial neural network is applied to a problem, its accuracy and speed have little to do with the details of the software implementation; the key lies in its continual learning. This way of thinking is already very different from the von Neumann model.

