Dijkstra's algorithm for shortest paths:
Dijkstra's algorithm is a very effective algorithm for the single-source shortest path problem; its time complexity is O(|V|^2). The following is a precise description (this passage is taken from MIT course material and kept in the original English; the descriptions in most Chinese textbooks are equivalent):
1. Set i = 0, S0 = {u0 = s}, L(u0) = 0, and L(v) = ∞ for v ≠ u0. If |V| = 1 then stop, otherwise go to step 2.
2. For each v in V \ Si, replace L(v) by min{L(v), L(ui) + d(ui, v)}. If L(v) is replaced, put a label (L(v), ui) on v.
3. Find a vertex v which minimizes {L(v) : v ∈ V \ Si}, say u_{i+1}.
4. Let S_{i+1} = Si ∪ {u_{i+1}}.
5. Replace i by i + 1. If i = |V| - 1 then stop, otherwise go to step 2.
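As a concrete illustration, here is a minimal sketch of the O(|V|^2) procedure above in Python; the adjacency-matrix representation (adj[u][v] is the edge weight, or infinity when there is no edge) and the function name are my own illustrative choices, not from the original text.

INF = float('inf')

def dijkstra(adj, s):
    # adj[u][v] is the weight of edge (u, v), or INF if absent; s is the source.
    n = len(adj)
    L = [INF] * n              # tentative distances, the L(v) of the description
    L[s] = 0
    in_S = [False] * n         # membership in the set S_i
    for _ in range(n):         # steps 2-5, repeated |V| times
        # step 3: pick the vertex outside S_i with smallest label
        u = min((v for v in range(n) if not in_S[v]), key=lambda v: L[v])
        in_S[u] = True         # step 4: S_{i+1} = S_i ∪ {u}
        for v in range(n):     # step 2: relax the edges leaving u
            if not in_S[v] and L[u] + adj[u][v] < L[v]:
                L[v] = L[u] + adj[u][v]
    return L

For example, dijkstra([[0, 2, INF], [2, 0, 1], [INF, 1, 0]], 0) returns [0, 2, 3].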
Hungarian algorithm for maximum matching (the assignment problem):
Any discussion of the Hungarian algorithm cannot avoid Hall's theorem: for a bipartite graph G with parts X and Y, there exists a matching M saturating every vertex of X if and only if, for every subset A of X, the set T(A) of vertices adjacent to A satisfies |T(A)| ≥ |A|.
The Hungarian algorithm implements the idea behind the sufficiency proof of Hall's theorem. Its basic steps are (a code sketch follows the steps):
1. Choose an arbitrary initial matching M;
2. If every vertex of X is M-saturated, stop; otherwise perform step 3;
3. Find an M-unsaturated vertex x0 in X, and set V1 ← {x0}, V2 ← ∅;
4. If T(V1) = V2, stop: no matching saturating X exists. Otherwise choose a vertex y ∈ T(V1) \ V2;
5. If y is M-saturated, go to step 6; otherwise there is an M-augmenting path P from x0 to y; set M ← M ⊕ E(P) and go to step 2;
6. Since y is M-saturated, there is an edge (y, z) in M; set V1 ← V1 ∪ {z}, V2 ← V2 ∪ {y}, and go to step 4.
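A compact way to realise these steps in code is the usual recursive search for an augmenting path (the sets V1 and V2 correspond to the vertices visited during one search). A minimal sketch in Python, assuming the bipartite graph is given as an adjacency list adj from X-vertices to Y-vertices; the names are illustrative, not from the original:

def hungarian(adj, nx, ny):
    # adj[x] lists the Y-vertices adjacent to x; returns the matching size.
    match_y = [-1] * ny        # match_y[y] = X-vertex matched to y, or -1

    def try_augment(x, visited):
        for y in adj[x]:
            if y not in visited:
                visited.add(y)
                # y is unsaturated, or its partner can be re-matched elsewhere:
                if match_y[y] == -1 or try_augment(match_y[y], visited):
                    match_y[y] = x   # flip the edges along the augmenting path
                    return True
        return False

    matched = 0
    for x in range(nx):            # steps 2-3: try each unsaturated x0 in turn
        if try_augment(x, set()):
            matched += 1
    return matched

For example, hungarian([[0], [0, 1]], 2, 2) returns 2: when x = 1 claims y = 0, the search re-matches x = 0 to no avail and then matches x = 1 to y = 1.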
Algorithms for network maximum flow and minimum-cost flow:
Given a directed graph G = (V, E), regard each edge as a pipe; each edge carries a weight indicating the pipe's capacity. Given a source vertex s and a sink vertex t, suppose there is a water supply at s and a reservoir at t, and ask for the maximum amount of water that can flow from s to t. This is the network flow problem. In mathematical language: let G = (V, E) be a flow network, let c(u, v) ≥ 0 denote the capacity from u to v, and let s be the source and t the sink. A flow in G is a function f : V × V → R satisfying the following three properties:
1. Capacity constraint: for all u, v ∈ V, f(u, v) ≤ c(u, v);
2. Skew symmetry: for all u, v ∈ V, f(u, v) = -f(v, u);
3. Flow conservation: for all u ∈ V - {s, t}, Σ_{v∈V} f(u, v) = 0.
The quantity f(u, v) is called the flow from vertex u to vertex v; it may be positive or negative. The value of a flow f is defined as

|f| = Σ_{v∈V} f(s, v),

that is, the total flow out of the source.
The maximum-flow problem is to find a flow of maximum value in a given network. Network flow problems can also be cast as a special kind of linear program.
The basic method for finding a maximum flow is the Ford-Fulkerson method, which has many implementations. Its basic idea: starting from some feasible flow f, find an augmenting path P with respect to f, then push flow along P to obtain a new feasible flow, and look for another augmenting path; repeat until no augmenting path exists, at which point f is a maximum flow. If a minimum-cost maximum flow is wanted, one can prove that if f is a minimum-cost flow among all flows of value v(f), and P is a minimum-cost augmenting path with respect to f, then the flow f' obtained by augmenting f along P is a minimum-cost flow among all flows of value v(f'). Thus, when f becomes a maximum flow, it is a minimum-cost maximum flow.
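For instance, the Edmonds-Karp realisation of the Ford-Fulkerson method finds each augmenting path by breadth-first search on the residual network. A minimal sketch in Python, assuming an n×n capacity matrix cap, which is modified in place as the residual network; the names are mine, not the original author's:

from collections import deque

def max_flow(cap, s, t):
    # cap[u][v] is the residual capacity of arc (u, v); updated in place.
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:      # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:               # no augmenting path: flow is maximum
            return flow
        d, v = float('inf'), t            # bottleneck capacity along the path
        while v != s:
            d = min(d, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:                     # update the residual network
            u = parent[v]
            cap[u][v] -= d                # forward arc loses capacity
            cap[v][u] += d                # backward arc gains capacity
            v = u
        flow += d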
Note that the unit cost b(i, j) of every edge is ≥ 0, so f = 0 is certainly the minimum-cost flow of value 0; we can therefore always start from f = 0 and work toward the minimum-cost maximum flow. In general, given that f is the minimum-cost flow of value v(f), the remaining problem is how to find a minimum-cost augmenting path with respect to f. To this end, we replace each arc of the original network by two arcs of opposite direction:
1. a forward arc, whose capacity c and cost b are unchanged;
2. a backward arc, whose capacity is 0 and whose cost is -b; augmenting along a backward arc corresponds to cancelling existing flow.
Assign each vertex a label CT, representing the cost of the path from the source to that vertex. The minimum-cost augmenting path with respect to f is the augmenting path on which the CT value of each vertex is minimal compared with all other augmenting paths. At the start of each search for a minimum-cost augmenting path, the CT of the source is 0 and the CT of every other vertex is ∞.
Let COST denote the total transportation cost. Since initially f = 0, COST = 0. Each time a minimum-cost augmenting path P with respect to f is found, update COST ← COST + Σ_{e∈P} b(e) · d, where d is the amount of flow augmented along P, thereby accumulating the cost of the added flow. Clearly, once the minimum-cost maximum flow has been reached, COST is the cost of transporting the maximum flow.
In addition, a Boolean variable Break serves as a flag indicating whether the augmenting path can still be extended: after every vertex in the network has been scanned, Break = true means the path may still be improvable, so the vertices must be scanned again; otherwise either the minimum-cost augmenting path has been found or the maximum flow has been reached.
Here is the pseudo code of the algorithm:

COST ← 0;
repeat
  clear the augmenting path;
  set the CT value of the source to 0 and put the source on the augmenting path;
  repeat
    Break ← false;
    for u ← 1 to n do
    begin
      examine every arc ⟨u, v⟩ leaving u;
      if (the flow on ⟨u, v⟩ can still be improved) and (there is a path from the source to u) and (CT value of u + cost of ⟨u, v⟩ < CT value of v) then
      begin
        Break ← true;
        CT value of v ← CT value of u + cost of ⟨u, v⟩;
        put v on the augmenting path with label u;
      end; {if}
    end; {for}
  until Break = false;
  if the sink is labelled then
  begin
    starting from the sink, adjust the flow along the augmenting path;
    COST ← COST + Σ_{e∈P} b(e) · d  (where d is the amount augmented along P);
  end; {if}
until the sink remains unlabelled;

As can be seen, the algorithm above is structured almost exactly like the Edmonds-Karp labelling algorithm for maximum flow; both find augmenting paths by a label-correcting search, so their complexities are comparable (Edmonds-Karp runs in O(VE^2) time, where V is the number of vertices and E the number of edges). The remaining details are omitted; interested readers may consult TAOCP or the relevant chapters of an algorithms textbook such as Introduction to Algorithms.
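A sketch of the same scheme in Python: the CT labels become a distance array updated by repeated relaxation (the Break loop above is essentially Bellman-Ford over the residual network, which terminates because the residual network of a minimum-cost flow has no negative cycles when all original costs are ≥ 0), and each round augments along the cheapest path found. The matrices cap and cost, with cost[v][u] = -cost[u][v] maintained for backward arcs, are my own representation, not the original author's:

def min_cost_max_flow(cap, cost, s, t):
    # cap and cost are n*n matrices; the caller must set cost[v][u] = -cost[u][v]
    # for every arc (u, v), so backward arcs carry negated costs.
    n = len(cap)
    INF = float('inf')
    total_flow = total_cost = 0
    while True:
        ct = [INF] * n                 # the CT labels of the description
        ct[s] = 0
        parent = [-1] * n
        changed = True                 # the Boolean variable Break
        while changed:                 # Bellman-Ford style relaxation
            changed = False
            for u in range(n):
                if ct[u] == INF:
                    continue
                for v in range(n):
                    if cap[u][v] > 0 and ct[u] + cost[u][v] < ct[v]:
                        ct[v] = ct[u] + cost[u][v]
                        parent[v] = u
                        changed = True
        if ct[t] == INF:               # sink unlabelled: maximum flow reached
            return total_flow, total_cost
        d, v = INF, t                  # bottleneck along the cheapest path
        while v != s:
            d = min(d, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:                  # adjust flow backwards from the sink
            u = parent[v]
            cap[u][v] -= d
            cap[v][u] += d
            v = u
        total_flow += d
        total_cost += d * ct[t]        # COST ← COST + Σ b(e)·d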
Greedy algorithms and matroids:
The greedy method is a simple algorithm for combinatorial optimization over independence systems; Kruskal's algorithm for the minimum spanning tree is one such greedy algorithm. But greedy does not always find an optimal independent set: the necessary and sufficient condition for the greedy method to find an optimal independent set is that the independence system is a matroid. Indeed, the minimum spanning tree problem is combinatorial optimization over the graphic matroid, whereas the independence system formed by all matchings of a bipartite graph is not a matroid.
The basic idea of the greedy method is: starting from some initial solution of the problem, approach the given goal step by step, so as to reach a better solution as fast as possible. When some step of the algorithm can no longer advance, the algorithm stops. The method has its problems:
1. it cannot guarantee that the final solution is optimal;
2. it cannot, in general, be used to find maximum or minimum solutions;
3. it can only find feasible solutions satisfying certain constraints.
The process of the algorithm is (see the Kruskal sketch below for a concrete instance):
start from some initial solution of the problem;
while one more step toward the given overall goal can be taken do
  find one solution element of a feasible solution;
combine all solution elements into a feasible solution of the problem.
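Kruskal's algorithm is the canonical instance of this greedy scheme succeeding, precisely because spanning forests form a matroid. A minimal sketch in Python with a union-find structure; the names are illustrative:

def kruskal(n, edges):
    # edges is a list of (weight, u, v); returns the MST weight and its edges.
    parent = list(range(n))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                    # keep the edge iff the set stays
            parent[ru] = rv             # independent (i.e., no cycle forms)
            mst.append((u, v, w))
            total += w
    return total, mst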
Backtracking search:
A combinatorial problem has only finitely many feasible solutions, so it can always be solved by exhaustive search, that is, by checking the possibilities one by one. As one can imagine, this is extremely time-consuming. In fact, mechanical checking is unnecessary: it is often possible to determine in advance that certain cases cannot yield an optimal solution, and these cases can be discarded. This still implicitly checks all possible cases, so it reduces the amount of search while guaranteeing that no solution is missed. Articles on backtracking have already been written elsewhere, so I think there is no need to say more here.
Branch and bound:
This is a search algorithm widely used for solving combinatorial optimization problems. Like backtracking, branch and bound usually organizes the solution space as a tree. Unlike backtracking, which searches the tree depth-first, branch and bound generally searches breadth-first or in least-cost order, so it is instructive to compare the similarities and differences of the two methods. Correspondingly, the memory requirement of branch and bound is much larger than that of backtracking, so when memory capacity is limited, backtracking has a greater probability of success.
Algorithm idea: branch and bound is another way of searching the solution space; its main difference from backtracking lies in how the E-node is expanded. Each live node has exactly one chance to become an E-node. When a node becomes an E-node, all new nodes reachable from it in one step are generated. Among the generated nodes, those that cannot lead to a feasible (or optimal) solution are discarded; the remaining nodes are added to the live-node list, and then one node is taken from the list to become the next E-node. The selected node is removed from the live-node list and expanded; this continues until a solution is found or the live-node list becomes empty. There are two common ways to choose the next E-node (though other methods also exist):
1) First-in first-out (FIFO): nodes are removed from the live-node list in the same order as they were inserted, so the live-node list behaves as a queue.
2) Least-cost or maximum-benefit: in this mode every node has an associated cost or benefit. To search for a least-cost solution, the live-node list can be organized as a min-heap, and the next E-node is the live node of least cost; to search for a maximum-benefit solution, a max-heap is used, and the next E-node is the live node of greatest benefit. (A small sketch follows.)
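As a small illustration of the maximum-benefit scheme, the following sketch solves a 0/1 knapsack by best-first branch and bound, keeping the live nodes in a max-heap keyed by an optimistic fractional bound. The problem choice and all names are mine, not from the original text; positive weights are assumed:

import heapq

def knapsack_bb(weights, values, capacity):
    # Best-first branch and bound for 0/1 knapsack (illustrative sketch).
    n = len(weights)
    # consider items in decreasing value/weight ratio (weights must be > 0)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(i, w, v):
        # Optimistic bound: fill the remaining capacity fractionally, greedily.
        for j in order[i:]:
            if w + weights[j] <= capacity:
                w += weights[j]; v += values[j]
            else:
                return v + values[j] * (capacity - w) / weights[j]
        return v

    best = 0
    # live-node list as a max-heap (keys negated): (-bound, depth, weight, value)
    live = [(-bound(0, 0, 0), 0, 0, 0)]
    while live:
        b, i, w, v = heapq.heappop(live)     # next E-node: largest bound
        if -b <= best or i == n:             # prune: cannot beat best so far
            continue
        j = order[i]
        if w + weights[j] <= capacity:       # branch 1: take item j
            if v + values[j] > best:
                best = v + values[j]
            heapq.heappush(live, (-bound(i + 1, w + weights[j], v + values[j]),
                                  i + 1, w + weights[j], v + values[j]))
        # branch 2: skip item j
        heapq.heappush(live, (-bound(i + 1, w, v), i + 1, w, v))
    return best

For example, knapsack_bb([2, 3, 4], [3, 4, 5], 5) returns 7 (take the items of weight 2 and 3).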
Dynamic programming:
Dynamic programming is a branch of operations research: a mathematical method for optimizing multi-stage decision processes. In the early 1950s the American mathematician R. E. Bellman and others, while optimizing multi-stage decision processes, proposed the famous principle of optimality, converting a multi-stage process into a series of single-stage problems and thereby creating a new method for solving this class of optimization problems: dynamic programming. His classic book Dynamic Programming, published in 1957, was the first in the field. Since then, dynamic programming has been widely applied in economic management, production scheduling, engineering and optimal control. For problems such as shortest paths, inventory management, resource allocation, equipment replacement, sorting, and loading, dynamic programming is often more convenient than other methods. Although dynamic programming is mainly used to optimize dynamic processes divided along time, some static problems (such as linear and nonlinear programming) can also be solved conveniently by dynamic programming, as long as time is introduced artificially so that they are treated as multi-stage decision processes.
In general, whenever a problem can be divided into smaller subproblems and the optimal solution of the original problem contains optimal solutions of those subproblems (the principle of optimality), dynamic programming can be considered. The essence of dynamic programming is divide-and-conquer thinking together with the elimination of redundancy: dynamic programming decomposes a problem instance into smaller, similar subproblems and avoids re-solving repeated subproblems, yielding an algorithmic strategy for optimization problems. Dynamic programming thus resembles divide and conquer and the greedy method: all of them reduce a problem instance to smaller, similar subproblems and obtain a global solution by solving the subproblems. In the greedy method, the current choice may depend on all the choices already made, but it depends neither on the choices still to be made nor on the solutions of the subproblems; the greedy method therefore proceeds top-down, making greedy choices one step at a time. In divide and conquer the subproblems are independent of one another (i.e., they share no common subproblems), so once the subproblems are solved, the solution of the original problem can be obtained bottom-up from their solutions. But this is not always enough: if the current choice may depend on the solutions of subproblems, it is hard to reach the global optimum through local greedy choices; and if the subproblems are not independent, divide and conquer does a great deal of unnecessary work, repeatedly solving common subproblems.
Dynamic programming resolves both difficulties. The method applies mainly to optimization problems: such problems may have many feasible solutions, each with a value, and dynamic programming finds a solution of optimal (maximum or minimum) value; if several solutions attain the optimum, it finds just one of them. In the solving process, this method likewise obtains the global optimum from the solutions of subproblems, but unlike divide and conquer and the greedy method, dynamic programming allows the subproblems to be non-independent (i.e., subproblems may share common sub-subproblems) and allows choices to be made on the basis of subproblem solutions; it solves each subproblem only once and saves the result, so the answer need not be recomputed each time the subproblem is encountered. Problems suitable for dynamic programming therefore have a distinctive feature: the subproblem tree contains a great deal of repetition. The key point of dynamic programming is that each repeated subproblem is solved only the first time it is met; its answer is saved so that later occurrences can reference the answer directly instead of re-solving it.
Designing a standard dynamic programming algorithm usually proceeds in the following steps:
1. Divide into stages: divide the problem into stages according to its temporal or spatial characteristics. The stages must be ordered, or at least orderable (i.e., without after-effect), otherwise dynamic programming does not apply.
2. Choose the states: represent the various situations the problem can be in at each stage as states. The choice of states must satisfy the no-after-effect property. Determine the decisions and write the state-transition equation: these two steps go together because decisions and state transitions are naturally linked; a state transition derives the state of this stage from the state and decision of the previous stage. So once the decisions are determined, the state-transition equation can be written. In practice, however, one often works the other way round, determining the decisions from the relation between the states of adjacent stages.
3. Write the planning equation (including boundary conditions): the basic equation of dynamic programming is the general form of the planning equation. Generally speaking, once the stages, states, decisions and state transitions are determined, this step is fairly simple. The main difficulty of dynamic programming lies in the theoretical design; once the design is complete, the implementation is very simple. (A small sketch follows.)
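To make "solve each repeated subproblem once and save the answer" concrete, here is the 0/1 knapsack from the branch-and-bound sketch solved instead by dynamic programming. The state f[w] is the best value achievable within capacity w, and the max-recurrence below is the state-transition equation; integer weights are assumed, and the names are illustrative:

def knapsack_dp(weights, values, capacity):
    # f[w] = best value achievable with total weight at most w.
    f = [0] * (capacity + 1)
    for i in range(len(weights)):
        # iterate w downwards so each item is used at most once
        for w in range(capacity, weights[i] - 1, -1):
            f[w] = max(f[w], f[w - weights[i]] + values[i])
    return f[capacity]

knapsack_dp([2, 3, 4], [3, 4, 5], 5) likewise returns 7, but each state f[w] is computed exactly once rather than re-explored along many search paths.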
Divide and conquer:
For a problem of size n, if the problem is easy to solve (for example, n is small), solve it directly; otherwise decompose it into k smaller subproblems which are independent of one another and have the same form as the original problem, solve the subproblems recursively, and then combine their solutions to obtain the solution of the original problem. This algorithm design strategy is called divide and conquer.
The computation time needed to solve any problem on a computer grows with its size: the smaller the problem, the easier it is to solve directly and the less computation it needs. For example, in sorting n elements: when n = 1, no computation is needed; when n = 2, one comparison puts them in order; when n = 3, three comparisons suffice; ...; but when n is large the problem is no longer so easy to handle. Solving a large problem directly can be quite difficult. The design idea of divide and conquer is to split a large problem that is hard to solve directly into smaller problems of the same kind, and conquer them separately. If the original problem can be divided into k solvable subproblems (1 < k ≤ n) whose solutions yield a solution of the original problem, the method is feasible. Its basic steps are:
Divide: decompose the original problem into several smaller, mutually independent subproblems of the same form as the original;
Conquer: if a subproblem is small enough to solve easily, solve it directly; otherwise solve it recursively;
Combine: merge the solutions of the subproblems into the solution of the original problem.
Its general algorithm design pattern is as follows:

Divide-and-Conquer(P)
1. if |P| ≤ n0
2.   then return Adhoc(P)
3. divide P into smaller subproblems P1, P2, ..., Pk
4. for i ← 1 to k
5.   do yi ← Divide-and-Conquer(Pi)   △ recursively solve Pi
6. T ← MERGE(y1, y2, ..., yk)         △ merge the subsolutions
7. return(T)

Here |P| denotes the size of problem P, and n0 is a threshold meaning that once the size does not exceed n0 the problem is easy enough to solve directly and need not be decomposed further. Adhoc(P) is the basic sub-algorithm of the divide-and-conquer method, used to solve a small problem P directly; thus when the size of P does not exceed n0, it is solved directly by Adhoc(P). MERGE(y1, y2, ..., yk) is the combining sub-algorithm, used to merge the solutions y1, y2, ..., yk of the subproblems P1, P2, ..., Pk into a solution of P.
According to the dividing principle of divide and conquer, into how many subproblems should a problem be divided, and what size is appropriate for each? Such questions are hard to answer definitively, but extensive practice shows that when designing a divide-and-conquer algorithm it is best to make the subproblems roughly equal in size; in other words, dividing a problem into k subproblems of roughly equal size is effective. Many problems can take k = 2. This idea of balanced subproblems almost always does better than dividing them unequally.
The combining step is the key to the method. Some problems merge simply; in others the merge is quite intricate, or there are many possible ways to merge, or how best to merge is not obvious. There is no uniform pattern, so it must be handled problem by problem. Other classic algorithms, such as the fast Fourier transform, are familiar to everyone and will not be covered here. (A merge-sort sketch follows.)
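The sorting example above fits the pattern directly: merge sort divides into k = 2 balanced subproblems, solves them recursively, and MERGE combines the results. A minimal sketch in Python; the names are illustrative:

def merge_sort(a):
    if len(a) <= 1:                 # |P| ≤ n0: solve directly (Adhoc)
        return a
    mid = len(a) // 2               # divide into two balanced subproblems
    left = merge_sort(a[:mid])      # recursively solve P1
    right = merge_sort(a[mid:])     # recursively solve P2
    merged, i, j = [], 0, 0         # MERGE: combine the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]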
If you wish to study further, you may refer to the relevant course of San Diego State University: http://www.eli.sdsu.edu/courses/fall95/cs660/notes/
The design of combinatorial algorithms is an art, demanding great skill and inspiration. The task of algorithm analysis is to assess the merits of algorithms, mainly by discussing their time complexity and space complexity; its theoretical basis is combinatorial analysis, covering both counting and enumeration. Computational complexity theory, in particular the theory of NP-completeness, is closely related to combinatorial algorithms; it arose to characterize the computational difficulty of a large number of combinatorial problems, including the traveling salesman problem, graph coloring, and integer programming. Computational complexity theory studies the time and space limits of algorithms and the intrinsic difficulty of problems, giving the study of combinatorial algorithms a clearer framework and raising it to a new level.