Dynamic Network Load Balancing Clusters in Practice

xiaoxiao2021-03-06  52

Abstract: This article discusses basic balancing algorithms and the dynamic load balancing mechanism of a network load balancing cluster system. Building on LVS, it extends the round-robin algorithm with a dynamic negative-feedback mechanism for the cluster, presents a basic dynamic balancing model, and analyzes it. (2003-02-13 12:22:44)

1. Introduction

In essence, network load balancing is one implementation of a distributed job scheduling system. The balancer acts as the controller of network request distribution: it assigns each network service request to a node according to the cluster nodes' current processing capacity, and monitors the validity of each node throughout the lifecycle of every service request. In general, a balancer has the following characteristics in request scheduling: network service requests must be manageable; request distribution must be transparent to the user; heterogeneous systems should be supported; and cluster node resources should be allocated and adjusted dynamically. The balancer distributes workload or network traffic among the service nodes of the cluster. The distribution can be static, or it can decide which particular node receives a connection based on the current network state. Nodes may be interconnected with one another, but each must be connected, directly or indirectly, to the balancer. A network load balancer can therefore be regarded as a job scheduling system operating at the network level. Most network load balancers implement a single system image at the corresponding network layer: the entire cluster appears to users as a single IP address, and the specific service node handling a request is transparent to the user. The balancer can be configured statically or dynamically, using one or more algorithms to determine which node receives the next network service request.
2. Principles of Network Balancing

Under the TCP/IP protocol, a packet carries the necessary network information, so packet information matters in the concrete implementation of network caching and network balancing algorithms. However, because IP is packet-oriented while TCP is connection-oriented, packets are often fragmented and carry no complete application-related information, in particular no state information about the connection session. It is therefore necessary to view packets from the connection angle: as a connection from a source address and port to a destination address and port.

Another element to consider in balancing is the state of each node's resource usage. Since balanced load is the ultimate goal of such systems, grasping node load status in a timely and accurate way, and distributing work according to current resource usage, is another key problem for a dynamic network load balancing cluster system. Under normal circumstances, the cluster's service nodes can provide resource information such as processor load, application load, number of active users, available network protocol caches, and so on. This information is transmitted to the balancer through an efficient message mechanism; the balancer monitors the status of all processing nodes and actively decides which node receives the next task. The balancer may be a single device, or a set of devices arranged in parallel or in a tree distribution.

3. Basic Network Load Balancing Algorithms

The main task of a general balancing algorithm is to decide how to select the next cluster node and then forward the new service request to it. Some simple balancing methods can be used on their own; others must be combined with other simple or advanced methods.
Moreover, no load balancing algorithm is universally good; each is generally effective only for certain applications. When examining a load balancing algorithm, attention should therefore also be paid to the algorithm's applicable scope, and different algorithms and techniques should be combined according to the characteristics of the cluster itself when deploying one.

3.1 Round Robin

The round-robin algorithm is the simplest and easiest to implement of all scheduling algorithms. In a task queue, every member has equal status, and round robin simply rotates through this group of members in turn.

In a load balancing environment, the balancer sends each new request to the next node in the node queue; continuing in this way, cyclically, every node of the cluster is visited on an equal footing. This algorithm is widely used in DNS domain-name round robin. The behavior of round robin is predictable: each node's chance of being selected is 1/N, so it is easy to calculate the load distribution across nodes. Round robin is typically suited to clusters in which all nodes have the same processing capacity and performance. In practical applications, it is generally effective when combined with other simple methods.

3.2 Hashing

The hashing method (Hash) maps network requests to cluster nodes through a single-valued mapping according to some rule. Where other kinds of balancing algorithms are not very effective, hashing can show special power. For example, in the UDP session case mentioned earlier, round robin and several other algorithms based on connection information cannot identify a session's end marker, which can confuse the application. A hash mapping based on the packet's source address can solve this problem to a certain extent: packets with the same source address are always sent to the same server node, which allows higher-level session-based transactions to run in an appropriate manner. Conversely, hashing based on the destination address can be used in a Web cache cluster: access requests for the same target site are sent by the load balancer to the same cache service node, avoiding the cache-update problem caused by page misses.

3.3 Least Connections

In the least-connections method, the balancer records all currently active connections and sends the next new request to the node with the smallest number of connections.
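The three selection rules just described (round robin, source-address hashing, least connections) can be sketched in a few lines. This is a minimal illustration, not LVS code; the node names and the use of MD5 as the hash are arbitrary choices for the example.

```python
import hashlib
from itertools import cycle

# Hypothetical node names; any identifiers would do.
NODES = ["node-a", "node-b", "node-c"]

# 3.1 Round robin: each node is picked in turn, giving each 1/N of requests.
_ring = cycle(NODES)

def round_robin() -> str:
    return next(_ring)

# 3.2 Source-address hashing: the same client address always maps to the
# same node, which keeps session state on one server.
def hash_select(source_ip: str) -> str:
    digest = hashlib.md5(source_ip.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# 3.3 Least connections: pick the node with the fewest active connections.
def least_connections(active: dict) -> str:
    return min(active, key=active.get)
```

Note how the hash variant is the only one that is deterministic per client, which is exactly why it preserves sessions where round robin cannot.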
This algorithm counts TCP connections, but because different applications can differ greatly in their use of system resources, the connection count cannot reflect the real application load. When a heavyweight web server (such as Apache) is used as the cluster node service, the balancing effect of this algorithm is therefore discounted. To reduce this adverse effect, a maximum connection threshold can be set for each node (specified by a threshold setting).

3.4 Least Misses

In the least-misses method, the balancer keeps a long-term record of the requests sent to each node and sends the next request to the node that has received the fewest requests. Unlike the least-connections method, it records cumulative requests rather than current connections.

3.5 Fastest Response

In the fastest-response method, the balancer records the response time of each node and sends the next request to the node that responded fastest; some implementations use special techniques to probe each node actively. In most LAN-based clusters the fastest-response algorithm does not work very well, because ICMP packets on a LAN generally complete within 10 ms and cannot reflect the differences between nodes. If balancing is performed over a WAN, however, response time is of practical significance for the user's choice of server; and the more dispersed the cluster's topology, the better this method works. It is the main method used in topology-based redirection.

3.6 Weighting

The weighting method can only be used in combination with other methods, for which it is a good supplement. The weighting algorithm arranges the load-balancing queue into multiple priority queues according to each node's priority or current load condition (that is, its weight); every waiting process within a queue has the same processing level.
Within the same queue, requests can then be balanced by the round-robin or least-connections method described above, while the queues themselves are served in order of priority. Here the weight is an estimate based on each node's capability.

4. Dynamic Feedback Load Balancing

When customers access cluster resources, the time a submitted task requires and the computing resources it consumes depend on many factors: for example, the service type of the task request, the current network bandwidth, and the current utilization of server resources.

Some heavier tasks require computation-intensive queries, database access, or long response data streams, while lighter task requests often only need to read a small file or perform a very simple calculation. These differences in task processing time can tilt node utilization during processing, that is, cause load imbalance among the processing nodes. A situation may arise in which some nodes are already overloaded while others are basically idle: the busy nodes have long request queues yet keep receiving new requests. This in turn makes customers wait a long time and degrades the overall service quality of the cluster. It is therefore necessary to use a mechanism that lets the balancer learn the load condition of each node in real time and adjust as the load changes. The concrete approach is a dynamic load balancing algorithm based on a negative feedback mechanism: it considers the real-time load and responsiveness of each node and continually adjusts the proportions of task distribution, preventing nodes that are already overloaded from continuing to receive large numbers of requests, thereby increasing the overall throughput of the cluster. In the cluster, a monitoring process runs on the load balancer and is responsible for monitoring and collecting the load information of each node, while a client process runs on each node and reports its own load condition to the balancer. The monitoring process performs a synchronization operation based on the load information received from all nodes and allocates pending tasks according to the proper proportions.
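The article names no concrete message mechanism for this reporting, so as an illustration only, the monitor's synchronization step can be sketched as turning each node's reported load into a distribution proportion (node names and load values are made up):

```python
# Hypothetical reported loads in [0, 1]; in the article these arrive from
# the client processes running on each node.
reported = {"node-a": 0.8, "node-b": 0.3, "node-c": 0.5}

def synchronize() -> dict:
    # Convert reported load (higher = busier) into a distribution
    # proportion: busier nodes receive a smaller share of new tasks.
    capacity = {n: max(1e-9, 1.0 - load) for n, load in reported.items()}
    total = sum(capacity.values())
    return {n: c / total for n, c in capacity.items()}
```

The negative feedback is visible here: as a node's reported load rises, its share of new work falls on the next synchronization cycle.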
The weight calculation is based mainly on each node's CPU utilization, available memory, and disk I/O situation. If the difference between the new weight and the current weight exceeds a set threshold, the monitor uses the new weight to redistribute tasks within the cluster until the next load-information synchronization. The balancer can then combine these dynamic weights with a weighted round-robin algorithm to schedule the accepted network service requests.

4.1 Weighted Round-Robin Scheduling

The weighted round-robin scheduling algorithm expresses each node's processing performance with a corresponding weight. The algorithm assigns task requests to nodes in order of weight, from high to low: a node with a higher weight processes more task requests than one with a lower weight, and nodes with equal weights process equal shares. The basic principle of weighted round robin can be described as follows. Assume a set of nodes N = {N0, N1, ..., Nn-1}; let W(Ni) denote the weight of node Ni, let the index variable i denote a server, and let T(Ni) denote the amount of tasks currently assigned to node Ni. ΣT(Ni) denotes the total amount of tasks to be processed in the current synchronization cycle, and ΣW(Ni) denotes the sum of the node weights. Then:

W(Ni) / ΣW(Ni) = T(Ni) / ΣT(Ni)

That is, tasks are distributed to each node in proportion to its share of the total weight.

4.2 Weight Calculation

When a node joins the cluster for the first time, the system administrator sets an initial weight DW(Ni) according to the node's hardware configuration (the higher the hardware configuration, the higher the default value), and this weight is also used on the load balancer. Then, as the node's load changes, the balancer adjusts the weight.
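The proportional identity W(Ni)/ΣW(Ni) = T(Ni)/ΣT(Ni) in section 4.1 can be checked with a minimal sketch; the node names and weights below are hypothetical:

```python
# Hypothetical node weights W(Ni); a weight of 4 means twice the
# processing share of a weight of 2.
weights = {"node-a": 4, "node-b": 2, "node-c": 2}

def allocate(total_tasks: int) -> dict:
    # Split total_tasks so that T(Ni)/sum(T) == W(Ni)/sum(W),
    # using integer division for whole task counts.
    total_w = sum(weights.values())
    return {n: total_tasks * w // total_w for n, w in weights.items()}
```

For 80 pending tasks, this yields 40/20/20, matching the 4:2:2 weight ratio.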
The dynamic weight is calculated from various parameters of the node. In our experiments we selected the most important of these, including CPU resources, memory resources, current number of processes, and response time, as the factors of the calculation formula. Combined with each node's current weight, the new weight can then be calculated. The purpose of the dynamic weight is to reflect the node's load condition correctly and to predict possible future load changes. For different types of system applications, the importance of each parameter also differs.

In a typical Web application environment, available memory resources and response time are very important; if users mainly run long database transactions, CPU usage and available memory are relatively important. To make it convenient to adjust each parameter appropriately while the system is running, we set a constant coefficient Ri for each parameter to indicate the importance of each load parameter, where ΣRi = 1. The weight formula of any node Ni can thus be described as:

LOAD(Ni) = R1*Lcpu(Ni) + R2*Lmemory(Ni) + R3*Lio(Ni) + R4*Lprocess(Ni) + R5*Lresponse(Ni)

where the Lf(Ni) are the load values of node Ni's current parameters, in order: CPU usage, memory usage, disk I/O access rate, total number of processes, and response time. For example, in a Web server cluster we use the coefficients {0.1, 0.4, 0.1, 0.1, 0.3}, reflecting the view that memory usage and response time matter more than the other parameters. If the current coefficients Ri do not reflect the application's load, the system administrator can keep correcting them until a set of coefficients close to the current application is found. Regarding the collection cycle for weights: although a short cycle reflects each node's load more immediately, collecting too frequently (say, once per second or more) burdens the balancer and the nodes and may also add unnecessary network load. Moreover, because each sample is an instantaneous value taken at collection time, experiments have shown that the load information the balancer sees for each node jitters severely, so the balancer cannot accurately capture a node's real load trend.
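A minimal sketch of the LOAD(Ni) formula, using the Web-server coefficients quoted above (the parameter values passed in are made-up, normalized samples):

```python
# Coefficients Ri from the web-server example in the text; they sum to 1.
R = {"cpu": 0.1, "memory": 0.4, "io": 0.1, "process": 0.1, "response": 0.3}

def load(ni_params: dict) -> float:
    # ni_params holds each normalized load parameter Lf(Ni) in [0, 1];
    # LOAD(Ni) is their importance-weighted sum.
    return sum(R[name] * ni_params[name] for name in R)
```

Tuning the application to a different workload is then just a matter of replacing the coefficient set, as the text describes.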
To solve these problems, on the one hand the cycle for collecting load information is adjusted appropriately, generally to 5 to 10 seconds; on the other hand, a moving average or sliding window can be used to avoid jitter, so that the load information collected by the balancer forms a smooth curve, which improves the adjustment effect of the negative feedback mechanism. The balancer's dynamic weight acquisition program runs periodically: if a node's default weight is nonzero, it queries the node's load parameters and computes the dynamic weight LOAD(Ni). We introduce the following weight calculation, which combines a node's initial weight with the collected dynamic weight to compute the final weight:

Wi = A*DW(Ni) + B*(LOAD(Ni) - DW(Ni))^(1/3)

In this formula, if the dynamic weight is almost equal to the initial weight, the final weight is unchanged; the system load is then exactly in the ideal condition, and the weight stays equal to the initial weight DW(Ni). If the dynamic weight computes higher than the initial weight, the final weight rises, indicating that the system load is light, and the balancer will increase the proportion of tasks assigned to the node. If the dynamic weight is lower than the initial weight, the final weight falls, indicating that the system is becoming overloaded, and the balancer will reduce the tasks assigned to the node. In actual use, if the weights of all nodes fall below their DW(Ni), the cluster as a whole is in an overloaded state, and a new node needs to be added to handle part of the load; conversely, if the weights of all nodes are much higher than DW(Ni), the current system load is relatively light.

5. Summary

Network load balancing is a concrete implementation of a cluster job scheduling system.
Since its unit of work is a network connection under the TCP/IP protocol, basic scheduling algorithms centered on network connections can be used. To address possible cluster load imbalance, weights are assigned to the service nodes and a negative feedback mechanism is used to adjust the balancer's distribution of network service requests, accommodating changes in the service nodes during operation.
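As a closing sketch, the dynamic-weight update of section 4.2 (exponential smoothing of collected samples, plus the final-weight formula) might look as follows. The cube-root form is one reading of the garbled formula in the source, and A, B, and ALPHA are hypothetical tuning constants, not values from the article:

```python
ALPHA = 0.3      # hypothetical EMA factor for smoothing collected samples
A, B = 1.0, 1.0  # hypothetical tuning constants in the weight formula

def smooth(prev: float, sample: float) -> float:
    # Exponential moving average: damps the jitter in collected load
    # information so the balancer sees a smooth curve.
    return (1 - ALPHA) * prev + ALPHA * sample

def final_weight(dw: float, load_w: float) -> float:
    # Wi = A*DW(Ni) + B*(LOAD(Ni) - DW(Ni))^(1/3); the signed cube root
    # keeps the direction of the deviation while damping its magnitude.
    delta = load_w - dw
    root = abs(delta) ** (1.0 / 3.0)
    return A * dw + B * (root if delta >= 0 else -root)
```

With A = 1, a node whose dynamic weight equals its initial weight keeps its weight unchanged, matching the "ideal condition" described in section 4.2; deviations in either direction move the final weight in the corresponding direction.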

