Linux Server Cluster System (1)



Contents:

Background

Server Cluster Systems

The Linux Virtual Server Project

Characteristics of the LVS Cluster

Applications of the LVS Cluster

Development Progress of the LVS Project and Reflections

LVS Project Online Resources

References

About the Author

An Introduction to the LVS Project, by Wensong Zhang (wensong@linux-vs.org)

This article describes the background and goals of the LVS (Linux Virtual Server) project, a Linux server cluster system. It presents the LVS cluster framework and its current software, lists the features of the LVS cluster system along with some real-world applications, and finally discusses the development progress of the LVS project and some reflections on its development.

1. Background

Computer technology has entered a period of network-centric computing. Because of its simplicity, the client/server model is widely used on the network. In the mid-1990s, the World Wide Web, with its simple mode of operation, brought the information on the Internet to the general public, and the Web also grew from a content delivery mechanism into a platform for services: a great many services and applications (such as news services, online banking, and e-commerce) are now built around the Web. This has driven dramatic growth in the number of Internet users and explosive growth in Internet traffic. Figure 1 shows the growth in the number of Internet hosts from 1995 to 2000 [1], from which the rapid development is plain to see.

Figure 1: Growth in the number of Internet hosts from 1995 to 2000

The rapid development of the Internet brings huge challenges to network bandwidth and to servers. In terms of network technology, network bandwidth has been growing faster than processor speed and memory access speed: 100M Ethernet, ATM, and Gigabit Ethernet are deployed, 10 Gigabit Ethernet is on the way, and dense wavelength division multiplexing (DWDM) on backbone networks will become the mainstream technology of broadband IP [2, 3]; Lucent has already introduced its WaveStar OLS 800G product [4]. We are therefore convinced that more and more bottlenecks will appear at the servers. Many studies have shown that it is hard for a single server to reach a throughput of 1 Gb/s over Gigabit Ethernet, because the protocol stack (TCP/IP) and the operating system are inefficient and the processor cannot keep up; this calls for deeper research into protocol processing methods, operating system scheduling, and I/O handling. On high-speed networks, the network service program running on a single server is likewise an important topic.

Popular sites attract unprecedented traffic. For example, according to Yahoo's press releases, Yahoo serves 625 million pages per day [5]. Other network services also receive huge traffic; America Online's Web cache system, for example, handles an enormous number of Web requests every day, with an average response length of 5.5 KBytes. At the same time, many network services are overwhelmed by exploding traffic and cannot handle users' requests in time, making users wait a long time and greatly lowering the quality of service. How to build scalable network services that meet ever-growing load demands has become a pressing problem.

Most websites need to offer service 24 hours a day, 7 days a week; for e-commerce in particular, any service interruption or loss of critical data causes direct business losses. For example, according to Dell's press release [6], Dell now sells more than $14 million per day online, and one hour of service interruption would cause an average loss of about $580,000. The reliability demanded of network services therefore keeps rising.

Meanwhile, CPU-intensive applications such as CGI and dynamic pages are becoming more and more common in Web services, which places high demands on server performance. Future network services will offer richer content, better interactivity, higher security, and so on, and will need ever stronger CPU and I/O processing capability. For example, HTTPS (Secure HTTP) requires considerably more processing performance than plain HTTP, and HTTPS is increasingly used by e-commerce sites.
Raw network traffic therefore does not tell the whole story; one must also consider that the evolution of the applications themselves demands ever stronger processing performance. The demand for highly scalable, highly available network services, realized through hardware and software, keeps growing, and it can be summarized in the following points:

Scalability: when the load on the service grows, the system can be expanded to meet the demand without degrading the quality of service.

High availability (Availability): the system as a whole must remain available 24 hours a day even though some hardware and software components fail.

Manageability: the whole system may be physically very large, but it should be easy to manage.

Cost-effectiveness: the whole system should be economical to build and affordable to scale.

2. Server Cluster Systems

A symmetric multiprocessing (SMP) system is a computer system made up of multiple symmetric processors together with memory and I/O components shared over a bus. SMP is what we usually call a tightly coupled multiprocessing system. Its scalability is limited, but its advantages are a single system image and shared memory and I/O, which make it easy to program.

Because SMP scales only so far, SMP servers clearly cannot meet the ever-growing demand of highly scalable, highly available network services. As the load grows, the server must be upgraded again and again. Such server upgrades have the following drawbacks: first, the upgrade process is cumbersome, and switching machines interrupts the service temporarily and wastes the original computing resources; second, the higher-end the server, the more disproportionately expensive it becomes; third, an SMP server is a single point of failure, and once the server or the application fails, the whole service is interrupted.

A server cluster interconnected by a high-performance network or a local area network is an effective structure for building highly scalable, highly available network services. A server cluster system with this loosely coupled structure has the following advantages:

Performance: the workload of network services is usually a large number of mutually independent tasks, so high overall performance can be obtained by distributing the work across a group of servers.

Performance/price ratio: the cluster system is made up of PC servers or RISC servers and standard network devices, which are mass-produced and therefore cheap, giving the highest performance/price ratio. If the overall performance grows nearly linearly with the number of nodes, the performance/price ratio of the whole system approaches that of a PC server. This loosely coupled structure therefore has a better performance/price ratio than tightly coupled multiprocessor systems.

Scalability: the number of nodes in a cluster system can grow to thousands, even tens of thousands, far beyond what a single supercomputer can reach.

High availability: the cluster has redundancy in both hardware and software; by detecting hardware and software failures and masking them, letting the surviving nodes carry the service, high availability is achieved.

Of course, building scalable network services on a server cluster system also involves much challenging work:

Transparency: how to efficiently make a group of independent computers appear as a single virtual server; client applications should interact with the cluster system exactly as they would with a single high-performance, highly available server, with no modification to client or server programs. Taking some servers in and out of service should not interrupt the service, and should likewise be transparent to users.

Performance: performance should scale close to linearly with the number of nodes, which requires a well-designed software and hardware architecture that eliminates potential bottlenecks in the system and schedules the load across the servers as evenly as possible.

High availability (Availability): a well-designed and well-implemented system for monitoring system resources and handling faults is required. When a module is found to have failed, the services it provides must be migrated to other modules; ideally, this migration is instant and automatic.

Manageability: the cluster system should be as easy to manage as a single-image system; under ideal conditions, hardware and software modules can be added plug-and-play (Plug & Play).

Programmability: it should be easy to develop applications on the cluster system.

3. The Linux Virtual Server Project

For highly scalable, highly available network services, we propose load balancing scheduling solutions at the IP level and based on content-aware request distribution, and we implement these methods in the Linux kernel, so that a group of servers makes up a virtual server that delivers scalable, highly available network services. The virtual server architecture is shown in Figure 2: a group of servers is interconnected by a high-speed local area network or by a geographically distributed wide area network, with a load scheduler at the front end. The load scheduler seamlessly dispatches network requests to the real servers, so the structure of the server cluster is transparent to clients: a client accesses the network services of the cluster system just as if it were accessing a single high-performance, highly available server. Client programs are not affected by the server cluster and need no modification. The system's scalability is achieved by transparently adding and removing nodes in the cluster; its high availability is achieved by detecting node or service process failures and reconfiguring the system appropriately. Because our load scheduling techniques are implemented in the Linux kernel, we call it the Linux Virtual Server.

Figure 2: The architecture of the virtual server

In May 1998 I started the Linux Virtual Server free software project to carry out the development of Linux server clusters. The Linux Virtual Server project is also one of the earliest free software projects started in China. The goal of the Linux Virtual Server project is: using cluster technology and the Linux operating system, to implement a high-performance, highly available server with good scalability, reliability, and manageability.

At present, the LVS project has delivered a Linux Virtual Server framework for implementing scalable network services, as shown in Figure 3. The LVS framework contains the IP virtual server software IPVS, which implements three IP load balancing techniques; the kernel layer-7 switch KTCPVS, which distributes requests based on content; and the associated cluster management software. Highly scalable, highly available Web, cache, mail, and media services can all be built on the LVS framework; on that basis, highly scalable, highly available e-commerce applications serving huge numbers of users can be developed.

Figure 3: The Linux Virtual Server framework

3.1 The IP Virtual Server Software IPVS

In the implementation of a scheduler, IP load balancing techniques are the most efficient. Among existing IP load balancing techniques, the main one uses network address translation to turn a group of servers into a high-performance, highly available virtual server; we call it VS/NAT (Virtual Server via Network Address Translation). Most commercial IP load balancing scheduler products use this method, such as Cisco's LocalDirector, F5's BIG/IP, and Alteon's ACEDirector. After analyzing the drawbacks of VS/NAT and the asymmetry of network services, we proposed VS/TUN (Virtual Server via IP Tunneling), which implements a virtual server through IP tunneling, and VS/DR (Virtual Server via Direct Routing), which implements a virtual server through direct routing; both can greatly improve the system's scalability. The IPVS software thus implements all three IP load balancing techniques. Roughly, they work as follows (the working principles are described in later chapters):

VS/NAT (Virtual Server via Network Address Translation): through network address translation, the scheduler rewrites the destination address of each request packet and, according to a preset scheduling algorithm, dispatches the request to a back-end real server; when the real server's response packets pass back through the scheduler, their source addresses are rewritten before they are returned to the client, completing the whole load scheduling process.

VS/TUN (Virtual Server via IP Tunneling): with NAT, both request and response packets must pass through the scheduler to have their addresses rewritten, so as client requests grow, the scheduler's processing power becomes a bottleneck. To solve this, the scheduler forwards request packets to the real servers through an IP tunnel, while the real servers send their responses directly back to the clients, so the scheduler only handles request packets. Since for typical network services the response traffic is much larger than the request traffic, the maximum throughput of a VS/TUN cluster can be as much as 10 times higher.

VS/DR (Virtual Server via Direct Routing): VS/DR sends a request to a real server by rewriting the MAC address of the request frame, and the real server returns its response directly to the client. Like VS/TUN, VS/DR greatly improves the scalability of the cluster system. This method has no IP tunneling overhead, and the real servers in the cluster need not support the IP tunneling protocol, but it requires that the scheduler and the real servers each have a network interface on the same physical network segment.
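Of the three techniques, VS/NAT is the simplest to picture. Below is a minimal, purely illustrative C sketch of the address rewriting idea behind it; the structure and function names are assumptions made for this sketch, not the actual IPVS kernel code:

```c
#include <stdint.h>

/* A simplified view of the header fields the scheduler rewrites. */
struct packet {
    uint32_t saddr, daddr;   /* IPv4 source and destination addresses */
    uint16_t sport, dport;   /* TCP/UDP source and destination ports */
};

/* Inbound: a client packet addressed to the virtual IP is redirected
 * to the real server chosen by the scheduling algorithm. */
void nat_inbound(struct packet *p, uint32_t real_ip, uint16_t real_port) {
    p->daddr = real_ip;
    p->dport = real_port;
    /* a real implementation must also recompute IP and TCP checksums */
}

/* Outbound: the real server's reply passes back through the scheduler,
 * which restores the virtual address so the client sees one server. */
void nat_outbound(struct packet *p, uint32_t virtual_ip, uint16_t virtual_port) {
    p->saddr = virtual_ip;
    p->sport = virtual_port;
}
```

In a real implementation the scheduler also tracks each connection, so that all packets belonging to one connection keep going to the same real server.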

For different network service requirements and server configurations, the IPVS scheduler implements the following eight load scheduling algorithms:

Round Robin: the scheduler distributes external requests to the real servers in the cluster in turn. It treats every server equally, regardless of the server's actual number of connections or its system load.

Weighted Round Robin: the scheduler dispatches requests according to the differing processing capacities of the real servers, ensuring that servers with stronger processing capacity handle more traffic. The scheduler can query the load of the real servers automatically and adjust their weights dynamically.

Least Connections: the scheduler dynamically dispatches network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, the "least connections" algorithm balances the load well.

Weighted Least Connections: when the performance of the servers in the cluster varies widely, the scheduler uses "weighted least connections" to optimize load balancing; servers with higher weights bear a correspondingly larger share of the active connection load. The scheduler can query the load of the real servers automatically and adjust their weights dynamically.

Locality-Based Least Connections (LBLC): this algorithm balances load based on the destination IP address and is currently used mainly in cache cluster systems. Based on the request's destination IP address, the algorithm finds the server most recently used for that address; if that server is available and not overloaded, the request is sent there. If the server does not exist, or if it is overloaded and another server is at half its workload, an available server is chosen by the "least connections" principle and the request is sent to it.

Locality-Based Least Connections with Replication (LBLCR): this algorithm also balances load based on the destination IP address and is mainly used in cache cluster systems. It differs from LBLC in that it maintains a mapping from a destination IP address to a set of servers, whereas LBLC maintains a mapping from a destination IP address to a single server. Based on the request's destination IP address, the algorithm finds the server group for that address and chooses one server from the group by the "least connections" principle; if that server is not overloaded, the request is sent to it. If it is overloaded, a server is chosen by the "least connections" principle from the whole cluster, added to the server group, and the request is sent to it. Also, when the server group has not been modified for some time, the busiest server is removed from the group to reduce the degree of replication.

Destination Hashing: the scheduler looks up the server in a statically assigned hash table, using the request's destination IP address as the hash key; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned.

Source Hashing: the "source address hashing" algorithm looks up the server in a statically assigned hash table, using the request's source IP address as the hash key; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned.
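As one concrete example, weighted round-robin selection can be expressed compactly. The sketch below follows the weighted round-robin algorithm as commonly documented for IPVS-style schedulers, interleaving servers by stepping the current weight down by the gcd of all weights; the server structure and the way state is threaded through the function are assumptions for illustration:

```c
#include <stddef.h>

struct server { const char *name; int weight; };

/* greatest common divisor of all server weights */
static int gcd_of(const struct server *s, size_t n) {
    int g = 0;
    for (size_t i = 0; i < n; i++) {
        int a = s[i].weight, b = g;
        while (b) { int t = a % b; a = b; b = t; }
        g = a;
    }
    return g;
}

static int max_weight(const struct server *s, size_t n) {
    int m = 0;
    for (size_t i = 0; i < n; i++)
        if (s[i].weight > m) m = s[i].weight;
    return m;
}

/* Pick the next server. Callers keep two state variables across calls:
 * last (index of the previously chosen server, initially -1) and
 * cw (the current weight, initially 0). */
const struct server *wrr_next(const struct server *s, size_t n,
                              int *last, int *cw) {
    for (;;) {
        *last = (*last + 1) % (int)n;
        if (*last == 0) {
            *cw -= gcd_of(s, n);
            if (*cw <= 0) {
                *cw = max_weight(s, n);
                if (*cw == 0)
                    return NULL;   /* no server has a positive weight */
            }
        }
        if (s[*last].weight >= *cw)
            return &s[*last];
    }
}
```

Starting from `last = -1` and `cw = 0`, servers with larger weights are returned proportionally more often while still being interleaved with the others.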

3.2 The Kernel Layer-7 Switch KTCPVS

With IP load scheduling, when the initial SYN packet of a TCP connection arrives, the scheduler chooses a server and forwards the packet to it; after that, by checking the IP addresses and TCP ports of incoming packets, it ensures that the subsequent packets of that connection are forwarded to the same server. This means IPVS cannot examine the content of a request before choosing a server, which requires that the back-end server group provide identical services: no matter which server a request is sent to, the result returned must be the same. In some applications, however, the back-end servers serve different functions: some serve HTML documents, some serve images, and some serve CGI, and this requires content-based scheduling.

Because the overhead of a TCP gateway in user space is too high, we propose performing layer-7 switching inside the operating system kernel, avoiding the overhead of context switching and memory copying between user space and kernel space. In the Linux kernel we have implemented a layer-7 switch called KTCPVS (Kernel TCP Virtual Server). KTCPVS can currently perform content-based scheduling of HTTP requests, but it is not yet mature, and much work remains to be done on its scheduling algorithms and on its support for various protocols. Although application-layer switching is more expensive to process and its scalability is limited, it brings the following benefits:

Requests for the same page are sent to the same server, which can improve that server's cache hit rate.

Some studies [5] have shown that there is locality in Web access streams. Layer-7 switching can exploit this locality by sending requests of the same type to the same server, so that the requests each server receives are more similar to one another, further improving individual servers' cache hit rates.

The back-end servers can run different types of services, such as document services, image services, CGI services, and database services.
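To make the idea of content-based scheduling concrete, here is a toy dispatch rule in C. The pool names and URL prefixes are invented for illustration; this is not KTCPVS's actual interface:

```c
#include <string.h>

/* Map a request URL to a named server pool, so that e.g. all image
 * requests share one cache-friendly pool of back-end servers. */
const char *pick_pool(const char *url) {
    if (strncmp(url, "/cgi-bin/", 9) == 0) return "cgi-pool";
    if (strncmp(url, "/images/", 8) == 0)  return "image-pool";
    return "document-pool";   /* static HTML and everything else */
}
```

Unlike IP-level scheduling, a layer-7 switch must first accept the connection and read the request before it can choose a server, which is exactly the extra processing cost discussed above.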

4. Characteristics of the LVS Cluster

The characteristics of the LVS cluster can be summarized as follows:

Functionality: the system includes the IPVS software, which implements the three IP load balancing techniques and the eight connection scheduling algorithms. Internally, IPVS uses an efficient hash function and a garbage collection mechanism, and it correctly handles ICMP messages related to the scheduled packets (which some commercial systems fail to do). There is no limit on the number of virtual services, and each virtual service has its own server set. IPVS supports persistent virtual services (needed by HTTP cookies, HTTPS, and the like) and provides detailed statistics, such as the connection processing rate and the packet and byte traffic. Three defense strategies against large-scale denial-of-service attacks are implemented. There is also the application-layer switching software KTCPVS, which distributes requests based on content and is likewise implemented in the Linux kernel, and related cluster management software that monitors resources and masks faults in time, achieving high availability for the system. The primary and backup schedulers synchronize state periodically, achieving even higher availability. (A toy sketch of this kind of connection hashing appears at the end of this section.)

Applicability: the back-end servers can run any operating system that supports TCP/IP, including Linux, various flavors of UNIX (such as FreeBSD, Sun Solaris, and HP-UX), Mac OS, and Windows NT/2000. The load scheduler supports the vast majority of TCP and UDP protocols:

Protocol: supported services
TCP: HTTP, FTP, PROXY, SMTP, POP3, IMAP4, DNS, LDAP, HTTPS, SSMTP, etc.
UDP: DNS, NTP, ICP, video and audio streaming protocols, etc.

Most Internet services can be used without any modification to clients or servers.

Performance: the LVS server cluster system has good scalability and can support millions of concurrent connections. With 100M network cards and the VS/TUN or VS/DR scheduling technique, the throughput of the cluster system can reach 1 Gbit/s; with Gigabit network cards, the maximum throughput of the system can approach 10 Gbit/s.

Reliability: the LVS server cluster software has been running well on many large, critical sites, so its reliability has been well confirmed in real-world deployments. Many schedulers have been running for more than a year without a single restart.

Software license: the LVS cluster software is free software released under the GPL (GNU General Public License), which means you can obtain the software's source code and have the right to modify it, but you must ensure that your modifications are also released under the GPL.
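Returning to the connection handling mentioned under functionality above, a toy version of that kind of connection hashing is sketched below; the hash function and table size are illustrative assumptions, not the actual IPVS internals:

```c
#include <stdint.h>

#define TAB_BITS 12
#define TAB_SIZE (1 << TAB_BITS)   /* 4096 buckets; size is illustrative */

/* Hash a connection's client address and port into a bucket index, so
 * every packet of one TCP connection lands on the same table entry
 * (and therefore keeps going to the same real server). */
unsigned conn_hash(uint32_t caddr, uint16_t cport) {
    uint32_t h = caddr ^ ((uint32_t)cport << 16) ^ cport;
    h *= 2654435761u;              /* Knuth multiplicative hashing */
    return h >> (32 - TAB_BITS);   /* take the top TAB_BITS bits */
}
```

Fast per-connection lookup of this kind is also the sort of bookkeeping that persistent virtual services build on, since a client must be pinned to one real server for a period of time.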

5. Applications of the LVS Cluster

Since the LVS project was founded, the LVS cluster system has been deployed on many heavily loaded sites. As far as I know, the system is in production use at dozens of sites in the United States, the United Kingdom, Germany, and other countries. We do not have hundreds of machines and a high-speed network with which to measure the ultimate performance of LVS ourselves, so we cite LVS deployments to illustrate its high performance and stability. Some of the larger LVS applications we know of are as follows:

The UK National JANET Web Cache Service (wwwcache.ja.net) provides Web caching for about 150 universities in the United Kingdom. They replaced their original 50-odd independent cache servers with an LVS cluster of 28 nodes. In their own words, it now feels like summer all year round, because summer is vacation time, when not many people use the network.

The Linux portal site (www.linux.com) uses LVS together with many VA Linux SMP servers to build a high-performance Web service.

SourceForge (sourceforge.net) provides Web, FTP, mailing list, and CVS services for open source projects all over the world; it likewise uses LVS to schedule load across more than ten machines.

One of the world's largest PC manufacturers uses two LVS cluster systems, one in the Americas and one in Europe, for its online direct sales system.

Real Networks (www.real.com), which provides audio and video services through RealPlayer, uses an LVS cluster of about 20 servers to provide audio and video services to its users worldwide. In March 2000, the whole cluster system was handling an average request stream of 20,000 connections per second.

NetWalk (www.netwalk.com) built an LVS system with multiple servers, providing 1024 virtual services; the North American mirror site of the LVS project (www.us.linuxvirtualserver.org) is one of them.

RedHat (www.redhat.com) has included LVS code since its 6.1 release, and has developed an LVS cluster management tool called Piranha for controlling LVS clusters, with a graphical configuration interface.

VA Linux (www.valinux.com) provides customers with LVS-based server cluster systems, along with related services and support.

TurboLinux's "World First-class Linux Cluster Product" TurboCluster is actually based on LVS ideas and code, just forget to thank the news release and product demonstration.

Red Flag Linux and China Soft both provide LVS-based cluster solutions, which they demonstrated at LinuxWorld China 2000 in September 2000.

To further illustrate the performance and reliability of LVS, here are two comments from users:

"We tried virtually all of the commercial load balancers, LVS beats them all for reliability, cost, manageability, you-name-it" -- Jerry Glomph Black, Director, Internet & Technical Operations, Real Networks, Seattle, Washington, USA (http://marc.theaimsgroup.com/?l=linux-virtual-server&m=95385809030794&w=2)

"I can say without a doubt that lvs toasts F5/BigIP solutions, at least in our real world implementations. I would not trade a good lvs box for a Cisco Local Director either" -- Drew Streib, Information Architect, VA Linux Systems, USA (http://marc.theaimsgroup.com/?l=linux-virtual-server&m=95385694529750&w=2)

6. Development Progress of the LVS Project and Reflections

The LVS project started in May 1998. When I released the first version of IPVS on the website, I received continual encouragement and support from users and developers on the Internet. It must be said that the program as first released was very simple. Driven by users' adoption, feedback, and expectations, I kept seeing the value of this work, kept adding new features to the software, and kept improving it, with help from other developers as well, so the software gradually grew into a complete and useful system, going far beyond what I originally imagined. Here I would like to thank Julian Anastasov for his bug fixes and improvements to the system and Dr. Joseph Mack for writing the LVS HOWTO document; I also thank several vendors for sponsoring my development (with hardware and so on) and for sponsoring me several times to give technical talks.

At present, ongoing development work in the LVS project includes:

Extending IPVS to support other transport protocols, such as AH (Authentication Header) and ESP (Encapsulating Security Payload), so that the IPVS scheduler can implement server clusters for IPSec.

Providing unified, complete, and more flexible LVS cluster management software.

Extending and improving KTCPVS's scheduling algorithms and its support for multiple protocols, making it more complete and more stable.

Making some attempts at TCP splicing and TCP handoff, to further improve application-layer scheduling in LVS clusters.

