An overview of high-end servers
The high-end server is one of the hardware products most closely tied to networking: a device that provides shared resources (query, storage, computation, and so on) to online clients in a network environment. It offers high reliability, high performance, high throughput, and large memory capacity, together with strong networking functions and a friendly man-machine interface, making it a key device in network-centric modern computing environments. With the rapid development of the Internet, high-end servers play an ever more important role in the infrastructure of the information highway.

Servers can be divided into two broad classes. One is the IA (Intel Architecture) server, often called the PC server or NT server; the other covers machines beyond the IA class, such as RISC/UNIX servers, and it is these that are called high-end servers. High-end servers come in many sizes, from minicomputers through mainframes to supercomputers. Competition in this field is intense. Among foreign brands, IBM, HP, Sun, and SGI are the strong players. Domestically, Dawning, backed by the national intelligent computing research center and the Institute of Computing Technology of the Chinese Academy of Sciences, is a pilot enterprise of the national 863 Program; its Dawning 1000, Dawning 2000-I, and Dawning 2000-II are regarded as milestone products. Dawning is not limited to high-end machines: it is the only domestic manufacturer with a full server product line, spanning PC servers, UNIX servers, and superservers. Lenovo recently launched an 8-way server, demonstrating its technology and its confidence in the high-end server market. Langchao's minicomputers use symmetric multiprocessing technology, are mainly deployed in national economic departments, have achieved respectable sales, and make Langchao the leader in domestic minicomputer products.
Currently, most high-end servers are RISC/UNIX servers, so any history of high-end servers must mention RISC (Reduced Instruction Set Computing) technology. IBM invented RISC in the 1970s, and by the late 1980s the RISC structure had gradually displaced CISC (Complex Instruction Set Computing) as the mainstream microprocessor design. RISC optimizes the instruction system to speed up program compilation and raise running speed. It adopts a simpler, more uniform instruction format, fixed instruction lengths, and optimized addressing, making the whole computing structure more rational. In general, a RISC processor is 50%~75% faster than a comparable CISC processor, and it is also easier to design and debug. On the basis of the RISC architecture, each manufacturer developed its own processor; the main lines in use today are the PowerPC, SPARC, PA-RISC, and MIPS processors.

PowerPC processor. In the 1990s IBM, Apple, and Motorola jointly developed the PowerPC chip and built multiprocessor computers around it. The PowerPC architecture is characterized by good scalability, convenience, and flexibility. The first generation of PowerPC used a 0.6-micron process, integrating 3 million transistors on a single chip. In 1998 the copper chip appeared, opening a new era; in 2000 IBM began volume production of copper-chip products such as the RS/6000 X80 series. Copper interconnect replaced the aluminum technology that had been in use for 30 years, the silicon process reached 0.20 microns, and a single chip integrated 200 million transistors, greatly improving performance.
Operating at a low voltage of 1.85V (down from the original 2.5V) greatly reduces the chip's power consumption and makes it easier to cool, thereby markedly improving system stability.
The 1GHz Power4 processor has now completed design. Besides raising the clock frequency from today's 500MHz to 1GHz, Power4 will be the first to use a 0.11-micron process, with 170 million transistors, 7-layer copper interconnect, and SOI (Silicon on Insulator) technology. These advances will let Power4 mark a new step in the history of server chips.

SPARC processor. Sun worked with TI to develop the RISC microprocessor SPARC. Its most prominent feature is scalability: it was the industry's first microprocessor with built-in extensibility, and its launch put Sun ahead in the high-end microprocessor market. In June 1999 the UltraSPARC III debuted. Manufactured on an advanced 0.18-micron process, it has a fully 64-bit structure and the VIS instruction set, clock frequencies starting at 600MHz, and can work in systems with up to 1000 processors. The UltraSPARC III maintains 100% binary compatibility with applications under the Solaris operating system, fully protecting customers' software investment, and has won the support of many independent software vendors.

Sun's 64-bit UltraSPARC processors fall mainly into three series. The first is the scalable s series, aimed at high-performance, easily extended multiprocessor systems; the UltraSPARC IIIs has reached 750MHz, and the forthcoming UltraSPARC IVs and UltraSPARC Vs are planned at 1GHz and 1.5GHz respectively. The second is the integrated i series, which combines a variety of system functions on one processor to give better value in uniprocessor systems; the UltraSPARC IIIi has reached 700MHz, and the future UltraSPARC IVi will reach 1GHz.

PA-RISC processor. HP's RISC chip, PA-RISC, appeared in 1986. The PA-8000, running at 180MHz, was followed by the PA-8200, PA-8500, and PA-8600 models.
HP's 64-bit PA-8700 microprocessor will officially ship in servers and workstations in the first half of 2001. The new processor has a design frequency of 800MHz and is built on a 0.18-micron SOI copper CMOS process with 7-layer copper interconnect; its on-chip cache reaches 2.25MB, 50% more than the PA-8600. The PA-8800 and PA-8900 processors will follow, with clock frequencies of 1GHz and 1.2GHz respectively. PA-RISC is also the foundation of IA-64: future IA-64 chips will retain many important features of the PA-RISC chip, including its virtual storage architecture, unified data formats, floating-point operations, multimedia, and graphics acceleration.

MIPS processor. MIPS Technologies designs and makes high-performance, high-end, and embedded 32-bit and 64-bit processors and plays an important role among RISC processor makers. MIPS Computer Systems was founded in 1984; SGI acquired it in 1992, and in 1998 MIPS spun off from SGI to become MIPS Technologies. MIPS began designing RISC processors in the early 1980s, launched the R2000 in 1986 and the R3000 in 1988, and in 1991 introduced the R4000, the first commercial 64-bit microprocessor. The R8000 (1994), R10000 (1996), and R12000 (1997) followed. After that, MIPS changed strategy to focus on embedded systems.
In 1999 MIPS issued the MIPS32 and MIPS64 architecture standards, laying a foundation for future MIPS processors. The new architecture integrates all the original MIPS instruction sets and adds many more powerful features. MIPS developed the high-performance, low-power 32-bit processor core MIPS32 4Kc and the high-performance 64-bit core MIPS64 5Kc; in 2000 it issued a new version of the MIPS32 4Kc and announced the future 64-bit MIPS64 20Kc core. Today the RISC chip is still widely used on UNIX system platforms, and Windows NT can also be supported; multiprocessors based on the RISC architecture still hold their ground in computation-heavy fields such as database and dedicated servers.

IA-64 processor. Since the IA server appeared in 1993, it has progressed through the 486, Pentium Pro, PII, PIII, and Xeon systems. While the processing power of the processors improved greatly, the bus structure of the server remained the IA-32 design. By the time the IA-32 server reached the 8-way Xeon stage, the architecture had begun to be the bottleneck limiting server performance: first the PCI channel bandwidth, and now the memory bus bandwidth and the limits on processor expansion. HP and Intel therefore began cooperating in 1994 to develop the IA-64 architecture, hoping to combine HP's decade of work in the RISC field and to raise performance at the microprocessor level by increasing parallelism at the instruction level. The IA-64 structure is neither an extension of Intel's 32-bit x86 structure nor an adoption of HP's 64-bit PA-RISC structure, but a brand-new design based on EPIC (Explicitly Parallel Instruction Computing) technology. The main characteristics of IA-64 are as follows:

* IA-64 has a larger system memory addressing space, supporting 32GB of memory, whereas the maximum memory capacity of an IA-32 server is 16GB.
* The IA-64 processor addresses more memory, has more processing power, and is faster. The Itanium processor clock starts at no less than 1GHz, with a 2MB secondary cache.

* The IA-64 system adds 128 floating-point registers, greatly increasing the system's floating-point computing power.

* The IA-64 system will use a bus structure based on InfiniBand technology, with a switched system bus at its core replacing today's shared bus. InfiniBand unifies the NGIO and Future I/O proposals, so the system bus, memory bus, and I/O bus bandwidths will all improve greatly: the IA-64 system bandwidth is 2GB/s, whereas current SMP IA-32 server system bandwidth is 1.06GB/s and PCI bandwidth is generally 0.4GB/s.

* IA-64 includes a range of built-in features to extend the computer's normal uptime and reduce downtime. The machine check architecture provides error detection and correction in memory and data paths, allowing IA-64 platforms to recover from errors that would otherwise cause system failures.

Operating systems officially announced to support the IA-64 platform include Monterey, 64-bit Linux, HP-UX, Solaris, and Windows 2000.

High-end server technology. Server performance is expressed by system response speed and job throughput. Response speed is the time from the user submitting a request to the server returning a result; job throughput is the amount of work the whole server completes per unit time. For a single user issuing requests without pause, and with ample system resources, throughput is inversely proportional to response time: the shorter the response time, the greater the throughput.
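The inverse relationship between single-user response time and throughput described above can be checked with a minimal sketch (an illustration, not taken from the article; the function name and numbers are my own):

```python
# A single user issuing requests back-to-back: each request takes
# response_time_s seconds, and the next one starts immediately, so
# single-user throughput is simply the reciprocal of response time.
def throughput(response_time_s: float) -> float:
    """Requests per second completed by one user with no think time."""
    if response_time_s <= 0:
        raise ValueError("response time must be positive")
    return 1.0 / response_time_s

fast = throughput(0.5)  # 0.5 s per request -> 2.0 requests/s
slow = throughput(2.0)  # 2.0 s per request -> 0.5 requests/s
```

Halving the response time doubles the throughput, which is exactly the proportionality the text states.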
To shorten the response time for a particular user or service, the administrator can assign it more resources. Performance tuning means changing the system resources allocated to each user and service program according to application requirements and the server's running environment and state, so as to exploit the system's full capability, satisfy the users' requirements with as few resources as possible, and thereby serve more users.

Technical goals. The high scalability, high availability, manageability, and high reliability demanded of servers are not only the technical goals of manufacturers but also the needs of users.

Scalability shows itself in two respects: spare space in the chassis, and ample I/O bandwidth. As processor speeds rise and processor counts grow, the bottleneck of server performance shifts to PCI and its attached devices. High scalability means users can add the relevant components at any time as needed to meet operating requirements, protecting their investment.

Availability uses the proportion of time the device is working as its measure. For example, 99.9% availability allows roughly 8.8 hours per year in which the device is not working properly, while 99.99% availability allows roughly 53 minutes per year. Component redundancy is the basic method of improving availability: redundant units are added for the components whose failure endangers the system (such as power supplies, hard drives, fans, and PCI cards), together with structures that make replacement convenient (such as hot plugging), so that even when a fault occurs these devices do not interrupt the normal operation of the system.

Manageability aims to use specific technologies and products to raise system reliability and lower the cost of purchasing, using, deploying, and supporting the system.
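The downtime implied by an availability percentage, as in the figures above, is a direct calculation (a small sketch of my own; the function name is arbitrary):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes per year a device may be down at the given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability_pct / 100.0)

# 99.9%  -> about 525.6 minutes (~8.8 hours) of downtime per year
# 99.99% -> about 52.6 minutes of downtime per year
three_nines = downtime_minutes_per_year(99.9)
four_nines = downtime_minutes_per_year(99.99)
```

Each extra "nine" cuts the permitted downtime by a factor of ten, which is why availability targets escalate redundancy requirements so quickly.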
Its most significant effect is in reducing the maintenance staff's workload and avoiding the losses caused by system outages; the server's manageability directly affects its ease of use. Manageability is the largest share of the various costs in TCO: studies show that the cost of deploying and supporting a system far exceeds the initial purchase cost, with pay for management and support personnel taking the highest share. Beyond that, the reduced work efficiency, lost business opportunities, and declining revenue caused by outages cannot be ignored. Good manageability is therefore both an urgent requirement of the IT department and a critical factor in business efficiency. Manageable products and tools simplify system administration by exposing information from inside the system. With remote management over the network, technical support staff can solve problems from their desktops without traveling to the fault site. System components can monitor their own working state and raise a warning as soon as a fault is found, prompting maintenance staff to act immediately to protect enterprise data assets; replacing the faulty component is also simple and convenient.

Reliability, simply put, means the server must run stably with low downtime. The key is cooperation between hardware and software. If the resources being processed are kept under the control of the CPU and the operating system, the system can avoid being brought down by a single error, and server downtime drops sharply; this is precisely one of the advantages of UNIX/Linux systems. Planned interruptions for daily maintenance include host upgrades; hardware maintenance or installation; operating system upgrades; application or file upgrades and maintenance; file reorganization; and full system backups.
Unplanned outages include hard disk damage, system failure, software failure, user error, power failure, human damage, and natural disasters.

SMP. SMP (Symmetric Multi-Processing) refers to the symmetric multiprocessor structure, in which every processor in the machine has equal status and all are connected to a shared memory. The operating system resides in that memory, every processor can run it, and every processor can respond to requests from external devices; in other words, all processors occupy an equal, symmetric position. Models of this kind on the domestic market generally carry 4 or 8 processors, with a small number carrying 16.
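The shared-memory model described above can be illustrated in miniature with threads: every worker is an equal peer operating on the same memory, with a lock standing in for the hardware's coherence machinery (a toy analogue of my own, not a description of real SMP hardware):

```python
import threading

# One shared memory location, visible to all symmetric workers.
counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    """An equal peer: reads and writes the same shared counter."""
    global counter
    for _ in range(increments):
        with lock:  # serialize access to the shared location
            counter += 1

# Four symmetric workers, mimicking a 4-processor SMP box.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four peers saw and updated the same shared state: counter == 40_000.
```

The lock is also a hint at why SMP scales poorly: as peer count grows, contention for shared state (and for the shared bus in real hardware) becomes the limit, which the next paragraph discusses.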
In general, however, the scalability of the SMP structure is poor: it is difficult to go beyond 100 processors, and 8 to 16 is usual, though that is already enough for most users. The advantage of such machines is that their usage differs little from a microcomputer or workstation, so the programming changes are small: porting a program written for a microcomputer or workstation onto an SMP machine is relatively easy. The availability of SMP models is comparatively poor, because the 4 or 8 processors share one operating system and one memory; once the operating system has a problem, the entire machine is paralyzed. And because these machines scale poorly, users' investment is not easy to protect. Nevertheless the technology is mature and the matching software plentiful, so the parallel machines now sold on the domestic market are mostly of this kind.

Cluster technology. A cluster is a technique that connects at least two systems so that the servers can work as one machine, or appear to be one machine. Cluster systems are usually deployed to improve system stability and the data-processing and service capacity of a network center. Various forms of cluster technology have appeared since the 1980s; because clusters provide high availability and scalability, they have rapidly become a pillar of enterprise and ISP computing.

Common cluster technologies. 1. Server mirroring. Server mirroring uses software, or special network devices such as mirror cards, to mirror the hard drives of two servers across the same local network. One server is designated the primary server and the other the secondary.
Clients can read and write only the mirrored volume on the primary server; that is, only the primary server provides service to users over the network, while the corresponding volume on the secondary server is locked to prevent access to the data. The primary and secondary servers monitor each other's operating state over a heartbeat link; when the primary goes down, the secondary takes over its work within a short time. Server mirroring is cheap and improves system availability, keeping the system usable when one server fails, but it is limited to clusters of two servers and does not scale.

2. Application failover clustering. Failover clustering connects two or more servers on the same network. Each server node in the cluster runs its own applications, has its own broadcast address, and serves front-end users, while also monitoring the operating state of the other servers and acting as a hot backup for a designated peer. When a node goes down, the server designated in the cluster takes over the failed machine's data and applications within a short time and continues serving the front-end users. Failover clustering usually requires an external storage device, a disk array cabinet: two or more servers connect to the disk array over SCSI cables or fiber, and the data resides on the array. In such a cluster, nodes typically back each other up in pairs, rather than several servers backing one another simultaneously, and the nodes monitor each other's heartbeat via serial ports, shared disk partitions, or an internal network. Failover clustering is often used for database servers, mail servers, and the like; the shared storage device adds peripheral cost.
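The heartbeat-driven takeover decision described above can be sketched as follows (a toy illustration of my own; the timeout value and function names are hypothetical, and real cluster software adds fencing and quorum logic on top of this):

```python
import time
from typing import Optional

HEARTBEAT_TIMEOUT_S = 5.0  # assumed threshold; tuned per deployment

def should_take_over(last_heartbeat: float,
                     now: Optional[float] = None) -> bool:
    """Return True when the primary has been silent long enough that
    the standby should take over its address and services."""
    now = time.monotonic() if now is None else now
    return (now - last_heartbeat) > HEARTBEAT_TIMEOUT_S

# Primary last heard from 6 s ago -> standby promotes itself.
assert should_take_over(last_heartbeat=100.0, now=106.0)
# Primary heard from 3 s ago -> keep waiting.
assert not should_take_over(last_heartbeat=100.0, now=103.0)
```

Using a monotonic clock matters here: wall-clock adjustments (NTP steps, DST) would otherwise cause spurious takeovers.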
It can be extended to clusters of up to 32 machines, greatly improving the system's availability and scalability.

3. Fault-tolerant clustering. A typical application of fault-tolerant cluster technology is the fault-tolerant machine, in which every component has a redundant design. In a fault-tolerant cluster, each node is tightly coupled to the others, commonly sharing critical subsystems such as memory, hard drives, CPUs, and I/O; together the nodes present a single system image, with every node a part of that image. In a fault-tolerant cluster system, switching between applications completes without any switchover time.
Fault-tolerant clustering usually requires special hardware and software design, so its cost is high, but it maximizes system availability and is the best choice for finance and security departments. At present the widely applied availability technique is application failover, namely the familiar dual-machine cluster sharing a disk array over SCSI cables; cluster software vendors and operating system vendors are extending this technique further, forming a colorful market of cluster systems.

Operating systems. 1. UNIX. The UNIX operating system is powerful, technically mature, and reliable, with strong networking and database functions. It holds an important, irreplaceable position in computer technology, and especially in operating system technology. Although UNIX has been severely challenged by NT, it is still the only operating system that runs stably on a wide range of hardware platforms, and it still leads NT in technical maturity, stability, and reliability. The rise of the Internet places higher demands on servers; adapting to constantly changing and growing network application requirements has become the key issue facing server technology. The processor at the server's core cannot rely solely on higher clock frequencies, for the processor structure itself has become a bottleneck to server performance. The major UNIX vendors, HP, IBM, Sun, SGI, and others, are adopting new technologies to strengthen their lead in performance and capacity, chiefly 64-bit processors and 64-bit operating systems, fast and scalable interconnect technology, large memory, high-performance clusters, and high-bandwidth I/O.
After the IA-64 architecture appeared, the move of UNIX systems to IA-64 became a major industry trend. Most importantly, many UNIX vendors are joining the IA-64 camp, pushing the openness of UNIX and of the Wintel architecture to its peak, truly realizing cross-platform use of application systems and giving users maximum flexibility. Intel is trying to establish a common standard across the different UNIX operating system versions, a key requirement for marketing its high-end servers and next-generation 64-bit Merced chip, and also a move against Microsoft in the high-end computing field. Intel's development strategy aims to accelerate UNIX systems developed for Intel-based servers. In creating a "unified UNIX", Intel will cooperate with companies such as HP, IBM, Sun, and SCO. They have indicated that in the high-end "enterprise software" market UNIX will continue to play the key role, while Microsoft still appears weak on "scalability". Scalability is an important measure of how stably an operating system handles ever larger volumes of data.

2. Linux. For servers, Linux matters in the following respects: RAS (Reliability, Availability, Serviceability) technology, redundant disk array (RAID) technology, cluster computing, and parallel computing. RAS, RAID, and clustering are the foundation of enterprise operation; without them Linux would have no chance of entering high-end markets such as banks and large ICPs. As this young operating system's popularity and rapid development continue, and with the arrival of IA-64 Linux for the new generation of business computing platforms and the release of kernel 2.4, which supports up to 64 CPUs and 64GB of memory, Linux will play a growing role in enterprise computing.
In the server operating system market, Linux continues to build on its original strengths, on the one hand resisting Windows's erosion of the low-end market and on the other attacking the high-end server market under UNIX's control.
In October 1999 TurboLinux released the world's first commercial clustered Linux server. Low in cost, with good scalability, flexibility, and a wide range of applications, it opened the road for Linux into enterprise high-end applications and large-scale websites. Statistics suggest that roughly 20% of Internet service websites use cluster solutions, and the market will see remarkable growth in the future. Thanks to advances in cluster performance, Linux is mounting a real challenge to the UNIX-controlled high-end server market.

Choosing a high-end server. Facing many brands, assorted proprietary technologies, and widely differing products, users are often at a loss as to how to choose a high-end server that suits the network being built. In fact, server purchasing can follow some practical rules, summed up as the MAPSS principle: M for manageability, A for availability, P for performance, S for service, and S for saving cost.

First, M: manageability. An important part of the network administrator's job is managing the server. On one hand this means discovering server problems promptly and carrying out timely maintenance, avoiding or shortening the server downtime that paralyzes the user's whole system; on the other, it means the administrator can keep track of the running server's performance and upgrade it in good time. All of this can greatly improve the company's work efficiency.

Second, A: availability. The high-end server is the center of the network and its data. Many enterprises (in finance, postal services, securities, and so on) must avoid server downtime as far as possible: an outage interrupts internal and external information flow, blocks order-taking, and halts internal business processes, which a modern enterprise cannot afford. Government agencies and medical institutions likewise must keep their work running normally and avoid server downtime.
When choosing a product, the user should check whether the server can guarantee uninterrupted 24 × 7 × 365 operation and whether redundancy technology is used. A central server running in a critical environment generally needs multiple power supplies, hot-swappable hard drives, and a RAID card, and where necessary a dual-machine hot-backup solution. Server downtime has many causes. Hardware: AC power failure; storage faults; memory, power supply, or fan failure; processor failure; system board faults; and so on. Software: system software and application software. Operation: administrator or user error. Environment: fire, earthquake, flood, and the like. Since downtime brings users huge losses and has many causes, the user should prepare for the worst cases that may occur:

1. A UPS addresses AC power failure.

2. For memory, the server should support ECC (Error Checking and Correcting) technology, which can correct single-bit memory errors; adding memory backup technology on top of ECC can effectively guard against double-bit errors.

3. The power subsystem should support hot-swappable redundant supplies, so that the failure of one supply does not bring the server down.

4. Temperature rises while the server runs, and an overheated system can crash or even suffer hardware damage, so hot-swappable redundant fans are needed to keep the server efficiently cooled.

5. For downtime caused by system boards, software, or operator error, users can adopt higher-level availability solutions such as cluster technology.

When settling the server configuration for availability, the user should bear two points in mind: 1. Do not cut costs so far that availability suffers, since the resulting downtime often causes greater losses.
2. When selecting, the user should learn the specific characteristics of the server and whether its target environment matches the user's own needs. Proceed from actual demand, judge the scale of the corporate network correctly, use funds sensibly, and do not blindly pursue a high configuration and waste money. At the same time, users should not stop at meeting current needs: take a long-term view, allow for business growth, and choose server products that are easy to upgrade and highly scalable, so as to protect the investment.

Third, P: performance. Because large volumes of enterprise data must run on the server, server performance directly affects the enterprise's work efficiency. At the mention of server performance, many users immediately think of choosing a higher-frequency CPU and more memory, but these are only part of the server's overall performance, which is determined by the following aspects:

1. Chipset. The chipset connects the individual components of the computer and implements communication between them; it is the core component of the computer system. The chipset directly determines the CPU types the system supports, the number of CPUs, the memory type and maximum capacity, the system bus type and speed, and so on, so the final performance of a computer system is largely set by the chipset. Choosing the most advanced chipset structure secures leading system performance.

2. Memory. The memory type and maximum supported capacity are determined by the chipset, and the maximum capacity has a strong impact on the system's processing capability. If the application involves heavy computation, such as databases or ERP, it is advisable to configure a larger memory capacity according to the actual situation.
3. High-speed I/O channels. I/O has always been the bottleneck of computer systems, so high-speed I/O channels are very important to the server's overall performance. Applications in which clients exchange data with the server frequently, such as databases, ERP, and securities trading, demand high I/O performance.

4. Network support. The server communicates with clients through its network cards, and network bandwidth is decisive for the server's responsiveness, so the server's network support must not be overlooked.

Fourth, S: service. Performance, price, and service have always been the three main factors in a user's purchase decision. Service means first of all maintenance, and the heart of maintenance is speed, because users cannot tolerate a server going unrepaired for long; 7 × 24 service is essential. Second comes technical support, including pre-sales and after-sales telephone support and the support software provided on the vendor's website.

Fifth, S: saving cost. Users are sometimes misled by the price of the product itself into thinking that the lower the price, the lower the cost. In fact cost should be considered as total cost, because price differences often mean differences in performance, quality, and service. Only by weighing the four MAPS aspects (manageability, availability, performance, service) in full can the real cost be obtained. Every improvement in server performance comes at a price, and a low purchase price can mean that the user ends up paying more: it can mean lower quality and poorer service. A high-end server is not an ordinary computer product; its quality bears on the life of the user's own business. An enterprise user who chases low prices too hard inevitably runs the risks of server failure, after-sales repairs, and long service cycles.
Buying well-known brands and cost-effective products means that users will not have to pay more in the future, thereby reducing total cost.

High-end server products

IBM RS/6000 Enterprise Server Model S80
The S80 builds on the successful design of the S70 and S70 Advanced models, increasing the number of processors to 24 and the memory to 64GB. The S80 is the first RS/6000 platform to use the RS64 III processor, and it runs the AIX UNIX operating system.
The S80 platform combines performance, scalability, investment protection, reliability, and flexibility, providing a strategic solution for medium and large companies. Product features:
* Uses the PowerPC RS64 III microprocessor. The processor is based on IBM copper-chip technology and is faster and more reliable.
* Up to 64GB of ECC SDRAM memory, with 64-bit addressing providing faster performance.
* Supports 32-bit and 64-bit applications, allowing users to move to 64-bit applications at their own pace.
* External SSA RAID disk support offers larger capacity than traditional SCSI, with disk throughput raised to 160MB per second.
* 64-bit system architecture improves physical memory utilization and meets the requirements of applications that must quickly access large amounts of data.
* Up to 56 PCI adapter slots, providing expansion options for large increases in capacity.

Sun Enterprise 10000 Server
The Sun Enterprise 10000 is a scalable, symmetric multiprocessing computer system based on SPARC processors and running the Solaris operating environment. For host-based or client/server applications such as online transaction processing, decision support systems, data warehouses, communication services, or multimedia services, it is an ideal general-purpose or data server. Product features:
* Up to 64 CPUs can be configured; memory can be expanded to 64GB; online disk storage capacity can reach 64TB.
* Dynamic reconfiguration (online serviceability) and dynamic system domain features.
* The GigaPlane-XB interconnect at the core of the system provides data bandwidth of up to 12.8GB per second.
* RAS characteristics (reliability, availability, and serviceability), with uptime greater than 99.95%.
* Power supplies, fans, and most board-level components are hot-swappable and can be replaced while the system stays online.
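The uptime figure of more than 99.95% quoted for the Sun server maps directly to a maximum amount of downtime per year. A minimal sketch of that conversion (the function name is ours, and a 365-day year is assumed):

```python
# Convert an availability percentage into the maximum downtime it allows per year.
HOURS_PER_YEAR = 365 * 24  # 8760 hours, ignoring leap years

def annual_downtime_hours(availability_percent):
    """Hours per year the system may be down at the given availability."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

# At 99.95% availability the system may be down at most about 4.38 hours a year.
print(round(annual_downtime_hours(99.95), 2))  # 4.38
```

This is why such figures matter commercially: the difference between 99.9% and 99.95% availability is several hours of permitted outage per year.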
HP 9000 V-Class Enterprise Server
The HP 9000 V-Class Enterprise Server provides the high availability and scalability needed to meet the high-end computing requirements of large enterprises and data centers, and it offers an optimized e-services platform for the Internet era. Building on the recognized reliability and maintainability of the HP 9000 V-Class, HP provides supporting solutions and services that keep user applications running 24 hours a day, 365 days a year. Product features:
* Industry-leading 552MHz PA-8600, 440MHz PA-8500, and 200/240MHz PA-8200 processors.
* HP Scalable Computing Architecture (SCA) achieves performance breakthroughs.
* Expandable to 128 CPUs and 128GB of memory.
* HP's ultra-flat crossbar switching technology delivers system-level bandwidth as high as 61.66Gb/s.
* Up to 7.6GB/s of I/O channel bandwidth.
* N+1 redundant protection improves reliability.

SGI 2400 Server
The architecture of the SGI 2400 server eliminates the main bottleneck of SMP technology, so that computing capacity, memory capacity and bandwidth, system interconnect bandwidth, I/O bandwidth, and network connection capability all scale nearly linearly with low latency. This server balances performance, scalability, availability, and compatibility, and can handle Web services, data warehouses, visualization services, scientific computing, image processing, and simulation. Product features:
* CC-NUMA (Cache-Coherent Non-Uniform Memory Access) architecture, with each node consisting of processors and memory.
* Up to 64 CPUs can be configured; memory can be expanded to 128GB.
* Peak I/O bandwidth reaches 49.92GB per second.
* Third-party PCI boards can be used.
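The spec lists above mix gigabits per second (Gb/s, the HP system-level bandwidth) and gigabytes per second (GB/s, the I/O figures). Since 1 byte = 8 bits, a small conversion helps when comparing such numbers; this is a generic unit-conversion sketch with a made-up function name, not vendor documentation:

```python
# 1 byte = 8 bits, so a figure in Gb/s divides by 8 to give GB/s.
def gbit_to_gbyte_per_s(gbit_per_s):
    """Convert a bandwidth from gigabits per second to gigabytes per second."""
    return gbit_per_s / 8

# The quoted HP system bandwidth of 61.66 Gb/s is roughly 7.71 GB/s,
# the same order of magnitude as its quoted 7.6 GB/s I/O channel bandwidth.
print(round(gbit_to_gbyte_per_s(61.66), 2))  # 7.71
```

Keeping the bit/byte distinction straight is essential when comparing interconnect figures across vendors, since spec sheets do not always use the same unit.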