SQL Server large servers: scalability, availability, and manageability


Updated: June 24, 2004. Jim Gray, Microsoft Research; Richard Waymire, SQL Server Development Team.

Summary: Microsoft SQL Server has grown to support giant databases and applications, including multi-terabyte databases used by millions of people. SQL Server scales up on symmetric multiprocessor (SMP) systems, letting users add resources such as processors, memory, disks, and network bandwidth to build a single large node, and it scales out to multi-node clusters, in which a huge database is partitioned across a cluster of servers, each storing part of the database and doing part of the work while the database still presents a single-system image. By scaling out, SQL Server 2000 set peak performance records on the Transaction Processing Performance Council's benchmark C (TPC-C) that no database system on any platform had achieved before. Windows Server 2003 and SQL Server clusters provide high availability and automated management. SQL Server supports high availability through built-in failover and replication technologies, and it provides a powerful management model based on a graphical user interface, wizards, job scheduling of repetitive tasks, and SQL-DMO for operations driven from applications. The SQL Server architecture lends itself to modular growth, automated configuration, maintenance, and programmatic operation of large server farms.

Contents
• SQL Server 2000 and Windows Server 2003: SMP and cluster large servers
• SQL Server 2000: scalability, availability, and manageability
• Scalability metrics
• Scalable hardware architectures
• SQL Server software scalability architecture
• SQL Server and Windows Server 2003 manageability
• Summary
• Copyright

Introduction

With the rapid growth of e-commerce, line-of-business applications, and business intelligence, many successful companies are scaling up their online applications. Every Internet and intranet user is a potential client, so applications face enormous user and transaction loads. Most companies are building large servers to manage huge volumes of information and to support millions of customers and users, and the database system is at the core of these servers. Scalable systems give you a way to grow networks, servers, databases, and applications simply by adding more hardware. A scalable computer system can grow the number of users, the database size, and the network throughput without changing application code, and the grown server is managed as a single unit, just as the original smaller system was. As shown in Figure 1, a system can grow in two ways: • Add hardware to a single node, or upgrade to a larger node. This is called scale-up. • Add more nodes and spread the data and workload among them. This is called scale-out. Figure 1: Scale-up and scale-out. Scalable systems let designers start small and grow the system as needed. Some applications, customer relationship management for example, call for small nodes, perhaps only portable computers that store part of the database and run part of the application. Ideally, every node of such a distributed system offers the same operations and presents the same programming interfaces.
To date, most scalability has been achieved through symmetric multiprocessing (SMP): adding more processors, memory, disks, and network cards to a single server. Several vendors have shown that SMP servers can deliver roughly a tenfold improvement over single-processor systems on typical business workloads. Eventually, however, a single-node architecture hits a bottleneck and cannot be extended much further. The bottleneck appears either as diminishing returns or as prohibitively expensive hardware.

To grow beyond roughly tenfold, application designers turn to a clustered, scale-out architecture in which the workload and the database are partitioned across an array of SMP nodes. A scale-out system grows by adding more nodes to the cluster. Although the cluster is really an array of nodes, it can be programmed and managed as a single system, and ideally the partitioning is completely transparent to clients and applications. All truly large systems are built as scale-out clusters, including IBM's MVS Geoplex and SP2, HP's VMScluster and NonStop Himalaya, and NCR's Teradata system. Clusters also appear in the form of storage area networks from vendors such as EMC, HP, and IBM. Unlike ever-larger SMP systems, clusters can grow in small increments from commodity components, and the relative independence of cluster nodes provides a natural design for failover and high availability. On the other hand, clustering brings management challenges because there are more components to manage.

SQL Server 2000 and Windows Server 2003: SMP and cluster large servers

Microsoft Windows Server 2003 and SQL Server 2000 support both SMP scale-up and cluster scale-out. SQL Server can scale down to run on portable computers and Windows CE devices, or scale up to run on giant servers, and it delivers excellent peak performance for both transaction processing and data warehousing. Although the most common SMP systems have 2, 4, or 8 processors, SQL Server 2000 and Windows Server 2003 run on SMP hardware with up to 64 processors. These systems can support 64 GB of memory on the 32-bit Intel architecture and up to 4 TB of memory on the 64-bit Itanium architecture. The largest configuration reported so far for SQL Server 2000 on Windows Server 2003 is 32 processors and 512 GB of memory. These systems have shown excellent SMP scalability, both in official benchmarks and in real applications. A single-CPU system can support 14,000 users accessing a 1 TB database; an 8-processor node supports more than 92,000 concurrent users accessing a multi-billion-record SQL Server database on an 8 TB disk array; and a 32-CPU node supports 290,000 users accessing a SQL Server database hosted on a 24 TB disk array. The largest of these servers handles more than 100 million business transactions per day.

Clusters of SQL Server SMP nodes can do even more. In one benchmark, HP demonstrated a 32-node cluster of 8-way Xeon servers supporting more than 575,000 concurrent users against a 53 TB database, achieving 709,220 transactions per minute on the Transaction Processing Performance Council's benchmark C (TPC-C) at a cost of less than $15 per tpmC. On the strength of this TPC-C result, SQL Server 2000 delivered the best peak performance and the best price/performance of any database system in the world. SQL Server also excels in decision support and data mining, with outstanding performance and price/performance on the widely used TPC-H query suite. The TPC-C results show that SQL Server performance has more than doubled every year since 1995.
Its price/performance has improved at a similar rate, and continuing improvements in hardware and software should sustain this trend for the near term.

SQL Server 2000: scalability, availability, and manageability

SQL Server 2000 Enterprise Edition builds large servers on the features provided by Windows 2000 Server and Windows Server 2003. SQL Server uses additional processors to run additional execution threads and uses additional memory to cache database pages. The SQL Server relational engine supports both high-speed transaction processing and demanding data warehouse applications. The query execution engine exploits multiprocessor and multi-disk systems through parallel hash joins and merge joins. The query processor includes several innovations, among them hash teams, joins on covering indexes, bit-vector filtering inside hash joins, and transparent application access to cluster-wide partitioned views. The query executor achieves good decision-support SMP performance through very large main memories (up to 512 GB), large asynchronous I/O, and intra-query parallelism. The optimizer uses special techniques for star schemas and richly indexed databases, and it optimizes batch updates by sorting them before applying them to the base table and its indexes. The query processor uses OLE DB natively, so it can integrate data from many different data sources. With these techniques, SQL Server delivers the best TPC-C performance and SMP scalability on non-clustered Intel systems, and the best peak performance on SMP clusters. SQL Server 2000 supports indexed views, which are important for reporting applications, and it includes powerful online analytical processing (OLAP) tools for building and processing cubes, as well as data mining tools and full-text indexing and retrieval components. Distributed transactions can span multiple servers running Windows Server 2003, Windows 2000 Server, Windows XP, and Windows CE, and they also let SQL Server participate in transactions that span DB2/MVS, UNIX, and Windows nodes, including database products from IBM and Oracle. The Microsoft Distributed Transaction Coordinator supports the X/Open XA interfaces and manages transactions across these nodes automatically. Microsoft and HP used the Microsoft Distributed Transaction Coordinator to build clusters of 32 and 45 nodes that handle more than 100 million transactions per day; in one of these clusters, 32 servers run SQL Server, each storing part of the database, while Microsoft COM+ manages the applications and coordinates transactions among the servers.

Scalability and availability. SQL Server 2000 provides strong scalability and availability features, including:
• Log shipping to maintain a warm standby server.
• Updatable partitioned views across cluster nodes.
• Very large memory support (up to 16 TB).
• SMP support (up to 64 processors).
• Support for large Windows Server 2003 Datacenter Server clusters.
• Support for multiple SQL Server 2000 instances on a single server.
• Transparent access to SQL Server servers through integration with Active Directory.
• Improved concurrency between data access and database management operations.
• Indexed views and snowflake schemas to support large data warehouses.
• Built-in XML support for Internet and data interchange operations.
• Notification Services for client caching and messaging applications.

SQL Server uses Microsoft Cluster Services to support symmetric virtual servers: each SQL Server cluster node acts as a hot standby for up to three other nodes while also carrying its own normal workload.
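Among the features listed above, indexed views deserve a brief illustration. The following is a minimal Transact-SQL sketch with hypothetical table and view names; SQL Server 2000 Enterprise Edition requires SCHEMABINDING, COUNT_BIG(*) for aggregates, and specific session SET options (omitted here) before a view can be indexed.

    -- Hypothetical base table for the example.
    CREATE TABLE dbo.Sales
    (
        SaleID    int      NOT NULL PRIMARY KEY,
        ProductID int      NOT NULL,
        SaleDate  datetime NOT NULL,
        Amount    money    NOT NULL
    );
    GO
    -- The view must be schema-bound and, because it aggregates, must include COUNT_BIG(*).
    CREATE VIEW dbo.vProductSales
    WITH SCHEMABINDING
    AS
    SELECT ProductID,
           SUM(Amount)  AS TotalAmount,
           COUNT_BIG(*) AS SaleCount
    FROM dbo.Sales
    GROUP BY ProductID;
    GO
    -- The unique clustered index materializes the view; the optimizer can then answer
    -- matching aggregate queries from the index rather than from the base table.
    CREATE UNIQUE CLUSTERED INDEX IX_vProductSales ON dbo.vProductSales (ProductID);

Once the clustered index exists, a reporting query such as SELECT ProductID, SUM(Amount) FROM dbo.Sales GROUP BY ProductID can typically be satisfied from the materialized view automatically.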

For disaster recovery, SQL Server supports log shipping from one server to another; if the primary server fails, the secondary server can recover and resume service to clients within minutes. SQL Server also has a long-standing reputation for ease of installation and management:
• Enterprise Manager lets operators monitor and manage multiple instances of SQL Server from a single console that snaps into the Windows management console.
• The database is largely self-tuning. As more memory becomes available, SQL Server expands its memory use; when memory pressure from other applications rises, SQL Server releases memory. Similarly, SQL Server grows and shrinks the database and log space on demand.
• The system computes database statistics and automates other internal maintenance tasks, freeing database administrators and operators to focus on higher-level issues.
• SQL Server provides many wizards that help administrators automate standard tasks. Those tasks are then executed on a schedule by a job scheduler. An alert system records events in the Windows event log and notifies operators by e-mail or pager, and a user-defined stored procedure can be invoked for each event class.
• SQL Server 2000 supports multiple instances on a single SMP node, so one large SMP node can host several servers, each serving a different set of databases.

SQL Server can support multi-terabyte databases; in practice the only real limits are the time it takes to back up, restore, and reorganize such databases, and the product has made great strides in this area in recent years. Backup and restore operations are now incremental and restartable. Computer Associates has demonstrated backup at 2.6 TB per hour and restore at 2.2 TB per hour, with only about a 25 percent reduction in online throughput while the backup runs. Even more impressive, using Windows Server 2003 volume shadow copy technology, SQL Server can copy a 2.5 TB database between disk volumes in just 11 minutes. These numbers keep improving as storage and networking technologies improve. For more information, search for the terabyte backup benchmark on the Computer Associates Web site (http://ca.com/).

SQL Server is designed to monitor and tune itself automatically as the workload and the hardware environment change. Tools are included that help with database design, monitor system health, display system status and query plans graphically, recommend reorganizations, and help operators perform routine management tasks. A built-in workflow system schedules the data scrubbing, data transformation, and data loading steps typical of most data centers and data warehouses. Finally, a wizard examines the system workload and recommends better physical designs.

Clustering lets SQL Server scale to databases of any size. Windows Server 2003 clusters grow modularly: customers buy only what they need and expand the system by adding processing, storage, and networking modules as demand grows. Microsoft is working to simplify the construction and management of these large servers.
In fact, Microsoft intends to make enterprise clusters plug-and-play and to automate the configuration and management of much larger clusters. Partitioned clusters can in principle support databases of hundreds of terabytes, large enough for almost any application, and if current price trends continue, by 2005 such clusters could be built from commodity components for a few million dollars.
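The log shipping arrangement described earlier for disaster recovery can also be driven directly from Transact-SQL. The sketch below uses hypothetical database, share, and file names; in practice, scheduled jobs run these steps repeatedly.

    -- On the primary server: back up the transaction log to a share the secondary can reach.
    BACKUP LOG Sales
        TO DISK = '\\backupserver\logship\Sales_20030624.trn';

    -- On the secondary server: apply the log backup, leaving the database in read-only
    -- STANDBY mode so that further log backups can still be restored.
    RESTORE LOG Sales
        FROM DISK = '\\backupserver\logship\Sales_20030624.trn'
        WITH STANDBY = 'E:\MSSQL\Backup\Sales_undo.dat';

    -- At failover time: recover the standby database and point clients at the secondary.
    RESTORE DATABASE Sales WITH RECOVERY;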

The following table summarizes the current scalability status of SQL Server. The numbers quoted are not hard limits; they simply represent Microsoft's expectations for the scale of the related tools.

SQL Server 2000 scalability, as of March 2003
Technology | Active users | Throughput | Database size
SMP, failover, parallel query, distributed transactions, SQL Server Enterprise Manager | 400,000 | 300,000 transactions per minute; 250 million transactions per day | 40 TB
SMP clusters, failover, parallel query, distributed transactions, SQL Server Enterprise Manager | 700,000 | 500,000 transactions per minute; about 1 billion transactions per day | 60 TB
Data warehousing and decision support: star schemas, complex query optimization, cubes, data mining | 100 | 2,000 queries per hour | 3 TB

At the other end of the scalability spectrum, SQL Server 2000 can shrink to run on small Windows systems, operating in a mode that supports mobile applications, and a version of SQL Server is also available for Windows CE.

Scalability metrics

Kinds of growth. As organizations add more data, they must handle growing transaction loads and maintain ever larger databases, and each kind of growth poses its own scalability challenge. Scalability involves several kinds of growth (Figure 2: scalability challenges):
• Growth in the number of users and in network load. If the user population doubles, the network and database loads roughly double as well.
• Growth in database size. For databases that reach several hundred gigabytes, operations such as backup, restore, and load can become bottlenecks.
• Growth in transaction complexity. Application designers keep adding intelligence that relieves users of tedious tasks, and applications increasingly incorporate data mining and data analysis.
• Growth in the number of applications. As applications become simpler and cheaper to build, organizations use them in new ways, and these new applications add to the load on existing databases and servers.
• Growth in the number of servers. Clustered and distributed applications involve many nodes. As desktop systems and portable computers grow more powerful, they keep local data stores and replicas of key data sources. A scalable system must allow a large number of nodes to be managed from a single location.

Scale-up, scale-out, and speedup. Ideal system performance grows linearly: if the number of processors and disks doubles, throughput should double (linear scale-up), or the response time for a given request should be cut in half (linear speedup). In practice linear scaling is rare, because it requires every part of the system to scale well. It is tempting to reduce scalability to a single metric, such as the number of processors a system can support; however, many database applications are very sensitive to I/O performance, so adding CPUs to a system with limited I/O capacity will not make it faster. Microsoft SQL Server running on today's typical four-processor server delivers performance comparable to other software products running on hardware with ten processors.
SQL Server 2000 and Windows Server 2003 support 64 GB of main memory on the Intel 32-bit architecture and 4 TB of main memory on the Intel Itanium 64-bit architecture. Huge main memories reduce I/O traffic and give SQL Server a substantial performance boost. SQL Server 2000 also supports 32-way SMP systems and large clusters built from such SMP nodes. There is no universally accepted measure of scalability, but the benchmarks of the Transaction Processing Performance Council (TPC), available at http://www.tpc.org, provide useful information. TPC is a non-profit organization that defines industry standards for transaction processing and database benchmarks; its members include all the major database and server hardware vendors.

Over the years the council has defined a series of benchmarks, including TPC-A, TPC-B, TPC-C, TPC-D, TPC-H, TPC-R, and TPC-W. TPC-C is the industry standard for measuring the performance and scalability of OLTP systems. It exercises a broad cross-section of database functionality, including queries, updates, and queued mini-batch transactions, and its specification is very strict in key areas such as database transparency and transaction isolation. Many people regard TPC-C as a good indicator of real-world OLTP performance. The results are audited by independent analysts, and TPC publishes complete disclosure reports that contain a great deal of information about how the systems were configured and what they cost.

The audited results in the following table show that SQL Server 2000 and Microsoft Windows Server 2003 deliver excellent SMP scalability up to 32 processors. In fact, these results compare favorably with the SMP (and even cluster) TPC-C records that Oracle and DB2 hold on other platforms.

SMP performance and price/performance on the TPC-C benchmark (8 to 64 CPUs): SQL Server, DB2, and Oracle compared*
Database | Hardware | CPUs | tpmC | $/tpmC | System cost | Availability
SQL Server 2000 Enterprise Edition | HP ProLiant DL760-G2 8P | 8 | 115,025 | $7.69 | $884,216 | March 31, 2003
Oracle 9i R2 Enterprise Edition | IBM eServer pSeries 660-6M1 | 8 | 105,025 | $23.45 | $2,462,401 | September 21, 2001
DB2/AS400 V4 R5 | IBM eServer iSeries 400-840-2420-1 | 24 | 163,776 | $51.58 | $8,448,137 | December 15, 2000
Oracle 9i R2 Enterprise Edition | IBM eServer pSeries 690 | 32 | 427,760 | $17.75 | $7,591,038 | May 31, 2003
Oracle 9i R2 Enterprise Edition | HP 9000 Superdome | 64 | 423,414 | $15.64 | $6,621,072 | August 2002
SQL Server 2000 Enterprise Edition 64-bit | NEC Express5800/1320Xc C/S | 32 | 433,108 | $12.98 | $5,619,528 | June 30, 2003
* Best SMP results from each database vendor as of March 6, 2003.

The table shows the latest SQL Server SMP TPC-C results alongside the best comparable results on 8-way and 32-way SMP servers. SQL Server supports more than 100,000 tpmC against roughly 8 TB of data on an 8-way SMP server, and it has the best 8-way SMP performance, outperforming every 8-way UNIX system running DB2 or Oracle, while the Microsoft solution costs less than one third as much. On 32-processor systems, SQL Server's performance is slightly higher than the best Oracle result. Overall, SQL Server delivers higher peak performance than DB2 or Oracle at a much lower price than the UNIX solutions. The table above shows the best single-node SMP TPC-C results, that is, scale-up performance. In practice, most large servers consist of a farm of Web servers in front of a clustered database server. That is a scale-out design, and the following table shows its performance.

Cluster performance and price/performance on the TPC-C benchmark: SQL Server, DB2, and Oracle compared*
Database | Hardware | CPUs | tpmC | $/tpmC | System cost | Availability
SQL Server 2000 Enterprise Edition | HP ProLiant DL760-900-256P | 272 (34 x 8) | 709,220 | $14.96 | $10,603,803 | October 15, 2001
Oracle 9i R2 Enterprise Edition | HP ProLiant DL580-PDC 32P | 32 (8 x 4) | 138,362 | $17.38 | $2,404,503 | March 5, 2003
Oracle 9i R2 Enterprise Edition | HP ProLiant DL580-PDC 32P | 32 (8 x 4) | 137,261 | $18.46 | $2,533,095 | September 6, 2002
SQL Server 2000 Enterprise Edition** | HP ProLiant DL760-G2 8P | 8 (1 x 8) | 115,025 | $7.69 | $884,216 | March 31, 2003
* Best cluster results as of March 6, 2003.
** This non-clustered result is included for reference.

SQL Server has long led the scale-out (cluster) category of the TPC-C benchmark. As you would expect, these scale-out results far exceed the scale-up results achieved on SMPs. SQL Server 2000 maintains its lead in this category, offering the best performance and price/performance of any database product; its performance is 60 percent higher than the closest competing result on any platform. Two Oracle cluster results are shown, one on Windows 2000 Server and one on Linux, on nearly identical hardware (the two benchmarks were run six months apart, which gave the Linux configuration a slight hardware price advantage). The best SQL Server result is more than five times the best Oracle cluster result; in fact, Oracle's best cluster result is roughly equal to the result of a single-node, 8-CPU SQL Server system. Oracle also trails slightly on price/performance.

Figure 3: Throughput versus number of CPUs. The TPC-C results show that SQL Server has the best peak performance and best price/performance of any database product on any platform. To measure scalability, SQL Server 2000 was run on a growing cluster of HP nodes running Windows 2000 Server. As Figure 3 shows, throughput grew by thousands of transactions as each 8-processor SMP system was added to the cluster. The cluster started with 16 SQL Server nodes and grew to 24 and then 32 nodes. (The actual node counts were 17, 26, and 34; the additional nodes act as transaction coordinators.) The largest system had 272 CPUs and served a 58 TB database spread across more than 3,000 disks. That largest system should be big enough for the largest e-commerce sites, and if even that is not enough, the cluster can grow further by adding more nodes. These results far exceed the records set by any other system on any platform. On the cluster of HP DL760-900 servers, SQL Server showed nearly linear scale-out as nodes were added, a payoff of the original scale-out design.

The test data shows scale-out across 17, 26, and 34 8-processor SMP nodes (272 CPUs in total). As a system scales up, disk and network bandwidth must grow along with processors and memory.

The TPC-W benchmark simulates a Web server application with a large mix of complex transactions. Its main metrics are Web interactions per second (WIPS) and price per WIPS (USD/WIPS). So far only IBM DB2 and SQL Server have published TPC-W results; the following table shows the best results for each product to date.

Best results on the TPC-W benchmark (100,000 items)*
Database | Hardware | WIPS | USD/WIPS | Availability
SQL Server 2000 Enterprise Edition | Unisys ES7000 | 10,440 | $16.73 | July 10, 2001
IBM DB2 UDB 7.2 | IBM eServer xSeries 430 | 7,554 | $136.80 | June 2001
Oracle | No results published | - | - | -
* Best SMP results from each vendor as of March 6, 2003.

For workloads related to decision support and report generation, the TPC defines the TPC-H benchmark. TPC Benchmark H (TPC-H) is a decision support benchmark consisting of a suite of business-oriented ad hoc queries and concurrent data modifications; it models the analysis of large volumes of data with complex queries. Its performance metric is the TPC-H composite queries-per-hour rating (QphH@Size). SQL Server has TPC-H results in both the 100 GB and 300 GB categories; the table below shows the best results in the 300 GB category. No product has a clear advantage in these numbers, but SQL Server is the lowest-cost solution and delivers comparable performance on an SMP server with 16 processors.

SQL Server and SMP UNIX solutions on the TPC-H benchmark (300 GB)*
Database | Hardware | CPUs | QphH@300GB | USD/QphH@300GB | System cost | Availability
Informix XPS 8.31 FD1 | HP AlphaServer ES40 Model 6/667 | 16 | 2,832 | $1,058 | $2,995,034 | February 14, 2001
SQL Server 2000 Enterprise Edition 64-bit | Unisys ES7000 Orion 130 | 16 | 4,774 | $219 | $1,043,153 | March 31, 2003
Oracle 9i R2 Enterprise Edition | HP AlphaServer ES45 Model 68/1000 | 16 | 5,976 | $453 | $2,706,063 | June 1, 2002
IBM DB2 UDB 7.2 | HP ProLiant DL760 x900-64P | 64 | 12,995 | $199 | $2,573,870 | June 20, 2002
* Best result from each vendor as of March 6, 2003.

The TPC results are corroborated by other benchmarks published by SAP, PeopleSoft, and others. The following table summarizes them; SQL Server holds first place in several important end-user benchmarks.

Application | Best result | World record holder
TPC-C | 709,220 tpmC, $14.96/tpmC | SQL Server
TPC-W | 21,139 WIPS @ 10,000 items, $32.62/WIPS | SQL Server
TPC-H | 27,094 QphH @ 3 TB, $240/QphH | Oracle
SAP R/3 Sales & Distribution | 47,528 concurrent users | IBM
SAP R/3 | sales data records loaded per hour | SQL Server
Great Plains Software | 2,400 concurrent users | SQL Server
Onyx | 57,000 concurrent users | SQL Server
Pivotal eRelationship | 20,000 concurrent users | SQL Server
CA BrightStor backup | 2.6 TB/hour | SQL Server
PeopleSoft eBill Payment | 191,694 payments/hour | SQL Server
PeopleSoft CRM 8.4 | 25,400 concurrent users | SQL Server
PeopleSoft Financials | 15,000 concurrent users | IBM
J.D. Edwards OneWorld | 9,000 concurrent users | Oracle

Peak performance numbers reflect the potential scalability of an application's platform. You cannot calculate such numbers for your own application in advance, but you can assess scalability by looking at standard benchmarks and at similar applications in your industry.

Scalable hardware architectures

Technology trends favor scalable systems. Today, commodity components are the fastest and most reliable computers, networks, and storage devices. The entire computer industry uses the same family of RAM chips; memory for proprietary computers costs three to ten times more than the same memory for commodity computers. Fierce competition among microprocessor vendors has produced extremely fast processing chips; in fact, most of the most powerful supercomputers are built from these chips, which now outpace the traditional water-cooled mainframes, and commodity workstations and servers routinely exceed mainframes in performance. The rapid pace of the commodity market has left traditional mainframe and minicomputer architectures several years behind.

Commodity interconnects have developed just as quickly. Ethernet now carries 120 megabytes per second, switched Ethernet delivers a hundredfold increase in local network bandwidth at commodity prices, and 10-gigabit and 40-gigabit Ethernet are emerging. Switched Ethernet, Fibre Channel, and other new interconnects provide inexpensive, high-speed system area networks (SANs) and will form the basis of cluster architectures for years to come.

Figure 4: The steadily falling price of disk storage.

As for storage, the highest-performance and most reliable disks are 3.5-inch SCSI disks. Their capacity doubles roughly every year, and their mean time to hardware failure has reached 50 years. Today 74 GB disks are the mainstream standard, and 300 GB disks are arriving for high-capacity applications; 1 TB disks are expected by 2005 (Figure 4). At the beginning of 2003, $1,000 bought about 300 GB of disk capacity, roughly ten thousand times more than it bought twenty years ago. This low price explains why mainstream servers are typically configured with several terabytes of disk: such a disk system costs about $3,000 per terabyte. The chart also projects the situation in 2005, when servers will typically be configured with about 50 TB of disk storage built from the terabyte-class disks that will then be mainstream. Such disk systems can hold truly huge databases.

Computer architectures for SMPs and clusters. The growing power of processors, disks, and networks poses an architectural question: which hardware architecture makes the best use of these commodity components? No single architecture has won outright, but three common architectures have proven scalability and wide acceptance: shared memory, shared disk, and shared nothing. Shared memory is used by SMP systems, while shared-disk and shared-nothing designs are used by clusters. Windows 2000 Server and Windows Server 2003 support all of these architectures and continue to improve as the architectures evolve.

SMP systems. SMP grows a server by adding more processors to a single shared memory space; the system also grows by adding memory, disks, and network interfaces. SMP is the most common way to grow beyond the limits of a single processor. The SMP software model, commonly called the shared-memory model, runs a single copy of the operating system, and application processes run as if they were on a single-processor system. SMP systems are relatively easy to program and can exploit industry-standard software and hardware. SQL Server 2000 scales well on SMP systems. Current practical limits for a single SMP node in general use are:
• 64 processors.
• 512 GB of main memory.
• 30 TB of protected storage per node (for example, 400 disk drives of 74 GB each organized as 40 RAID sets presenting 10 logical volumes).
• 400,000 active clients connected through IIS Web servers or a TP monitor to SQL Server.

These are the largest configurations Microsoft foresees; typical large servers are half this size or less. Over time, SQL Server, Windows, and the hardware will continue to advance and support even larger configurations.

SMP scalability. SMP is currently the most popular parallel hardware architecture. Industry-standard SMP servers built on Intel microprocessors offer outstanding performance and price/performance for database platforms. The 8-way Intel Xeon SMP motherboards on the market are used in servers from many hardware vendors, and in recent years 8-way Xeon servers have become the workhorses of client/server and e-commerce computing. Xeon systems with 32 processors and systems built on Intel's 64-bit Itanium architecture are maturing rapidly, and the most impressive SMP results take advantage of the very large main memories those Itanium processors support. SQL Server achieves nearly linear SMP scalability on the TPC-C online transaction benchmark: throughput (transactions per second) keeps rising as CPUs are added. As microprocessors have become faster, however, building SMP systems has become increasingly expensive. Growing from one processor to four brings only moderate price increases, and growing to eight processors is still relatively easy, but beyond 32 processors prices rise steeply and returns diminish. At the software level, access to shared resources by multiple processors must be serialized.
This serialization limits the practical scalability of shared-memory SMP systems, and the bottlenecks occur in the operating system, the database system, and the application alike. Even so, SMP systems are the most common form of scalability and will remain so for the foreseeable future. Intel's Xeon and Itanium processors make powerful, low-cost SMP nodes.

SMP performance grows, with diminishing returns, as CPUs are added
Database | Hardware | CPUs | tpmC | $/tpmC | System cost | Availability
SQL Server 2000 Standard Edition | Dell PowerEdge 2650/2.4/1P | 1 | 16,256 | $2.78 | $46,502 | September 11, 2002
SQL Server 2000 Enterprise Edition | HP ProLiant ML530 G3 2P | 2 | 26,725 | $3.72 | $99,111 | March 31, 2003
SQL Server 2000 Enterprise Edition | HP ProLiant DL580-G2/2GHz 4P | 4 | 77,905 | $5.32 | $413,764 | December 31, 2002
SQL Server 2000 Enterprise Edition | HP ProLiant DL760-G2 8P | 8 | 115,025 | $7.69 | $884,216 | March 31, 2003
SQL Server 2000 Enterprise Edition | Unisys ES7000 Orion 230 | 32 | 234,325 | $11.59 | $2,715,310 | March 31, 2003
SQL Server 2000 Enterprise Edition 64-bit | NEC Express5800/1320Xc C/S | 32 | 433,108 | $12.98 | $5,619,528 | June 30, 2003
Oracle 9i Enterprise Edition | IBM eServer pSeries 690 | 32 | 427,760 | $17.75 | $7,591,038 | May 31, 2003

As the table shows, SQL Server delivers good performance across the whole range of standard SMP platforms, and with commodity hardware it provides very cost-effective database support. Comparing the 32-processor SQL Server system with the 32-processor Oracle UNIX system, the UNIX system costs about 1.6 times as much, yet its performance is only about 18 percent higher.

Clusters built on Windows Server 2003 scale out, letting customers add processing capacity, storage, and network services to an existing configuration. Figure 5 shows a cluster growing from a single 8-CPU node to a 6-node, 48-CPU configuration by adding one node at a time. Each node of such a large SMP-server cluster is typically built from commodity components and commodity interconnects, usually with dual interconnects for fault tolerance.

Figure 5: A 6-node, 48-CPU cluster.

Cluster architecture. A cluster is a set of loosely coupled, independent computers that operates as a single system. Cluster nodes may be single-processor systems or SMPs, connected by a commodity network or by a dedicated very high-speed interconnect. The computers in a cluster cooperate so that clients see the cluster as a single server with very high performance and reliability. Because the cluster is modular, it can be grown incrementally and inexpensively by adding servers, disks, or network resources. Microsoft believes that clusters are the most economical path to scalability beyond the 8-processor performance range. By scaling out, a cluster can be made far more powerful than any single SMP node. When the requirements exceed what a commodity SMP node can deliver, or when fault tolerance requires a second failover server, a multi-node cluster becomes a very attractive option. SQL Server 2000 and Windows clusters deliver scalability and fault tolerance in the commodity market. Microsoft has built clustering into the Windows 2000 Server and Windows Server 2003 operating systems; it works with commodity servers and interconnects, and it can also take full advantage of specific hardware accelerators from vendors such as HP, Dell, IBM, and Unisys.
This cluster support is used by Microsoft BackOffice products such as SQL Server, Internet Information Server, and Exchange.

In addition, many third-party products can be managed on this architecture.

Shared-disk clusters and shared-nothing clusters. There are two basic kinds of cluster: shared-disk clusters and shared-nothing clusters (Figure 6). In a shared-disk cluster, all processors have direct access to all disks (and data), but they do not share main memory. An additional software layer, called a distributed cache or lock manager, is required to coordinate caching across the processors. IBM's DB2/OS390 Sysplex and Oracle Parallel Server are typical examples of the shared-disk parallel database architecture. Because the lock or cache manager serializes access to data, a shared-disk cluster faces the same scalability limits as a shared-memory SMP system.

Figure 6: A shared-disk cluster compared with a shared-nothing cluster.

A shared-nothing cluster, by contrast, is a federation of database systems. Each node of a shared-nothing cluster is an independent computer with its own resources and operating system. Each node has its own memory and disk storage, and nodes communicate by exchanging messages over a shared network. Each node is a unit of service and availability: it owns some disks, tapes, network links, database partitions, and other resources, and it provides the services for them. If a node fails, its disks may fail over to a neighboring node, but at any given time each disk is managed by exactly one node. Shared-nothing clusters are easy to build from commodity components.

SQL Server 2000 clusters. SQL Server 2000 supports the shared-nothing cluster model with distributed partitioned views and distributed transactions. A table can be split by primary key into disjoint member tables, each stored at one node of the cluster. A distributed partitioned view at each node unifies the member tables into a single location-transparent view, so applications see the union of the member tables as one virtual table.

Figure 7: Throughput and price/performance compared.

The benchmark results show this architecture leading the performance of every other database system by a wide margin. Figure 7 shows, as of December 2002, the performance SQL Server achieved on this growing cluster, together with the price/performance ($/tpmC) of every system that reported results above 100,000 tpmC. SQL Server has a clear lead in both performance and price/performance. Note that the SQL Server systems cost about $13 per tpmC across results ranging from 100,000 tpmC to 700,000 tpmC, while the UNIX systems are both more expensive and noticeably lower in throughput.

SQL Server performance improvements. SQL Server's improvement on the TPC-C benchmark has been dramatic. The charts in Figure 8 show SQL Server's peak throughput and peak price/performance reported since 1995. Performance grew from 2,455 tpmC to 709,220 tpmC, a 290-fold increase in seven years, while price/performance fell from $240/tpmC to about $3/tpmC, roughly a 90-fold improvement. Measured year over year, SQL Server's performance has roughly doubled each year while its price/performance has roughly halved.
With the introduction of clustering in SQL Server 2000, there is no longer any practical limit on the size of the transaction databases SQL Server can support.

Figure 8: SQL Server performance growth since 1995.

Measured over the seven years from 1995 to 2002, SQL Server's performance doubled each year on average while its price halved. A single SQL Server instance on the Windows platform can support thousands of users accessing a database of billions of records. Such a system can support a community of more than 250,000 registered users, or an even larger community of occasional Internet users who are not registered with the server. To put this in perspective, the largest banks have about 10,000 tellers and the largest telemarketing agencies have fewer than 10,000 active agents, so these systems are large enough to support very large businesses.

For demonstration purposes, the SQL Server team built a large multimedia database called TerraServer. It stores several terabytes of satellite imagery of the Earth, covering more than ten million square kilometers, held in 300 million database records on 324 HP StorageWorks disks. The server has been on the Internet since June 1998 and has processed more than 600 million queries from millions of visitors.

In another SQL Server demonstration, a banking database of billions of records was partitioned across 20 servers running SQL Server, each acting as a cluster node. The database was partitioned by the application, and COM+ and the Microsoft Distributed Transaction Coordinator coordinate transactions that involve two servers. Using partitioned views, SQL Server 2000 has demonstrated extraordinary performance: a peak of 709,220 tpmC on a 34-node cluster. That is more than six times the best single-node result and involves a database of about 54 TB; by adding more nodes, the system's performance could be pushed even further. These examples show that SQL Server can handle huge transaction volumes (millions per day), huge user communities (tens of thousands of users), and huge databases (terabytes). The product continues to improve at a rate of two to three times per year. Although part of this growth comes from better and cheaper hardware, the rapid progress of SQL Server itself should not be underestimated, and we believe hardware and software improvements will continue at this fast pace for the foreseeable future.

SQL Server software scalability architecture

SQL Server 2000 application architecture. SQL Server 2000 is designed to fit the .NET architecture. It provides tools and supporting systems that help customers build Active Server Pages and COM objects that implement business logic and access data managed by SQL Server. Each Windows node typically uses a single SQL Server address space to manage all of the SQL databases on that node. SQL Server runs in one main address space with several pools of threads. Some threads are dedicated to housekeeping tasks such as logging, buffer pool management, servicing operational requests, and monitoring the system, while a larger pool of threads services user requests, executing the stored procedures or SQL statements that clients submit.
Typically, SQL Server runs in a client/server environment: client programs on other computers connect to the server, issue SQL requests, or directly run stored procedures written in Transact-SQL (see Figure 9). Client programs may also run on the server node itself. SQL Server uses a built-in transaction processing (TP) monitor mechanism, Open Data Services, to support large numbers of clients; in practice this configuration can support up to about 5,000 concurrent clients.

Beyond that size, you can either partition the application across the nodes of a cluster, or connect clients to SQL Server through a Web server or a TP monitor. Popular TP monitors such as CICS, Tuxedo, and Top End have been ported to Windows and provide SQL Server interfaces. Increasingly, applications use Internet Information Services (IIS) and Active Server Pages (ASP) instead: the ASP pages use Microsoft Visual Basic Scripting Edition or JScript to invoke business-logic COM objects that access SQL Server through the ActiveX Data Objects (ADO) interface, relying on the object request broker and distributed transaction manager built into the Windows operating system.

Figure 9: A typical SQL Server client/server environment.

Cluster transparency. Cluster transparency lets applications access data and objects on any node of the cluster as if they were local, and lets data move from one partition to another without affecting application behavior. Transparency is a key ingredient of modular growth, because nodes can be added and data can be moved among them without changing applications. It is also a key ingredient of high availability, because it allows data to move to another node when a node fails.

Distributed systems technology. Distributed systems techniques are the key to transparency in a cluster. By structuring applications and systems to interact through remote procedure calls, applications become more modular and can be distributed among the nodes of the cluster. Clients invoke services by name; a procedure call may invoke a local service or, via remote procedure call, a service on another node. Microsoft has invested heavily in making its software components interact through remote procedure calls. The resulting infrastructure has appeared as OLE (object linking and embedding), COM (the Component Object Model), DCOM, Microsoft ActiveX (COM extensions for the Internet), and most recently COM+. Many COM+ features are in use today, and more are coming. Specifically, in Windows and SQL Server 2000, Microsoft provides the following:
• COM+ is a core part of Windows Server. It lets any object be invoked securely and efficiently, so one program can call programs running anywhere in the network. COM+ combines the features of a transaction manager with those of an object request broker and a TP monitor, and it is the core of Microsoft's distributed object mechanism.
• Distributed transactions let an application do work at many SQL Server database partitions and other resource managers and, transparently and automatically, obtain ACID distributed-transaction semantics.
• OLE DB lets SQL Server and other data integration programs access data from almost any data source. OLE DB interfaces can be built for nearly all data stores; most Microsoft data stores already have OLE DB interfaces (Exchange Server, Active Directory, Word, and Excel, for example), and OLE DB interfaces for legacy data stores such as VSAM and RMS are appearing as well.
• SQL Distributed Management Objects (SQL-DMO) expose all of the SQL Server administrative interfaces as COM objects. Using SQL-DMO, customers and ISVs can build tools that administer local and remote computers running SQL Server.

• Windows also provides a reliable queuing mechanism, Message Queuing, that lets applications issue deferred invocations. Queuing also serves disconnected nodes, which can submit their queued work when their network connections are restored.
• SQL Server 2000 now includes Notification Services, which provide a robust and scalable way to notify clients when the database state changes. Notification Services can be used to track stock prices, delivery status, and other cases where clients must be told promptly about changes in the database.

Windows currently supports these and many other cluster-wide mechanisms, including cluster security (domains), software distribution and management (Systems Management Server), cluster naming (Distributed Name Service and Active Directory), and cluster performance monitoring (System Monitor). SQL Server adds to these mechanisms with the administrative tools built into Enterprise Manager, which can administer and monitor a farm of servers running SQL Server. A key goal for Windows and SQL Server is to make a cluster as easy to manage and use as a single large system. Windows and SQL Server simplify the work of managing and load-balancing application servers across the many nodes of a cluster, and the Windows cluster provides a common console and a simple management model for all the components. Another benefit of clustering is that failover masks many system faults and provides highly available services.

Distributed partitioned views. SQL Server lets a table be partitioned into member tables, with a partitioned view defined over them; such a view is called a distributed partitioned view. Each member table can be stored on a different cluster node, and if the view definition is replicated at every node, applications can access the whole table from any node as if it were stored locally.

A distributed partitioned view is created as follows. The table is first partitioned on a prefix of its primary key. The resulting sub-tables, called member tables, have the same schema as the original table plus an integrity constraint that restricts the partitioning attribute to a range of values (for example, customer IDs between 1,000,000 and 2,000,000). A view that is the union of all the member tables is then created. The administrator performs the following steps on each member of the federation:
1. Define the member table and populate it with the rows that belong to it.
2. Define links to all the other members of the federation, so that SQL Server 2000 can reach the other nodes during distributed query processing.
3. Define the distributed partitioned view as the union of the member tables.
Applications can then access the distributed partitioned view and the underlying member tables as if all the data were local. The SQL Server 2000 query optimizer and the Microsoft Distributed Transaction Coordinator (MS DTC) ensure that the programs execute efficiently and retain the ACID properties.
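The setup steps above can be sketched in Transact-SQL for a two-node federation. The server names, database names, and key ranges below are hypothetical; the same member-table and view definitions would be repeated, with the ranges adjusted, at every node.

    -- On node Server1: the local member table holds one key range, enforced by a CHECK
    -- constraint on the partitioning column, which is also part of the primary key.
    CREATE TABLE dbo.Customers_1
    (
        CustomerID int NOT NULL PRIMARY KEY
            CHECK (CustomerID BETWEEN 1 AND 1000000),
        Name       nvarchar(100) NOT NULL
    );

    -- Register the other member server so distributed queries can reach it.
    EXEC sp_addlinkedserver @server = N'Server2', @srvproduct = N'SQL Server';

    -- The distributed partitioned view unions the local and remote member tables.
    -- Defined identically at every node, it makes the table location transparent.
    CREATE VIEW dbo.Customers
    AS
    SELECT * FROM dbo.Customers_1
    UNION ALL
    SELECT * FROM Server2.SalesDB.dbo.Customers_2;

A query such as SELECT Name FROM dbo.Customers WHERE CustomerID = 42 touches only the member table whose CHECK constraint covers that key, so most requests stay local to a single node.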
Partitioned data and data pipes. SQL Server has always allowed databases and applications to be partitioned among SQL Server instances running on multiple nodes. A client connects to an application at one of the servers. If the client's request needs data at another node, the application either accesses that data with Transact-SQL statements or makes a remote procedure call to the SQL Server instance at the other node. For example, a distributed order-processing application might keep local orders, shipments, and inventory at each warehouse. When an order must be filled from another warehouse, or when new products arrive from the factory, the local system runs a transaction involving the warehouse node and the factory node. In that case, the application running at a warehouse server either accesses the factory data directly or calls a stored procedure at the factory server, and MS DTC and SQL Server automatically manage the data integrity between the factory and the warehouses.
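A minimal Transact-SQL sketch of such a cross-node transaction follows. The linked server name Factory, the databases, and the tables are hypothetical; the sketch assumes MS DTC is running and the factory server has already been registered with sp_addlinkedserver.

    -- MS DTC coordinates the local warehouse update and the remote factory update
    -- so that both commit or neither does.
    BEGIN DISTRIBUTED TRANSACTION;

        UPDATE dbo.Inventory
        SET    QuantityOnHand = QuantityOnHand - 10
        WHERE  ProductID = 1234;

        UPDATE Factory.OpsDB.dbo.ProductionQueue
        SET    QuantityOrdered = QuantityOrdered + 10
        WHERE  ProductID = 1234;

    COMMIT TRANSACTION;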

Once data and applications are partitioned among the servers of a cluster, there must also be a convenient, high-performance way to move data between those servers. Data pipes greatly simplify data movement by capturing the result set of a remote procedure call directly into a local table at the node. This mechanism serves many applications and can be used in place of distributed queries.

Distributed transactions. Distributed transactions are an integral part of Windows Server and another big step toward a complete Windows Server clustering facility. To create a distributed transaction, the application simply declares BEGIN DISTRIBUTED TRANSACTION; from then on, MS DTC manages the transaction automatically. MS DTC also supports the X/Open XA open transaction standard, so clients can connect to a TP monitor such as CICS, Encina, or Tuxedo, which routes requests to the appropriate servers. Using a TP monitor to route transactions to the right server is another way to build distributed applications, and TP monitors also let SQL Server participate in transactions that span many nodes. In addition, Windows Server 2003 includes Microsoft COM+, a built-in transaction monitor that can dispatch client requests to different application servers. All of these facilities make it relatively easy to partition data and applications among the servers of a cluster.

Transparent partitioning and parallel database technology. Everything described so far exists today; you can buy it and use it now. Many customers install SQL Server on a server, scale it up to an 8-way SMP system, and then scale out by partitioning their databases and applications. Typically the partitions are placed close to their users: retail data stays in the store, recent-activity data sits in the accounting department's data center, and the data warehouse resides with planning and marketing. These applications remain fairly independent and partition along natural lines, and the data flows among the departments are well defined. Graphical and scripted operations interfaces, combined with Microsoft Visual Basic scripting, automate most routine operations on a computer running SQL Server. Over the coming years, Microsoft intends to add transparent data partitioning to servers running SQL Server, so that data can be partitioned without any special effort in the application. After partition transparency, the SQL Server team intends to add parallel query decomposition, so that the large queries typical of decision-support applications are decomposed into components that run independently, and in parallel, at the nodes of the partitioned database.

A data partitioning example: TPC-C. The distributed partitioned views in SQL Server 2000 made possible the record TPC-C performance discussed in the earlier sections. The best part of that design is that it can manage twice the data and twice the workload simply by adding more nodes and more SQL Server instances to the cluster.
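A data pipe can be expressed as an ordinary INSERT followed by EXEC, capturing a remote procedure's result set into a local table. The sketch below uses hypothetical names and assumes a linked server called Factory; the local table's columns must match the columns returned by the remote procedure.

    -- Pull the pending-shipment list from the factory server straight into a local table.
    INSERT INTO dbo.FactoryShipments
    EXEC Factory.OpsDB.dbo.usp_GetPendingShipments @WarehouseID = 42;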
Microsoft, Intel, and Compaq jointly built a large system (140 CPUs, 45 nodes, 4 TB of storage) running a classic three-tier DCOM application on SQL Server 7.0. Twenty front-end nodes simulated 160,000 connected users submitting transactions at a rate of 14,000 per second. Twenty server nodes each stored an equal share of the database in its own SQL Server instance. The system did not use distributed partitioned views, so the application had to manage the data partitioning itself. The client nodes simulated the network load, issuing DCOM calls to the application's business objects, which in turn issued ODBC calls to the servers. Altogether, the servers stored 1.6 billion account records and reserved space for 300 million history records.

These servers performed 85 percent of the transactions locally and 15 percent remotely. MS DTC coordinated the distributed transactions, and five nodes were dedicated to that function. The system ran continuously for a year in the Microsoft Executive Briefing Center, processing approximately 1.1 billion transactions per day: about five times the daily transaction volume of the travel industry, ten times the number of credit card transactions processed each day, and a thousand times the transaction volume of the largest stock exchanges. Today, this application would be built with distributed partitioned views on SQL Server 2000.

High-availability databases with Microsoft Cluster Service

Microsoft Cluster Service provides fault tolerance and high availability by switching a server's load to another server when it fails or must be taken down for maintenance. The failover mechanism works as follows. Two Windows Server 2003 servers are configured as a cluster, and the two servers support two SQL Server instances, each managing one partition of the application database. Up to this point, only standard SQL Server technology is involved. With SQL Server Enterprise Edition, each server becomes a virtual server that can continue to provide service even when one node goes offline because of a failure or system maintenance. To make this possible, the SQL Server databases are stored on shared SCSI disks accessible to both servers. If one server fails, the other takes ownership of the disks and restarts the failed server's SQL Server instance on the surviving node. The restarted instance recovers the databases and begins accepting client connections; for their part, clients simply reconnect to the virtual server when the primary server fails. Microsoft Cluster Service allows the virtual server name and IP address to migrate between the nodes, so clients are unaware that the server has moved. Microsoft Cluster Service is included in Windows 2000 Advanced Server, Windows 2000 Datacenter Server, Windows Server 2003 Enterprise Edition, and Windows Server 2003 Datacenter Edition. SQL Server Enterprise Edition provides built-in setup support for creating virtual servers. Once configured, a virtual server is indistinguishable from any other server on the network, except that it tolerates hardware failures. (See Figure 10.)

Figure 10: SQL Server cluster configuration

SQL Server failover is completely automatic; detecting a failure and recovering from it takes a few minutes. When the failed node is repaired, it restarts and acts as the new backup server, or it can resume its original role.

Data replication for data marts and disaster recovery

Data replication helps with the configuration and management of partitioned applications. Many applications divide naturally into loosely related components. For example, hotel, retail, and warehouse systems have strong geographic affinity, so their applications and databases can be partitioned among servers by geography. Customer care, sales-force automation, and telemarketing applications partition equally well. Even so, all of these applications need some globally shared data, and they also need periodic reporting and electronic disaster-recovery mechanisms. The same replication mechanism can let one site act as a data warehouse for the data-capture OLTP systems.
The data warehouse in turn feeds many data marts that provide decision-support data to analysts; some applications skip the data warehouse and publish updates directly to the data marts. SQL Server has a powerful and easy-to-use data replication system. Graphical interfaces in Enterprise Manager let an administrator publish a database and let other nodes subscribe to that content. This publish-distribute-subscribe mechanism supports both one-to-one and one-to-many publication, and cascading distribution servers extend replication to very large numbers of subscribers.
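As a rough illustration of the publish-distribute-subscribe model, the following Transact-SQL fragment sketches how a publication and a push subscription might be defined with the replication stored procedures. It is deliberately incomplete: a real deployment must first configure a Distributor and the replication agents, and the database, publication, table, and server names here are invented.

    -- Enable the database for transactional publishing (run at the Publisher).
    EXEC sp_replicationdboption
         @dbname = 'Sales', @optname = 'publish', @value = 'true';

    -- Create a publication and add one table (article) to it.
    EXEC sp_addpublication
         @publication = 'SalesOrders', @status = 'active';
    EXEC sp_addarticle
         @publication = 'SalesOrders',
         @article = 'Orders',
         @source_object = 'Orders';

    -- Push the publication to a subscribing server's database.
    EXEC sp_addsubscription
         @publication = 'SalesOrders',
         @subscriber = 'HQServer',
         @destination_db = 'SalesArchive';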

Replication is performed within transactions, so every subscriber sees the database as of a consistent point in time. SQL Server applications routinely publish tens of megabytes of updates per hour, and publication can be immediate, periodic, or on demand. The whole replication process is fully automated and very easy to manage.

SQL Server and Windows Server 2003 manageability

Because Microsoft delivers easy-to-install operating systems and databases with graphical tools and wizards, it is relatively easy to build a huge system with SQL Server; SQL Server even provides wizards that set up the routine procedures. Still, these huge systems involve thousands of client systems and enormous databases, so manageability is a serious challenge. With typical disk and tape transfer rates of about 3 MB per second, dumping and scanning a 100 GB database on a single tape drive usually takes more than 10 hours (100 GB at 3 MB/s is roughly nine hours for a single pass). Defining and managing security for 10,000 different users is a daunting task, and managing the hardware and software of 10,000 clients is extremely time consuming. Microsoft fully recognizes that manageability is the largest obstacle to scalability and addresses these problems throughout its products. This part summarizes the management mechanisms of Windows Server 2003 and SQL Server 2000.

Scalable Windows Server 2003 management

Managing the hardware and software configurations of thousands of clients is perhaps the most challenging task facing a large client/server system. Windows Server 2003 and Microsoft Systems Management Server automate a large share of this work. First, Windows security provides the domain concept and a single-logon mechanism for every application running on the Windows operating system. Windows security also provides user groups: a large population of users can be managed by granting authorizations to groups and then adding users to those groups. The Windows security mechanism is implemented by security servers (domain controllers) on the network nodes, an approach that provides both scalability and availability. A single domain can grow to more than 40,000 users, and Windows security can be extended further by building multidomain architectures connected through domain trust relationships. The security system also offers programming interfaces and intuitive graphical interfaces that make it easy to administer network security. Second, Microsoft Systems Management Server lets a single administrator manage the software configurations, license authorizations, and upgrades of thousands of clients, automating a huge number of tasks and minimizing the exceptions that only a technical expert can resolve. In addition, the Windows DHCP protocol automatically assigns IP addresses on demand, avoiding a time-consuming and error-prone chore while giving nodes flexibility and conserving the address pool. Windows Server 2003 also provides built-in tools to record error logs, manage disk space, set priorities, and monitor system performance, and all of these tools can administer a cluster of client and server nodes. The earlier example showed that the system monitor on one node can track the CPU and network utilization of many nodes.
Each Windows node exposes more than 500 performance counters describing its internal behavior. SQL Server, Microsoft Exchange, and many other servers for the Windows operating system integrate with this system monitoring: SQL Server adds about 75 performance counters of its own and records significant events in the Windows event log.
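In SQL Server 2000 these counters are also visible from Transact-SQL through the master.dbo.sysperfinfo table, so the same values the system monitor charts can be pulled into scripts. The following query is a small illustration; the counter names shown are examples, and the available counters vary by installation.

    -- List a few of the SQL Server performance counters exposed to the system monitor.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM   master.dbo.sysperfinfo
    WHERE  counter_name IN ('Transactions/sec', 'User Connections');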

Scalable SQL Server management

SQL Server Enterprise Manager is a breakthrough in database server management: it gives an administrator a visual way to manage and operate many SQL Server systems from a single console. The key features of SQL Server Enterprise Manager include:

• A graphical management interface for controlling and monitoring the health of many servers and their clients.
• A job scheduler for routine tasks such as dumps, reorganizations, and integrity checks.
• A Distributed Management Objects (SQL-DMO) mechanism that lets administrators automate exception handling and routine tasks, either by writing Visual Basic scripts or by letting the wizards generate the scripts; these jobs report their results or notify an operator by e-mail or by a pager system based on the Telephony API (TAPI).
• An extension mechanism that lets third parties add new management tools.
• A complete set of graphical interfaces for configuring and managing database replication.
• Integration with Active Directory, which registers servers and databases so that they can be located by name.

SQL Server Enterprise Manager also includes wizards that set up routine operations: a wizard for establishing automatic backup, reorganization, and operational tasks; a wizard for publishing data from the database to the Internet and intranets; and a wizard for setting up database replication.

Utilities for loading, backing up, restoring, checking, and reorganizing very large databases are the key to operating a huge database system. Backing up a multiterabyte database to a single high-performance disk or tape drive would take hours or even days. Using many disks and tapes in parallel, SQL Server and Windows 2000 Server achieve terabyte-per-hour rates: online backup runs at 2.6 TB per hour with about 12 percent additional CPU load, and online restore runs at 2.2 TB per hour; backup and restore to and from disk are faster still. The SQL Server Enterprise Manager job scheduler, together with automated tape libraries, can schedule the backup and restore process, and backups can run at full speed or in a slower background mode. A huge database can be backed up within a few hours by using incremental backups and increasing the degree of parallelism, as sketched below. With the shadow-copy technology of Windows Server 2003, even a multiterabyte database can be backed up and restored in minutes.
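The following Transact-SQL sketch shows the two techniques just mentioned: striping a full backup across several devices so the work proceeds in parallel, and supplementing it with differential (incremental) backups. The database name, file paths, and schedule are placeholders.

    -- Full backup striped across several backup devices; SQL Server writes to them in parallel.
    BACKUP DATABASE Sales
    TO DISK = 'E:\backup\sales_1.bak',
       DISK = 'F:\backup\sales_2.bak',
       DISK = 'G:\backup\sales_3.bak'
    WITH INIT;

    -- A much smaller differential backup captures only the changes since the last full backup,
    -- so it can run nightly while the full backup runs, for example, once a week.
    BACKUP DATABASE Sales
    TO DISK = 'E:\backup\sales_diff.bak'
    WITH DIFFERENTIAL, INIT;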
Summary

Windows Server 2003 and SQL Server can scale up to a huge database on a single SMP server, or scale out to many servers that each run part of the application and store part of the database. SQL Server Enterprise Manager simplifies configuring and managing these servers, while OLE Transactions, data replication, and data pipes simplify moving data and requests among them. Today a single SQL Server node can support more than 250,000 active users connected through IIS or directly through ADO/ODBC. Such servers process several million transactions in an eight-hour day and support databases of hundreds of millions of records spread across hundreds of disks totaling several terabytes. A Windows Server 2003 cluster of 32 such servers can serve databases of more than 50 TB and process more than 100 million transactions per day. With clustering, there is no practical limit to the size or throughput of a SQL Server 2000 system. Automatic failover, COM+, .NET XML Web services, and Itanium processors with very large main memories are the newest steps in SQL Server and Windows Server 2003 clusters that scale both up and out. SQL Server performs this work with unrivaled price/performance and ease of use, and SQL Server 2000 holds the top results on the industry-standard TPC-C benchmark.

Moreover, the performance of SQL Server, Windows Server 2003, and the underlying hardware has doubled each year for the last three years, and Microsoft expects that trend to continue for the next several years: what is record-setting performance today will be routine in future versions. For more information about SQL Server, see http://www.microsoft.com/sql/default.asp or visit the SQL Server Forum on the Microsoft Network (search keyword: MSSQL).
