Microsoft SQL Server 2000 Super Management Manual (6)


6. Capacity Planning

Capacity Planning

History of Capacity Planning

Transaction Processing

Principles of Capacity Planning

Memory Capacity Planning

Processor Capacity Planning

Disk Subsystem Capacity Planning

Capacity Planning

Choosing the Data to Collect

Summary of This Chapter

Capacity planning involves determining the resources a computing system requires and how best to use them. It also includes planning for growth, so that new hardware can be added with minimal impact on system performance and at minimal cost. In this chapter you will learn the basic elements of this important step in building a system.

Types of capacity planning. Capacity planning takes two forms: precapacity planning and postcapacity planning. Precapacity planning can be thought of as sizing: predicting the hardware required to handle a given workload within a specified time, in accordance with a service level agreement (SLA). The SLA defines the response times that must be met and maintained for specific functions (the operations or transactions the system performs).

Note An SLA is a set of operating conditions agreed to by all of the organizations that participate in the system; it is used to ensure efficient and smooth system operation. For example, an SLA can require that the system respond to a query within a specified time. This response time is a value agreed upon by all of the users, the operations group, the application group, and the performance group.

In addition, certain amounts of capacity (such as CPU processing capacity, free disk space, or available memory) are held in reserve so that the system remains in a stable operating state and maintains its response times under maximum load.

In precapacity planning, the system is not yet running, so there is no performance data to refer to; you must rely on other sources of information, and the accuracy of the resulting plan depends on the accuracy of those sources. For example, the team designing the database can provide the database design and an initial size estimate; the application design team can describe the queries the application issues and how each query will use system resources; and the management team can provide the number of simultaneous users and the number of queries that will pass through the system. Together, this information gives the probable workload of the system (a basis for deciding the number of CPUs), the size of the database (a basis for deciding the number of disks), and so on.

Postcapacity planning can be seen as predictive analysis: an ongoing, detailed study of how the established workload consumes the resources of the hardware already in use. Postcapacity planning ensures that the system's resources remain sufficient to meet the growth of future workloads. Its main purpose is to provide information to the database administrator (DBA), so that the DBA can use the data to decide whether the system must be adjusted to stay within the performance range defined in the SLA. In this chapter we will look at both kinds of capacity planning—postcapacity planning and precapacity planning—and analyze their similarities and differences.

In typical postcapacity planning, historical performance data stored in a database is analyzed. Through this analysis, the normal growth trend of CPU usage can be predicted (by observing CPU utilization over time), as can disk, memory, and network usage. It is also possible to predict the CPU, disk, and memory usage that new users will add. These studies can be quite detailed: by understanding how specific classes of users use the system, you can predict the additional load new users will impose and plan capacity accordingly.

In addition to trend prediction, postcapacity planning supports "what-if" analysis; for example, a workload can be estimated from a hypothesis. Given data on how each class of user consumes resources, we can estimate quite accurately how adding users of a specific type (such as clerks responsible for accounts payable) will add to the system workload and consume system resources. Predictive analysis allows system managers to add hardware before the new users arrive, avoiding the loss of performance and response time that would otherwise occur when they join the system.

Postcapacity planning also provides tuning information (such as data about the disk arrays handling the queries). The historical performance data shows how changing system settings could improve performance; for instance, it can reveal that one disk array is much busier than another and is causing a bottleneck. For example, new users will increase the number of accesses to certain tables; those tables and the users' accesses to them can be monitored and tracked. This information helps predict which tables should be moved to keep the disk subsystem from becoming a bottleneck.

History of capacity planning

In the early days of multi-user systems, the concepts of capacity planning and performance were not widely understood or developed. By the 1970s, a capacity planning project amounted to little more than finding existing customers who were running a similar application; such customers were not easy to find, and it was even harder to find ones whose business resembled your own company or organization. During the 1970s, customers and application vendors developed analytic methods in which a specific benchmark or workload was executed to estimate the best initial size for a machine. They built software that resembled the target customer's application, ran it on similar hardware, and collected performance statistics. These statistics were then used to determine the size of the machine that best met the customer's needs, and to calculate the machine size that would be required when the system changed (for example, when more users were added or more applications were run). The greatest disadvantage of this approach was its cost, so the results of these early simulations were used mostly by system vendors to develop marketing strategies or to compare the performance of their products against those of competitors.

During this period, analysts developed various analytic methods for predicting utilization on existing systems. On the surface this seems less challenging, but it was not. The theoretical laws governing such estimates did not yet exist, and tools for collecting the data were also lacking; even Dr. Jeffrey Buzen, the computer scientist regarded as the father of capacity planning, was still developing the theory and the computational methods, so the work remained very difficult.

In the 1980s, the earlier benchmark simulations evolved into standards such as the ST1, TP1, and Debit/Credit benchmarks, but the focus of these benchmarks was to find the best-performing hardware for a given workload rather than to build a benchmark that users could map their own requirements onto. Users therefore still could not use these benchmarks to find the hardware best suited to them, because everyone's usage patterns differ. User demand led to the formation of an industry consortium, the Transaction Processing Performance Council, which defines standardized transaction workloads for its more than 45 hardware and software member companies. These benchmarks can show the capacity of hardware and database software; unfortunately, users still cannot use the results to plan their own workloads. The benchmarks provided by the council cannot be used for capacity sizing because they do not reflect real workloads. They are usually designed to show off performance, such as how many transactions can be processed in a specified time. Because those transactions are short and touch little data, a great many of them can be processed in a short time, and the transaction counts leave a powerful impression of the system's capability—but only because the workload was designed to produce that impression.

At the same time, the use of client/server computing and relational database technology was maturing, and interest in predicting initial system size and in capacity planning was growing with it. Most applications were now written on the client/server model: the server generally acted as the central data store, while the user interface ran on local machines or at remote sites. This arrangement let clients keep the graphical user interface (GUI) they were already familiar with while making the most economical use of the processing power of an expensive server. With so many servers running database applications, the server became the focus of most sizing and capacity planning studies. Today, application simulation benchmarks are still the most common way to size a server, and collecting historical performance data is still the best way to predict a machine's future usage. Although the process is expensive, it can achieve considerable accuracy if the server's usage can be simulated accurately. However, because large-scale projects may require hundreds of users or investments of millions of dollars, only the largest customers can afford to build such a test system. Clearly, what is needed is a capacity planning method for small and medium-sized systems. For these systems, sizing and capacity planning that is roughly 90 percent accurate can be achieved through simple calculations and general knowledge of the system.

Transaction processing

In this section we look at how to analyze the CPU, memory, and disk requirements of a database server so that an appropriate server can be chosen for a given application. A database server performs only database functions; in terms of its workload, only transactions run on the server. When SELECT or UPDATE statements are executed, the database server translates those statements into a series of read and write operations. In fact, every transaction consists of read operations and write operations against the database, and while the transaction is in progress the database server processes these I/O operations. The system we choose must be able to handle the type and volume of transactions expected and the I/O operations those transactions generate.

There are two main types of transaction: online transaction processing (OLTP) and decision support system (DSS) transactions.

OLTP transactions. An OLTP transaction is a unit of work that the database processes in an immediate, or online, fashion, and it is therefore normally expected to complete within a short time. In other words, these transactions constantly update the database with current information so that the data the next user accesses is up to date. Consider, for example, an online order-entry system whose inventory status is spread across various tables in the disk subsystem; tables such as Item_table or Stock_level_table contain the types and quantities of goods in inventory along with related access information. When a user orders an item, the transaction accesses these tables and checks whether the item is in stock.

For a transaction processing system of this kind, capacity planning information is gathered through interviews. You might exchange views with the database designers, the application designers, and the business units. Their input helps predict the number of transactions and the time available to process them each day (for example, 25,000 transactions to be completed within the 8-hour working day), as well as the number of users and the peak periods (a peak being a period when the transaction load on the system is at its highest). Interviews may well be the most important part of determining the size of the system.

Note When you design an OLTP system, choose hardware with enough transaction capacity to handle the peak load, so that the system can cope with the worst case.

Real World: ATMs

Consider the example of an automated teller machine (ATM) network. Suppose you are hired by an international bank to design the ATM system for its Chicago branch. In interviews, you find that use of the ATM network peaks during the few hours between 11:00 A.M. and 2:00 P.M.—the period when most people take their lunch break. With this information, you can choose a transaction processing system with enough capacity to handle that peak.

DSS transactions. The second type of transaction is the DSS transaction. DSS transactions are fairly large, and they take longer to process than OLTP transactions; a DSS transaction may take hours or even days. An example of a DSS application is an inventory archive: apart from occasional special updates, the database is rarely written to. Such systems typically provide information that the management of a company uses to make decisions in all areas, such as decisions about business growth or stocking levels. Another example: the U.S. Air Force uses a DSS system to provide senior staff with information about jet fighters, bombers, and weapons, including their current positions and status.

As mentioned earlier, because a DSS transaction touches a large amount of data and usually requires a long processing time, its time frame differs from that of an OLTP transaction. An OLTP transaction uses a unique key (such as a customer number) to gather the information it needs, and the query begins with the data related to that unique key. A DSS query does not start from a unique key; instead, it starts at the beginning of a table and examines all of the data in the table from beginning to end. A DSS transaction may also join any number of tables and reference still more data in other tables.

Note When you design a DSS system, choose a larger data block size so that each I/O transfer carries more records, reducing I/O activity.

In such systems, the performance analyst actually wants to see CPU and other system resources approach 100 percent utilization, so what matters is less the type of application being executed than the fact that the system spends most of its time processing queries. A rule of thumb for designing DSS systems is to provide as much hardware as possible. In other words, do not provide only enough disk space to hold the database; plan to spread the database across many disk drives to distribute the I/O activity. Memory caching is not much of a consideration here, because there is little cache activity to exploit. (DSS transactions perform full table scans—that is, queries that read a table from the first row to the last.)

Real World: Sales Analysis

Suppose you are compiling a company's quarterly sales figures and must collect the sales of each product in each region. The search first joins the first row of the Region table to the Customer table to find the first customer. After finding the first customer's name, it joins to the Customer_Order table to determine whether that customer placed an order during the quarter. It then continues with the second customer name, then the third, and so on. After all of the customers in one region have been searched, the process continues with the next region. A query like this usually takes hours to complete.

Principles of capacity planning

If the peak periods cannot be defined, precapacity planning can be estimated on the basis of steady-state operation.

Note Steady state refers to predictable CPU usage over the working period. For example, if CPU usage holds at around 55 percent throughout the working day, that is the steady state. If on the same day the system reaches 90 percent during a particular hour, that is peak usage.

Once you know the maximum number of transactions to be completed within the working period, the transaction rate per unit of time can be calculated. However, because the actual arrival rate of transactions is not known precisely, you should build reserved capacity into the planned system. Reserve capacity is processing capability held back to cope with periods of unusually high workload.

The postcapacity planning of a production system should include continuous monitoring of the major performance counters to record the system's past and present behavior. This information is usually stored in a database and used to produce comprehensive reports of performance, capacity consumption, and remaining reserve capacity. A desktop application such as Microsoft Excel can generate graphs, spreadsheets, and transaction activity reports that predict the machine's resource usage.

CPU usage. The reason for building reserve capacity into the machine is related to the "knee of the curve" theory. Simply put, this theory says that utilization directly affects queue length, and queue length directly affects response time (queue length is in fact part of the response-time formula), so utilization directly affects response time. The point at which response time or queue length changes from growing linearly to growing exponentially (that is, heading toward infinity) is called the knee of the curve.

Real World: Supermarket Checkout

Let's use a supermarket as an example of the relationship between utilization and response time. Here, utilization is how busy the cashier is, and service time is the time from when the cashier picks up the first item until the payment is complete. Suppose you come to the supermarket at 3:00 in the morning; because nobody else is shopping at that hour, the cashier's utilization is 0 percent, the queue length (the number of people ahead of you) is also 0, and your response time equals your service time (you are served almost immediately). Now imagine the same situation at 5:00 in the afternoon. The supermarket is very busy, and eight people are ahead of you (the queue length is 8). Your response time is now the sum of the service times of the eight people ahead of you plus your own service time (each service time determined by the number of purchases, the payment method, and so on). The cashier's utilization at 5 P.M. is much higher than in the early morning, and that difference directly affects queue length and response time.

Linear growth vs. exponential growth. In general, we try to keep the system operating in the linear region; that is, we want the queue to grow in a linear way. As Figure 6-1 shows, linear growth means that the queue length grows in direct proportion to the growth of utilization. Experience shows that as long as CPU utilization stays below 75 percent, growth remains approximately linear.

Figure 6-1 Linear growth of CPU usage

Sometimes, however, the CPU runs at a steady state higher than 75 percent. This situation is quite undesirable, particularly because the high utilization causes the queue length to grow exponentially. Exponential growth is geometric growth, as shown in Figure 6-2.

Note Each graph in this chapter assumes a transaction service time of 0.52 second and assumes that every transaction has the same service time.

Figure 6-2 Exponential growth of CPU usage

Notice that when CPU utilization reaches 75 percent, the queue-length curve changes from linear to exponential growth (the curve becomes nearly vertical).

Response time. Figure 6-3 shows how utilization directly affects response time. Notice that a similar knee appears in both the response-time chart and the queue-length chart. These two charts show how the system responds under load, and they explain why you should not let steady-state CPU utilization exceed 75 percent. This is not to say the CPU cannot run above 75 percent, but the longer it runs in that state, the greater the negative effect on queue length and response time. Not exceeding the knee of the curve—in this example, 75 percent utilization—is one of the most important principles in capacity planning, and it should drive the decision about how many CPUs the system needs. For example, suppose that after all relevant factors are calculated, the planned system's processor utilization works out to 180 percent. You could put up with a terribly slow system, or you could use two CPUs so that each processor drops to 90 percent utilization—still about 15 percent above the knee of the curve. A better approach is to use three CPUs, bringing each CPU down to 60 percent utilization, 15 percent below the knee.

This principle also applies to other components of the system, such as disks. The knee of the curve for disks is different from that for processors: the knee of the disk usage trend line is at about 85 percent. Whether you are considering disk space or disk I/O rate, 85 percent is a reasonable threshold. For example, a 9-GB disk should hold no more than 7.65 GB of data. Limiting the stored data leaves room for future growth, but more importantly, as a disk approaches its capacity its seek times grow longer. By the same principle, if a disk can handle 70 I/Os per second, you should not plan for it to receive more than about 60 I/Os per second in steady-state operation. Follow these principles and you can keep response times to a minimum, and your system will always have an edge, because you are not running the processors and disks at their maximum utilization and the system retains reserve capacity to handle peaks.

Figure 6-3 Response time vs. CPU usage

Note Remember this point: for ideal performance, keep CPU usage below 75 percent and disk usage below 85 percent.
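To make the knee of the curve concrete, the sketch below uses a simple single-server queueing approximation—response time = service time ÷ (1 − utilization)—which is a standard textbook estimate and an assumption here, not a formula given in this chapter. It uses the 0.52-second service time assumed for the charts; past roughly 75 percent utilization the numbers climb steeply, which is the behavior Figures 6-2 and 6-3 illustrate.

    # A rough single-queue approximation of the curves in Figures 6-2 and 6-3.
    # Assumption: response time = service time / (1 - utilization), a standard
    # M/M/1-style estimate, not a formula taken from the chapter itself.
    SERVICE_TIME = 0.52  # seconds, the service time assumed for the charts

    for percent in (10, 25, 50, 75, 85, 90, 95):
        u = percent / 100.0
        response = SERVICE_TIME / (1.0 - u)   # grows toward infinity as u -> 1
        queue_len = u / (1.0 - u)             # average number waiting or in service
        print(f"{percent:3d}% busy: response {response:5.2f} s, queue {queue_len:5.1f}")

At 50 percent utilization the estimated response time is only about twice the service time, but at 90 percent it is ten times the service time, which is why the chapter treats 75 percent as the practical limit.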

Page faulting. The most important principle in sizing processors and disks is not to exceed the knee of the curve. What about memory? To size memory, we use the principle of page faulting.

Page faulting is the normal mechanism the system uses to retrieve data from disk. If the system needs a code page or data page and that page is already in memory, a logical I/O occurs: the code or data is read from memory, and the transaction that needs it proceeds. But what if the code or data is not in memory? In that case a physical I/O must be performed to read the required page from disk, and this work is done through a page fault. When the required code or data page is not in the working set of the process in main memory, the system issues a page-fault interrupt.

A page fault signals another part of the system to fetch the code or data page from the physical disk—in other words, if the page the system is looking for is not in memory, the system issues a page fault, directing another part of the system to perform a physical I/O and retrieve the page from disk. A page fault does not cause the page to be fetched from disk if the page is on the standby list and therefore already in main memory, or if it is in use by another process with which it is shared.

There are two kinds of physical I/O: user and system. A user physical I/O occurs when a user transaction requests data that is not in memory; the data is then transferred from disk to memory, a transfer usually handled by a data-flow manager and the disk controller. A system physical I/O occurs when a program running on the system needs a code page that is not in memory; the system issues a page-fault interrupt and suspends the program until the required page has been read back from disk. Both kinds of physical I/O lengthen response time, because data is retrieved from memory in microseconds (millionths of a second) but from disk in milliseconds (thousandths of a second). Since page-fault activity causes physical I/O and lengthens response time, page faults must be kept to a minimum if the system is to perform well.

Three types of page fault occur in the system:

Operating system page faults. If the operating system finds that the next code page it needs is not in memory, the system issues a page-fault interrupt and reads that code page from disk. For a single code address, transferring the code from disk to memory requires a single physical I/O.

Application page faults. If any other program finds that its next code page is not in memory, the system issues a page-fault interrupt and reads the next code page from disk. For this type of page fault, transferring the code from disk to memory again requires a single physical I/O.

Page fault swaps. When a data page has been modified (it is then called a dirty page), a two-step operation called a page swap is used: the system not only reads the new page from disk but also writes the current page in memory back to disk. This two-step page fault requires two physical I/Os, but it guarantees that all data changes are preserved. If such swaps occur frequently, they may be the most serious factor degrading response time. Page fault swaps take more time than other page faults because they cause two physical I/Os.

The number of page faults in the system should therefore be kept to a minimum. When estimating a new system's minimum memory requirement, it is best to obtain the memory specifications of every program that will run on the system (including the operating system and the database engine) so that you can predict how much memory is needed to handle the entire workload—and do not forget to allow for page faulting.

To keep the system's memory adequate, page-fault activity should be collected and saved as part of the database performance baseline. When additional memory is needed, predictive analysis should be performed on this data. Sufficient available memory should be reserved for peak periods: when sizing the system, in addition to the memory the system needs to run, try to keep an extra 5 to 10 percent of memory in reserve for occasional demands. Note You cannot eliminate all page faults from a system, but you can minimize them. When the system sustains more than two page faults per second, memory should be added. The relevant performance counters (such as the page fault counter) are explained later in this chapter.

Memory capacity planning

When planning memory capacity, you need certain information, including the number of users expected on the system, the type of transaction workload, and, of course, which operating system will be used. Planning typically begins with interviews. In this case we are sizing a database server, so information about client-side memory and application usage does not affect our estimate: the database server handles the users' requests and must find the data needed to complete each transaction. To size the database server's memory, you must know the number of concurrent user connections and the number of transactions those users generate. Whether those I/Os are reads or writes is something you must discuss with the application designers, because they can provide details such as the number of I/Os issued by the different transactions. When calculating the appropriate amount of memory for the system, other effects must also be included, such as the cache hit ratio and page faulting. Let's take a typical example. You are sizing a system whose database server will support an OLTP order-entry application, so you must know the number of users generating the workload; this information helps determine how much memory is needed. Suppose you know that there will be 50 simultaneous users on the system at any given time. For these users you must provide at least 25 MB of memory.

Note You must provide 500 KB of memory for each user, because 500 KB is the amount of memory required for a shadow process. A shadow process is the process the system maintains for each currently connected user.

Next, you must know which operating system will be used. In this example, the operating system is Windows 2000, which uses approximately 20 MB of memory; the memory requirement so far is therefore at least 45 MB. You must also know which database system will be run—in this example, Microsoft SQL Server, which uses about 5.5 MB of memory—bringing the total required so far to 50.5 MB. The last piece of information needed is the size of the database processing area, which has two elements: the log area and the database cache. The log file saves information about the transactions that have occurred. This area is very important: if a system failure occurs during transaction processing, the information saved in the log is used to restore the "before" image—that is, the image of the database as it was before the failure. The log also provides an audit trail. The database cache is a special area of memory in the system; all data processed by the system passes through this area. The larger the cache, the higher the cache hit ratio. The cache hit ratio is the proportion of requests for data that can be satisfied from memory—obviously, you want the system's hit ratio to be as high as possible. If the required data does not reside in the cache, a cache miss occurs; as with a page fault, the system must fetch the required data and place it in the cache. A cache that is too small therefore causes additional physical I/O, because the system must go to disk to retrieve data that is not in the cache, and these physical I/Os of course increase transaction response time. To calculate the cache size, use the following formula:

Cache size = (cache block size) × (number of cache blocks)

The cache block size is the amount of data transferred per I/O. Remember that SQL Server uses a preset cache block (page) size of 8 KB. The number of cache blocks is how many blocks you want to keep in memory. For OLTP, choose a smaller block size: because each transaction touches only a small amount of data, the smaller the block, the less time each transfer takes. For DSS, the block size should be larger: because large amounts of data are transferred, a larger block reduces the number of I/Os.
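As a quick illustration of this formula, the sketch below multiplies the 8-KB block size mentioned above by a few block counts. The block counts themselves are made-up examples; they are chosen so that the results happen to match the 25-MB, 70-MB, and 215-MB rule-of-thumb sizes given in the note that follows.

    # Cache size = (cache block size) x (number of cache blocks).
    # 8 KB is SQL Server's preset block (page) size, per the text; the block
    # counts below are illustrative values, not recommendations.
    BLOCK_SIZE_KB = 8

    for blocks in (3_200, 8_960, 27_520):          # hypothetical block counts
        size_mb = BLOCK_SIZE_KB * blocks / 1024     # cache size in megabytes
        print(f"{blocks:6d} blocks x {BLOCK_SIZE_KB} KB = {size_mb:,.0f} MB cache")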

Note No single cache size setting can guarantee a cache hit ratio of 90 percent or better. As a rule of thumb, a small system should have about 25 MB of cache, a medium system about 70 MB, and a large system about 215 MB. A very large database system (around 300 GB) may require as much as 3 GB of cache to achieve an ideal hit ratio.

So far we have gathered quite a bit of information and can begin calculating our minimum memory requirement. The following formula is commonly used to calculate the minimum memory requirement of a system:

Minimum memory requirement = (system memory) + (user memory) + (database processing memory)

System memory is the amount of memory needed by the operating system and SQL Server. User memory is the total of the 500 KB allocated to each user. Database processing memory is the memory required for the log and the cache. This simple formula can be used to calculate the minimum memory requirement of typical OLTP and DSS applications. In a DSS system you should choose a larger cache block, because a DSS application performs complete table scans with sequential reads; a larger block lets each physical I/O read more records. Also, in a DSS system you should not count on the cache, because nearly all I/O will be physical I/O. In an OLTP system, you should check the cache hit ratio after the system is installed: the higher the hit ratio, the better the response time and overall performance.
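Plugging the chapter's running example into this formula gives the sketch below. The 20 MB for Windows 2000, 5.5 MB for SQL Server, 500 KB per user for 50 users, and the 25-MB cache (the small-system rule of thumb) all come from this section; the 10-MB log area is an assumed figure used only for illustration.

    # Minimum memory = (system memory) + (user memory) + (database processing memory)
    os_memory_mb = 20.0          # Windows 2000 Server, from the example
    sql_server_mb = 5.5          # SQL Server, from the example
    users = 50                   # simultaneous users, from the example
    user_memory_mb = users * 0.5 # 500 KB shadow process per user
    cache_mb = 25.0              # small-system cache rule of thumb
    log_area_mb = 10.0           # assumed log area (not specified in the text)

    system_memory = os_memory_mb + sql_server_mb
    database_processing = cache_mb + log_area_mb
    minimum_memory = system_memory + user_memory_mb + database_processing
    print(f"Minimum memory requirement: {minimum_memory:.1f} MB")
    # With the 5-10% reserve for peaks recommended earlier in the chapter:
    print(f"With 10% reserve: {minimum_memory * 1.10:.1f} MB")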

Note The system's cache hit ratio goal should be as close to 100 percent as possible, and preferably no less than 90 percent.

Collecting memory usage data. Once the system has been sized and deployed, you should collect performance data on memory usage. You can use this information to determine whether the system as built meets the SLA requirements, including response time, memory and CPU utilization, and so on. In the Microsoft Windows NT environment, this data can be collected with the Microsoft Performance Monitor.

Note In Microsoft Windows 2000, the Performance Monitor is called System Monitor.

Remember that this is a capacity planning analysis, so the reporting interval should be large. The measurement period is expressed in hours (in most cases it is set to 24 hours), so a report should be produced every 24 hours. For a capacity planning study it is quite sufficient to record one value per day and write it into the performance database. The performance measures you monitor, called counters, should be averaged over the reporting interval. The memory counters recorded in a capacity planning study are found in the Memory object. (In the Performance Monitor, an object is a group of related counters.)

Note To start the Performance Monitor, choose Start / Programs / Administrative Tools / Performance Monitor. In the Performance Monitor window, choose Add To Chart from the Edit menu. You can use the Add To Chart dialog box to select the objects and counters to monitor. For further information about a counter, consult the Performance Monitor online help.

The following counters are useful:

Page Faults/sec The average number of page faults per second in the system. Remember that a page fault occurs when the required code or data page is not in the working set or in standby memory.

Cache Faults/sec The average number of cache faults per second in the system. Remember that a cache fault occurs whenever the Cache Manager cannot find a file's page in the immediate cache.

Pages/sec The average number of pages read from or written to disk per second. This value is the sum of two other counters, Pages Input/sec and Pages Output/sec. The paging traffic counted here includes pages retrieved to satisfy application requests for file data as well as pages read from and written to memory-mapped files. Use this counter if you are concerned about excessive memory pressure (also known as thrashing) and the excessive paging that can result.

Available Memory The amount of unused memory in the system, which can be used as additional memory for the database or the operating system. Available memory is the most important counter in memory planning.

Note Available Memory is not a Performance Monitor counter. You can obtain this data from Task Manager. (To open Task Manager, right-click the taskbar and choose Task Manager from the shortcut menu.)
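If the raw counter samples are collected more often than once a day, they can be rolled up into the one-averaged-record-per-day form described above before being written to the performance database. The sketch below shows one simple way to do that averaging; the sample values and the record layout are invented for illustration.

    from statistics import mean
    from collections import defaultdict

    # Raw samples: (date, counter name, value) - invented example data.
    samples = [
        ("2000-01-12", "Page Faults/sec", 1.8),
        ("2000-01-12", "Page Faults/sec", 2.4),
        ("2000-01-12", "Available Memory (MB)", 96.0),
        ("2000-01-12", "Available Memory (MB)", 88.0),
        ("2000-01-13", "Page Faults/sec", 2.9),
        ("2000-01-13", "Available Memory (MB)", 84.0),
    ]

    # Average each counter over its 24-hour reporting interval.
    daily = defaultdict(list)
    for day, counter, value in samples:
        daily[(day, counter)].append(value)

    for (day, counter), values in sorted(daily.items()):
        # One averaged row per counter per day goes into the capacity database.
        print(f"{day}  {counter:<22} {mean(values):6.1f}")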

At a minimum, Available Memory and Page Faults/sec should be included in the capacity planning data you collect.

Analyzing the memory data. Once the data has been collected, it can be plotted in charts to predict future trends. The chart in Figure 6-4 shows such a predictive analysis. In this example, available-memory data was collected from October 22, 1999, through January 14, 2000. Microsoft Excel was used to plot the data and calculate its trend line. The jagged curve shows the actual usage history, and the straight line represents the linear trend of usage. The analysis predicts that by February 18, 2000, the system's available memory will fall below 6 percent.

Figure 6-4 Linear memory prediction analysis
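The same linear-trend extrapolation that Excel performs for Figure 6-4 can be sketched in a few lines of code. The daily available-memory values below are invented; only the technique (fit a straight line, then extend it forward to see when the value crosses a threshold) reflects the analysis described in the text.

    # Fit a least-squares line through daily available-memory readings and
    # extrapolate forward, as the Excel trend line in Figure 6-4 does.
    days = list(range(10))                              # day index (invented series)
    avail = [40, 38, 37, 35, 34, 31, 30, 28, 27, 25]    # available memory, percent

    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(avail) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, avail)) / \
            sum((x - mean_x) ** 2 for x in days)
    intercept = mean_y - slope * mean_x

    # Project forward until the trend line drops below 6 percent available memory.
    day = n
    while intercept + slope * day > 6 and day < 365:
        day += 1
    print(f"Trend: {slope:.2f}%/day; crosses 6% available around day {day}")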

The chart in Figure 6-5 shows the increase in page faults over the same period, climbing almost in step with the decline in available memory. Again, Microsoft Excel was used to plot the recorded data: the jagged curve shows the actual history, and the straight line represents the linear trend. In this example the chart predicts that by February 18, 2000, the system will experience more than 6 page faults per second. This indicates that response time will continue to increase and will eventually violate the SLA. This predictive method is an effective way to track memory as a resource.

Figure 6-5 Linear page fault prediction analysis

Processor capacity planning

Now that we have analyzed and sized memory, let's plan the capacity of the processors. At this point we can make the following assumptions: the design of the application and the database has been completed; the target steady-state CPU usage is less than 75 percent; the expected cache hit ratio is at least 90 percent; no disk exceeds 85 percent of its space or I/O capacity; the server runs only the database; and disk I/O is spread evenly across all of the disks. These are the same principles and thresholds we used to plan memory, but to plan CPU capacity we need more information, which can be obtained from the database designers and the application designers.

Estimating the CPU capacity of a database server may not be as complicated as you might imagine. Remember that the database server does nothing but process transactions. The application runs on the client machines, so data about the size of the application does not enter into our calculations. The server handles the users' requests purely as read and write operations—in other words, it processes I/O. The application designers can provide information about the I/Os each transaction is likely to generate, and the database designers can provide information about the tables and indexes those transactions touch. The job at hand, then, is to determine how many I/Os will be generated and how long it will take to complete them. We must know how many transactions will be processed and how the system's working day, or peak period, is defined. Obviously, sizing for the peak period is the better choice, because it means the machine we build can handle the worst case. Unfortunately, in most cases such information is difficult to obtain, and the data available usually describes the steady state.

To understand better the transactions we are about to handle, we must "anatomize" them: building a profile of each transaction helps us determine the probable number of reads and writes (I/Os) and, from that, calculate the expected CPU usage. We obtain this information through interviews with the database designers and the application designers. First we must learn the types of transactions that will pass through the system and how often each will be executed; then we must determine the number of I/Os each will generate. This calculation helps us estimate the workload on the CPUs. If the system already exists, a user can execute one transaction at a time while you track it with the Performance Monitor to determine the I/Os it generates, and the transaction profile can be built from that. This "real" data can then be adjusted for the speed, type, and number of the CPUs in use.

So far we have focused the CPU capacity plan on the I/Os generated by user transactions. However, I/Os can also be generated by the fault-tolerance mechanism in use, and these additional I/Os must also be taken into account when we plan the CPUs. Most computer vendors today provide fault tolerance through support for RAID (redundant array of inexpensive disks) technology, which was described in detail in Chapter 5. As a reminder, the most common RAID levels are RAID 0 (disk striping), RAID 1 (disk mirroring), and RAID 5 (disk striping with parity). Because RAID 0 keeps only a single copy of the data, a single point of failure exists: if a disk fails, all the data on that disk—in effect, the entire database—is lost. Figure 5-9 shows a RAID 0 disk array. RAID 1 provides a mirror copy of each database disk; when a disk fails, all of the data on the failed disk can be recovered completely from its mirror.

Using RAID 1 also gives you an extra benefit: split seeks (discussed in Chapter 5), which allow the system to read from both disks of a mirrored pair at the same time, greatly improving search speed and reducing transaction response time. Figure 5-10 shows a RAID 1 disk array.

The RAID level you select directly affects the number of disk I/Os, because different RAID levels perform different numbers of disk writes. For example, a write request that requires a single write under RAID 0 requires two writes under RAID 1. If a user describes a transaction that performs 50 reads and 10 writes, then under RAID 1 the number of writes grows to 20. Similarly, if two data disks are needed under RAID 0, three disks are needed under RAID 5. In a RAID 5 configuration, parity data stored on the disks can be combined with the data on the surviving disks to reconstruct the information on a failed disk; Figure 5-11 shows a RAID 5 disk array. This protection raises the cost in both performance and economy. Each write under RAID 5 adds read and write operations to every transaction, because the existing data and parity must be read, modified, combined into the new parity, and written back to two disks. This extra work slightly lengthens transaction response time.

The following formulas can be used to calculate the number of I/Os at the different RAID levels. For RAID 0:

I/O count = (reads per transaction) + (writes per transaction)

If a transaction performs 50 read operations and 10 write operations, the number of I/Os under RAID 0 is 60. For RAID 1:

I/O count = (reads per transaction) + (2 × (writes per transaction))

If a transaction performs 50 read operations and 10 write operations, the number of I/Os under RAID 1 is 70. For RAID 5:

I/O count = 3 × (I/Os per transaction)

If a transaction performs 50 read operations and 10 write operations, the read count becomes 150 and the write count becomes 30, so the total number of I/Os under RAID 5 is 180. These additional I/Os are handled by the disk controller; they are transparent to the user, who does not need to change the application to deal with them. Remember, though, that the RAID scheme you select directly affects the number of I/Os the system must process. These additional reads and writes must be considered during capacity planning, because they affect CPU usage and the number of disks we decide to use. Once the total number of reads and writes generated by the user transactions has been calculated, along with the additional I/Os generated by the chosen RAID level, the CPU usage can be calculated. The following formula determines the CPU usage of the system:

CPU usage = (throughput) × (service time) × 100

Throughput here is the number of I/Os processed per second, and service time is the time needed to process one typical I/O. The formula says that utilization is the amount of work arriving per unit of time multiplied by the time required to perform each unit of work, converted to a percentage by multiplying by 100. To determine the number of CPUs the system needs, perform the following steps for each transaction type in the workload:

Calculate the total number of read operations the system must process using the following formula:

Total reads = (reads per transaction) × (total number of transactions)

Use the following formulas to determine how many of those reads are logical I/Os and how many are physical I/Os:

Total logical reads = (total reads) × (cache hit ratio)

Total physical reads = (total reads) − (total logical reads)

Use the following formulas to convert each type of read operation into reads per second:

Logical reads per second = (total logical reads) ÷ (work period)

Physical reads per second = (total physical reads) ÷ (work period)

The work period is the length of time, in seconds, over which the workload runs. Use the following formulas to calculate the total CPU time spent processing read operations:

Total logical read time = (logical reads per second) × (logical read time)

Total physical read time = (physical reads per second) × (physical read time)

The logical read time is the time needed to process one logical read; the physical read time is the time needed to process one physical read. These times can be obtained with the Performance Monitor. (See the discussion of obtaining read and write operation times later in this section.)

Note Typically, the physical read time is about 0.002 second and the logical read time about 0.001 second.

Use the following formulas to calculate the CPU usage of the read operations:

Usage = (throughput) × (service time) × 100

The usage can be divided into logical read usage and physical read usage:

Logical read usage = (logical reads per second) × (logical read time) × 100

Physical read usage = (physical reads per second) × (physical read time) × 100

This information can be used to determine whether the physical read usage is too high; if it is, the cache size can be adjusted to obtain more logical reads. Use the following formula to calculate the total number of write operations that must pass through the system:

Total writes = (writes per transaction) × (total number of transactions) × (RAID multiplier)

The RAID multiplier is the factor by which writes increase because of the RAID level in use. Use the following formula to calculate the number of write operations per second passing through the system:

Writes per second = (total writes) ÷ (work period)

Again, the work period is the length of time, in seconds, over which the workload runs. Use the following formula to calculate the total CPU time spent processing write operations:

Total write time = (writes per second) × (CPU write time)

Use the following formula to calculate write operation usage:

Write operation usage = (writes per second) × (CPU write time) × 100

Calculate the total CPU usage for this transaction type using the following formula:

CPU usage = (logical read usage) + (physical read usage) + (write operation usage)

This calculation must be performed for every transaction type the system executes. For example, a banking system might let customers withdraw, deposit, or check their balances; the usage of each of these three transaction types must be calculated separately so that the system's CPUs can be planned correctly. Finally, calculate the total processor usage using the following formula:

Total CPU usage = the sum of the usages of all transaction types

If the total CPU usage exceeds the 75 percent threshold, more CPUs must be added. Adding CPUs reduces the usage of each CPU according to the following formula:

CPU usage (more than 1 CPU) = (total CPU usage) ÷ (number of CPUs)

Add enough CPUs to bring the usage of each CPU below 75 percent. For example, if the total CPU usage is 180 percent, three CPUs can be used, bringing each CPU's usage down to 60 percent.
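The step-by-step calculation above can be collected into a short script. The sketch below uses the read times suggested earlier (0.001 second logical, 0.002 second physical), a CPU write time that is assumed here rather than given in the text, and the chapter's sample figures (25,000 transactions in an 8-hour day, 50 reads and 10 writes per transaction, a 90 percent cache hit ratio, RAID 1); the 75 percent knee-of-the-curve rule then determines how many CPUs are needed.

    import math

    # Per-I/O CPU times from the note above (seconds).
    LOGICAL_READ_TIME = 0.001
    PHYSICAL_READ_TIME = 0.002
    CPU_WRITE_TIME = 0.002        # assumption: treated like a physical I/O

    def cpu_usage_for_transaction(reads_per_tx, writes_per_tx, tx_count,
                                  work_period_sec, cache_hit_ratio,
                                  raid_multiplier):
        """Return the CPU usage (percent) contributed by one transaction type."""
        # Step 1: total reads for the period.
        total_reads = reads_per_tx * tx_count
        # Step 2: split into logical and physical reads.
        logical_reads = total_reads * cache_hit_ratio
        physical_reads = total_reads - logical_reads
        # Step 3: convert to per-second rates.
        logical_per_sec = logical_reads / work_period_sec
        physical_per_sec = physical_reads / work_period_sec
        # Steps 4-5: per-type usage (utilization = throughput x service time x 100).
        logical_usage = logical_per_sec * LOGICAL_READ_TIME * 100
        physical_usage = physical_per_sec * PHYSICAL_READ_TIME * 100
        # Steps 6-8: writes, inflated by the RAID multiplier.
        total_writes = writes_per_tx * tx_count * raid_multiplier
        writes_per_sec = total_writes / work_period_sec
        write_usage = writes_per_sec * CPU_WRITE_TIME * 100
        # Step 9: total usage for this transaction type.
        return logical_usage + physical_usage + write_usage

    usage = cpu_usage_for_transaction(reads_per_tx=50, writes_per_tx=10,
                                      tx_count=25_000, work_period_sec=8 * 3600,
                                      cache_hit_ratio=0.90, raid_multiplier=2)

    # Add enough CPUs to stay below the 75% knee of the curve.
    cpus = max(1, math.ceil(usage / 75))
    print(f"Total CPU usage: {usage:.1f}%  ->  {cpus} CPU(s), "
          f"{usage / cpus:.1f}% each")

In a real study the function would be called once per transaction type and the results summed before dividing by the number of CPUs, exactly as the formulas above describe.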

Note You may wonder why processor speed is not a factor in these calculations. In fact we do use it, indirectly: processor speed is included in the service time, the total time spent processing a transaction.

Obtaining read and write operation times. To obtain the read and write operation times for your system, first enable the disk performance counters by entering the following command in an MS-DOS window:

Diskperf -y

Then start the Performance Monitor and look for the Avg. Disk sec/Read and Avg. Disk sec/Write counters in the PhysicalDisk object. Note that these counters give the average times of physical read and write operations; do not mistake them for logical read times.

Collecting usage data for a single CPU. Once the system is in operation, CPU usage must be tracked, just as memory usage is. The Performance Monitor contains a number of counters related to the usage of individual CPUs; they are found in the Processor object. For planning purposes, the following counters are the most useful:

% Processor Time The percentage of time the processor is busy executing instructions. An instruction is the basic unit of execution in a computer; a thread is the object that executes instructions; and a process is the object created when a program is run. This counter can be interpreted as the fraction of time spent doing useful work.

% Privileged Time The percentage of processor time spent in privileged mode. In privileged mode, the Windows NT service layer, the Executive routines, and the Windows NT kernel execute; the drivers for most devices other than graphics accelerators and printers also run in privileged mode.

% User Time The percentage of processor time spent in user mode. In user mode, all applications and environment subsystems execute. The graphics engine, graphics device drivers, printer device drivers, and the window manager also run in user mode. Code running in user mode cannot damage the integrity of the Windows NT Executive, kernel, or device drivers.

% Interrupt Time The percentage of processor time spent handling hardware interrupts. Interrupts execute in privileged mode, so interrupt time is a component of % Privileged Time. This counter can help identify the source of excessive time spent in privileged mode.

Interrupts/sec The average number of device interrupts the processor experiences per second. A device issues an interrupt to the processor when it has completed a task or requires attention. Devices that can generate interrupts include the system timer, the mouse, data communication lines, network cards, and other peripheral devices. During an interrupt, normal thread execution is suspended, and an interrupt can switch the processor to another thread with a higher priority. Clock interrupts are frequent and periodic and run as background interrupt activity.

When you conduct a capacity planning study, you do not have to use all of these counters; choose counters according to the depth of the study. At a minimum, however, you should collect data with the % Processor Time counter.

Collecting usage data for multiple CPUs. System-wide averages for multiple CPUs can be captured through the Performance Monitor using the following counters in the System object:

% Total Processor Time The sum of the % Processor Time of each processor, divided by the number of processors in the system.

% Total Privileged Time The sum of the % Privileged Time of each processor, divided by the number of processors in the system.

% Total User Time The sum of the % User Time of each processor, divided by the number of processors in the system.

% Total Interrupt Time The sum of the % Interrupt Time of each processor, divided by the number of processors in the system.

Total Interrupts/sec The average number of device interrupts experienced by each processor per second. It indicates how busy the system's devices are.

Analyzing the CPU data. The data obtained with these counters can be used to predict the growth in the usage of specific CPUs, since that growth can increase response time. Figure 6-6 shows CPU usage from October 22, 1999, to January 14, 2000.

Note that CPU usage keeps rising until it reaches the critical value of 75 percent on February 18, 2000.

Figure 6-6 Linear CPU usage prediction analysis

The more data points you collect, the more precise the prediction.

Disk subsystem capacity planning

Now that we have planned the capacity of memory and processors, we can start the capacity planning of the disk subsystem. This part is relatively easy, because most of the information we need has already been calculated. First we need to know the total number of I/Os the system must handle; this was determined in the processor capacity plan. Next we need to know the size of the database, which the database designers can provide.

When planning the capacity of the disk subsystem, it is important to consider both the size of the planned database and the number of I/Os per second, because either of these factors can make the number of disks we need quite large. Many people are quite surprised when they learn how many disks their database requires. But additional disks provide more data access points. If there is only one data access point, you have effectively built in a bottleneck: once all transactions must pass through that bottleneck, response times increase. The rule of thumb is to create as many data access points as possible; with enough access points, you will not run into the bottleneck that too few disks can cause. Of course, more disks may also be required simply because the number of I/Os generated is too large for fewer disks to handle, and that requirement can be more pressing than merely holding the database.

For example, suppose we have a 10-GB database that generates 140 I/Os per second. The rule for disk space usage is 85 percent, so 12 GB of disk space is needed to accommodate the database. Now look at our disk requirement from the I/O point of view. If each disk can perform 70 I/Os per second and the disk I/O capacity rule is also 85 percent per disk, then we need 3 disks to handle 140 I/Os per second. Since the I/O capacity analysis produces the larger result—3 disks—we should use 3 disks (whose combined space covers the 12 GB we calculated), each rated at 70 I/Os per second. Note that this is only the minimum requirement; depending on the situation, you can use more, higher-capacity disks. Note also that we have not yet factored the RAID configuration into this analysis.

Note When sizing the disk subsystem, always apply the 85 percent usage rule both to the disk space needed for the database and to the number of I/Os generated by the users. Whichever calculation produces the larger result should be used as the number of disks. Also remember that 85 percent is the absolute maximum disk usage; in practice, usage should be kept below 85 percent. Remember as well that too many I/Os per second on a disk can create a bottleneck and thus lengthen response times.
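The worked example above (a 10-GB database generating 140 I/Os per second) can be reproduced with a few lines of code. The 70-I/Os-per-second rating and the 85 percent rule come from the text; the 9-GB disk size is an assumption borrowed from the earlier discussion of disk space, used here only for illustration.

    import math

    def disks_needed(db_size_gb, io_per_sec,
                     disk_size_gb=9.0,      # assumed disk capacity (9-GB disks)
                     disk_io_rate=70.0,     # assumed rating: 70 I/Os per second
                     max_usage=0.85):       # 85% rule for both space and I/O
        # Disks needed to hold the database, filling each disk to at most 85%.
        by_space = math.ceil(db_size_gb / (disk_size_gb * max_usage))
        # Disks needed to carry the I/O load at no more than 85% of the rating.
        by_io = math.ceil(io_per_sec / (disk_io_rate * max_usage))
        # Use whichever calculation demands more disks.
        return max(by_space, by_io), by_space, by_io

    total, by_space, by_io = disks_needed(db_size_gb=10, io_per_sec=140)
    print(f"space: {by_space} disk(s), I/O: {by_io} disk(s), use {total} disk(s)")

With these figures the I/O calculation requires three disks while the space calculation requires two, so three disks is the minimum, just as the example concludes.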

Let us now look at how to determine the number of disks needed to meet the system's requirements, this time taking the RAID configuration into account. Three major components must be stored: Windows 2000 and SQL Server, the transaction log files, and the database itself. The number of disks required for each of these three components must be calculated first, and the results then added together to obtain the total number of disks the system requires.

Disk Requirements for Windows 2000 and SQL Server

First calculate the number of disks required for the first component — the Windows 2000 Server operating system and the SQL Server software. In general, this portion should be a separate volume configured as RAID 1 (mirrored disks), so that recovery can be as fast as possible. The number of disks needed depends on the disk size, but usually the Windows 2000 Server operating system and the SQL Server software fit comfortably on a single disk. The calculation is as follows:

Operating system and SQL Server disks = (disks needed for Windows 2000 Server and SQL Server) × (RAID fault-tolerance factor)

In this example, the RAID fault-tolerance factor is 2 (the operating system and SQL Server fit on one disk, and the RAID 1 volume uses a second disk as its mirror). Configuring the operating-system volume as RAID 5 or RAID 0 is not recommended; RAID 1 allows the operating system and the databases to be recovered as quickly as possible.

Disk Requirements for the Transaction Log Files

Next, calculate the number of disks required for the system's transaction log files. This number is determined by the maximum number of writes per second the log must sustain. Remember that the disks holding the transaction log are among the most important in the system: they provide the audit trail — the "before" image of the database — and they are absolutely indispensable when something goes wrong with the database. The audit trail makes it possible to roll back transactions interrupted, for example, by a disk failure. The number of write operations was already calculated during processor capacity planning. For now we will assume a figure: suppose the transactions generate 1,500,000 write operations against a RAID 0 volume. If the log file disks are configured as RAID 1 instead, there will be 3,000,000 write operations in an 8-hour period, or 104.16 writes per second. (Remember that with RAID 1 the number of writes per second is twice what it would be with RAID 0.) The formula for the number of disks required is as follows:

Transaction log disks = (writes per second) ÷ (disk I/O capacity)

Remember that the disk I/O capacity should be taken as 85% of the disk's maximum rate. The result of dividing the writes per second by the maximum disk I/O rate should be rounded up to a whole number. Finally, make sure the writes-per-second figure already reflects the increase caused by the RAID level. Continuing the example, if we use disks rated at 70 I/Os per second with an 85% ceiling, then 104.16 writes per second requires about 1.75 disks, which rounds up to 2 disks.

Disk Requirements for the Database

The final step is to calculate the number of disks required for the database itself. Remember that this number must be calculated from two different factors — the size of the database and the number of I/Os per second — and whichever factor yields the higher value determines the number of disks needed.

Calculating the Number of Disks Required by the Database Size

To determine how many disks are needed to hold the database, use the following formula:

Database disks = (database size) ÷ (disk size) + (RAID increase factor)

Remember that the disk size used here should be 85% of the disk's maximum space. In addition, the database size and the disk size in the formula must be expressed in the same units (for example, KB or MB). The RAID increase factor is the number of additional disks needed to support fault tolerance: with RAID 1 it equals the number of disks required to store the database, while with RAID 5 only one additional disk is required. Using 12 GB disks and RAID 5, we therefore need 2 disks.

Note RAID 5 is recommended for the database disks.
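As a short sketch of the size-based formula, the following Python fragment repeats the example above with 12 GB disks and RAID 5; the single extra parity disk is the RAID increase factor described in the text.

import math

# Minimal sketch of: database disks = (database size) / (disk size * 0.85) + RAID increase factor
database_gb = 10.0
disk_capacity_gb = 12.0          # 12 GB disks, as in the example
usage_limit = 0.85
raid5_extra_disks = 1            # RAID 5 adds one disk for fault tolerance

data_disks = math.ceil(database_gb / (disk_capacity_gb * usage_limit))
print(f"Database disks by size (RAID 5): {data_disks + raid5_extra_disks}")   # 1 + 1 = 2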

Calculating the Number of Disks Required by the Database I/O

As our earlier example showed, the number of disks calculated from the database I/O can completely change the recommended number of database disks. To calculate this quantity, follow these steps:

First, calculate the total number of read operations the system must process, using the following formula:

Total read operations = (read operations per transaction) × (total number of transactions)

Suppose each transaction performs 500 read operations and there are 50,000 transactions in total; the total number of reads is then 25,000,000. Next, determine how many of the read operations are physical I/Os and how many are logical I/Os, using the following formulas:

Total logical read operations = (total read operations) × (cache hit rate)

Total physical read operations = (total read operations) − (total logical read operations)

Assuming a target cache hit rate of 90%, the total number of logical reads is 22,500,000 and the total number of physical reads is 2,500,000. Then convert the total number of physical reads into physical reads per second, using the following formula:

Physical reads per second = (total physical read operations) ÷ (work cycle)

The work cycle is the length of time, in seconds, over which the workload is executed; this value was also required when calculating CPU capacity. In our example, with a work cycle of 8 hours (28,800 seconds), the result is 86.8 physical reads per second. Now calculate the total number of write operations the system must perform, using the following formula:

Total write operations = (write operations per transaction) × (total number of transactions) × (RAID increase factor)

Suppose the system uses RAID 5 and each transaction performs 10 write operations; the total number of writes is then (10) × (50,000) × (3) = 1,500,000. Convert this total into physical writes per second, using the following formula:

Physical writes per second = (total write operations) ÷ (work cycle)

In this example there are 1,500,000 physical writes and the work cycle is 8 hours (28,800 seconds), giving 52.1 physical writes per second. Now calculate the number of physical I/Os per second, using the following formula:

Physical I/Os per second = (physical reads per second) + (physical writes per second)

In this example, 86.8 physical reads per second plus 52.1 physical writes per second gives 138.9 physical I/O operations per second. Finally, calculate the total number of database disks, using the following formula:

Database disks = (physical I/Os per second) ÷ (disk I/O capacity) + (RAID increase factor)

Remember to apply the 85% rule when determining the disk I/O capacity; the RAID increase factor is again the number of additional disks required to support fault tolerance. At 138.9 physical I/Os per second, with disks rated at 70 I/Os per second and RAID 5 in use, we need 3 disks to carry the I/O load plus 1 additional disk for RAID 5 fault tolerance, or 4 disks in total. So, calculated from the size of the database, our minimum requirement is 2 disks; calculated from the I/O activity, we need 4 disks (using RAID 5). The conclusion is that to accommodate the database we must provide 4 disks — the larger of the two results.
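The whole I/O-based procedure can be collapsed into a few lines. The following Python sketch repeats the worked example above (500 reads and 10 writes per transaction, 50,000 transactions, a 90% cache hit rate, an 8-hour work cycle, disks rated at 70 I/Os per second, and RAID 5); it is an illustration of the arithmetic, not a general sizing tool.

import math

# Minimal sketch of the I/O-based database disk calculation worked through above.
reads_per_tx, writes_per_tx, transactions = 500, 10, 50_000
cache_hit_rate = 0.90            # fraction of reads satisfied from cache (logical reads)
raid5_write_factor = 3           # each write becomes three writes under RAID 5, per the text
work_cycle_seconds = 8 * 3600    # 8-hour work cycle
disk_io_rating = 70.0            # I/Os per second each disk can sustain
usage_limit = 0.85               # 85% rule
raid5_extra_disks = 1            # one additional disk for RAID 5 fault tolerance

physical_reads = reads_per_tx * transactions * (1 - cache_hit_rate)          # 2,500,000
physical_writes = writes_per_tx * transactions * raid5_write_factor          # 1,500,000

reads_per_sec = physical_reads / work_cycle_seconds                          # ~86.8
writes_per_sec = physical_writes / work_cycle_seconds                        # ~52.1
io_per_sec = reads_per_sec + writes_per_sec                                  # ~138.9

db_disks = math.ceil(io_per_sec / (disk_io_rating * usage_limit)) + raid5_extra_disks
print(f"Physical I/Os per second: {io_per_sec:.1f}")
print(f"Database disks by I/O (RAID 5): {db_disks}")                         # 3 + 1 = 4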

How Many Disks Does the Whole System Need?

To find the total number of disks required for the system, add up the results of the three calculations above. We need 2 disks for Windows 2000 Server and SQL Server, 2 disks for the transaction log files, and 4 disks for the database, so the entire system requires 8 disks.

Most designers who reserve headroom use the thresholds mentioned earlier (75% CPU usage, 85% disk usage, and so on) as maximum usage figures. In most cases you may want to use smaller values. Of course, this choice is not always the designer's to make; many external factors, such as the company's hardware budget, influence design decisions. For a well-designed system, a reasonable target is a maximum CPU usage of 65% and a maximum disk usage of 70%. In any case, you should look for the usage levels best suited to the type of system being designed.

Collecting Disk Usage Data

Once the system has been set up and is running, disk usage data should be collected and the need for changes continually assessed. The system may grow to serve more users (and therefore more transactions), the demands on the database may change (so the database grows), and so on. When performing a post-capacity-planning study of disk usage, the following counters should be tracked in Performance Monitor. They are found in the PhysicalDisk object.

% Disk Time  The percentage of time the selected disk is busy servicing read or write requests.

% Disk Read Time  The percentage of time the selected disk is busy servicing read requests.

% Disk Write Time  The percentage of time the selected disk is busy servicing write requests.

Avg. Disk Read Queue Length  The average number of read requests queued for the selected disk during the sampling interval.

Avg. Disk Write Queue Length  The average number of write requests queued for the selected disk during the sampling interval.

Avg. Disk Queue Length  The average number of read and write requests queued for the selected disk during the sampling interval.

Disk I/O Count Per Second  The average number of I/O operations performed per second by the disk array over the measurement period. This value cannot be read directly from Performance Monitor; it must be derived by adding two counters that are available — Disk Reads/sec and Disk Writes/sec.

Disk Space Used  The amount of space currently used by the database or the operating system. This value cannot be obtained through Performance Monitor; use Disk Administrator to get this information.

Disk Space Available  The amount of disk space currently available. This value also cannot be obtained through Performance Monitor; use Disk Administrator to get it. To start Disk Administrator, choose Start / Programs / Administrative Tools (Common) / Disk Administrator. For more information about Disk Administrator, use its Help button.

Analyzing Disk Usage Data

Analyzing disk usage data is a straightforward process. For example, if we want to analyze a system's remaining capacity, we should collect information about available disk space to determine how much space remains free. Figure 6-7 shows the available space on a database volume, in MB.

Figure 6-7 Prediction analysis of available disk space

As you can see, at the start of the analysis there were 2.05 MB of free space out of 6.15 MB, which means about 67% of the disk was in use. By January 14, 2000, the free space had dropped to 1.5 MB, indicating that about 75% of the disk was in use. Drawing a trend line with Microsoft Excel shows that by February 18, 2000, only 1.3 MB will be available, that is, about 83% of the disk will be in use. At that point the DBA may need to purchase additional disks.

Network Capacity Planning

We leave network capacity planning until last because it may not be possible to obtain enough capacity-planning information from inside the system. Performance Monitor does not provide a counter that directly shows network utilization, so it is difficult to size the network. The network, however, is often the weakest link in the system chain, so it deserves careful evaluation. To size the network you need to know how many users are connected to the system at the same time, how many messages per second pass through the system, and the average size of those messages (in bytes). From this information you can estimate the minimum network capacity required, in bits per second.

For example, suppose the target system carries the following traffic: 10 users each send 25 messages per minute, and each message is 259 bytes long. That works out to 250 messages per minute carrying a total of 64,750 bytes, which is 518,000 bits per minute, or 8,633.33 bits per second. A small network can easily handle this workload. The following formula can be used to estimate the required network size:

Network size = (messages per second) × (message length in bytes) × (bits per byte)

This calculation tells you the ideal transmission capacity the link should provide, in bits per second. Beyond monitoring network usage, there is little more that can be done for network capacity planning. Moreover, in most cases an existing network must be used; there is rarely another option unless the given network simply cannot support your system.

Collecting Network Usage Data

When conducting a post-capacity-planning study, track the bytes-per-second figure for the network interface in Network Monitor. This figure indicates how heavily the communication line is loaded.

Note "Installing Network Monitor" describes how to install Network Monitor in Windows 2000 Server.

Analyzing Network Usage Data

To analyze the network data, first calculate the network capacity (that is, the network size) described above, and then check the bytes-per-second figure for the network interface. With these two values, and with both expressed in the same units (bits or bytes per second), the network usage can be calculated with the following formula:

Network usage = ((bytes per second through the network) ÷ (network size)) × 100
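The sizing example and the usage formula can be combined into a short sketch. The message rate and size below repeat the earlier example; the measured bytes-per-second figure is a hypothetical reading, not a value from any real counter log.

# Minimal sketch of the network-size estimate and the usage calculation above.
users = 10
messages_per_minute_per_user = 25
message_length_bytes = 259
bits_per_byte = 8

messages_per_second = users * messages_per_minute_per_user / 60.0
network_size_bps = messages_per_second * message_length_bytes * bits_per_byte
print(f"Required capacity: {network_size_bps:.2f} bits per second")          # ~8633.33

measured_bytes_per_sec = 650.0   # hypothetical bytes-per-second reading from the network
network_usage = (measured_bytes_per_sec * bits_per_byte) / network_size_bps * 100
print(f"Network usage: {network_usage:.1f}%")                                # ~60.2%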

Figure 6-8 shows an example of linear growth in network usage; the usage data and its trend line are plotted in the chart.

Figure 6-8 Network usage prediction analysis

This chart indicates that the particular network segment will reach its maximum capacity on September 28, 2000. As before, the more data points collected, the more accurate the prediction.

Selecting the Data to Collect

In post-capacity planning there is no rule stating how many counters must be collected. The choice of counters depends on the level of detail of the data and the analysis you intend to perform. Besides the counters introduced so far, Performance Monitor provides others that are useful under particular conditions. Let us look at one such situation: collecting process data.

Collecting process data is valuable when a profile analysis of the workload is required. Workload profile analysis means determining the work each user is actually performing. Performance Monitor provides several counters that help achieve this. They are quite similar to the counters in the Processor object, but in this case they collect data for individual processes. These counters are found in the Process object and include the following:

% Processor Time  The percentage of time the processor spends executing the instructions of this process, including code executed to handle certain hardware interrupts or trap conditions.

% User Time  The percentage of time this process spends executing in user mode.

% Privileged Time  The percentage of time this process spends executing in privileged mode.

Page Faults/sec  The rate of page faults generated by this process.

Elapsed Time  The total elapsed time, in seconds, that this process has been running.

Analyzing Process Data

Analyzing this data is not as complicated as it might seem. For example, if we want to analyze the processes on the system to determine what work is being performed, we can collect data with a counter such as % Processor Time, which indicates how much of the system's capacity a given operation consumes. Figure 6-9 shows the growth in usage of the CalProc query run by the accounts payable department. This information is useful because it lets us predict what will happen if the accounts payable department adds more users. In this chart, the trend line shows usage climbing steadily, reaching 30% on February 18, 2000. Assuming the accounts payable department has a total of 10 users, we can estimate that each user accounts for about 3% of the usage in February. We can then ask what happens if 3 users are added in February: the CalProc query would reach approximately 39% usage, as the sketch following Figure 6-9 illustrates.

When setting up the measurements, it is important to decide exactly what is to be analyzed, because this affects the measurement settings used. If you are conducting a post-capacity study, remember that you do not want the study itself to cause performance problems: if you decide to measure every object and use a very short measurement interval, you are effectively creating a performance problem. The shorter the interval, the more frequently the log is written, and if many counters are measured the log will become very large. If a performance analysis genuinely requires a short interval to capture a problem, multiple logs can be used. In any case, a relatively long logging interval is quite sufficient for a capacity study.

Figure 6-9 Prediction analysis of a user process
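Here is the per-user projection described above as a short sketch; the 30% figure, the ten existing accounts payable users, and the three new users repeat the example in the text, and the assumption that usage scales linearly with the number of users is the same simplification the text makes.

# Minimal sketch of the per-user workload projection for the CalProc query.
projected_usage_pct = 30.0       # predicted CalProc CPU usage in February
current_users = 10               # existing accounts payable users
new_users = 3                    # users to be added in February

usage_per_user = projected_usage_pct / current_users                   # 3% per user
projected_with_new = usage_per_user * (current_users + new_users)      # 39%
print(f"Estimated CalProc usage with {new_users} extra users: {projected_with_new:.0f}%")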

Table 6-1 lists the counters whose data you should collect in order to conduct a basic, sound capacity-planning study. Remember that not all of them are available in Performance Monitor: Available Memory can be obtained with Task Manager, and Disk Space Used and Disk Space Available can be captured with Disk Administrator.

Table 6-1 Counters available in Performance Monitor

Object           Counter
Processor        % Processor Time (collected for each individual CPU)
System           % Total Processor Time (average across all CPUs)
PhysicalDisk     % Disk Time (for each disk array configured)
PhysicalDisk     Avg. Disk Queue Length
Memory           Page Faults/sec
Memory           Available Memory
Network Segment  Total Bytes Received/Second

These settings provide a good starting point for predictive analysis of system performance. They cover the elements needed to obtain information about the CPU, disks, memory, and network without putting additional pressure on the system, while keeping the log file small. As your study of the subject deepens, counters can be added to extend the information you obtain.

Summary of This Chapter

Many enterprises have capacity planners on staff, but not every system gets the benefit of that service, and performance problems need not be solved through years of trial and error. With the basic knowledge, the correct data, and the calculation methods presented in this chapter, you can easily track and predict the capacity of your resources.

