Original link: http://www-900.ibm.com/cn/products/servers/pseries/tech/tpcc.shtml
TPC-C
The Transaction Processing Performance Council (TPC) is a non-profit organization responsible for defining transaction processing and database performance benchmarks such as TPC-C, TPC-H, and TPC-W, and for publishing objective performance data based on these benchmarks. TPC benchmark tests use a strictly defined operating environment and must be carried out under the supervision of an independent audit organization. Council members include most of the major database software vendors and server hardware vendors.
By participating in TPC benchmark tests under the specified operating environment, and by applying the techniques exercised during testing, participating vendors develop more robust and more scalable software products and hardware.
TPC-C is an industry-standard benchmark designed to measure the performance and scalability of online transaction processing (OLTP) systems. The benchmark exercises a broad range of database functions, including queries, updates, and queued small-batch transactions. Many IT professionals regard TPC-C as a valid indicator of "real-world" OLTP system performance.
The TPC-C benchmark measures the throughput, in transactions per minute (tpmC), of a simulated order-entry and sales environment. In particular, it measures the number of New-Order transactions generated per minute while the system is also executing four other transaction types (Payment, Order-Status, Delivery, and Stock-Level). An independent audit organization certifies the benchmark results, and the TPC publishes a comprehensive test report, which can be obtained from the TPC Web site (http://www.tpc.org).
Definition of tpmC: the number of New-Order transactions completed per minute, measured over an interval of at least 12 minutes on a valid TPC-C configuration.
1. TPC-C specification summary
TPC-C is aimed specifically at online transaction processing (OLTP) systems, which are also commonly referred to as business processing systems.
The TPC-C test specification simulates a fairly complex and representative OLTP application environment: a large wholesale supplier operates a number of warehouses distributed across different regions; each warehouse supplies 10 sales districts; each district serves 3,000 customers; each order contains about 10 items; and roughly 1% of the items ordered are not in stock at the local warehouse and must be supplied by a warehouse in another region.
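To make the scale of this simulated environment concrete, the following Python sketch (my own illustration, not part of the TPC-C specification; the constants simply restate the ratios above) derives the number of districts and customers from a chosen warehouse count.

    # Illustrative only: the constants restate the ratios described above.
    DISTRICTS_PER_WAREHOUSE = 10     # each warehouse supplies 10 sales districts
    CUSTOMERS_PER_DISTRICT = 3000    # each district serves 3,000 customers
    ITEMS_PER_ORDER = 10             # each order contains about 10 items
    REMOTE_ITEM_RATIO = 0.01         # ~1% of items ship from another warehouse

    def simulated_environment(warehouses: int) -> dict:
        districts = warehouses * DISTRICTS_PER_WAREHOUSE
        customers = districts * CUSTOMERS_PER_DISTRICT
        return {
            "warehouses": warehouses,
            "districts": districts,
            "customers": customers,
            # expected number of remote order lines per 1,000 orders
            "remote_lines_per_1000_orders": 1000 * ITEMS_PER_ORDER * REMOTE_ITEM_RATIO,
        }

    print(simulated_environment(warehouses=100))
    # {'warehouses': 100, 'districts': 1000, 'customers': 3000000, 'remote_lines_per_1000_orders': 100.0}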
The system must process the following five transaction types:
● New-Order: the customer enters a new order;
● Payment: update the customer's account balance to reflect a payment;
● Delivery: deliver orders (a simulated batch transaction);
● Order-Status: query the status of the customer's most recent order;
● Stock-Level: query the warehouse's inventory level so that stock can be replenished in time.
For the first four transaction types, the response time must be within 5 seconds; for the Stock-Level query, the response time must be within 20 seconds (see the sketch below).
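As a small illustration of these response-time requirements (a sketch of my own, not the official audit tooling; the limits are taken directly from the paragraph above):

    # Response-time limits as described above: 5 seconds for the first four
    # transaction types, 20 seconds for the Stock-Level query.
    RESPONSE_TIME_LIMITS_S = {
        "New-Order": 5.0,
        "Payment": 5.0,
        "Delivery": 5.0,
        "Order-Status": 5.0,
        "Stock-Level": 20.0,
    }

    def within_limit(txn_type: str, response_time_s: float) -> bool:
        """Return True if a single transaction met its response-time limit."""
        return response_time_s <= RESPONSE_TIME_LIMITS_S[txn_type]

    # A Stock-Level query that takes 12 s is acceptable; a Payment that takes 6 s is not.
    assert within_limit("Stock-Level", 12.0)
    assert not within_limit("Payment", 6.0)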
Logical structure diagram:
Flow chart:
2. Evaluation indicators
After two years of development, the TPC-C test specification was released in July 1992. Almost all vendors of hardware and software platforms in the OLTP market have published TPC-C results, and these results are continually refreshed as computer technology advances.
TPC-C results consist of two main indicators:
● Throughput (tpmC)
According to the TPC definition, throughput describes how many New-Order transactions the system can process per minute while it is also executing Payment, Order-Status, Delivery, and Stock-Level transactions. The response times of all transactions must satisfy the requirements of the TPC-C test specification.
The larger the throughput value, the better!
● Price/performance, i.e. $/tpmC, is the ratio of the test system's price (based on the U.S. quotation) to its throughput.
The smaller the price/performance ratio, the better! (A small calculation sketch follows.)
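A minimal sketch of how the two indicators relate (the figures are made-up placeholders, not published results):

    def tpmc(new_order_count: int, measurement_minutes: float) -> float:
        """Throughput: New-Order transactions completed per minute."""
        return new_order_count / measurement_minutes

    def price_performance(system_price_usd: float, throughput_tpmc: float) -> float:
        """Price/performance in U.S. dollars per tpmC (smaller is better)."""
        return system_price_usd / throughput_tpmc

    throughput = tpmc(new_order_count=9_000_000, measurement_minutes=12)   # 750,000 tpmC
    print(round(price_performance(system_price_usd=6_000_000, throughput_tpmc=throughput), 2))  # 8.0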
3. Publication of results
Each vendor's TPC-C results are published in the two forms specified by the TPC: an Executive Summary and a Full Disclosure Report. The Executive Summary describes the main test metrics, a schematic of the test environment, and the complete system configuration and price quotation; the Full Disclosure Report covers the above and, in addition, describes the setup of the entire test environment and the testing process.
P690 tpmC test value: 763,898.39
$/tpmC: 8.31
U.S. dollar quotation: 6,349,223
Number of CPUs: 32
Database: IBM DB2 UDB 8.1
Operating system: AIX 5L V5.2
Middleware: Tuxedo 8.0
Test Date: 2003.6.30
Configuration of P690 TPC-C test:
1. Back end (server): 1 x eServer pSeries 690 with 32 x 1.7GHz POWER4 processors with 128MB L3 cache per MCM (total of four MCMs), 512GB memory
2. Front end (clients): 30 x eServer pSeries 630 Model 6E4, each with 4 x 1.0GHz POWER4 CPUs with 32MB L3 cache, 16GB memory
SPECweb:
SPECweb96: measures the maximum number of Hypertext Transfer Protocol (HTTP) operations the system can sustain on the SPECweb96 benchmark without significant degradation of response time.
SPECweb99: measures the number of simultaneous connections a web server can support under a predefined workload. The SPECweb99 test harness simulates clients that send HTTP workload requests to the web server over slow Internet connections.
SPECweb99 tests web server performance
SPECweb99 is a web server benchmark developed by the Standard Performance Evaluation Corporation (SPEC). It measures the maximum number of simultaneous connections a web server can sustain while meeting specific throughput and request response-rate requirements. A connection is counted as conforming if its aggregate bit rate falls between 320 kbps and 400 kbps.
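A rough sketch of what this conformance rule implies for the server (my own illustration, not SPEC tooling): N conforming connections translate into roughly N x 320-400 kbps of sustained throughput.

    MIN_KBPS, MAX_KBPS = 320, 400    # conforming bit-rate range per connection

    def conforming(bit_rate_kbps: float) -> bool:
        """A connection conforms if its aggregate bit rate is within the range."""
        return MIN_KBPS <= bit_rate_kbps <= MAX_KBPS

    def aggregate_throughput_gbps(connections: int) -> tuple:
        """Lower/upper bound on total server throughput for N conforming connections."""
        return (connections * MIN_KBPS / 1_000_000, connections * MAX_KBPS / 1_000_000)

    # Example: 21,000 conforming connections imply roughly 6.7-8.4 Gbit/s of traffic,
    # which helps explain the many Gigabit Ethernet adapters in the results below.
    print(aggregate_throughput_gbps(21_000))    # (6.72, 8.4)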
SPECweb99 runs on a machine called the prime client, which contains the configuration files that allow the user to specify the load to be requested. The prime client also coordinates communication between the clients and the server, i.e. the system under test (SUT). Each client generates separate HTTP request streams via many child processes/threads, simulating users sending requests to the SUT. Figure 2 shows the client/server hierarchy.
Figure: Typical SPECweb99 test environment
In this test, the clients send request data to the server under test. The test specification requires that connections between the clients and the server not use a TCP segment size greater than 1460 bytes; therefore, each client reads responses in blocks of 1460 bytes or less.
Two types of load are used in the test:
Static load. The static load uses four classes of files: the smallest class increases in 0.1KB increments, the second class in 1KB increments, and the last two classes in 10KB and 100KB increments. Each directory contains 36 files, 9 files in each class.
Target requests are spread across the classes, with a secondary distribution over the 9 files within each class. The final target file mix is as follows (a small sampling sketch follows the list):
35% of requests are for files smaller than 1KB;
50% of requests are for files smaller than 10KB but at least 1KB;
14% of requests are for files smaller than 100KB but at least 10KB;
1% of requests are for files smaller than 1000KB but at least 100KB.
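The following Python sketch (my own illustration of the mix above, not SPEC's load generator) picks a file class for each request according to this distribution and then chooses one of the 9 files in that class uniformly; the size ranges assume the 0.1KB/1KB/10KB/100KB increments described earlier.

    import random

    # (probability, class label, size range in KB), restating the mix above
    CLASS_MIX = [
        (0.35, "class 0", (0.1, 0.9)),
        (0.50, "class 1", (1, 9)),
        (0.14, "class 2", (10, 90)),
        (0.01, "class 3", (100, 900)),
    ]

    def pick_request():
        """Choose a file class by the mix, then one of its 9 files uniformly."""
        r = random.random()
        cumulative = 0.0
        for probability, label, size_range in CLASS_MIX:
            cumulative += probability
            if r < cumulative:
                return label, random.randint(0, 8), size_range
        return CLASS_MIX[-1][1], random.randint(0, 8), CLASS_MIX[-1][2]

    print([pick_request() for _ in range(3)])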
Dynamic load. The dynamic load is based on advertisements and user registration. SPECweb99 uses four types of dynamic request: a standard dynamic GET, a dynamic GET with custom ad rotation, a dynamic POST, and a CGI dynamic GET. The standard dynamic GET and the CGI dynamic GET simulate a web server's simple ad-rotation feature. The dynamic GET with custom ad rotation tracks users and their preferences, so advertisements can be customized for different users. Finally, the dynamic POST implements a user registering on the corresponding web site.
P690 SPECweb99 test value: 21,000
Web server: Zeus 4.0
Operating system: AIX 5L V5.1 (64-bit)
Number of CPUs: 16
Test Date: 2001-10-1
Test configuration: 16 x 1.3GHz POWER4 processors w/ 1440KB unified on-chip L2 cache, 192GB memory, 32 x IBM Gigabit Ethernet-SX PCI controllers, 32 x Gigabit Ethernet networks (1 Gigabit/sec), 96 x clients (4 x 375MHz POWER3-II, RS/6000 44P-270), requested connections = 21000, max fileset size = 67319.6MB
P650 SPECweb99 test value: 12,400
Web server: Zeus 4.1r3
Operating system: AIX 5L V5.2 (64-bit)
Number of CPUs: 8
Test Date: 2002-10-1
Test configuration: 8 x 1.45GHz POWER4 processors w/ 1.5MB (I+D) unified on-chip L2 cache, 32MB unified off-chip/SCM L3 cache, 64GB memory, 8 x Gigabit Ethernet-SX PCI-X controllers, 8 x Gigabit Ethernet networks (1 Gigabit/sec), 48 x clients (6 x 668MHz RS64-IV, pSeries 620 Model 6F1), requested connections = 12400, max fileset size = 39801.28MB
P630 SPECweb99 test value: 6,895
Web server: Zeus 4.2r1
Operating system: AIX 5L V5.2 (64-bit)
Number of CPUs: 4
Test Date: 2003-2-1
Test configuration: 4 x 1450MHz POWER4 processors w/ 1536KB (I+D) unified on-chip L2 cache, 8MB unified off-chip/SCM L3 cache, 32GB memory, 4 x Gigabit Ethernet-SX PCI-X controllers, 4 x Gigabit Ethernet networks (1 Gigabit/sec), 24 x clients (4 x 375MHz POWER3-II, pSeries 640 Model B80), requested connections = 6900, max fileset size = 22199.12MB
NotesBench:
NotesBench is a test driver for benchmarking a variety of Lotus Notes workloads. Its aim is to execute custom workloads that simulate client operations. NotesBench includes "Mail" and "Mail and Database" tests; all IBM results published so far use the "Mail" workload.
P680 NotesBench test value: 150,197
Number of users: 108,000
Average response time: 0.584 seconds
Domino server version: 5.06A
Operating system: AIX 4.3.3
Number of CPUs: 4
Test Date: 2001.11.20
Test configuration: IBM eServer pSeries 680 (24 x RS64-IV/600MHz, 96GB RAM, 30 partitions)