Managing Web Site Performance
Table of Contents
Executive summary
Introducing a methodology for managing performance
Step 1. Establish performance objectives
Step 2. Monitor and measure the site
Step 3. Analyze and tune components
Step 4. Predict and plan for the future
Summary
Appendix A. Some performance management scenarios
Appendix B. Tools for monitoring performance
References
Authors: High Volume Web Site Team
More information: High Volume Web Sites Zone
Technical contact: Joseph Spano
Management contact: Willy Chiu
Date: April 23, 2001
Status: Version 1.0
PDF version also available.
Abstract
As enterprises implement Web applications in response to the pressures of e-business, managing performance becomes increasingly critical. This paper introduces a methodology for managing performance from one end of the e-business infrastructure to the other. It identifies some "best practices" and tools that help implement the methodology.
Contributors
The High Volume Web Site team is grateful to the major contributors to this article: Willy Chiu, Jerry Cuomo, Ebbe Jalser, Rahul Jain, Frank Jones, W. Nathaniel Mills III, Bill Scully, Joseph Spano, Ruth Willenborg, and Helen Wu.
Special Notice
The information contained in this document has not been submitted to any formal IBM® test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.
Executive Summary
As more of your company's business moves to the Internet, your IT organization is becoming a major focal point for such important business measures as revenue and customer satisfaction. You're enjoying unprecedented visibility, and it may not all be positive. If it hasn't already, the performance of your Web site will become critically important.
This paper deals with managing performance. More than ever before, this task requires a perspective that considers components of the infrastructure from end to end, from the front-end browsers to the back-end database servers and legacy systems. The end-to-end perspective must be shared not only by you and your operations staff, but also by application developers and Web site designers. Required as well are thoughtful objectives for performance coupled with thorough measurements of performance.
This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies. If you have not, you may want to review the white papers related to those phases of the life cycle at the same time as you consider this methodology (see References). Regardless, the methodology presented here can help you define your challenges and implement processes and technologies to meet them. Our best practices methodology for managing the performance of a high-volume Web site consists of familiar tasks:
Establish objectives
Monitor and measure the site
Analyze and tune components
Predict and plan for the future
Some benefits you can expect from adopting the end-to-end methodology include:
Proper reporting of quality-of-service metrics
Interactive and historical data on end-to-end performance
Rapid identification of the problem source
Improved support of business goals
Understanding and control of transaction costs
World-class customer support and satisfaction
The goal of implementing the end-to-end methodology is to align the system performance with the underlying business goals. The methodology, coupled with implementation of the capacity-on-demand options available from IBM's powerful server family, makes the goal achievable and sets the stage for a self-managing IT infrastructure.
Introducing a methodology for managing performance
The IT infrastructures that comprise most high-volume Web sites (HVWSs) present unique challenges in design, implementation, and management. While actual implementations vary, Figure 1 below shows a typical e-business infrastructure comprised of several tiers. Each tier handles a particular set of functions, such as serving content (Web servers such as the IBM HTTP Server), providing integration business logic (Web application servers such as the WebSphere® Application Server), or processing database transactions (transaction and database servers).
Figure 1. Multi-tier infrastructure for e-business
IBM's IT experts have been working with IBM customers to architect and analyze many of the world's largest Web sites. Figure 2 below shows how IBM's HVWS team defines the life cycle of a Web site; it also shows the categories of best practices recommended for one or more phases of the cycle. As it accumulates experience and knowledge, the HVWS team compiles white papers aimed at helping CIOs like you understand and meet the new challenges presented during one or more of the phases.
Figure 2. Life Cycle of A Web Site
Managing the performance of a high-volume Web site requires a new look at familiar tasks such as setting objectives, measuring performance, and tuning for optimal performance. First, HVWS workloads are different from traditional workloads. HVWS workloads are assumed to be high-volume and growing, serving dynamic data, and processing transactions. Additional characteristics that can affect performance include transaction complexity, data volatility, security, and others. IBM has determined that HVWS workload patterns fit into one of five classifications: publish/subscribe, online shopping, customer self-service, trading, or business-to-business. Correctly identifying your workload pattern will position you well for making the best use of the practices recommended in this and related papers. For more information about how IBM distinguishes among HVWS workloads, see the Design for Scalability white paper. Second, those performing the tasks must extend their perspective to include the e-business infrastructure from end to end. This is most effective when all participants understand the application's business requirements, how their component contributes to the application, and how a transaction flows from one end of the infrastructure to the other. Only then can they work together to optimize application performance and meet key business needs. It's often best when one person is assigned ownership of each application considered critical to the e-business; the application owner assures the customer's perspective of application performance - response time - remains the primary focus of all participants.
This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies. If you have not, you may want to review the white papers related to those phases of the life cycle at the same time as you consider this methodology. Regardless, the methodology presented here can help you define your challenges and implement processes and technologies to meet them. Figure 3 below shows our methodology for managing the performance of a high-volume Web site in the context of a multi-tier infrastructure.
Figure 3. Methodology for managing the performance of an HVWS
Our methodology consists of familiar tasks with a new twist, driven by the requirement for an end-to-end perspective, and including tools that are available now to help you get started. See Appendix A for some sample scenarios about managing performance and Appendix B for a summary of tools available from IBM, including Tivoli™, IBM's provider of e-business infrastructure management software.
Step 1. Establish Performance Objectives
The first task is to establish performance objectives for the business, the application, and operations. Performance objectives for the business include numbers of log-ons and page hits, and browse-to-buy ratios. Objectives for the application include availability, transaction response time, and total cost per transaction. Operations objectives include resource utilization (network, servers, etc.) and the behavior of the components.
You should use the results of an application benchmark test to establish the "norms." Ideally, you acquire the norms from controlled benchmark and stress testing. If this is not possible, you should closely monitor and measure the deployment of the application and use the results to produce a performance profile ranging from the average to the peak hours and/or days. Metrics should be established from outside the site (response times, availability, ease of navigation, security, etc.) and from each server tier (CPU, I/O, storage, network utilization, database load, intranet traffic rates, etc.). You need to establish thresholds so that operations can be notified when targets are near, at, or over their limits. See Appendix B for a list of tools available from IBM and Tivoli.
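To make the idea of norms and thresholds concrete, the sketch below shows one way to represent an objective as a norm plus warning and critical limits and to classify a measured value against it. It is purely illustrative; the class, metric names, and numbers are assumptions, not an IBM or Tivoli interface.

```java
// Illustrative only: a hypothetical representation of performance objectives
// and a near/at/over classification of the kind operations might alert on.
import java.util.List;

public class ThresholdCheck {

    enum Level { OK, NEAR_LIMIT, AT_LIMIT, OVER_LIMIT }

    // One objective, e.g. peak page response time: norm 2.0 s, warn at 3.2 s,
    // limit 4.0 s. Names and numbers are invented for this example.
    record Objective(String name, double norm, double warnAt, double limit) {
        Level classify(double measured) {
            if (measured > limit)   return Level.OVER_LIMIT;
            if (measured >= limit)  return Level.AT_LIMIT;   // exactly at the limit
            if (measured >= warnAt) return Level.NEAR_LIMIT;
            return Level.OK;
        }
    }

    public static void main(String[] args) {
        List<Objective> objectives = List.of(
                new Objective("page response time (s)", 2.0, 3.2, 4.0),
                new Objective("web tier CPU utilization (%)", 50, 70, 85));
        double[] measured = {3.5, 72};   // values that monitoring (Step 2) would supply

        for (int i = 0; i < objectives.size(); i++) {
            Objective o = objectives.get(i);
            System.out.printf("%-32s norm %.1f, measured %.1f -> %s%n",
                    o.name(), o.norm(), measured[i], o.classify(measured[i]));
        }
    }
}
```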
Managing against a set of norms is an ongoing process. Frequent updates to expectations and thresholds may be required. Marketing may schedule a promotion that will drive site traffic to new highs. It is important that this be planned for, to avoid the "false alarms" that can occur if the thresholds are not adjusted.
The team that sets the objectives should include representatives of each area; if that is not possible, the combined objectives should be communicated clearly to all areas, along with the emphasis on what may be considered a new paradigm, that of the end-to-end perspective.
Step 2. Monitor and Measure the Site
In this step you examine and analyze the performance of the application. You view the application as a transaction flow from the browser through the Web servers and, if applicable, to the back-end database and transaction servers, and back to the browser. You are concerned with the entities that make up the system (operating system, firewalls, application servers, Web servers, etc.) only insofar as they support the application. To understand end-to-end performance, you must understand and document the flow of each transaction type, for example, search, browse, buy, trade, etc. That done, you can use software that monitors the actual flow and alerts operations when any metric you specify exceeds the norms established in Step 1. For example, the alert informs you that your target page response time has been exceeded. You know that something in the system has degraded, but where is the slowdown occurring? How do you find the culprit?
You could instrument your application to record information at various points in the transaction flow. An open standard, Application Response Measurement (ARM), defines an API and library for these records. In addition to Tivoli, several vendors have tools to display and analyze this data.
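As an illustration of this kind of instrumentation, the sketch below times the named segments of a transaction and reports a per-segment breakdown. It is not the ARM API; the class and method names are hypothetical, and the sleeps stand in for real servlet, EJB, and JDBC work.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper, not the ARM library: record elapsed time at several
// points in a transaction flow and report where the time went.
public class FlowTimer {
    private final Map<String, Long> elapsedMs = new LinkedHashMap<>();

    /** Runs one named segment of the transaction and records its elapsed time. */
    public void time(String segment, Runnable work) {
        long start = System.nanoTime();
        try {
            work.run();
        } finally {
            elapsedMs.merge(segment, (System.nanoTime() - start) / 1_000_000, Long::sum);
        }
    }

    public void report(String transaction) {
        long total = elapsedMs.values().stream().mapToLong(Long::longValue).sum();
        System.out.println(transaction + " transaction, total " + total + " ms");
        elapsedMs.forEach((seg, ms) -> System.out.printf("  %-8s %5d ms%n", seg, ms));
    }

    public static void main(String[] args) {
        FlowTimer timer = new FlowTimer();
        timer.time("servlet", () -> sleep(40));   // simulated presentation work
        timer.time("ejb",     () -> sleep(120));  // simulated business logic
        timer.time("jdbc",    () -> sleep(60));   // simulated database call
        timer.report("buy");
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```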
Instead of recording information on every transaction, you can take averages at several points. Information about averages is nearly as good as full instrumentation, but comes at a lower cost and uses existing and transparent tools. Tools such as the WebSphere Resource Analyzer can be used to extract these averages through the resource management interface. Other available tools can:
Report on the quality of customer experiences
Analyze the Web site to verify links and enforce content policy
Aggregate Web data into an overall business view
Correlate log and performance data
Monitor availability
Use online analytic processing (OLAP) techniques to provide decision support
WebSphere Application Server provides a set of comprehensive performance metrics. For servlets and beans, these include number of requests, requests per second, execution time, and errors. Java™ metrics reported include active memory, available memory, threads active, threads idle, etc. Database connection metrics are also included: connection times, active database connections, and users waiting for database access.
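Raw counters like these become useful once they are turned into rates and averages over an interval. The sketch below shows that arithmetic for two snapshots of request count and cumulative execution time; the sample values and field names are invented for the example and are not a WebSphere interface.

```java
// Derive requests per second and mean execution time from two snapshots of
// cumulative counters. Values are illustrative only.
public class DerivedMetrics {

    record Sample(long timestampMs, long requestCount, long totalExecMs) {}

    public static void main(String[] args) {
        Sample earlier = new Sample(0,      10_000, 1_200_000);
        Sample later   = new Sample(60_000, 13_600, 1_920_000);

        long requests  = later.requestCount() - earlier.requestCount();
        double seconds = (later.timestampMs() - earlier.timestampMs()) / 1000.0;
        long execMs    = later.totalExecMs() - earlier.totalExecMs();

        System.out.printf("requests per second    = %.1f%n", requests / seconds);
        System.out.printf("average execution time = %.1f ms%n", (double) execMs / requests);
    }
}
```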
It's best to continuously monitor the site availability from outside to ensure that transactions are executing successfully and within criteria. Examine the site navigation periodically to validate the links and content. Resource monitors will need to roll up their data into an aggregate application view, and Web logs have to be analyzed and correlated with other resource data.
In a recent customer engagement, IBM's HVWS team investigated a problem of slow customer response time. The Web server was running Netscape Enterprise Server and the WebSphere Application Server for dynamic content generation using Java servlets. A middle tier used Enterprise JavaBeans™ (EJBs) to process transactions and then a JDBC call to the database tier. Using the external monitor for response times, they found that during peak hours, response times for consumer transactions increased from fourteen seconds to twenty seconds. Using the WebSphere Resource Analyzer and some DB2® tools, they collected the internal elapsed times for the application components. Figure 4 below shows the analysis of how each component contributed to total response time. Comparing the baseline time with the peak times, it's easy to see that the slowdown occurs in the servlet tier. The WebSphere Resource Analyzer showed that the application server was running out of worker threads under peak load. Allocating additional worker threads eliminated the slowdown.
Figure 4. Application response times - baseline vs. peak
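The comparison behind Figure 4 is simple arithmetic: subtract each component's baseline elapsed time from its peak elapsed time and see where the growth is concentrated. The sketch below illustrates that step with made-up numbers; they are not the engagement's actual measurements.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Compare per-component elapsed times at baseline and at peak and flag the
// tier with the largest growth. The figures are invented for illustration.
public class TierComparison {
    public static void main(String[] args) {
        Map<String, double[]> secondsByTier = new LinkedHashMap<>();
        secondsByTier.put("network",  new double[]{2.0, 2.1});
        secondsByTier.put("servlet",  new double[]{4.0, 9.5});
        secondsByTier.put("ejb",      new double[]{5.0, 5.3});
        secondsByTier.put("database", new double[]{3.0, 3.1});

        String worstTier = null;
        double worstGrowth = 0;
        for (Map.Entry<String, double[]> e : secondsByTier.entrySet()) {
            double baseline = e.getValue()[0], peak = e.getValue()[1];
            double growth = peak - baseline;
            System.out.printf("%-9s baseline %4.1f s  peak %4.1f s  delta %+.1f s%n",
                    e.getKey(), baseline, peak, growth);
            if (growth > worstGrowth) { worstGrowth = growth; worstTier = e.getKey(); }
        }
        System.out.println("largest slowdown: " + worstTier + " tier");
    }
}
```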
Step 3. Analyze and Tune Components
So far, the methodology has provided objectives, measurements, and application insights. Thus it has allowed you to understand, monitor, and report on end-to-end performance. It has also allowed rapid problem determination. When performance issues come up, you can quickly investigate the application and isolate an individual component. Appendix A contains scenarios that are based on real events and demonstrate how components are analyzed and tuned.
In this step, you analyze and tune specific components. One common question: does the application scale gracefully? In general, scalability refers to a component's ability to adapt readily to a greater or lesser intensity of use, volume, or demand while still meeting business objectives. You want to assure that your application scales smoothly wherever deployed without experiencing thrashing, bottlenecks, or response time difficulties. You need to examine how your application uses resources: you're interested in CPU consumption per transaction and I/O and network overhead. See also Design for Scalability, our HVWS paper that recommends which scaling techniques should be applied to specific components. Another important question: is the application meeting economic criteria? Now that resource consumption is understood, you know the "cost per transaction" and you can assess whether the application is using resources as projected by the performance objectives. You want to consider the best practices pertaining to scalability and page design and learn what's needed to optimize how the resources in each tier are used. The application owner uses this to work with development, operations, and design to control and/or improve the efficiency of the application. In this way costs are held on budget.
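The "cost per transaction" figure mentioned above is just the cost of the resources consumed over an interval divided by the transactions completed in it. The sketch below illustrates that calculation; every rate and count in it is an assumption for the example, not a measured or published cost.

```java
// Cost per transaction from resource consumption over one interval.
// All rates and counts are hypothetical.
public class CostPerTransaction {
    public static void main(String[] args) {
        double cpuSecondsUsed   = 5_400;      // CPU seconds consumed by the tier in the interval
        double costPerCpuSecond = 0.05;       // assumed amortized server cost
        double ioOperations     = 2_000_000;  // I/O operations in the interval
        double costPerIo        = 0.00001;    // assumed
        long   transactions     = 50_000;     // transactions completed in the interval

        double totalCost = cpuSecondsUsed * costPerCpuSecond + ioOperations * costPerIo;
        System.out.printf("resource cost        = $%.2f%n", totalCost);
        System.out.printf("cost per transaction = $%.4f%n", totalCost / transactions);
    }
}
```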
We used the methodology recently to benchmark a customer's application and found that throughput seemed to be stalled in the database server. Furthermore, the database was consuming more resources than was expected based on the historical archived data. The DBA ran the analysis tools and quickly determined that one of the application SQL statements was forcing a full table scan (very expensive, very bad). This had not had any measurable effect during the initial deployment of the application with a limited number of customers. However, as the number of customers grew, the size of the database increased significantly. The DBA was able to define an alternate index into the table, test the change, and resolve the problem within a short time. It was the methodology that pointed us quickly to the database tier and allowed us to determine the cause. The all-important question: can response time be improved? Using the component response times, the application owner can see where tuning effort is best spent.
In one recent engagement, the customer help desk was flooded with complaints of slow or nonexistent performance. The senior management was concerned that the system seemed to be failing and IT seemed unable to tell them why. Using our methodology, we accessed the site with the WebSphere Studio Page Detailer to analyze page response times. Page Detailer showed us that response times were long due to excessive delays in obtaining TCP/IP socket connections. We investigated the intranet, firewalls, and site connectivity. It turned out that when the site went online, the firewalls had been set up to allow a fixed number of concurrent socket connections. As traffic increased (the site was succeeding), more and more customers contended for the same number of connections. This was easily corrected. In this case, as in many others, the solution seems obvious once you isolate the component; it is the methodology that allows us to do so. Figure 5 below shows tools and technologies available to monitor and analyze Web site components. You can see, for example, that you can monitor response time proactively using WebSphere Studio Page Detailer and Tivoli Web Services Manager (TWSM). See Appendix B for more detail about some available tools.
Figure 5. Tools Available to Monitor and Analyze Web Site Components
Step 4. Predict and Plan for the Future
Sadly, none of us can predict the future. However, an increasing amount of valuable information and useful tools are available to help you plan proactively to keep your Web site serving customers as they expect to be served and to avoid the problems that plague busy sites.
Figure 6 below shows one week of page hits for one of IBM's retail customers. All of the days have essentially the same pattern with predictable peaks and valleys. This site showed no "weekend effect," which may not be true for its "brick and mortar" store, nor for other retailers. This kind of information enables site personnel to prepare for peaks and use the valleys for other operations when needed.
Figure 6. Retailer usage pattern over one week
While a typical week, as shown in Figure 6 above, can be counted on, a retailer also has to plan for seasonal rushes when peaks can easily exceed those of a typical week. Figure 7 below shows a retail site over six months, including the annual holiday period when the number of hits triples. During this kind of load, the site must be at its best, if possible free of other operations.
Figure 7. Retail Customer Seasonal Peaks
Retailers are not the only e-businesses facing seasonal demands. Figure 8 below shows how the number of hits for a bank grew over the months approaching tax time. Clearly, the financial sites have their own version of weekly and seasonal peaks and valleys.
Figure 8. Hit Rates over Six Months for a Financial Site
These examples demonstrate that it is possible to monitor your site and detect trends from which you can plan for the future and meet your business objectives. Your site will have peaks and valleys. You can measure them. You can reasonably predict when your peaks will occur, and you can position the resources you need to handle the demand and serve your customers (and bring them back!).
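A simple way to turn this kind of trend data into a planning number is sketched below: take the observed daily peaks, find the typical-week peak, and apply a seasonal multiplier of the kind visible in Figure 7. The hit counts and the multiplier are assumptions chosen for illustration, not data from the sites shown above.

```java
import java.util.stream.LongStream;

// From observed daily peak hit rates to a capacity-planning target.
// All numbers are illustrative.
public class PeakPlanning {
    public static void main(String[] args) {
        long[] dailyPeakHitsPerHour = {42_000, 45_000, 44_000, 47_000, 46_000, 30_000, 28_000};
        double seasonalMultiplier = 3.0; // assumed ratio of holiday peak to a typical week

        double averagePeak = LongStream.of(dailyPeakHitsPerHour).average().orElse(0);
        long typicalPeak   = LongStream.of(dailyPeakHitsPerHour).max().orElse(0);

        System.out.printf("average daily peak : %.0f hits/hour%n", averagePeak);
        System.out.printf("typical-week peak  : %d hits/hour%n", typicalPeak);
        System.out.printf("seasonal target    : %.0f hits/hour%n", typicalPeak * seasonalMultiplier);
    }
}
```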
Your trend data should suggest whether and when additional site components are needed. Powerful new servers have options, as well, that can generate capacity based on predicted workload. IBM can help you clarify which components match your particular requirements and objectives. See the Planning for Growth paper to learn about our capacity planning methodology and the HVWS Simulator for WebSphere.
Summary
Managing the performance of a high-volume Web site is challenging, exciting, and possible. Following a methodology such as the one presented in this paper will help guide you and your team toward tasks they can understand and goals they can achieve. The success of your company's e-business depends on the tools and techniques your IT team chooses. There are many available, and more are coming, as well as capacity-on-demand options from IBM's powerful server family that set the stage for self-managing IT infrastructures. As always, their use succeeds best in the context of a process.
The "BEST PRACTICES" Methodology for Managing A High-Volume Web Site Includes developing an end-to-end personfts developing an end-to-end person confective of the site and backing these familiar steps:
Establish objectives
Monitor and measure the site
Analyze and tune components
Predict and plan for the future
Using this methodology, your team can help your company meet the revenue and customer satisfaction objectives of its e-business and enjoy improved IT performance management benefits, such as:
Proper reporting of quality-of-service metrics
Interactive and historical data on end-to-end performance
Rapid identification of the problem source
Improved support of business goals
Understanding and control of transaction costs
World-class customer support and satisfaction
IBM's experience with high-volume Web sites has yielded the best practices described in this and related papers.
Appendix A. Some performance management scenarios
This appendix contains three brief scenarios that are based on real events and demonstrate the principles of our methodology for managing performance.
CIO
When reviewing his schedule for the upcoming week, the CIO notes a midweek meeting with the marketing department, a Tuesday working lunch with his colleague from Finance, and the monthly CEO staff meeting on Thursday. He works with his assistant to be sure he takes appropriate information to each meeting.
On Tuesday he will take the latest reports showing costs, projected capacity over the next year, and likely capital spending. The cost chart in Figure 9 below shows, at a high level, the cost per transaction and the cost breakdown by tier. The capacity chart in Figure 10 below illustrates the expected growth in the number of users and transactions. These expectations were jointly reached with the marketing group. The CIO will show his Finance colleague how the increase in workload drives a needed increase in capacity and, thus, capital spending for next year. He points out that operations is working closely with application development to examine costs. They have identified where improvements can be made in the application and have projected the cost savings in terms of cost per transaction and reduced capital spending. He uses the cost savings chart in Figure 11 below to show how the proposed improvements will reduce the cost per transaction more effectively than the in-plan improvements.
Figure 9. Average cost per Web transaction
Figure 10. Current and Projected System Load
Figure 11. Cost Savings with proposed enhancement
At the marketing meeting he brings the charts that report system availability, response time, transaction rates, and an analysis of consumer navigation experience. Marketing is concerned about an upcoming promotion. They expect that it will drive traffic to new highs and worry that the system will slow down. Having anticipated this line of discussion, the CIO brings out charts showing the current peak demand on the system and the amount of available overhead. He is able to demonstrate that the system has the headroom to handle up to a 30% increase in workload while still maintaining current response times during peak hours. His colleagues in marketing are pleased to see that IT has anticipated the effects of the ad campaign and are satisfied that the system will be able to contain the burst of traffic. Finally, our CIO prepares for the CEO monthly staff meeting. Each major function is expected to present a short highlight report on the current and upcoming months. The CIO will show charts that illustrate system availability, response time, and costs vs. targets. He will then discuss upcoming events, like the marketing campaign, and his plans to support them. He expects the presentation to go well because he is confident that the system is providing him the proper information to support his role.
Content Problems
Last week, marketing, sales, development, and IT proudly deployed a new application that not only significantly enhanced the function of the e-business site, but also dramatically improved the look and feel of the site for the consumer.
After just a few days, however, IT noted that the Tivoli Web Services Manager was producing alerts that indicated that nearly all pages were slowing down and response time was approaching the maximum allowed by the service level agreement. Using the Tivoli Web Services Analyzer to examine site traffic patterns, IT observed that the site slowed down in proportion to the number of new visitors and customers. All pages were affected, indicating the problem was systemic. IT contacted Development to review the new content. Development remained puzzled, as they had tested the new pages thoroughly before migrating them into production.
The application owner convened the performance team. One member was detailed to examine page performance using the WebSphere Studio Page Detailer. He reported, "Page Detailer shows that socket connect and SSL connect times are fine. This would seem to absolve the network, firewalls, routers, and TCP/IP layers. It also shows that transactions are processing well within criteria, so there does not seem to be a problem with that part of the system. However, Page Detailer does show that static content (such as GIFs) slowed down dramatically after the new application was deployed."
Armed with this information, the team quickly identified the Web server as the likely problem area since it is responsible for serving up static content. As this shop was using Netscape Enterprise Server, they asked for a PerfDump to be executed. PerfDump reports on the internal performance of the server. Within minutes they were able to examine the output and determine that the cache hit ratio for static content had degraded. Clearly the addition of the new application had added much new static content to be served, and the Web server cache was now too small to efficiently manage the new total. A quick look at the operating system input/output statistics using VMSTAT confirmed that real I/O had jumped dramatically within a day or so of the new application rollout. IT was able to modify the cache size setting in the Web server and deploy the change at the next scheduled maintenance period.
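The arithmetic behind that diagnosis is worth seeing once: the hit ratio is hits divided by total requests, and every miss implies a read from disk. The counters below are invented to illustrate the before-and-after pattern; they are not PerfDump output.

```java
// Cache hit ratio before and after the new content, with the disk reads
// implied by the misses. Counter values are illustrative only.
public class CacheHitRatio {
    static void report(String label, long hits, long misses) {
        double ratio = (double) hits / (hits + misses);
        System.out.printf("%-20s hit ratio %5.1f%%  disk reads %,d%n", label, ratio * 100, misses);
    }

    public static void main(String[] args) {
        report("before new content", 950_000, 50_000);
        report("after new content",  620_000, 380_000);  // same cache size, many more objects
    }
}
```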
Bottleneck
The e-business site was launched last month, just in time for the TV ad campaign. To date the site is successful. Traffic is growing as predicted, sales are strong, and complaints have been quite low. However, in the past few days, complaints about slow response times have begun to come in.
IT employs the Tivoli Web Services Manager to examine the site. They determine that only transaction pages have slowed down; the number of transactions (sales, etc.) continues to rise, while the number successfully processed is stagnant. Customers are complaining to the help desk and by e-mail about the slow response times. Analysis of the access logs produced by Tivoli Web Services Analyzer (TWSA) confirms that many customers are leaving the site without waiting for their business to complete. Later they complain about not knowing if their business was successfully processed. A transaction in doubt is the worst possible customer problem, one that can destroy confidence in the site and the enterprise. It's apparent there is a problem in the transaction processing. IT still checks out the Web server to eliminate it as a suspect. The team extracts the overall response times for transactions (from the Tivoli Web Management solution) and uses the WebSphere Resource Analyzer to obtain the average elapsed times for the servlet and bean during the slowdown. Rapid subtractions demonstrate that the increased load extended the execution time of the bean. In fact, when a specific transaction rate is reached, the application cannot process any more, so the transaction rate remains fixed.
The Resource Analyzer data for the bean also showed that the application server threads were busy processing requests, while VMSTAT showed the CPU was less than 50% busy with no I/O or page wait. Believing that the bottleneck was found, the team recommended that additional threads be assigned to the pool so that the bean could process more concurrent requests. Before deploying such a change, the team runs Mercury Interactive LoadRunner® to create an artificial load on the test system. They then add threads to the pool, expecting the bottleneck to disappear. They rerun the test with the new setting, but the bottleneck still occurs at nearly the same transaction rate. Resource Analyzer confirms that all the threads, including the new total, are still in use while response time continues to rise.
Now they know that the thread starvation is a symptom of the problem but not the cause. The next step is to re-create the problem again. This time they take a dump of the Java Virtual Machine and examine the Java threads for a pattern. They see that all threads are blocked on the same method in their bean. They examine the source code and discover that this method is synchronized (that is, under lock control). A developer investigates and reports that the code needs to be synchronized only while it updates a shared object, but the programmer synchronized the entire long-running method. This causes all transactions to block, waiting for this common routine.
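The defect pattern is easy to show in a few lines of Java. The class and method names below are hypothetical, not the customer's code: the first version synchronizes the whole long-running method, so every transaction serializes behind it; the second protects only the shared-object update.

```java
// Hypothetical illustration of the over-synchronization defect and its fix.
public class OrderProcessor {

    private final Object statsLock = new Object();
    private long ordersProcessed = 0;

    // Before: the entire long-running method is synchronized, so only one
    // thread at a time can run it even though most of the work is thread-safe.
    public synchronized void processOrderBefore(String order) {
        validate(order);          // long-running, touches no shared state
        writeToDatabase(order);   // long-running, touches no shared state
        ordersProcessed++;        // the only shared-state update
    }

    // After: only the shared counter update is protected by the lock.
    public void processOrderAfter(String order) {
        validate(order);
        writeToDatabase(order);
        synchronized (statsLock) {
            ordersProcessed++;
        }
    }

    private void validate(String order)        { sleep(50);  }  // simulated work
    private void writeToDatabase(String order) { sleep(100); }  // simulated work

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```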
The programmer codes a fix and the test team reruns the test. With the change made, the test system can fully utilize the CPU. The transaction rate is no longer constrained. The bottleneck is broken. Test schedules a regression test for that evening and the next day. Meanwhile, IT has configured an additional server to handle the production load pending availability of the fix. Testing is complete by the weekend. The fix is deployed into production during the Sunday morning maintenance period. By Monday evening, production monitoring confirms that the bottleneck is resolved, transaction rates are up, and response time is within criteria.
Appendix B. Tools for monitoring performance
This appendix introduces some of the tools available to monitor Web site performance.
WebSphere Application Server (WAS) Resource Analyzer
The WAS Resource Analyzer can be used with operating system tools such as VMSTAT to monitor a number of performance measures related to the application server. These metrics are classified into Enterprise JavaBeans (EJBs), ORB thread pool, system runtime resources, database connection pool, and servlets. The WAS Resource Analyzer is available for all WAS platforms.
Resource Analyzer on EJB
The Resource Analyzer monitors execution of your EJBs at three levels: server, EJB container, and individual EJB. The table below summarizes the statistics provided.
Authors: High Volume Web Site Teammore Information: High Volume Web Sites ZoneTechnical Contact: Joseph SpanAnagement Contact: Willy Chiudate: April 23, 2001Status: Version 1.0
PDF VERSION Also Available.
Abstract
As enterprises implement Web applications in response to the pressures of e-business, managing performance becomes increasingly critical. This paper introduces a methodology for managing performance from one end of the e-business infrastructure to the other. It identifies some "best practices" and Tools That Help Implement The Methodology.Contributors
The High Volume Web Site team is grateful to the major contributors to this article: Willy Chiu, Jerry Cuomo, Ebbe Jalser, Rahul Jain, Frank Jones, W. Nathaniel Mills III, Bill Scully, Joseph Spano, Ruth Willenborg, and Helen Wu.
Special NOTICE
The information contained in this document has not been submitted to any formal IBM® test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at Their own risk.
Executive Summary
As more of your company's business moves to the Internet, your IT organization is becoming a major focal point for such important business measures as revenue and customer satisfaction. You're enjoying unprecedented visibility, and it may not all be positive. If it hasn ' T already, The Performance of Your Web Site Will Become critically important.
This paper deals with managing performance. More than ever before, this task requires a perspective that considers components of the infrastructure from end-to-end, from the front-end browsers to the back-end database servers and legacy systems. The end- to-end perspective must be shared not only by you and your operations staff, but also by application developers and Web site designers. Required as well are thoughtful objectives for performance coupled with thorough measurements of performance.This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies. If you have not, you may want TO REVIEW The White Papers Related To Those Phases of The Life Cycle At The Same Time As You Consider this Methodology (See References). Regardless, The ME THODOLOGY Presented Here Can Help You Define Your Challenges And Implement Processes and Technologies To Meet Them.
Our Best Practices Methodology for Managing The Performance of a High-Volume Web Site Consists of Familiar Tasks:
Establish Objectives Monitor And Measure The Site Analyze and Tune Components Predict and Plan for the Future
Some Benefits you can expert at authenticology the end-to-end methodology include:
Proper reporting of quality of service metrics Interactive and historical data on end-to-end performance Rapid identification of the problem source Improved support of business goals Understanding and control of transaction costs World class customer support and satisfaction
The goal of implementing the end-to-end methodology is to align the system performance with the underlying business goals. The methodology, coupled with implementation of the capacity-on-demand options available from IBM's powerful server family, make the goal achievable, and Set the stage for self-managing it infrastructure for managing performance
The IT infrastructures that comprise most high-volume Web sites (HVWSs) present unique challenges in design, implementation, and management. While actual implementations vary, Figure 1 below shows a typical e-business infrastructure comprised of several tiers. Each tier handles a particular set of functions, such as serving content (Web servers such as the IBM HTTP Server), providing integration business logic (Web application servers such as the WebSphere® Application Server), or processing database transactions (transaction and database servers).
Figure 1. Multi-Tier Infrastructure for e-business
. IBM's IT experts have been working with IBM customers to architect and analyze many of the world's largest Web sites Figure 2 below shows how IBM's HVWS team defines the life cycle of a Web site; it also shows the categories of best practices recommended for one or more phases of the cycle. As it accumulates experience and knowledge, the HVWS team compiles white papers aimed at helping CIOs like you understand and meet the new challenges presented during one or more of the phases.
Figure 2. Life Cycle of A Web Site
Managing the performance of a high-volume Web site requires a new look at familiar tasks such as setting objectives, measuring performance, and tuning for optimal performance. First, HVWS workloads are different from traditional workloads. HVWS workloads are assumed to be high-volume and growing, serving dynamic data, and processing transactions Additional characteristics that can affect performance include transaction complexity, data volatility, security, and others IBM has determined that HVWS workload patterns fit into one of five classifications:.. publish / subscribe, online shopping, customer self-service, trading, or business-to-business. Correctly identifying your workload pattern will position you well for making the best use of the practices recommended in this and related papers. for more information about how IBM distinguishes among HVWS workloads, see The Design for Scalability White Paper.Secondly, Those Performing The Tasks Must Extend The in INCLUDE THE E- business infrastructure from end to end. This is most effective when all participants understand the application's business requirements, how their component contributes to the application, and how a transaction flows from one end of the infrastructure to the other. Only then can they work together to . optimize application performance and meet key business needs It's often best when one person is assigned ownership of each application considered critical to the e-business; the application owner assures the customer's perspective of application performance - response time - remains the primary focus of All participants.
This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies . If you have not, you may want to review the white papers related to those phases of the life cycle at the same time as you consider this methodology. Regardless, the methodology presented here can help you define your challenges and implement processes and technologies to Meet Them.figure 3 Below Shows Our Methodology for Managing The Performance of a High-Volume Web Site In The Context of a Multi-Tier Infrastructure.
Figure 3. Methodology for Managing Performance of a hvws
Our methodology consists of familiar tasks with a new twist, driven by the requirement for an end-to-end perspective, and including tools that are available now to help you get started. See Appendix A for some sample scenarios about managing performance and Appendix B For A Summary of Tools Available from IBM, Including TivoliTM, IBM's Provider of E-Business Infrastructure Management Software.
Step 1. Establish Performance Objectives
The first task is to establish performance objectives for the business, the application, and operations. Performance objectives for the business include numbers of log-ons and page hits, and browse-to-buy ratios. Objectives for the application include availability, transaction response Time, And Total Cost Per Transaction. Operations Objectives Include Resource Utilization (NetWork, Servers, etc.) and the behavior of the component..
You should use the results of an application benchmark test to establish the "norms." Ideally, you acquire the norms from controlled benchmark and stress testing. If this is not possible, you should closely monitor and measure the deployment of the application and use the results to produce a performance profile ranging from the average to the peak hours and / or days.Metrics should be established from outside the site (response times, availability, ease of navigation, security, etc.), and from each server tier ( CPU, I / O, storage, network utilization, database load, intranet traffic rates, etc.). You need to establish thresholds so that operations can be notified when targets are near, at, or over their limits. See Appendix B for a List of tools available from ibm and tivoli.
Managing against a set of norms is an ongoing process. Frequent updates to expectations and thresholds may be required. Marketing may schedule a promotion that will drive site traffic to new highs. It is important that this be planned for to avoid "false alarms" that Can Occur if you have.com.
The team that sets the objectives should include representatives of each area; if that is not possible, the combined objectives should be communicated clearly to all areas, along with the emphasis on what may be considered a new paradigm, that of the end-to- End perspective.
Step 2. Monitor and measure the Site
In this step you examine and analyze the performance of the application. You view the application as a transaction flow from the browser through the Web servers and, if applicable, to the backend database and transaction servers, and back to the browser. You are concerned with the entities that make up the system (operating system, firewalls, application servers, Web servers, etc.) only insofar as they support the application.To understand end-to-end performance, you must understand and document the flow of each transaction type, for example, search, browse, buy, trade, etc. that done, you can use software that monitors the actual flow and alerts operations when any metric you specify exceeds the norms established in Step 1. for example, the alert informs you That Your Target Page Response Time Has Been Exceeded. You KNOW That Something In The System Has Degraded, But Where Is The Slowdown Occurring? How do you find the corprit?
You could instrument your application to record information at various points in the transaction flow. An open standard, Application Resource Management (ARM) defines an API and library for these records. In addition to Tivoli, several vendors have tools to display and analyze this data ....................................... ...CRIPLILE, TELEGEMENT.
Instead of recording information on every transaction, you can take averages at several points. Information about averages is nearly as good as full instrumentation, but comes at a lower cost and uses existing and transparent tools. Tools such as the WebSphere Resource Analyzer can be used TO Extract these averas through the resource management interface.other available tools include:
Report on the quality of customer experiences Analyze the Web site to verify links and enforce content policy Aggregate Web data into an overall business view Correlate log and performance data Monitor availability Use online analytic processing (OLAP) techniques to provide decision support
WebSphere Application Server provides a set of comprehensive performance metrics For servlets and beans, these include:.. Number of requests, requests per second, execution time, and errors Java ™ metrics reported include active memory, available memory, threads active, threads idle, ETC. Database Connections Are Also Included Connection Times, Active Database Connections, And Users Waiting for Database Access.
It's best to continuously monitor the site availability from outside to insure that Web logs transactions are executing successfully and within criteria. Examine the site navigation periodically to validate the links and content. Resource monitors will need to roll up their data into an aggregate application view. Have to be analyzed and correlated with other resource data.
In a recent customer engagement, IBM's HVWS team investigated a problem of slow customer response time. The Web server was running Netscape Enterprise Server and the WebSphere Application Server for dynamic content generation using Java servlets. A middle tier used Enterprise JavaBeans ™ (EJBs) to process transactions and then a JDBC call to the database tier. Using the external monitor for response times, they found that during peak hours, response times for consumer transactions increased from fourteen seconds to twenty seconds. Using the WebSphere Resource Analyzer and some DB2® tools , they collected the internal elapsed times for the application components. Figure 4 below shows the analysis of how each component contributed to total response time. Comparing the baseline time with the peak times, it's easy to see that the slowdown occurs in the servlet tier. The WebSphere Resource Analyzer Showed That The Application Server Was Running Out of Worker Threads Under Peak Load. Allocatin G Additional Worker Threads Eliminated The Slowdown.figure 4. Application Response Times - Baseline vs. Peak
Step 3. Analyze and Tune Components
So far, the methodology has provided objectives, measurements, and application insights. Thus it has allowed you to understand, monitor, and report on end-to-end performance. It has also allowed rapid problem determination. When performance issues come up, you CAN Quickly Investigate The Application and Isolate An Individual Component. Appendix A Contains Scenarios That Are Based on Real Events and Demonstrate How Components Are Analyzed and Tuns.
In this step, you analyze and tune specific components One common question:. Does the application scale gracefully In general, scalability refers to a component's ability to adapt readily to a greater or lesser intensity of use, volume, or demand while still meeting business? . objectives you want to assure that your application scales smoothly wherever deployed without experiencing thrashing, bottlenecks, or response time difficulties you need to examine how your application uses resources:. you're interested in CPU consumption per transaction and I / O and network overhead . See also Design for Scalability, our HVWS paper that recommends which scaling techniques should be applied to specific components.Another important question:? Is the application meeting economic criteria Now that resource consumption is understood, you know the "cost per transaction" and you CAN Assess WHether the Application IS Using Resources As Projected by The Performance Objectives. You want to consider the Best practices pertaining to scalability and page design and learn what's needed to optimize how the resources in each tier are used. The application owner uses this to work with development, operations, and design to control and / or improve the efficiency of the application. In this Way Costs are helde on budget.
We used the methodology recently to benchmark a customer's application and found that throughput seemed to be stalled in the database server. Furthermore, the database was consuming more resources than was expected based on the historical archived data. The DBA ran the analysis tools and quickly determined that one of the application SQL statements was forcing a full table scan (very expensive, very bad). This had not had any measurable effect during the initial deployment of the application with a limited number of customers. However, as the number of customers grew, the size of the database increased significantly. The DBA was able to define an alternate index into the table, test the change, and resolve the problem within a short time. It was the methodology that pointed us quickly to the database tier and allowed Us to DETERMINE THE CAUSE.THE All-Important Questions: Can Response Time Be Improved? Using The Component Response Times, The Application .
In one recent engagement, the customer help desk was flooded with complaints of slow or nonexistent performance. The senior management was concerned that the system seemed to be failing and IT seemed unable to tell them why. Using our methodology, we accessed the site with the WebSphere Studio Page Detailer to analyze page response times. Page Detailer showed us that response times were long due to excessive delays in obtaining TCP / IP socket connections. We investigated the intranet, firewalls, and site connectivity. It turned out that when the site went online, the firewalls had been set up to allow a fixed number of concurrent socket connections. As traffic increased (the site was succeeding), more and more customers contended for the same number of connections. This was easily corrected. In this case, as In Many Others, The Solution Seem Obvious When You isolatent. it is the methodology That Allows US to do so.figure 5 below shows Tools and techno logies available to monitor and analyze Web site components. You can see, for example, that you can monitor response time proactively using WebSphere Studio Page Detailer and Tivoli Web Services Manager (TWSM). See Appendix B for more detail about some available tools.
Figure 5. Tools Available to Monitor and Analyze Web Site Components
Step 4. Predict and Plan for the Future
Sadly, none of us can predict the future. However, an increasing amount of valuable information and useful tools are available to help you plan proactively to keep your Web site serving customers as they expect to be served and to avoid the problems that plague busy sites .
Figure 6 below shows one week of page hits for one of IBM's retail customers. All of the days have essentially the same pattern with predictable peaks and valleys. This site showed no "weekend effect," which may not be true for its "brick and Mortar "Store, Nor for Other Retailers. this Kind of Information Enables Site Personnel to Prepare for Peaks And Use the Valleys for Other Operations When Needed.figure 6. Retailer Usage Pattern over One Week
While a typical week, as shown in Figure 6 above, can be counted on, a retailer also has to plan for seasonal rushes when peaks can easily exceed those of a typical week. Figure 7 below shows a retail site over six months, including the Annual Holiday Period WHEN THE NUBER of Hits Trip. During this Kind of Load, The Site Must Be at Its Best, IF Possible Free of Other Operations.
Figure 7. Retail Customer Seasonal Peaks
Retailers are not the only e-business facing seasonal demands. Figure 8 below shows how the number of hits for a bank grew over the months approaching tax time. Clearly, the financial sites have their own version of weekly and seasonal peaks and valleys.
Figure 8. Hit Rates over Six Months for a Financial Site
These examples demonstrate that it is possible to monitor your site and detect trends from which you can plan for the future and meet your business objectives. Your site will have peaks and valleys. You can measure them. You can reasonably predict when your peaks will occur AND You can Position The Resources You NEED To Handle The Demand and Serve Your Customers (and bring them back!).
Your trend data should suggest whether and when additional site components are needed. Powerful new servers have options, as well, that can generate capacity based on predicted workload. IBM can help you clarify which components match your particular requirements and objectives. See the Planning for Growth Paper to Learn Aboutur Capacity Planning Methodology and the Hvws Simulator for WebSphere.Summary
Managing the performance of a high-volume Web site is challenging, exciting, and possible. Following a methodology such as the one presented in this paper will help guide you and your team toward tasks they can understand and goals they can achieve. The success of your company's e-business depends on the tools and techniques your IT team chooses. There are many available, and more are coming, as well as capacity-on-demand options from IBM's powerful server family that set the stage for self-managing IT infrastructures AS Always, Their Use succeeds best in the context of a process.
The "BEST PRACTICES" Methodology for Managing A High-Volume Web Site Includes developing an end-to-end personfts developing an end-to-end person confective of the site and backing these familiar steps:
Establish Objectives Monitor And Measure The Site Analyze and Tune Components Predict and Plan for the Future
Using this methodology, Your Team Can Help Your Company Meet The Revenue and Customer Satisfaction Objectives of Its E-Business And Enjoy Improved IT Performance Management Benefits, Such As:
Proper reporting of quality of service metrics Interactive and historical data on end-to-end performance Rapid identification of the problem source Improved support of business goals Understanding and control of transaction costs World class customer support and satisfactionIBM's experience with high-volume Web sites has yielded .
APPENDIX A. Some Performance Management Scenarios
This Appendix Contains Three Brief Scenarios That Area Based On Real Events and Demonstrate The Principles of Our Methodology for Managing Performance.
CIO
When reviewing his schedule for the upcoming week, the CIO notes a midweek meeting with the marketing department, a Tuesday working lunch with his colleague from Finance, and the monthly CEO staff meeting on Thursday. He works with his assistant to be sure he takes appropriate Information to each meeting.
On Tuesday he will take the latest reports showing costs, projected capacity over the next year, and likely capital spending. The cost chart in Figure 9 below shows, at a high level, the cost per transaction and the cost breakdown by tier. The capacity chart in Figure 10 below illustrates the expected growth in the number of users and transactions. These expectations were jointly reached with the marketing group. The CIO will show his Finance colleague how the increase in workload drives a needed increase in capacity and, thus, capital spending for next year. He points out that operations is working closely with application development to examine costs. They have identified where improvements can be made in the application and have projected the cost savings in terms of cost per transaction and reduced capital spending. He uses THE COST SAVINGS Chart in Figure 11 BELOW To Show How The Proposed Improvements Will Reduce The Cost Per Transaction More Effectively THE in-Plan Improveme NTS.figure 9. Average cost per web transaction
Figure 10. Current and Projected System Load
Figure 11. Cost Savings with proposed enhancement
..................
At the marketing meeting he brings the charts that report system availability, response time, transaction rates, and an analysis of consumer navigation experience. Marketing is concerned about an upcoming promotion. They expect that it will drive traffic to new highs and worry that the system will slow down. Having anticipated this line of discussion the CIO brings out charts showing the current peak demand on the system and the amount of available overhead. He is able to demonstrate that the system has the headroom to handle up to a 30% increase in workload while still maintaining current response times during peak hours. His colleagues in marketing are pleased to see that IT has anticipated the effects of the ad campaign and are satisfied that the system will be able to contain the burst of traffic.Finally, our CIO prepares For The CEO MONthly Staff Meeting. Each Major Function Is Expected To Present a Short Highlight Report on The Current and Upcoming Months. The CIO WILL SHOW Chart s that illustrate system availability, response time and costs vs targets. He will then discuss upcoming events, like the marketing campaign, and his plans to support them. He expects the presentation to go well because he is confident that the system is providing him the Proper Information to Support His Role.
Content Problems
Last week, marketing, sales, development, and IT proudly deployed a new application that not only significantly enhanced the function of the e-business site, but also dramatically improved the look and feel of the site for the consumer.
After just a few days, however, IT noted that the Tivoli Web Services Manager was producing alerts that indicated that nearly all pages were slowing down and response time was approaching the maximum allowed by the service level agreement. Using the Tivoli Web Services Analyzer to examine site traffic patterns, IT observed that the site slowed down in proportion to the number of new visitors and customers. All pages were affected, indicating the problem was systemic.IT contacted Development to review the new content. Development remained puzzled, as they had tested The New Pages Thoroughly Before Migrating The Into Production.
The application owner convened the performance team. One member was detailed to examine page performance using the WebSphere Studio Page Detailer. He reported, "Page Detailer shows that socket connect and SSL connect times are fine. This would seem to absolve the network, firewalls, routers, and TCP / IP layers. It also shows that transactions are processing well within criteria, so there does not seem to be a problem with that part of the system. However, Page Detailer does show that static content (such as GIFs) Slowed Down Dramatically After The New Application Was Deployed. "
Armed with this information, the team quickly identified the Web server as the likely problem area, since it is responsible for serving up static content. As this shop was using Netscape Enterprise Server, they asked for a PerfDump to be executed. PerfDump reports on the internal performance of the server. Within minutes they were able to examine the output and determine that the cache hit ratio for static content had degraded. Clearly the new application had added a great deal of new static content to be served, and the Web server cache was now too small to manage the new total efficiently. A quick look at the operating system input/output statistics using vmstat confirmed that real I/O had jumped dramatically within a day or so of the new application rollout. IT was able to modify the cache size setting in the Web server and deploy the change at the next scheduled maintenance period.
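The underlying effect is generic: any bounded cache whose hot set outgrows its capacity sees its hit ratio fall, and every miss becomes real disk I/O. The following minimal sketch (ordinary Java, not the Netscape Enterprise Server cache; the class, capacity, and round-robin request pattern are all invented for illustration) shows how adding static files beyond the configured capacity collapses the hit ratio.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Generic illustration only: this is NOT the Netscape Enterprise Server cache.
    // It shows why a fixed-size LRU cache's hit ratio collapses once the set of
    // frequently requested static files grows past the configured capacity.
    public class StaticContentCacheSketch {

        // Simple LRU cache: evicts the least recently used entry when full.
        static class LruCache extends LinkedHashMap<String, byte[]> {
            private final int capacity;
            long hits, misses;

            LruCache(int capacity) {
                super(16, 0.75f, true);   // access order, which gives LRU eviction
                this.capacity = capacity;
            }

            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > capacity; // evict once the capacity is exceeded
            }

            byte[] fetch(String uri) {
                byte[] content = get(uri);
                if (content == null) {    // miss: in a real server this is disk I/O
                    misses++;
                    content = new byte[0];
                    put(uri, content);
                } else {
                    hits++;
                }
                return content;
            }

            double hitRatio() { return (double) hits / (hits + misses); }
        }

        public static void main(String[] args) {
            int capacity = 100;                      // cache sized for the old site
            for (int files : new int[] {80, 300}) {  // before and after the new content
                LruCache cache = new LruCache(capacity);
                for (int i = 0; i < 100000; i++) {
                    cache.fetch("/img/file" + (i % files) + ".gif");
                }
                System.out.printf("%d files, capacity %d -> hit ratio %.2f%n",
                        files, capacity, cache.hitRatio());
            }
        }
    }

With 80 files the cache absorbs nearly every request; with 300 files and the same capacity, the cyclic access pattern defeats LRU entirely and every request falls through to disk, which is the kind of behavior PerfDump and vmstat exposed in the scenario above.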
Bottleneck
The e-business site was launched last month, just in time for the TV ad campaign. To date the site is successful. Traffic is growing as predicted, sales are strong, and complaints have been quite low. However, in the past few days...
IT employs the Tivoli Web Services Manager to examine the site. They determine that only transaction pages have slowed down; the number of transactions (sales, etc.) continues to rise, while the number successfully processed is stagnant. Customers are complaining to the help desk and by e-mail about the slow response times. Analysis of the access logs produced by the Tivoli Web Services Analyzer (TWSA) confirms that many customers are leaving the site without waiting for their business to complete. Later they complain about not knowing whether their business was successfully processed. A transaction in doubt is the worst possible customer problem, one that can destroy confidence in the site and the enterprise.
It is apparent that there is a problem in transaction processing. IT still checks out the Web server to eliminate it as a component. The team extracts the overall response times for transactions (from the Tivoli Web Management Solution) and uses the WebSphere Resource Analyzer to obtain the average elapsed times for the servlet and bean during the slowdown. Rapid subtractions demonstrate that the increased load extended the execution time of the bean. In fact, when a specific transaction rate is reached, the application cannot process any more, so the transaction rate remains fixed.
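The "rapid subtractions" amount to simple bookkeeping: subtract the servlet and bean elapsed times reported by Resource Analyzer from the end-to-end response time reported by the Tivoli tools, and compare the deltas before and during the slowdown. The numbers below are hypothetical and serve only to illustrate the arithmetic.

    // Hypothetical timings in milliseconds; real values come from the Tivoli
    // tools (end-to-end) and Resource Analyzer (servlet and bean elapsed times).
    public class ResponseTimeBreakdown {
        public static void main(String[] args) {
            double overallBefore = 1200, overallDuring = 4100; // end-to-end response time
            double beanBefore    =  300, beanDuring    = 3100; // bean elapsed time
            double servletBefore =  150, servletDuring =  180; // servlet elapsed time

            // Whatever remains after the subtraction was spent outside the
            // servlet and bean (network, Web server, queuing, and so on).
            double otherBefore = overallBefore - beanBefore - servletBefore;
            double otherDuring = overallDuring - beanDuring - servletDuring;

            System.out.printf("bean grew by %.0f ms, everything else by %.0f ms%n",
                    beanDuring - beanBefore, otherDuring - otherBefore);
        }
    }

In this made-up example the bean accounts for roughly 2.8 seconds of the 2.9-second slowdown, pointing the investigation at the bean rather than the Web tier.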
Resource Analyzer, pointed at the bean engines, also showed that the application server threads were all busy processing requests, while vmstat showed the CPU was less than 50% busy with no I/O or page wait. Believing that the bottleneck had been found, the team recommended that additional threads be assigned to the pool so that the bean could process more concurrent requests.
Before deploying such a change, the team runs Mercury Interactive LoadRunner® to create an artificial load on the test system. They then add threads to the pool, expecting the bottleneck to disappear. They rerun the test with the new setting, but the bottleneck still occurs at nearly the same transaction rate. Resource Analyzer confirms that all the threads, including the new total, are still in use while response time continues to rise.
Now they know that thread starvation is a symptom of the problem, not the cause. The next step is to re-create the problem again. This time they take a dump of the Java Virtual Machine and examine the Java threads for a pattern. They see that all threads are blocked on the same method in their bean. They examine the source code and discover that this method is synchronized (that is, under lock control). A developer investigates and reports that the code needs to be synchronized only while it updates a shared object, but the programmer synchronized the entire long-running method. This causes all transactions to block, waiting for this common routine.
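The pattern the team found, and the fix described next, look roughly like the following sketch. The class, method, and field names are invented; only the locking pattern, a whole long-running method synchronized when just the shared-object update needs the lock, comes from the scenario.

    // Hypothetical helper shared by all bean instances. The names are invented;
    // only the locking pattern comes from the scenario: the entire long-running
    // method was synchronized when only the shared-object update needed the lock.
    public class InventoryHelper {

        private long reservedUnits;   // shared state touched by every transaction

        // BEFORE: the whole method holds the lock, so every transaction on every
        // pool thread serializes here and extra threads simply queue up.
        public synchronized void reserveBefore(String sku, int quantity) {
            checkAvailability(sku, quantity);   // slow work that needs no lock
            writeAuditRecord(sku, quantity);    // slow work that needs no lock
            reservedUnits += quantity;          // the only step that needs the lock
        }

        // AFTER: only the shared-state update is synchronized, so the slow work
        // runs concurrently and additional pool threads actually add throughput.
        public void reserveAfter(String sku, int quantity) {
            checkAvailability(sku, quantity);
            writeAuditRecord(sku, quantity);
            synchronized (this) {
                reservedUnits += quantity;
            }
        }

        private void checkAvailability(String sku, int quantity) { /* ... */ }
        private void writeAuditRecord(String sku, int quantity) { /* ... */ }
    }

Because every pool thread must acquire the same lock for the entire method in the "before" version, adding threads only lengthens the queue, which is exactly what the load test showed; narrowing the lock to the shared update lets the added concurrency pay off.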
The programmer codes a fix and the test team reruns the test. With the change made, the test system can fully utilize the CPU. The transaction rate is no longer constrained. The bottleneck is broken. The test team schedules a regression test for that evening and the next day. Meanwhile, IT has configured an additional server to handle the production load pending availability of the fix. Testing is complete by the weekend. The fix is deployed into production during the Sunday morning maintenance period. By Monday evening, production monitoring confirms that the bottleneck is resolved, transaction rates are up, and response time is within criteria.
Appendix B. Tools for Monitoring Performance
This appendix introduces some of the tools available to monitor Web site performance.
WebSphere Application Server (WAS) Resource Analyzer
The WAS Resource Analyzer can be used with operating system tools such as vmstat to monitor a number of performance measures related to the application server. These metrics are classified into Enterprise JavaBeans (EJBs), ORB thread pool, system runtime resources, database connection pool, and servlets. The WAS Resource Analyzer is available for all WAS platforms.
Resource Analyzer on EJB
The Resource Analyzer monitors execution of your EJBs at three levels: server, EJB container, and individual EJB. The table below summarizes the statistics provided.
Statistic | Stateless session beans | Stateful session beans | Entity beans
Instantiate | Yes | Yes | Yes
Destroy | Yes | Yes | Yes
Requests | Yes | Yes | Yes
Requests per second | Yes | Yes | Yes
Execution time | Yes | Yes | Yes
Live beans (pooled and active) | Yes | Yes | Yes
Creates | No | Yes | Yes
Removes | No | Yes | Yes
Activation | No | Yes | Yes
Passivation | No | Yes | Yes
Loads | No | No | Yes
Stores | No | No | Yes
Resource Analyzer on Servlets
The Resource Analyzer monitors execution of servlets at three levels: servlet engine, Web application, and individual servlet. It monitors and collects cumulative metrics at the servlet engine level and provides an analysis of the metrics at the Web application level and for individual servlets. Metrics collected include requests per second, average response time, and number of concurrent requests.
Resource Analyzer on System Resources
The Resource Analyzer monitors the system resources consumed by the Java Virtual Machine (JVM). It collects and reports such JVM metrics as total memory and the amount of memory used and available.
WAS Site Analyzer
The WebSphere Application Server Site Analyzer measures Web site traffic. Site Analyzer provides detailed analysis of Web content integrity, site performance, and usage statistics, and includes a report-writing feature to build reports from the content integrity and usage statistics. The table below summarizes the function of each major feature of Site Analyzer.
Feature | Function
Content and site structure analysis | Identifies duplicate and inactive files on the Web server; detects unavailable resources such as broken links or missing files; flags content with excessive load time
Usage analysis | Who accessed the site; where they visited; how they navigate the site
Visualization and reports | Allows users to view the site structure and quickly locate pages with problems via color schemes and icons; on-demand searching for certain page attributes; provides predefined reports that are fully customizable
Client/server configuration | Analyzers on the server side transform raw data into valuable information and store it in a database; the client interface provides administration, visualization, and report-generation functions
Performance Tools
A variety of AIX tools is available, first to identify and understand the workload, and then to help set up an environment that approximates the ideal execution environment for the work. The table below summarizes the AIX monitoring tools.
Task | Tools
AIX monitoring | AIX tools, Perfagent tools, sample tools, adapter tools, switch tools
Managing memory resources | vmstat, sar, lsps, ps, svmon
Managing CPU resources | vmstat, sar, time, cpu_state
Managing network resources | netstat
Task | Tool | Metrics
Netscape monitoring | PerfDump | Cache hit ratios, memory, threads
Task | Tool | Metrics
Site investigation; quality of service; synthetic transactions | Tivoli Web Services Manager (TWSM) | Content, response time, availability
Analyze data / generate reports | Tivoli Web Services Analyzer (TWSA) | Site traffic analysis
References
Document References
IBM High Volume Web Site White Papers
Design for Scalability, December 1999
Design Pages for Performance, May 2000
Planning for Growth, October 2000
IBM Redbooks
Tetsuya Shirai, Lee Dilworth, Raanon Reutlinger, Sadish Kumar, David Bernabe, Bill Wilkins, and Brad Cassels, UDB Performance Tuning, 2000
Ken Ueno, Tom Alcott, Jeff Carlson, Andrew Dunshea, Hajo Kitzhofer, Yuko Hayakawa, Frank Mogus, and Colin Wordsworth, WebSphere V3 Performance Tuning Guide, 2000
Product References