Duwamish case analysis (MS)


Capacity Planning: Duwamish Online Sample E-Commerce Site

Published: April 1, 2000
By Paul Johns
Based on the Duwamish Online Sample E-Commerce Site

Microsoft Enterprise Services White Paper: E-Commerce Technical Readiness

Note: This white paper is one of a series of papers about applying Microsoft® Enterprise Services frameworks to e-commerce solutions. The E-Commerce White Paper Series (http://www.microsoft.com/technet/itsolutions/ecommer/plan/ecseries.mspx) contains a complete list, including descriptions, of all the articles in this series.

Credits

Program Managers: Raj Nath, Mukesh Agarwal
Reviewer: Shantanu Sarkar
Other Contributors: Jyothi C M, Laura Hargrave

On This Page: Introduction • Capacity Planning Considerations • Microsoft® Windows® DNA Architecture • Web Servers • The Capacity Planning Process • Defining Requirements • Testing • Results, Analysis, and Configuration Selection • Conclusion • Appendix: WAST Best Practices

Introduction

The Capacity Planning white paper is one of a series about Microsoft® Enterprise Services frameworks. For a complete list of the papers in the series, see the E-Commerce White Paper Series page cited above.

This white paper addresses key issues in the capacity management function of service management as it specifically applies to a business-to-consumer e-commerce service solution. Anyone reading this paper should already have read the Microsoft Operations Framework Executive Overview white paper, which contains important background information for this topic. The following section provides a brief overview of this information.

Microsoft® Operations Framework Overview

Delivering the high levels of reliability and availability required of business-to-consumer Web sites requires not only great technology but also great operational processes. Microsoft has built on industry experience and best practice to create the knowledge base required to set up and run such processes. This white paper is part of that knowledge base, which is encapsulated in Microsoft Operations Framework (MOF) and which covers two areas: service solutions and IT service management.

Service solutions are the capabilities, or business functions, that IT provides to its customers. Some examples of service solutions are:

• Application hosting
• E-commerce
• Messaging
• Knowledge management

Given current industry trends in application hosting and outsourcing, the guidance that MOF provides strongly supports the concept of providing software as a service solution.

IT service management consists of the functions needed to maintain a given service solution. Examples of IT service management functions include:

• Help desk
• Problem management
• Contingency planning

MOF embraces the concept of IT operations providing business-focused service solutions through the use of well-defined service management functions (SMFs). These SMFs provide consistent policies, procedures, standards, and best practices that can be applied across the entire suite of service solutions found in today's IT environments. The MOF model positions all the SMFs in a life cycle model, shown below.

Figure: The MOF life cycle model

More detail on the MOF process model can be found at http://www.microsoft.com/services/MicrosoftServices/cons.mspx

MOF and ITIL

MOF recognizes that current industry best practice for IT service management has been well documented within the Central Computer and Telecommunications Agency's (CCTA) IT Infrastructure Library (ITIL). The CCTA is a United Kingdom government executive agency chartered with development of best practice advice and guidance on the use of information technology in service management and operations. To accomplish this, the CCTA charters projects with leading information technology companies from across the disciplines of IT service management. MOF combines these collaborative industry standards with specific guidelines for using Microsoft products and technologies. MOF also extends the ITIL code of practice to support distributed IT environments and current industry trends such as application hosting and Web-based transactional and e-commerce systems.

Capacity Planning Considerations

If you're planning to implement an e-commerce or enterprise application, you'll need to familiarize yourself with capacity planning to ensure that your application will perform well under load. Slow Web servers and server crashes can encourage your customers to try your competitors' Web sites, and maybe never come back.

Enterprise and e-commerce applications are similar in many ways, but there's one important difference.

In most enterprise applications, growth can be anticipated and planned for, because the limiting factor in application demand is the number of employees who will access your system. And, unless you merge with another company, you're not going to double in size overnight.

E-commerce applications are different. The limiting factor is the number of customers who want to access your Web site at the same time. That number is bounded by the number of Internet users in the world, but it's very hard to predict how many will use your site at any given time. Worse yet, that number can change very quickly: an ad campaign, a new product, a search engine listing, or a favorable newspaper story can cause your site to become much more popular almost instantaneously. Doubling overnight is not uncommon at all.

So, capacity planning is about planning the hardware and software aspects of your application so that you'll have sufficient capacity to meet anticipated and unanticipated demand. Duwamish Online is a good example of a Microsoft® Windows® DNA application, so we'll take a look at it and tell you how we performed the capacity planning for it.

Microsoft® Windows® DNA Architecture

In order to understand capacity planning for Windows DNA applications such as Duwamish Online, you'll need to understand what a Windows DNA application looks like. The following diagram shows the Duwamish Online hardware configuration.

Figure 1: Duwamish Online hardware configuration

Windows DNA applications, including Duwamish Online, have multiple logical tiers, and almost always several physical tiers. You can read more about Windows DNA applications in A Blueprint for Building Web Sites Using the Microsoft Windows DNA Platform at:

http://msdn.microsoft.com/library/en-us/dndna/html/dnablueprint.asp

The client tier is generally your customer's Web browser connected to your application servers through the Internet. The client tier is not shown in the diagram, but is on the other side of the firewall shown above.

On our side of the Internet, Duwamish Online has four identical Web servers that comprise the Web tier; they are connected by Microsoft® Windows® 2000 Network Load Balancing. Most of the logical middle-tier components, such as the workflow and business logic layers, run on the Web servers. This is more efficient because it eliminates the need for communications over the network. These Web servers are protected by the firewall shown above.

In addition to the network connecting the Web servers to the Internet, the Web servers are also connected through a private LAN to the database tier and other specialized servers. These include the Queued Components server (for credit card authorizations and fulfillment), the box that runs the primary domain controller (PDC), and the DNS server. In Duwamish Online, we use a fail-over cluster of two servers connected to a common RAID disk array as our database tier. An administrative and monitoring network is connected to all of the machines. This means that the Web servers are connected to three networks using three network adapters.

For a more complete synopsis, see the entries in the Duwamish Online diary at: http://msdn.microsoft.com/vio/sampleapp.asp

Any of these pieces, or their components, can be a bottleneck, so you'll need to establish ways to monitor their performance and load; that's why we have a second network and server for administering and monitoring Duwamish Online. Microsoft® Windows® 2000 provides performance counters that allow you to monitor most of the important bottlenecks; some of the most important are listed below.
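For illustration, here is a minimal sketch (not part of the original paper) of polling a few such counters from a script using Windows' typeperf command-line utility. The counter paths are standard Windows names, but treat them as assumptions and verify them in Perfmon on your own servers.

    # Hypothetical monitoring sketch: sample a few Windows performance
    # counters via the typeperf utility and print the CSV output it returns.
    import subprocess

    COUNTERS = [
        r"\Processor(_Total)\% Processor Time",    # Web tiers tend to be CPU-bound
        r"\Memory\Pages/sec",                      # paging should be rare once warm
        r"\Network Interface(*)\Bytes Total/sec",  # watch for saturated links
    ]

    def sample_counters(samples: int = 5, interval_sec: int = 2) -> str:
        """Collect `samples` readings of each counter, `interval_sec` apart."""
        cmd = ["typeperf", *COUNTERS, "-sc", str(samples), "-si", str(interval_sec)]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    print(sample_counters())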

Web Servers

CPU

Web applications tend to be processor-bound. Contention issues, caused by multiple threads trying to execute the same critical section or access the same resource, can cause frequent and expensive context switches, keeping the CPU busy even though throughput is low. It is also possible to have low CPU utilization with low throughput if most threads are blocked, such as when waiting for the database.

There are two basic ways to get the processing power you need. You can add additional processors to each server, or you can add more servers.

Adding processors to an existing server is often less expensive (and less troublesome) than adding additional servers. But for most applications, there comes a point when adding additional processors doesn't help. In addition, there is a maximum number of processors that can be supported by the operating system.

Adding servers allows you to scale linearly to as large a Web farm as you need. (Linear scaling means that two servers handle double the load of one, three servers handle three times the load, ten servers handle ten times the load, and so on.) A sizing sketch under that assumption follows.
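The following is a minimal sizing sketch under the linear-scaling assumption; the per-server throughput figure is a made-up example, not a measured Duwamish Online number.

    # Illustrative scale-out sizing, assuming linear scaling:
    # capacity(n servers) = n * capacity(one server).
    import math

    def servers_needed(peak_pages_per_sec: float, per_server_pages_per_sec: float) -> int:
        """Smallest farm size that covers the peak load under linear scaling."""
        return math.ceil(peak_pages_per_sec / per_server_pages_per_sec)

    # Hypothetical: a 92.59 pages/sec peak at 25 pages/sec per server -> 4 servers.
    print(servers_needed(92.59, 25.0))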

Several dual- and quad-processor systems were tested for Duwamish Online to determine the most effective machine. Originally, adding two additional processors didn't help performance at all, due to thread contention issues in the Duwamish Online application. Reconfiguring the components reduced that contention enough that adding the third and fourth processors gave about a 30% performance increase. That's not great, but better performance can mean fewer servers to manage, a definite advantage. We're looking into the problems that prevent better use of the additional processors and will publish the results at a later date.

Memory

Because Duwamish Online is a relatively small application and the catalog is relatively small, insufficient memory problems have not yet occurred.

Remember that RAM access (at about 10 ns) is about a million times faster than disk access (about 10 ms), so every time you have to swap a page into memory, you'll really slow down the application. Adding sufficient RAM is the best and least expensive way to get good performance from any system.

You can make sure your application has enough memory by checking the paging counters (paging should be rare once the app is running) and the working set size, which should be significantly smaller than available RAM in Windows 2000.

Network

There are a number of potential bottlenecks that can occur in your networking hardware.

First, your connection to the Internet might be a bottleneck if it's not fast enough for all the bits you're sending. If your application becomes very popular, you may need to obtain a higher-speed connection or redundant connections. Redundant connections also help your reliability and availability.

You can reduce your bandwidth requirements to prevent bottlenecks by reducing the amount of data you send, especially graphics, sound, and video. Your firewall can also become a bottleneck if it's not fast enough to handle the traffic you're asking it to handle.

Note that you cannot run an Ethernet network at anywhere near its theoretical capacity, because you'll create many collisions (two senders trying to transmit at the same time). When a collision happens, both senders must wait a random amount of time before resending. Some collisions are inevitable, but they increase rapidly as your network becomes saturated, leaving you with almost no effective throughput.

You can reduce collisions a great deal by using switches rather than hubs to interconnect your network. A switch connects two ports directly rather than broadcasting the traffic to all ports, so multiple pairs of ports can communicate without collisions. Given that the price of switches has decreased significantly in the last few years, it's usually a good idea to use a switch rather than a hub.

During one test of Duwamish Online, we got some very odd performance numbers: the database was not working very hard, and the Web servers were very busy and very slow. Upon further investigation, we noticed that we'd used a 100 Mbps hub to connect the Web servers with the database server. Because all of the traffic (incoming, outgoing, and inter-server) was going through one hub, it became swamped, thereby blocking the system from processing transactions quickly. Removing the bottleneck by using a switch allowed us to test (and scale) successfully.

Database Server and Disk

The last potential bottleneck, the database, can be the hardest to fix. Other bottleneck fixes are relatively obvious: if the Web servers are identical, then buy another one; if you need more bandwidth, then get a faster connection, an additional connection, or redundant and/or faster networking hardware. But for read/write real-time data you have to have exactly one copy of the data, so increasing database capacity is much trickier. Sometimes the bottlenecks will be in the database server, sometimes they'll be in the disk array.

To some degree, you can increase database server capacity by segmenting your data. In Duwamish Online, database server capacity has never been an issue: we're running on a relatively modest database server, but still using only about 25% of CPU capacity even when all four dual-processor Web servers are running at 100% CPU utilization. So all the data (catalog, inventory, customer records, and order information) is put onto the same database server.

If database server capacity becomes an issue, there are a number of things you can do. If CPU capacity is the issue, then add additional CPUs; Microsoft® SQL Server™ makes good use of additional processors. If the disk is the bottleneck, then use a faster disk array. More RAM would likely help, too, because SQL Server has some very sophisticated caching.

Another option is to split the database across multiple servers. The first step is to put the catalog database on a server or set of servers. Because the catalog is usually read-only, it's safe to replicate the data. You can also split off read-mostly data, such as customer information, but if you need multiple copies, replicating the information properly is harder.

But it's possible that you could get so busy that the read/write data has to be segmented. This is relatively simple for most applications; you can segment based on ZIP code, name, or customer ID. But it takes application programming in the database access layer to make this work: the layer has to know which server to go to for each record, as in the sketch below.
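Here is a minimal illustration of that routing idea; the shard list, server names, and customer-ID key are all hypothetical, and a production design would also need to handle rebalancing.

    # Hypothetical data-access routing sketch: segment read/write data by
    # customer ID so that each record lives on exactly one database server.
    SHARD_SERVERS = ["db0.example.internal", "db1.example.internal", "db2.example.internal"]

    def server_for_customer(customer_id: int) -> str:
        """Return the single database server that owns this customer's records."""
        return SHARD_SERVERS[customer_id % len(SHARD_SERVERS)]

    # Every query touching customer 1042 must be sent to the same server:
    print(server_for_customer(1042))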

SQL Server 2000 makes this easier, with no application programming needed to split a table across multiple machines. This works very well, and gives linear scalability up to the maximum cluster size. In fact, SQL Server 2000 currently holds the world's fastest TPC-C result: about 227,000 transactions per minute on a cluster of 12 machines with eight processors each, 67% faster than the previous record, and estimated to be 575 times the combined transaction volume of Amazon.com and eBay.

So, although it's harder and more expensive to scale database servers, it is possible to scale them as large as you're likely to need.

Theoretical Model vs. Empirical Testing

It's possible to develop a theoretical model for the cost of transactions. The TCA model (which is described in another document) allows you to estimate the cost of each transaction in terms of processor cycles. You can then develop a model with your projected mix of transactions to determine how many cycles the application as a whole needs.
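As a rough sketch of the idea (not the TCA model itself), suppose you had measured per-transaction CPU costs on your own hardware; the costs and transaction mix below are invented for illustration.

    # Illustrative TCA-style capacity estimate. The per-transaction CPU costs
    # (in megacycles) and the transaction mix are hypothetical; real values
    # come from measuring the actual application on actual hardware.
    COST_MCYCLES = {"browse": 15.0, "search": 25.0, "checkout": 80.0}
    MIX = {"browse": 0.70, "search": 0.25, "checkout": 0.05}

    def cpus_required(tx_per_sec: float, cpu_mhz: float, target_util: float = 0.5) -> float:
        """CPUs needed for the projected mix while staying at the target utilization."""
        avg_cost = sum(COST_MCYCLES[t] * MIX[t] for t in MIX)  # megacycles per transaction
        return (tx_per_sec * avg_cost) / (cpu_mhz * target_util)

    # E.g. 92.59 transactions/sec on 550 MHz CPUs with a 50% CPU budget:
    print(cpus_required(92.59, 550.0))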

Models such as this can help you predict how much and what kind of hardware to buy. But since these models are based on empirical testing of the actual application on actual hardware, you can't avoid performance testing by using such models.

This raises the question of how to stress test Web applications that you've not yet deployed. There are several products available that run test scripts against your Web application, using a relatively small number of machines to simulate a large number of clients. The Web Application Stress Tool (WAST) is one of these; we'll discuss it later on.

Duwamish Online decided not to build a theoretical model, for two reasons. First, we were going to have to test the actual application on the actual hardware configuration (rather than a pilot application farm) anyway, so there wasn't any savings in time. In fact, the analysis would have taken much longer than our somewhat ad hoc methods. Second, most of the existing models aren't comprehensive enough to explain all of the behavior in a system. For instance, many models can't predict the bottlenecks caused by contention for shared resources or network collisions, nor can they predict contention problems caused by a mix of transactions. Some models even assume that as you add additional processors to a server, you'll get linear scaling. This is almost never true for any application running on a symmetric multiprocessing operating system, such as Windows 2000.

You can use the theoretical model to make your first guess about what hardware to get and how to configure it, but you'll still have to do full-scale testing on your deployment farm to confirm that it performs to your requirements.

But the theoretical models can still be useful. First, they're very useful for predicting what will happen to the performance of an existing Web site if you change the application or add additional users. And they can be useful for running tests on a relatively small application farm and extrapolating the results, so that you make sure to purchase enough hardware. Because there's sometimes a long lead time for purchasing sophisticated hardware, a theoretical model can be very useful indeed. This is especially true where the scalability issues are well known, such as the linear scalability of buying more Web servers.

Theoretical models are less useful for predicting how many processors or how much memory each box should have, because the factors that affect performance are difficult to model; they're complicated and often not well understood.

The Capacity Planning Process

You can think of the capacity planning process in five steps. With iteration, you can test, identify performance bottlenecks, and fix them. Performance bottlenecks can be in software, in hardware, or in the way components are configured, or in any combination of the three.

Figure 2: Capacity planning process

Defining Requirements

The first step in your capacity planning process is to define the requirements for the application. For Duwamish Online, we used some relatively "seat-of-the-pants" methods for defining our capacity requirements. If you have access to statistics from an existing site, or to solid marketing research including traffic estimates, sales goals, and projections of average order size, you should use these to define capacity requirements. And you should certainly update your assumptions as users actually use your site after you deploy it.

The Duwamish Online site simulates the sale of consumer goods, such as books, T-shirts, and coffee mugs. This simulation is an accurate one: it verifies and processes (but does not charge) credit cards, and sends orders to the company that does fulfillment (but the orders are ignored). In order to get people to complete an order, they are entered into a contest if they complete an order.

Page Hits

We really had no clue about how many page hits to expect, so we took a wild guess: since the entire msdn.microsoft.com cluster receives an average of a bit less than two million hits a day, we assumed that Duwamish Online wouldn't get any more than that. (We'll probably get far less.) So our requirements specify that the application should be able to handle two million page hits per day. (To see a different version of capacity planning, check out the way Duwamish Online did it at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnduwam/html/d4perfgoal.asp.)

Two million page hits per day is an average of about 23.14 pages per second:

2,000,000 pages/day / (24 hr/day * 60 min/hr * 60 sec/min) = about 23.14 pages/sec

However, we know that the page views don't all come at the same time; there are peaks and valleys in demand. Because the Duwamish Online team did not know a good model for predicting the sizes of the peaks and valleys, it fell back on an old rule of thumb, the 80/20 rule, at least until it has real numbers. The 80/20 rule guesses that 80% of the hits will be received in 20% of the time.

Using the 80/20 rule, peak usage will be:

(0.8 * 2,000,000) / (24 * 60 * 60 * 0.2) = 92.59 pages/sec
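The same arithmetic, as a small reusable sketch:

    # Average and 80/20-rule peak page rates from a daily hit count. The
    # 80/20 rule of thumb assumes 80% of the day's hits arrive in 20% of the day.
    SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

    def average_pages_per_sec(daily_hits: float) -> float:
        return daily_hits / SECONDS_PER_DAY

    def peak_pages_per_sec(daily_hits: float, hit_share: float = 0.8,
                           time_share: float = 0.2) -> float:
        return (hit_share * daily_hits) / (SECONDS_PER_DAY * time_share)

    print(average_pages_per_sec(2_000_000))  # about 23.1
    print(peak_pages_per_sec(2_000_000))     # about 92.6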

Concurrent Users/Concurrent Connections

You will also need to consider the number of concurrent connections that can be handled. This is monitored on the production farm, which keeps track of the number of page hits; this helps in understanding more about visitors' behavior.

On the other hand, an extremely high number of concurrent connections may negatively impact server performance. Most load generation tools, such as WAST, cannot generate the exact concurrent connection pattern the real-world operation will experience. The Duwamish Online team will closely monitor this and compare the production numbers with the lab results.

Concurrent connections are measured on the server side. The concurrent user load, however, is determined on the client side. Since concurrent connections tend to be very dynamic and fluctuate a lot, people usually use concurrent user load (which is the simulated load the stress tool generates) as an index of system loading. Note that you don't really need to know anything about "concurrent users" or "concurrent connections" when doing pre-deployment analysis; determining the number of pages per second will allow you to perform the analysis.

The Duwamish Online team has approximated an average concurrent load of 1,000 users, with a peak concurrent load of 5,000 users.

A quick check shows that these numbers are in the right ballpark: if each user browses five pages, there would be 400,000 unique users each day (viewing 2,000,000 pages total). If the team handles 1,000 users concurrently, that would mean 400 groups of 1,000 users, each browsing for an average of about 3.6 minutes. This is calculated by dividing 24 hours by 400,000 and then multiplying by 1,000 concurrent users.
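That ballpark check, written out:

    # Sanity check of the concurrency estimate: 2,000,000 pages/day at
    # five pages per visitor is 400,000 visitors/day; with 1,000 concurrent
    # users that is 400 "groups" per day, or roughly 3.6 minutes per session.
    daily_pages = 2_000_000
    pages_per_visitor = 5
    concurrent_users = 1_000

    visitors_per_day = daily_pages / pages_per_visitor    # 400,000
    groups_per_day = visitors_per_day / concurrent_users  # 400
    avg_session_min = (24 * 60) / groups_per_day          # 3.6
    print(visitors_per_day, groups_per_day, avg_session_min)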

Mix of Usage

For most sites, actual completed orders constitute one or two percent of visits. However, because you don't have to pay with Duwamish Online, and because the team only enters you for a prize if you complete an order, it's assumed that 30% of visitors will complete an order. Note that this high percentage of placed orders means that the database will be loaded more heavily.

Response Time

This is a very important requirement: if users don't get good response, they'll shop elsewhere. The industry standard for response time is a maximum of three seconds, so we adopted this as our requirement, allowing the response time to go up to five seconds under peak load.

Don't Overstress the System: Stop Before Contention Makes Capacity Growth Non-Linear

Computer systems, especially large ones, are complicated, so it's sometimes hard to predict what will happen to a system as its load increases. The fact that various processes are contending for the same resources isn't a big deal when the load is light, but it can become a huge bottleneck as the loading increases.

A consistent theme that the Duwamish Online team noticed is that the system would respond well as load increased, up to a point; then other issues would keep the performance from improving (and might actually make performance decline). Therefore, it's important to note the loads at which the performance suffers, and then be very sure not to approach or exceed that load.

CPU

If you graph CPU utilization against response time, you'll see an interesting non-linear relationship: as the loading increases and CPU utilization passes a certain point, the response time starts to grow exponentially. Note that the point varies from application to application, so you have to test empirically to determine it.

Figure 3: Exponential growth in response time as CPU utilization rises

The reason for this exponential growth is typically that the threads are competing for a scarce resource or a commonly used critical section, causing a lot of context switching, which is a relatively expensive operation. In addition, all ASP worker threads might be busy, so that new incoming requests wait in the ASP queue for their turn to be processed.

Clearly, you do not want to be anywhere near the point (about 70% of maximum CPU utilization for the Duwamish Online Web servers) where the response time grows rapidly.

In order to avoid resource and critical-section contention issues, and to allow extra capacity for peak times, the team decided to keep the CPU utilization around 20% when running at average load, and 50% at peak load.

Network

As mentioned previously, networks can't be run anywhere near their rated capacity, because collisions increase as traffic grows and significantly reduce throughput. You'll want to make sure that you're not using too much network capacity.

Testing

Now that the requirements have been established, the following section will discuss the testing of Duwamish Online.

How We Tested Duwamish Online

To test Duwamish Online, we set up an application farm running Duwamish Online and used the Web Application Stress Tool, or WAST, on several client machines. These machines were connected to the Duwamish Online application farm at the point where the firewall would be. (The tests below do not actually use the firewall, because it's implemented by the ISP and we were testing in a lab instead of over the Internet. It would, of course, be more realistic to test with a firewall.)

Web Application Stress Tool (WAST)

WAST is a Web stress tool that is designed to simulate multiple browsers requesting pages from a Web application. It can be used to generate customizable loads on various Internet services, and offers a rich set of features for gathering performance data for a particular Web site. You can try to reproduce the test results; just download WAST for free at:

http://www.microsoft.com/technet/archive/itsolutions/intranet/downloads/webstres.mspx

There are other commercial tools available, such as RadView's WebLoad and RSW's e-Load.

Usage Scenarios

WAST uses one or more "scripts" to simulate a user who is browsing. The team used two scripts that varied primarily in the number of users who placed orders: one script with a normal 3%, and one with a huge 30%, of users placing orders (as described previously). The usage scenarios are in the table below.

Key | Scenario | Think Time | Bandwidth Throttling
A | 59% category page, 18% item detail, 11% keyword search, 9% home page, 3% shopping | Varies from 0 to 5.5 seconds | 128K ISDN connection
B | 30% category page, 20% item detail, 11% keyword search, 9% home page, 30% shopping | Varies from 0 to 5.5 seconds | 128K ISDN connection
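To make the scenario structure concrete, here is a rough sketch (not a WAST script) of the behavior Scenario A encodes; the actual HTTP request is stubbed out.

    # Rough simulation of the Scenario A page mix and think time from the
    # table above. A real WAST script records actual URLs; this stub only
    # shows the weighted-choice structure.
    import random
    import time

    SCENARIO_A = [
        ("category page", 59),
        ("item detail", 18),
        ("keyword search", 11),
        ("home page", 9),
        ("shopping", 3),
    ]

    def simulate_user(num_requests: int = 20) -> None:
        pages, weights = zip(*SCENARIO_A)
        for _ in range(num_requests):
            page = random.choices(pages, weights=weights)[0]
            print("request:", page)               # a real client would fetch the URL here
            time.sleep(random.uniform(0.0, 5.5))  # think time: 0 to 5.5 seconds

    simulate_user()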

"Think Time" Is A Random Amount of Time Between The Completion of One Request and The Submission of the next. "Bandwidth

THROTTLING "IS Simulating The Slower Connections That Many Users Have. We Chose 128kbps ISDN As A Compromise Between Analog

And Broadband Connections and BROADBAND Connections.

Configurations

We tested Duwamish Online on several different application farm configurations using two different types of computers: inexpensive single- and dual-processor workstations we call "little bricks," and more expensive two- to four-processor systems we call "big bricks."

We also tested hosting the middle-tier components on different servers: on the database server and on the Web servers. For the Duwamish Online application, performance was best with the components on the Web servers, so that configuration was used for the rest of the testing and for deployment.

The big bricks and little bricks configurations were as follows:

Little Bricks | Big Bricks
P3 550 Xeon, single processor (about $3,200) | 2-4 processors: 550 MHz Xeon (about $16K)
256 MB RAM | 512 MB RAM
100 Mbit NIC | Dual 100 Mbit NIC
SCSI 19 GB | SCSI 20 GB
Windows 2000 Advanced Server; SQL 7.0 on the database server | Windows 2000 Advanced Server; SQL 7.0 on the database server

Running the Tests

Once the hardware is configured and the software is installed, you can begin running the tests. Let the system run for twenty minutes before taking measurements. This allows the various caches (memory, disk, IIS, SQL, and Duwamish Online) to get to a reasonably stable state. If you don't do this, the measured performance will be very slow, because the hit rates on the caches are low until they are loaded.

After the warm-up period, take data for a couple of minutes, then increase the load, wait for the counters to stabilize, and take data again. The team used Windows 2000 counters through Perfmon, rather than the WAST counters, because they were found to be more reliable.

After the tests are completed, move the data into an Excel spreadsheet and analyze it.

Results, Analysis, and Configuration Selection

We were interested in the answers to a number of questions: Are big bricks or little bricks more effective? What happens if you add more processors? What happens if you add more servers?

Big Bricks vs. Little Bricks

As it turns out, the performance of similar server and workstation hardware is not much different. The servers have more sophisticated hardware, such as RAID disk arrays and dual power supplies. This increases their reliability, which is very important for the database tier. On the other hand, the Web farm tier has a lot of built-in redundancy, so you may want to consider using the less expensive machines there.

Scaling Up by Adding Processors

For the Duwamish Online application, adding processors to the same box helped, but only to a point. Increasing the number of processors from one to two resulted in about a 60% improvement in performance. These results are good ones, considering the relative cost of processors and computers, and the fact that a dual-processor machine is no more difficult to maintain than a single-processor machine.

Figure 4: Adding a second processor adds about 60% capacity (two-server farm)

However, adding an additional two processors did not help very much, due to limitations of the Duwamish Online application. (We're in the process of analyzing and overcoming these limitations; we'll have an article about how we did it later. We've already gotten a 30% improvement when going from two to four processors. That's nice, but not as much as we'd like.)

Adding a third and fourth processor doesn't add much capacity for the current application.

Figure 5: Adding a third or fourth processor

Note that your results will vary from the Duwamish Online team's results. Your application may scale to four processors or more. The only way to find out is to test.

Web Farm Scales Linearly with Added Web Servers

Although scaling up by adding more processors does not give limitless scalability, scaling out by adding more servers to your Web farm works quite well, because it scales linearly: two servers have twice as much capacity as one, ten give us ten times as much, and so on. (Note that this assumes no other bottlenecks. If you push another piece of the system past its limit, such as the database server or network, your scalability will end until you improve the capacity of the bottleneck.)

Figure 6: Adding more Web servers gives linear scalability

Your Network Can Be a Bottleneck

Recall the testing situation we ran into where we put all the machines on one network using a hub, and swamped the network, giving horrible performance. Your network design can radically affect your performance, so be careful, and be sure to measure your network usage.

Use Switches and Dedicated Networks Instead

The solution was to do two things: use one dedicated network for communications with the Internet and another for inter-server communications, and use switches rather than hubs. With switches, the capacity of each link is 100 Mbps, rather than 100 Mbps for the entire network.

More Database Server Capacity Than We Need

Finally, Duwamish Online is unable to provide any information about how the database server responds when it's heavily loaded, because it was never stressed that heavily. The actual database server Duwamish Online deploys has only two processors (the test machine in the chart below used up to four).

Figure 7: Database server does not show stress as it reaches maximum capacity

The lines are relatively flat at the right end because the number of pages per second has stopped rising. This can also be due to caching.

Hardware Selection

Hardware selection was relatively simple: compare the performance figures for the various hardware configurations with the constraints (at peak load, 92.59 pages/second with a five-second response time and 50% maximum CPU utilization; at normal load, 5.79 pages/sec, the remaining 20% of hits spread over 80% of the day, with a three-second response time and 20% CPU utilization).

Out of the entire set of tested machines and configurations, a few are shown below. The cells that are bold and underlined indicate working configurations.

As you can see, configuration G doesn't work. At peak time, at 50% CPU utilization, this configuration delivers only 67.2 pages/sec (calculated by extrapolation). This number is lower than the number of pages needed (92.59), so G doesn't work.

Configuration J did not work either; it provided only 60.5 pages/sec at 50% CPU utilization.

Configurations I and L passed the requirement tests. L is a big bricks configuration and I is a little bricks configuration.

With configuration I, if CPU usage is set to 70% as the worst case, configuration I gives 87% extra pages/sec at peak time for potential growth ((173 - 92.6) / 92.6 = 87%). Even if the CPU is stressed to 92%, the response time is still less than three seconds, and this provides enough room for future growth. More important, database usage remains very low throughout. Because Web servers scale out very well, this presents a better safety margin and is less expensive than configuration L.
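The extrapolation and headroom arithmetic used in this section, sketched as code (the linear-with-CPU assumption holds only in the region below the response-time knee discussed earlier):

    # Linear extrapolation of throughput to a CPU budget, plus the headroom
    # calculation used for configuration I. Assumes pages/sec scales roughly
    # linearly with CPU utilization in the measured region.
    def pages_at_cpu_budget(measured_cpu_pct: float, measured_pps: float,
                            budget_cpu_pct: float = 50.0) -> float:
        """Project pages/sec at the CPU budget from one measured point."""
        return measured_pps * (budget_cpu_pct / measured_cpu_pct)

    def headroom(delivered_pps: float, required_pps: float) -> float:
        """Fractional spare capacity above the requirement."""
        return (delivered_pps - required_pps) / required_pps

    print(headroom(173.0, 92.6))  # about 0.87: configuration I's 87% margin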

Dual-Processor Machines for Everything, Including the Database

Duwamish Online chose dual-processor machines for everything, including the database server. Four processors did not add much benefit to the Web farm. Finally, the database does not need more than two processors, because it is not being stressed.

Conclusion

What have we learned in planning capacity for, and testing, Duwamish Online?

Test, Test, Test

There is no substitute for doing performance testing throughout your development cycle. You can use the results to pick the best hardware and tune it, as well as to make sure your application will perform as required.

Web Farms Scale Linearly

Barring other bottlenecks, you can scale your Web farm to handle as many users as you need.

Note that adding more than two processors doesn't help the Web servers with this version of Duwamish Online, but that doesn't mean that other applications won't scale; you'll have to test your own. For more information on Duwamish Online Web server scalability, see the Duwamish Online diary at http://msdn.microsoft.com/vio/sampleapp.asp.

We Did Not Have to Do Anything Special for the Database: No Partitioning, Nothing

Note that Duwamish Online didn't have to do anything special for the all-in-one database on a relatively small server. If database performance were an issue, we could segment the data and/or buy more powerful hardware. Because we use SQL Server 2000 (the world's best-performing database server software), the database code will never have to be rewritten and can always be scaled.

Must Watch for Bottlenecks in Memory, CPU, Network, and Database

As you build your application, be sure to watch for bottlenecks. Don't forget about CPU utilization, paging counts, pages per second, response time, and network collisions.

Monitor While Running

You'll also want to be sure to keep an eye on your application as it is running. This also gives you the opportunity to get real data about peak loads.

How to Scale Windows DNA Web Applications

First, you need to find the bottleneck(s) to determine where the problems are. Then, if your bottleneck is in your Web servers, you can add more servers to take care of the loading. It may also be possible to upgrade existing servers to higher capacity, depending on your application.

If it's your database that's keeping you from scaling, you have several choices. You can scale up the server by adding more processors and memory. If that fails, you can segment the database, replicating read-only and read-mostly data to the Web servers. You can also have SQL Server 2000 automatically segment your database, saving you time and trouble.

Finally, if it's your network that's keeping you from scaling, redesign it using switches and subnets.

Appendix: WAST Best Practices

Web Application Stress Tool (WAST)

• WAST is a Web stress tool that is designed to realistically simulate multiple browsers requesting pages from a Web application. It can be used to generate customizable loads on various Internet services, and offers a rich set of features that are desirable for anyone interested in gathering performance data for a particular Web site. WAST is a powerful tool that can stress a Web application with a reasonable number of test clients. Also, developers and customers can reproduce the test results.
• More information about WAST is available at: http://www.microsoft.com/technet/archive/itsolutions/intranet/downloads/webstres.mspx.
• There are various sophisticated Web stress tools, such as RadView's WebLoad and RSW's e-Load, that are available commercially. The Duwamish Online team found WAST to be adequate for their testing purposes. One advantage of using WAST is that the Duwamish Online team could continue sharing their experiences with customers about using it.

WAST: Best Practices

Client machines: Estimate the number of clients required for generating the desired maximum load. For one series of tests, try to use the same number of clients for a better comparison across tests.

Setting the multiplier to stress the servers: Estimate the maximum number of concurrent user requests required to push your Web server farm to 100% utilization in a pre-test. During the Duwamish Online testing process, it was found that, as long as the test clients can generate enough load to stress the server, setting the multiplier to 1 gives better results. However, the number of test clients is not unlimited, and as the number of threads increases, thread thrashing occurs.

When the server farm needs to be stressed without a sufficient number of test client machines, a higher multiplier might be needed. For example, if you find that with a multiplier of 1 you still cannot stress your server (which you can tell from the server's CPU usage), you can use the multiplier to increase the stress. However, current releases of WAST don't take accurate measurements if the multiplier is set above one, so run one machine with a multiplier of one and do your measurements using that machine. For instance, if you have nine test clients, run 100 threads per client on eight of them with a multiplier of 5 (which gives you a total of 4,000 "concurrent users"), and then run a single multiplier-1 thread on the last client machine. Collect TTLB (time-to-last-byte) data from that last client; the arithmetic is spelled out below.
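A small sketch of that worked example:

    # The multiplier example above: eight loaded clients generate the stress,
    # while one extra client runs a single multiplier-1 thread so that the
    # TTLB measurements stay accurate.
    loaded_clients = 8
    threads_per_client = 100
    multiplier = 5

    simulated_users = loaded_clients * threads_per_client * multiplier
    print(simulated_users)  # 4000 "concurrent users"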

Using SessionTrace: Use SessionTrace to record the detailed communication between WAST and the Web server(s). When defining a new WAST script, it is important to find out whether all the URLs used in the script are functioning as expected and the Web server is returning the desired responses. If not, you may obtain misleadingly good performance results while the Web server is simply returning error responses.

Another good practice when using SessionTrace: set SessionTrace to 1, with type REG_DWORD. Tracing can be turned on in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WAS. Finally, remember to turn SessionTrace off (0) after validating the new script; otherwise, the disk will fill up quickly.

Follow HTTP Redirects option: Do not use this option if the script has already recorded the redirected URLs; if you check the Follow HTTP Redirects option in that case, the redirected pages will be counted twice.

Throttling: For standard/benchmark tests, use 128K ISDN throttling to generate the average bandwidth of your target users. In testing the Duwamish Online application, it was found that the more throttling was used, the longer the warm-up time that was needed, presumably because the caches fill more slowly when requests arrive at a throttled rate.

© 2000 Microsoft Corporation. All Rights Reserved.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This document is for informational purposes only. Microsoft makes no warranties, express or implied, in this document.

Microsoft, BackOffice, MS-DOS, Outlook, PivotTable, PowerPoint, Microsoft Press, Visual Basic, Windows, Windows NT, and the Office logo are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Macintosh is a registered trademark of Apple Computer, Inc.

