B2C Business Reference Architecture: Debugging and Testing


Chapter 8 Debugging and Testing

Microsoft Corporation, May 2001

Summary: This chapter introduces debugging and testing processes, including general information about debugging the ConsolidatedRetail.com application, descriptions of how to debug the XML output of the PASP scripts, test procedures, and the evaluation of test results. The specific examples used in this chapter come from the testing actually carried out while the ConsolidatedRetail.com application was implemented. You can refer to this document when writing a test plan for custom software based on this reference architecture.

Introduction

When developing this and other applications, the developers' task is to ensure that the business-to-consumer (B2C) e-commerce application:

Functions correctly, and meets its scalability requirements. To ensure that the application achieves these goals, you must debug it and perform performance tests. The first part of this chapter describes the steps for debugging the ConsolidatedRetail.com site, and how to view and debug the XML output of the PASP scripts. It then introduces test types and levels, the functional test process, performance testing, and general guidance on evaluating test results.

Debugging the ConsolidatedRetail.com Site

Debugging a site presents many challenges to developers, particularly if the site contains server-side logic such as ASP scripts. When you create a Web-based application on a computer running Microsoft® Windows®, the preferred development environment is the Microsoft® Visual Studio® development system, which includes the Web site development tool Microsoft® Visual InterDev®. By adding Microsoft® FrontPage® Server Extensions to the Web site and creating a new Visual InterDev project based on the site, you can use this environment to debug the B2C reference architecture application. For more information on using Visual InterDev, refer to the Visual InterDev documentation in the Microsoft® MSDN® Developer Library.

Debugging the XML Output of the PASP Scripts

Another challenge in debugging ConsolidatedRetail.com is viewing the XML output generated by the PASP scripts. The response data from these pages is intercepted by the XSLISAPI filter and rendered using the specified style sheet. Sometimes, however, you need to examine the XML generated by a script without the style sheet applied. The easiest way to view the XML output is to make a *.asp copy of each *.pasp file in the site, and then access those *.asp files with Microsoft Internet Explorer. Because *.asp files are not intercepted by the XSLISAPI filter, the XML response data is returned to the browser unchanged, and can be viewed as the source code of the resulting page.
For many scripts, you simply specify the URL of the file to access; for others, you must pass parameters in a query string appended to the URL. The following entries describe how to view the XML output of each PASP file in the site. Note: Appendix A lists the XML output of each PASP page in the ConsolidatedRetail.com site.
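The copy step described above can be scripted rather than done by hand. The sketch below is a hypothetical helper (the site-root path is whatever local folder holds the site's files) that makes a *.asp copy of every *.pasp file, so the raw XML can be fetched without the XSLISAPI filter intervening:

```python
import shutil
from pathlib import Path

def copy_pasp_to_asp(site_root):
    """Make a .asp copy of every .pasp file in site_root, so the
    XSLISAPI filter is bypassed and raw XML reaches the browser."""
    copied = []
    for pasp in Path(site_root).glob("*.pasp"):
        asp = pasp.with_suffix(".asp")
        shutil.copyfile(pasp, asp)  # the .asp copy is served without the style sheet
        copied.append(asp.name)
    return sorted(copied)
```

Remember to delete the *.asp copies before deploying the site, since they expose unstyled response data.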

Acct.pasp: Save the page as Acct.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected when you try to view Acct.asp). You can then use Internet Explorer to access Acct.asp by specifying a parameterless URL (of the form http://servername:81/acct.asp). To view the XML, click Source on the View menu.

AddressBook.pasp: Save the page as AddressBook.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected when you try to view AddressBook.asp). You can then use Internet Explorer to access AddressBook.asp by specifying a parameterless URL (of the form http://servername:81/addressbook.asp). To view the XML, click Source on the View menu. To get more meaningful results, first add at least one address to your address book.

Basket.pasp: Save the page as Basket.asp. You can then use Internet Explorer to access Basket.asp by specifying a parameterless URL (of the form http://servername:81/basket.asp). To view the XML, click Source on the View menu. To get more meaningful results, first use the site to add some items to your shopping basket.

Category.pasp: Save the page as Category.asp. You can then use Internet Explorer to access Category.asp by specifying a URL with two parameters: txtCatalog, the name of the catalog you want to browse, and the optional txtCategory, a specific category name within that catalog. For example, you can view the XML representation of the Books catalog by specifying the following URL: http://servername:81/category.asp?txtCatalog=Books. To view the Games category in the Books catalog, use the following URL: http://servername:81/category.asp?txtCatalog=Books&txtCategory=Games. Once the appropriate page has been returned, click Source on the View menu to view the XML code.
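If you automate these checks, the query strings used in the entries above can be built programmatically. A minimal sketch using Python's standard library (the server name and helper name are placeholders, not part of the site):

```python
from urllib.parse import urlencode

def pasp_test_url(server, page, **params):
    """Build a URL for one of the *.asp debugging copies,
    appending any query-string parameters (e.g. txtCatalog)."""
    base = f"http://{server}:81/{page}"
    return f"{base}?{urlencode(params)}" if params else base

# Examples matching the entries above:
pasp_test_url("servername", "acct.asp")
# -> http://servername:81/acct.asp
pasp_test_url("servername", "category.asp", txtCatalog="Books", txtCategory="Games")
# -> http://servername:81/category.asp?txtCatalog=Books&txtCategory=Games
```

Using urlencode also takes care of escaping parameter values (such as search phrases containing spaces) that would otherwise produce invalid URLs.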
ChangePasswd.pasp: Save the page as ChangePasswd.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected when you try to view ChangePasswd.asp). You can then use Internet Explorer to access ChangePasswd.asp by specifying a parameterless URL (of the form http://servername:81/changepasswd.asp). To view the XML, click Source on the View menu.

EditAddressbook.pasp: Save the page as EditAddressbook.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will not see any address information). The URL used to view this page can include the following parameters:

txtAddressType – the address type, such as billing address or shipping address. If no value is provided, the shipping address is used.
txtAddressID – the globally unique identifier (GUID) of an address. If you specify a value, the corresponding address is returned.

For example, to see the XML generated when the user adds a new shipping address, use Internet Explorer to navigate to the following URL: http://servername:81/editaddressbook.asp. To view the XML generated when the user adds a new billing address, navigate to the following URL: http://servername:81/editaddressbook.asp?txtAddressType=Billing. To view the XML generated when the user edits a specific address, navigate to the following URL: http://servername:81/editaddressbook.asp?txtAddressID=addressGUID. After retrieving the page, click Source on the View menu to view the XML code.

ForgotPasswd.pasp: Save the page as ForgotPasswd.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected when you try to view ForgotPasswd.asp). You can then use Internet Explorer to access ForgotPasswd.asp by specifying a parameterless URL (of the form http://servername:81/forgotpasswd.asp). To view the XML, click Source on the View menu.

Index.pasp: Save the page as Index.asp. You can then use Internet Explorer to access Index.asp by specifying a parameterless URL (of the form http://servername:81/index.asp). To view the XML, click Source on the View menu.

Login.pasp: Save the page as Login.asp. Close any session currently using the ConsolidatedRetail.com site (otherwise you will be redirected to Acct.pasp when you try to access Login.asp). You can then use Internet Explorer to access Login.asp by specifying a parameterless URL (of the form http://servername:81/login.asp). To view the XML, click Source on the View menu.
MultiShipping.pasp: Save the page as MultiShipping.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected to Login.pasp when you try to access MultiShipping.asp). Add at least one item to your shopping basket (otherwise you will be redirected to Basket.pasp when you try to access MultiShipping.asp). You can then use Internet Explorer to access MultiShipping.asp by specifying a parameterless URL (of the form http://servername:81/multishipping.asp). To view the XML, click Source on the View menu.

OrderHistory.pasp: Save the page as OrderHistory.asp.

You can then use Internet Explorer to access OrderHistory.asp by specifying a parameterless URL (of the form http://servername:81/orderhistory.asp). To view the XML, click Source on the View menu. You can view this page without logging in, but to get more meaningful data, you should log in to the site and submit at least one order before viewing it.

OrderHistoryDetail.pasp: Save the page as OrderHistoryDetail.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise an error will occur when you try to access OrderHistoryDetail.asp). Submit at least one order; you can then use Internet Explorer to view OrderHistoryDetail.asp by specifying a URL with one parameter (named order, which must be the GUID identifying an existing order; otherwise an error occurs). The URL used to access this page should be similar to the following example: http://servername:81/orderhistorydetail.asp?order={0FA626B0-852E-4707-93D5-A00619C6A35B}. After retrieving the page, click Source on the View menu to view the XML code.

OrderSummary.pasp: Save the page as OrderSummary.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise an error will occur when you try to access OrderSummary.asp). Submit an order and confirm the shipping address, shipping method, and payment information. Then navigate to http://servername:81/ordersummary.asp. Click Source on the View menu to view the XML code.

Payment.pasp: Save the page as Payment.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise an error will occur when you try to access Payment.asp). Add an item to the shopping basket (otherwise you will be redirected to Basket.pasp when you try to view Payment.asp). Then navigate to http://servername:81/payment.asp. Click Source on the View menu to view the XML code.

Product.pasp: Save the page as Product.asp.
You can then use Internet Explorer to access Product.asp by specifying a URL with two parameters: txtCatalog, the catalog you want to browse, and txtProductID, the ID of the product you want to view. There is also an optional third parameter, txtVariantID, the variant ID of the product. For example, you can view the XML representation of the book named Code by specifying the following URL: http://servername:81/product.asp?txtCatalog=Books&txtProductID=code. Click Source on the View menu to view the XML code.

Registration.pasp: Save the page as Registration.asp. You can then use Internet Explorer to access Registration.asp by specifying a parameterless URL (of the form http://servername:81/registration.asp).

To view the XML, click Source on the View menu.

SearchResults.pasp: Save the page as SearchResults.asp. You can then use Internet Explorer to access SearchResults.asp by specifying a URL with four parameters: txtSearchPhrase, the phrase you want to search for; txtCatalog, the name of the catalog you want to search; the optional txtSearchRowsToReturn, the number of results to return in the user interface (UI); and the optional txtSearchStartPos, the row at which the search should start. For example, you can search for the word "age" in the Books catalog by specifying the following URL: http://servername:81/searchresults.asp?txtSearchPhrase=age&txtCatalog=Books.

Shipping.pasp: Save the page as Shipping.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected to Login.pasp when you try to access Shipping.asp). Submit at least one order (otherwise you will be redirected to Basket.pasp); you can then use Internet Explorer to view Shipping.asp by specifying the following URL: http://servername:81/shipping.asp. After retrieving the page, click Source on the View menu to view the XML code.

ShippingMethod.pasp: Save the page as ShippingMethod.asp, and then navigate to the ConsolidatedRetail.com site and log in (otherwise you will be redirected to Login.pasp when you try to access ShippingMethod.asp). Submit at least one order (otherwise you will be redirected to Basket.pasp); you can then use Internet Explorer to view ShippingMethod.asp by specifying the following URL: http://servername:81/shippingmethod.asp. After retrieving the page, click Source on the View menu to view the XML code.

ThankYou.pasp: Save the page as ThankYou.asp, and then navigate to the ConsolidatedRetail.com site and log in. Submit at least one order, following the ordering process until OrderSummary.pasp is displayed.
You can then use Internet Explorer to view ThankYou.asp by specifying the following URL: http://servername:81/thankyou.asp. After retrieving the page, click Source on the View menu to view the XML code.

UserProfile.pasp: Save the page as UserProfile.asp, and then navigate to the ConsolidatedRetail.com site and log in. You can then use Internet Explorer to view UserProfile.asp by specifying the following URL: http://servername:81/userprofile.asp. After retrieving the page, click Source on the View menu to view the XML code.

Developing a Test Strategy

Testing requires focus, because there are many possible test areas, each of which involves different types of tests.

Since there are always resource constraints (limits on time, people, or money), it is very important to prioritize the test areas, and the test types and levels to be completed, in order of importance; this prioritization is the focus of the initial test plan.

Possible Test Areas

The following are possible areas requiring attention:

User interface (UI) tests: These tests check appearance and consistency. The inspection covers screen display effects (font, size, color, and overall appearance) and data validation for all fields in all of the application's forms. Both kinds of test should be based on the software specification document.

Business logic tests: The functional specification document defines the business logic expected to apply in actual operation, so you must use a set of test cases to check the business logic. For the reference architecture implementation, this testing can be done from the UI or from the Commerce Server BizDesk utility (a management module). The test process should include tests for different types of users and different site entry paths.

Back-end tests: Ideally, back-end tests should be performed against the database. Since the reference architecture uses Microsoft Commerce Server 2000 integrated with Microsoft® SQL Server™ 2000 tables, the test team can use the Commerce Server objects to interact with these tables. The test team can write stub programs to test the Commerce Server objects in isolation, and then compare the results with the XML output generated by the code. Comparisons can also be made at the UI layer.

Possible Test Types

The test team may perform the following types of tests:

Functional tests ensure that the functionality provided by the system is consistent with the functional specification document.

Regression tests confirm that when the same series of operations is repeated, the application responds in the same way.

Security tests ensure that only users with appropriate permissions can use the functions specified for them in the system. Different security settings are established for each user in the test environment.

Performance tests make sure that the application responds within a time range users find acceptable.

Stress tests confirm that the application responds appropriately to many users and simultaneous activities. The number of users must be agreed in advance, and the hardware environment in which the system is tested must match actual operating conditions.

Automated tests can be used for regression and functional testing. This kind of testing is useful if the system is stable and does not change frequently.

Platform tests confirm that the application runs correctly on the operating system and browser combinations specified in the master test plan.

Internet service provider (ISP) speed tests confirm that the application can respond to requests issued over an ISP connection.

End-to-end interface tests check all inputs, outputs, and systems. These tests ensure that the application interacts correctly with the external systems specified in the functional specification document.

Multiple application instance tests determine whether blocking or other issues arise when a client runs multiple copies of the same program.

Input and boundary tests ensure that the system accepts only correct input. These tests verify that the number of characters entered does not exceed the maximum specified for a field, and that the system works properly at boundaries (such as valid ranges, null values, maximums, minimums, and the Tab key order on the screen).

Windows/Internet GUI standards tests verify that the application has a standard appearance.
Localization tests ensure that the application operates in different locales.

Euro compatibility tests ensure that the euro is displayed correctly, if the application is to receive currency values from the European Economic and Monetary Union (EMU).

Conversion tests verify that all data requiring conversion works normally. These conversions may arise from changes required by a legacy system or a new architecture.

Installation/upgrade tests exercise the installation/upgrade program to ensure that the product can be installed over an existing version. The test team can decide to test only full installations, or to test upgrade installations as well.

Usability tests ensure that the application is easy to use, does not require too many keystrokes, and is easy to understand. The best way to perform this testing is to find some advanced, intermediate, and beginning users, and then listen to their views on the application's usability.

Free-form tests exercise the system with unstructured scenarios to ensure that it responds correctly. To achieve this, you must ask people to perform a given function without knowing the prescribed steps.

Environment security tests ensure that the application can be installed and run in the actual operational environment. When performing these tests, the security settings of SQL Server and Internet Information Services (IIS) must be the same as in actual operation.

Network tests determine the impact of different network conditions on the application. For example, these tests can reveal problems that may occur over a low-speed network connection.

Disaster recovery (backup/restore) tests ensure that, in the event of a catastrophic failure, users can restore the application and its data stores by following documented steps. The operational support department should be responsible for this testing.

Failover tests ensure that, under the fault conditions documented for the application, the application's failover functionality works.
User acceptance tests are typically performed by users with skills and backgrounds similar to the target users. The goal is to determine whether the application meets user requirements and expectations (that is, testing against user-oriented requirements). Note that the test team does not actually perform these tests, but may supervise or design them.

Memory overflow and memory leak tests ensure that the application can run within the memory capacity specified in the technical documents. These tests also detect related memory leak problems by opening and closing the application many times.

Operating system upgrade tests ensure that the application still runs after an updated version of the operating system is installed.

Help tests ensure that the content provided by online Help is relevant to the current problem and provides a solution. When verifying online Help, the test team does not check the correctness of the business rules.

For each of the above test areas, the test team must determine the level of testing required. The levels are as follows:

High – very important; this area needs thorough testing.
Middle – perform standard tests.
Low – test if time allows.

The next section focuses on functional testing.

Functional Testing

In the process of developing an e-commerce solution, you should carefully test each internal build to ensure that the application has the functionality described in the functional specification document. This involves ensuring that the application behaves as expected in each of the user scenarios identified during application design.

Test Methodology

In most large and medium-sized projects, a test team performs functional testing. Application builds and tests alternate, eventually producing the release version of the software. Figure 8-1 shows a typical application development and test cycle. For more information on each phase of the test cycle, see the corresponding section in this chapter.

Figure 8-1: A typical test cycle

Phase 1 – Write the test objectives and master plan. Document the testing process, clearly stating what is to be tested and how these goals are to be achieved. It is very important to determine all relevant factors of the test and record them in a document, including test assumptions, schedule, test priorities, test levels, responsibilities, expected results, completion criteria, risks, and mitigations. After completing the plan, you will have the master test plan document, which remains a living document throughout the test life cycle. The source documents required at this stage are an advanced draft of the functional specification document and the code release schedule. Refer to "Developing a Test Strategy," earlier in this chapter, for issues the test team should consider when preparing the master test plan.

Phase 2 – Write a detailed test plan. The detailed test plan describes the various usage scenarios and entry paths for all users or accounts.
The test scenarios are based on the usage scenarios determined during application design. The detailed test plan also assigns a priority to each scenario to be tested. The source documents required for this phase are the functional specification document and the high-level application and architecture design.

Phase 3 – Review the detailed test plan. The development group must review the detailed test plan to ensure that it meets the application's test requirements. After the test plan is approved, testing can begin.

Phase 4 – Define test cases. Detailed test cases should be generated according to the approved detailed test plan, defining the operations to be performed on the application, the input data, the expected results, and the predetermined format to be used when recording results. At this stage, the priority of each test case should be determined according to the importance of the function being tested. (Often each scenario in the detailed test plan is expanded into one or more cases in the detailed test case document.) In addition, you may want to produce a document specifying the order in which test cases are executed, to save time.

Phase 5 – Test the application. In the actual test phase, you should test all application paths in an end-to-end way to ensure that they meet the functional specification. The test team uses a defect tracking tool to report all defects found during testing. The test team may also have to isolate these defects. The documentation required at this stage is the detailed test cases.

Phase 6 – Determine whether the build/test cycle is complete. An application is unlikely to be ready for release after one round of testing. Whether to repeat another round of building and testing depends on many factors, including the severity of the remaining errors, budget restrictions, and the time available. Your project plan should allow several iterations of the build/test cycle before release.
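The detailed test cases defined in Phase 4 pair operations and input data with expected results and a priority, and Phase 4 also mentions a document fixing the order of execution. A minimal sketch of how such cases might be recorded and ordered (the field names are illustrative, not from the source):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    scenario: str            # scenario from the detailed test plan
    steps: list              # operations to perform on the application
    input_data: dict         # data entered during the steps
    expected_result: str
    priority: str = "Middle"  # High / Middle / Low, per the test levels above

def execution_order(cases):
    """Order cases High before Middle before Low, so the most
    important functions are exercised first (the Phase 4
    sequencing document)."""
    rank = {"High": 0, "Middle": 1, "Low": 2}
    return sorted(cases, key=lambda c: rank[c.priority])
```

Recording cases in a structured form like this also makes it easy to feed them into a defect tracking tool or an automated test harness later.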
Phase 7 – Hold the triage meeting. The test team, the project management team, and the development team discuss the status of defects at the triage meeting, and decide which developer is assigned to resolve each one.

Phase 8 – Eliminate the errors. The development team must work to eliminate all the errors identified at the triage meeting. After an error has been fixed, it is returned to the person responsible for it (the test engineer who submitted it) for verification; if verification passes, the error is closed, otherwise further action is taken.

Phase 9 – Write the test report. The test report presents status information for the specific items listed in the test plan, along with defect descriptions classified by severity. This report is very important for the release decision meeting (discussed in the following section).
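The severity-classified defect summary required for the Phase 9 test report can be produced mechanically from the defect tracker's data. A small sketch (the field names are illustrative; real trackers export richer records):

```python
from collections import Counter

def defect_summary(defects):
    """Count unresolved defects by severity for the test report.
    Each defect is a dict with at least 'severity' and 'status'."""
    counts = Counter(
        d["severity"] for d in defects if d["status"] != "closed"
    )
    return dict(counts)
```

A summary like this, regenerated for each build, gives the release decision meeting an objective picture of how defect counts are trending.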

Phase 10 – Hold the release decision meeting. After testing, the program management team holds a release decision meeting to determine whether the application can be released. In addition to the project management team, the test team and the development team should participate in this meeting. The documents used in this meeting are primarily the release criteria (determined in the master test planning phase) and the test report.

Performance Testing

Before deploying an e-commerce solution in an operational environment, the application must be thoroughly tested to ensure that it meets performance and scalability requirements. Overall, the application should be tested for response time and throughput, to verify that it provides an acceptable level of performance for the expected number of users.

Response Time

Response time is an indicator of application performance from the perspective of a single user. It measures the time from a user's request to the application's response. Acceptable response times differ from site to site, and even from Web page to Web page. For example, users may expect to wait longer during authentication than when a product is being displayed. Although "the faster the better" may seem the preferred design principle, in some cases you should deliberately make concessions in order to provide sufficient security and scalability. In e-commerce applications, the two main factors affecting response time are network latency and application processing time. Network latency can be reduced in various ways. For example:

Deploy the application on a sensible infrastructure. For example, use switches rather than hubs, and select high-performance network hardware.
Shorten the physical distance between the application tiers.
Reduce the number of function calls between components across the network.
Cache data to avoid unnecessary database access calls.

Application processing time is the computation time the application needs to perform a particular task. You can reduce processing time by improving the code, and by making sure the application uses an appropriate mix of interpreted script and compiled code. Also, where possible, use asynchronous programming models to greatly improve response time. Response time usually increases as the application load increases. In addition, some program errors (such as those that lead to memory leaks) can only be detected under heavy load. Therefore, you must realistically simulate the expected load when testing response time.

Throughput

Throughput is a more comprehensive measure of application performance. It measures the application's ability to handle the load produced by multiple concurrent users, and is usually measured in pages per second or requests per second. It is an indicator of how well the application scales when accessed by a large number of users. Strategies for raising throughput include: scaling out (using multiple servers configured in a load-balanced cluster to share the user load); distributing data across multiple database servers, using a random value as the partitioning key; using pooling techniques (such as database connection pooling and COM object pooling) to reduce resource contention; and scaling up (increasing the server's hardware resources to handle more load). To accurately test throughput on an e-commerce site, you must profile the types of activity users will perform.
In particular, you must determine the expected "buy-to-browse" ratio (the percentage of users expected to make purchases versus those who only browse). This ratio can vary widely with the type of site (for example, on a B2C retail site perhaps only about 20 percent of users will make a purchase, while in an Internet banking solution most users will carry out some kind of transaction). To a large extent, this information can only be determined accurately after the site is in operation, but you can use the best estimates available from sites of a similar nature. When you simulate user load for testing, you should reflect the expected usage patterns as closely as possible, to get a more accurate picture of how the application will behave in actual operation.

Testing should be performed under realistic conditions. The test infrastructure should be as close as possible to the operational environment in which you plan to deploy the application. For example, you should use multiple Web servers configured with some form of IP-based load balancing. You cannot trust performance figures obtained from only one computer! Keep in mind that security measures such as firewalls and encryption affect performance, so the test environment should include these measures as well.

Performance Test Tools and Utilities

There are many tools for collecting performance statistics, including: monitoring tools (such as Microsoft Windows 2000 System Monitor, NetMon, and SQL Server Profiler); system log files (such as those generated by IIS); and dedicated test tools (such as the Microsoft Web Application Stress [WAS] tool and various third-party stress test tools). Each tool has its own advantages and disadvantages, so you cannot rely on a single tool to accurately understand the application's performance. Instead, you should test the application with a variety of tools.
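Whichever tool you use, the buy-to-browse ratio discussed above maps directly onto weighted scenario selection in the load generator. A small sketch of the idea (the 20 percent figure is the illustrative B2C retail ratio from the text; a fixed seed makes test runs repeatable):

```python
import random

def pick_scenario(rng, buy_ratio=0.20):
    """Choose a user scenario according to the expected
    buy-to-browse ratio (e.g. ~20% of B2C retail users buy)."""
    return "purchase" if rng.random() < buy_ratio else "browse"

def simulate_users(n, buy_ratio=0.20, seed=42):
    """Tally how many of n simulated users would run each scenario."""
    rng = random.Random(seed)  # fixed seed so the mix is reproducible
    counts = {"purchase": 0, "browse": 0}
    for _ in range(n):
        counts[pick_scenario(rng, buy_ratio)] += 1
    return counts
```

In a real stress run, each simulated "purchase" user would replay the full checkout script and each "browse" user the catalog-viewing script, in roughly these proportions.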
The Web Application Stress (WAS) tool can be used to simulate a large number of concurrent users placing load on the application. To use it, you record a series of HTTP requests against the site, and then have the WAS tool replay those requests on behalf of a specified number of concurrent users. The tool collects response time and throughput statistics, which can be used to evaluate the application's performance.
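Conceptually, the measurement loop of a stress tool like WAS can be sketched in a few lines. The following is an illustration only, not the WAS tool itself; request_fn is a stand-in for one recorded HTTP request. It drives a workload from several concurrent "users" and reports the two metrics discussed above, response time and throughput:

```python
import threading
import time

def run_load(request_fn, users, requests_per_user):
    """Simulate `users` concurrent users, each issuing
    `requests_per_user` calls to request_fn, and collect timings."""
    timings = []
    lock = threading.Lock()

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()                      # stand-in for one HTTP request
            elapsed = time.perf_counter() - start
            with lock:
                timings.append(elapsed)

    threads = [threading.Thread(target=user_session) for _ in range(users)]
    wall_start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    wall = time.perf_counter() - wall_start

    return {
        "requests": len(timings),
        "avg_response_s": sum(timings) / len(timings),  # response time metric
        "throughput_rps": len(timings) / wall,          # requests per second
    }
```

Note that a single client machine saturates well before a real server does, which is one reason the text recommends driving load from several workstations at once.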

The WAS tool can be downloaded free of charge from http://webtool.rte.microsoft.com/default.htm (English), where you can also find more information about using the tool to test Web applications. When using stress testing tools like WAS, you should create several scripts (not just one) to simulate different user scenarios. That way, you can run a single script when you need to pin down a specific performance bottleneck, and run multiple scripts at the same time when you want to simulate the actual load on the application. Most stress testing tools let you apply each script to the system at a relative intensity ratio, which lets you reflect the usage scenarios expected in operation more accurately.

Performance Test Methodology

B2C site users expect the site to run well at all times. Performance, scalability, and overall reliability are key elements of Web application design. The performance analysis methodology comprises the following steps:

Prepare for the analysis
Create stress scripts
Perform the tests
Analyze the results
Record and submit the results

These steps are discussed in the following sections.

Preparing for the Analysis

The first step of the analysis involves collecting information. This information should provide the details necessary to replicate the application environment and understand how the application is used, and it should inform you of any existing performance problems. Sources of this information include market forecasts, operational IIS logs, performance logs, and the application's functional specification. Of course, much of this information is available only for sites that are already in operation. For new sites, you can use market forecasts and sites of a similar nature. The information you collect is critical to performing the performance analysis successfully. It helps determine the requirements for the test environment, and it applies all the way from simulating the environment to analyzing the test results. Before starting the analysis, you should also determine what is to be delivered. A performance analysis should be treated as a contract between the test team and the application owner. Often, when a performance analysis is performed, the application owners may not know what they should get from it. Creating a list of performance analysis deliverables answers this question for them.

Creating a Replica of the Operational Environment

To get the most accurate test results, the test facility should simulate the current or expected operational environment, including hardware and software configuration. If a load balancing solution (such as Microsoft Network Load Balancing or the Microsoft Windows NT® Load Balancing Service [NLB/WLBS]) is deployed, the test environment should reflect this. The test facility should also reflect the server roles and the number of servers allocated to each role.
For example, if production uses a cluster of three IIS servers, use a matching configuration in the stress-test lab. The CPU, RAM, and disk configuration of each IIS server should also match production, as should the service pack levels, driver versions, and hardware BIOS revisions. Matching the hardware and software gives you more accurate test data and removes the need to extrapolate. Budget or other constraints may prevent you from creating a test environment identical to production; in that case, take note of the differences when you analyze the data.

To test the consolidatedretail.com site, the application was deployed in the test facility shown in Figure 8-2.

Figure 8-2: Test facility

Workstations used to simulate Internet clients (one running Windows 2000 Professional, the other two running Windows 98) access the site through a layer-3 switch. The IIS servers on the web tier communicate with the database server through a second switch. All Commerce Server objects and pipelines are deployed on the IIS servers (that is, there is no separate physical application-server tier), and all site data except the direct mail database is stored on the SQL Server database server behind the second switch. The direct mail database is deployed on the IIS servers. This facility was designed to simulate the application's deployment environment.

Involving the Application Owners

The application owners have usually done some investigation of their own. Discussing performance issues with them can save you time, and they can give you deeper insight into how the application behaves. In particular, application developers may have specific concerns and knowledge that administrators cannot provide.
If their research has already uncovered bottlenecks, your task is simply to verify those problems and provide more detailed information to the developers.

Understanding the Technology Behind the Application

The better you understand the technology an application is built on, the more thorough your performance/stress analysis can be. For example, if you know that an application uses XML, you should master the techniques for debugging XML performance.

For the consolidatedretail.com application, the test team had to be thoroughly familiar with the deployment and use of Commerce Server 2000, as well as with XML and the XSLISAPI filter.

Defining Transactions and User Scenarios

To complete a performance/stress analysis successfully, you must know how end users exercise the application day to day. You will usually find that some tasks are performed far more often than others, and your performance/stress scripts should reflect that usage pattern. Be sure to consult marketing and product-support staff when confirming the usage pattern; they are usually in closer contact with users and have a deeper grasp of these statistics. IIS log files are also a good resource for establishing how frequently application components or web pages are accessed. The logs are useful not only for defining the user scenarios in the scripts, but also for comparing the page-view distribution used in the stress tests against the actual distribution in production. For the consolidatedretail.com site, the expected usage pattern was 80 percent browsing and 20 percent purchasing. Of the users who make purchases, half were expected to be registered returning users, and the other half new users who must register before paying.

Defining the Goals

Define clear goals for the analysis and include them in the test plan, so that everyone agrees on what the analysis should deliver. This reduces the risk of being forced to re-run test scripts. Re-running scripts wastes time and resources and hurts the analysis, because a test team pressed for time tends to cut corners.

Creating the Stress Scripts

Once the required information has been gathered and the test environment prepared, the next step in the performance/stress analysis is to create stress scripts that accurately simulate the expected site traffic.
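The expected usage pattern above can be turned into concrete per-scenario user counts when sizing such scripts. The following is a minimal sketch; the function name and scenario labels are invented for the illustration, but the 80/20 split and the even division of buyers come from the text:

```python
def scenario_counts(total_users):
    """Split a simulated user population according to the expected
    consolidatedretail.com usage pattern: 80% browse only, 20% buy,
    with buyers divided evenly between returning and new shoppers."""
    browsers = int(total_users * 0.80)
    buyers = total_users - browsers
    returning = buyers // 2        # registered users who sign in
    new_shoppers = buyers - returning  # must register before paying
    return {
        "browse_only": browsers,
        "buy_returning": returning,
        "buy_new": new_shoppers,
    }

print(scenario_counts(1000))
# → {'browse_only': 800, 'buy_returning': 100, 'buy_new': 100}
```

In a real run these counts would map to the number of virtual-user threads assigned to each stress script.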
This can be done using historical data from the current version of the site, or projections obtained from marketing and business analysts. To generate reliable stress scripts, consider the factors discussed in the following sections.

Create Multiple Scripts to Handle Multiple Scenarios

Avoid using a single script to cover all scenarios. One huge script makes it difficult to isolate the specific scenario that is slowing the whole run. For example, to simulate a typical e-commerce site you might need one scenario in which a user browses the catalog and products, another in which a user adds products to the shopping basket and pays, and a third in which a user searches for products. If the test team creates three separate scripts, it can stress-test each one individually to find the bottleneck in each user scenario, or run them together to simulate mixed traffic.

Avoid Record-and-Playback

Static websites are a thing of the past. Most sites today, especially e-commerce destinations, serve fully dynamic content. For this reason you cannot simulate a site simply by recording and replaying basic GET and POST commands. You may need to customize items that the site generates automatically, such as shopper IDs, basket IDs, order IDs, and GUIDs. Many test tools can capture dynamically changing variables for each thread (virtual user), but you should verify the script results to make sure these variables are generated correctly. Many test tools can also import data from .csv or .txt files. This feature lets you export the product and category lists from the SQL database and use the file to make the script dynamic, so that the script does not hit the same product repeatedly. The WAS tool can create a set of variables for your scripts; for example, user names, passwords, products, and categories can all be driven by variables.
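The idea of per-thread dynamic values and data imported from an exported file can be sketched as follows. This is an illustrative Python sketch only (WAS itself is configured through its UI); the sample CSV contents and function names are invented:

```python
import csv
import io
import random
import uuid

# In practice this product list would be exported from the site's SQL
# catalog database to a .csv file; the inline sample stands in for it.
SAMPLE_EXPORT = """sku,category
101,books
102,music
103,games
"""

def load_products(csv_text):
    """Parse the exported product list so each virtual user can pick
    a different item instead of replaying one hard-coded product."""
    return [row["sku"] for row in csv.DictReader(io.StringIO(csv_text))]

def new_session(products, rng=random):
    """Per-thread dynamic values: a fresh GUID-style shopper ID plus a
    product chosen from the export, mimicking values the site would
    otherwise generate itself."""
    return {
        "shopper_id": str(uuid.uuid4()),  # unique per virtual user
        "sku": rng.choice(products),
    }

products = load_products(SAMPLE_EXPORT)
session = new_session(products)
print(session["sku"] in products)  # → True
```

Each virtual-user thread would call something like `new_session` at the start of its scenario, so no two threads replay identical shopper or product values.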
Verify the Operation of the Stress Tool

Before running extensive tests, first verify that the stress tool exercises the site exactly as a real user would. To do this, you must understand what each ASP page accesses and executes on the IIS server and on SQL Server. The IIS server log files and the SQL Profiler/Trace files are excellent resources for tracing the ASP pages. A more precise approach is to walk through the entire site with a browser and record all the SQL commands and stored procedures invoked by each page.
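As an illustration of this kind of log-based verification, the sketch below counts requests per page from a few hypothetical W3C-format IIS log lines. The field layout shown is a common W3C Extended configuration, but the file contents and page names here are invented for the example:

```python
from collections import Counter

# Hypothetical W3C extended log lines (fields: date time c-ip
# cs-method cs-uri-stem sc-status), standing in for a real IIS log.
SAMPLE_LOG = """#Fields: date time c-ip cs-method cs-uri-stem sc-status
2001-05-01 10:00:01 10.0.0.5 GET /default.pasp 200
2001-05-01 10:00:02 10.0.0.5 GET /catalog.pasp 200
2001-05-01 10:00:03 10.0.0.5 GET /product.pasp 200
2001-05-01 10:00:04 10.0.0.5 POST /basket.pasp 200
"""

def page_hits(log_text):
    """Count requests per URI so a single-user replay can be checked
    against the pages the script was supposed to exercise."""
    hits = Counter()
    for line in log_text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip W3C header/comment lines
        fields = line.split()
        hits[fields[4]] += 1  # cs-uri-stem is the fifth field here
    return hits

hits = page_hits(SAMPLE_LOG)
print(hits["/catalog.pasp"])  # → 1
```

Comparing such counts against the list of pages each scenario is meant to hit confirms that the stress tool really walked the intended path.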

Note also that every piece of web content exercised by the stress script (such as GIF, XML, ASP, and HTML files) will appear in the IIS log. You can then play the script for a single user running a single scenario, and verify that the SQL trace file and the IIS log record the corresponding server-side activity.

Performing the Performance/Stress Tests

By this point you should have prepared the server environment that runs the application and the client machines that generate the simulated load. The third step in the performance/stress analysis is to run the scripts and perform the stress tests. The following subsections outline the key points.

Quick-Test the Site

Run quick tests to determine the number of clients and threads at which the application's system bottlenecks appear. Microsoft recommends running a small number of threads on several clients rather than many threads on a single client. "Quick tests" here means a series of short stress runs used to find the optimal ratio of clients to virtual-user threads: the ratio at which performance degradation or bottlenecks occur on the servers, not on the stress clients.

Start Collecting Performance Data

Once you have the right ratio of clients to threads, start System Monitor on all servers, enable the relevant counters, and begin the test. For tests lasting about 30 minutes, a sample interval of 15 or 30 seconds works well; for longer tests, an interval of 60 to 300 seconds keeps the log files small.

Reset the IIS Logs

Clearing the IIS logs before starting a test makes data analysis easier. Stop the IIS Administration Service (IISADMIN) by using the Iisreset command or the Net Stop iisadmin /y command. Then delete the IIS log files in C:\WinNT\System32\LogFiles and restart the web service with the Net Start W3SVC command.
Clear the Windows Event Logs

Clearing the Windows system, security, and application event logs before the run makes it possible to identify any unusual error messages the site generates during the stress test.

Configure and Start SQL Profiler

On SQL Server, start SQL Profiler/Trace and add only the T-SQL and Locks events. This shows all SQL commands and stored procedures, along with reads, writes, command durations, and any deadlocks. Note that SQL trace files grow quickly during a test, so collect no more than about 30 minutes of trace at a time. For longer tests, Microsoft recommends running SQL Profiler for 30-minute windows at the beginning, middle, and end of the test. If another SQL Server is available, you can configure the trace to record to a database instead of a file.

Create a Controlled Environment

Whenever possible, run stress tests when there is no other activity on the IIS cluster or on SQL Server. A controlled environment ensures that no page views, network traffic, or load arrives from anywhere other than your stress clients, so no anomalous error messages contaminate the results.

Analyzing the Results

After the tests have run and produced test data, the analysis phase begins. First verify that the stress test simulated the intended load successfully, then perform the full data analysis. The process is described briefly in the following sections.

Stop the Simulation and Gather the Data

Stop the stress scripts, System Monitor, and SQL Profiler/Trace on all clients and servers in the test environment. Save the System Monitor logs, SQL trace files, IIS logs, and Windows event logs in separate directories so the test data can be archived and organized.

Check the Windows Event Logs

Review the Windows event logs and make sure your stress scripts generated no unexpected messages; errors that occur as a direct result of the stress load are acceptable, but they should be recorded as part of the test results.
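One way to verify that the simulation ran as intended is to compare the page-view mix recorded during the run with the expected usage pattern. The sketch below is illustrative; the function, the 5 percent tolerance, and the sample totals are assumptions, while the 80/20 target mix comes from the text:

```python
def check_distribution(observed, expected, tolerance=0.05):
    """Compare the scenario mix recorded during the stress run with
    the expected usage pattern; return the scenarios whose observed
    share drifted more than `tolerance` from their target share."""
    total = sum(observed.values())
    drifted = {}
    for scenario, target in expected.items():
        share = observed.get(scenario, 0) / total
        if abs(share - target) > tolerance:
            drifted[scenario] = round(share, 3)
    return drifted

# Target mix for consolidatedretail.com: 80% browsing, 20% purchasing.
expected = {"browse": 0.80, "purchase": 0.20}
observed = {"browse": 7890, "purchase": 2110}  # hypothetical run totals
print(check_distribution(observed, expected))  # → {} (within tolerance)
```

An empty result means the run matched the target mix; any entries returned would name the scenarios whose traffic share needs investigating before the rest of the analysis proceeds.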
Analyze the Performance Monitor Data

The System Monitor data helps identify metrics such as system CPU usage, memory usage, disk queue lengths, and the W3SVC counters.
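A first pass over such counter data often just summarizes CPU load and flags sustained saturation. The sketch below is a hypothetical illustration: the CSV layout, column names, sample values, and the 85 percent busy threshold are all assumptions, not part of the reference architecture:

```python
import csv
import io

# Hypothetical export of a System Monitor log to CSV; real counter
# paths look like "\\SERVER\Processor(_Total)\% Processor Time".
SAMPLE_CSV = """time,cpu_pct,asp_requests_queued
10:00:00,62,0
10:00:15,88,3
10:00:30,91,7
10:00:45,73,1
"""

def summarize_cpu(csv_text, busy_threshold=85.0):
    """Report average and peak CPU plus the number of samples over the
    threshold; sustained readings above the threshold suggest the
    processor, rather than memory or disk, is the bottleneck."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r["cpu_pct"]) for r in rows]
    return {
        "avg": sum(values) / len(values),
        "peak": max(values),
        "busy_samples": sum(v > busy_threshold for v in values),
    }

print(summarize_cpu(SAMPLE_CSV))
# avg = (62 + 88 + 91 + 73) / 4 = 78.5; peak = 91; busy_samples = 2
```

The same pattern extends to the other counters the text mentions, such as memory usage, disk queue lengths, and the W3SVC counters.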

