End-to-End Testing of IT Architectures and Applications


Introduction

Until now, standard industry test practice (developed to assure the quality of C/S architectures) has focused on front-end functional testing of the client and back-end scalability and performance testing of the server. This separation made sense for the traditional client/server (C/S) architecture, which is far simpler than today's multi-tier architectures and distributed environments: in a standard C/S architecture, a problem occurs either on the client or on the server.

Today, a typical computing environment is a complex, heterogeneous mix whose components and code come from legacy systems, in-house development, third parties, or standard off-the-shelf products (see Figure 1). With the growth of the Web, architectural complexity has increased further: there is typically a content layer between one or more back-end databases and the user-facing presentation layer. This content layer can supply content from multiple services to the presentation layer, and may include some of the business logic that in a conventional C/S architecture lived on the front end.

This growing complexity, combined with the integration of legacy systems and cutting-edge technology, makes software and system problems (functional problems as well as scalability and performance problems) harder to describe, analyze, and isolate, and turns them into a major challenge in the development and release process. In addition, as SOAP/XML (Simple Object Access Protocol / Extensible Markup Language) becomes the standard data transfer format, the correctness of XML data content is increasingly important on both the .NET and J2EE platforms. Simply put, the complexity of the architecture and the computing environment has made the original C/S testing model obsolete.

Figure 1: A typical multi-tier architecture today

An overall quality strategy

Clearly, a new, effective quality strategy is necessary for successful software development and deployment. The most effective strategy combines testing of individual components with overall testing of the environment. In this strategy, both the component level and the system level must include functional testing to ensure data integrity, as well as scalability and performance testing to ensure acceptable response times under varying system loads.

For evaluating performance and scalability, these parallel analyses help you find the strengths and weaknesses of the system architecture and determine which components must be examined when resolving performance and scalability problems. A similar functional test strategy (that is, data integrity verification everywhere) is increasingly critical, because today's data may come from dispersed data sources. By evaluating data integrity inside and outside each component (including any data transformations along the way), testers can locate each potential error and make the isolation of integration defects part of the standard development process. End-to-end architecture testing refers to exactly this concept: testing all access points in the environment, and integrating functional and performance testing at both the component level and the system level (see Figure 2).

In a sense, end-to-end architecture testing is essentially "gray box" testing, an approach that combines the strengths of white box and black box testing. In white box testing, testers have access to, and a sufficient understanding of, the underlying system components. Although white box testing provides very detailed and valuable results, it falls short when it comes to verifying integration and overall system performance. In contrast, black box testing requires little or no knowledge of the system's internal workings and instead focuses on the end user: ensuring that users get correct results in a timely manner. Black box testing usually cannot pinpoint the cause of a problem, nor can it guarantee that a given piece of code has been executed, runs efficiently, and is free of memory leaks and similar problems. By grafting white box techniques onto black box testing, end-to-end architecture testing truly gets the best of both.

Figure 2: End-to-end architecture testing covers functional and performance testing at all access points

For scalability and performance testing, the access points include the hardware, operating systems, application databases, and network. For functional testing, the access points include the front-end client, the middle tier, the content sources, and the back-end databases. In this context, the term "architecture" is defined by how the components in an environment interact with one another and with users. The strengths and weaknesses of the environment derive not only from its individual components but also from the particular architecture that binds them together. It is exactly this uncertainty about how the architecture will respond to the demands placed on it that makes end-to-end architecture testing necessary.

To implement end-to-end architecture testing effectively, RTTS has developed a successful risk-based test automation methodology. The Test Automation Process (TAP) is based on years of successful testing practice and uses best-of-breed automated test tools. It is an iterative method comprising five phases:

1. Project assessment
2. Test plan creation and refinement
3. Test case development
4. Test automation, execution, and tracking
5. Test results evaluation

The individual functional and performance tests required for end-to-end architecture testing are performed in the "test automation, execution, and tracking" phase. As shown in Figure 3, this phase is repeated continually, and the corresponding tests are refined during each iteration.

Figure 3: End-to-end testing within the RTTS Test Automation Process (TAP)

Component-level testing

Obviously, individual components must be developed before a working system can be "assembled" from them. Because components can be tested early, end-to-end testing under TAP begins with component testing. In component testing, as the environment is built up, appropriate tests are applied to each of the different components individually. Both functional and performance testing are quite valuable in the component test phase, helping to diagnose a variety of defects before the whole environment is assembled.

Functional testing

Component-level functional testing verifies the transactions performed by each component. This includes verification of any data transformations required by the component or system, and of the business logic of the transactions the component processes. As application functionality is developed, infrastructure testing verifies and quantifies the data traffic across the entire environment, exercising both functionality and performance along the way. Data integrity must be verified whenever data passes between components. For example, XML testing verifies the XML data content of transactions and, when needed, verifies the formal XML structure (the metadata structure). For component testing, automated extensible test tools such as IBM Rational Robot can greatly reduce the time and effort required for functional testing of both GUI and non-GUI components. Rational Robot's scripting language supports calls to external COM DLLs, making it well suited to non-GUI objects. In addition, the new Web and Java testing features included with Rational Suite TestStudio and Rational TeamTest provide additional capabilities for testing J2EE architectures and for writing test scripts in Java.
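The kind of XML content verification described above can be sketched in a few lines. This is a minimal illustration, not the article's tooling: the order/item document shape, field names, and `validate_order_response` helper are all hypothetical, and a real harness would capture the response from the component under test rather than a literal string.

```python
# Sketch of a component-level XML content check (hypothetical order service).
# Parsing verifies the formal structure; the field comparisons verify data content.
import xml.etree.ElementTree as ET

def validate_order_response(xml_text, expected_sku, expected_qty):
    """Return a list of discrepancies; an empty list means the content is valid."""
    errors = []
    root = ET.fromstring(xml_text)  # structural check: must be well-formed XML
    sku = root.findtext("item/sku")
    qty = root.findtext("item/quantity")
    if sku != expected_sku:
        errors.append(f"sku: expected {expected_sku}, got {sku}")
    if qty is None or int(qty) != expected_qty:
        errors.append(f"quantity: expected {expected_qty}, got {qty}")
    return errors

# Simulated response captured from the component under test.
response = "<order><item><sku>BK-1001</sku><quantity>2</quantity></item></order>"
issues = validate_order_response(response, "BK-1001", 2)
```

In practice the expected values would come from the test data that drove the transaction, so each discrepancy points directly at a data-integrity defect in the component.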

Scalability and performance testing

In parallel with these functional efforts, component-level scalability testing exercises a component in the environment to determine its transaction limits (or capacity). Once enough application functionality exists to create a business-relevant transaction, transaction characterization testing is used to quantify that transaction, including the bandwidth it consumes and its back-end CPU and memory usage. Resource testing extends this concept to multi-user testing, to determine the total resource consumption of applications and of subsystems or modules. Finally, configuration testing can determine which changes to the hardware, operating system, software, network, database, or other configuration will optimize the system. As with functional testing, effective automated tools such as Rational Suite TestStudio and Rational TeamTest can greatly simplify scalability and performance testing. Here, the ability to create, schedule, and drive multi-user tests and to monitor resource utilization is the key to completing resource testing, transaction characterization testing, and configuration testing efficiently and successfully.

System-level testing

Once the system has been "assembled", overall testing of the environment can begin. Again, end-to-end architecture testing requires verifying both the functionality and the performance/scalability of the entire environment.
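The core of transaction characterization, timing a single business transaction repeatedly and recording per-transaction figures that can later be scaled up to multi-user resource tests, can be sketched as follows. The `characterize` helper and the placeholder transaction body are illustrative assumptions; real measurements would also capture bandwidth, CPU, and memory, which a commercial tool collects for you.

```python
# Sketch of transaction characterization: run one transaction several times
# and report simple timing figures for it. The transaction body is a placeholder
# for a real business transaction driven against the system under test.
import time

def characterize(transaction, runs=5):
    timings_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return {
        "runs": runs,
        "avg_ms": sum(timings_ms) / runs,
        "max_ms": max(timings_ms),
    }

# Placeholder workload standing in for a real transaction.
profile = characterize(lambda: sum(range(10000)))
```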

System-level functional testing

Integration is a top priority here. Integration testing checks, from the data perspective, whether the system as a whole has been integrated correctly. That is: do the hardware and software components communicate with one another properly, and if so, is the data passed between them correct? Where possible, data should be accessed and verified at the intermediate stages of its journey between system components: for example, when data is written to a temporary database, or while it sits in a message queue before being processed by the target component. Accessing data at these component boundaries provides an important additional measure of data integrity and of data problems. If a data error is detected between two data transfer points, the defective component must lie between those two points.
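A boundary check of this kind amounts to comparing the record one component emitted with what actually landed at the next transfer point. A minimal sketch, with illustrative field names (the order record and `diff_records` helper are not from the article):

```python
# Sketch: verify data integrity between two components by diffing the record
# a source component sent against what landed in a staging store or queue.

def diff_records(source, staged, fields):
    """Return the fields whose values differ between the two transfer points."""
    return {f: (source.get(f), staged.get(f))
            for f in fields
            if source.get(f) != staged.get(f)}

sent   = {"order_id": 42, "total": "19.99", "currency": "USD"}
landed = {"order_id": 42, "total": "19.99", "currency": "usd"}  # mangled in transit

mismatches = diff_records(sent, landed, ["order_id", "total", "currency"])
```

A non-empty result isolates the defect to whatever sits between the two transfer points, which is exactly the diagnostic value the article attributes to verifying data at component boundaries.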

System-level scalability and performance testing

By creating tests that answer questions like the following, system-level scalability and performance testing verifies the scalability and readiness of the environment:

How many users can access the system simultaneously while response times remain acceptable?

Does our high-availability architecture work as designed?

What happens when a new application is added, or an application already in use is updated?

How should the system be configured to support the load we expect at launch? After six months? After a year?

We can only deliver part of the functionality so far: is the design sound?

The answers to these questions can be obtained through a set of testing techniques that includes scalability/load testing, performance testing, configuration testing, concurrency testing, stress and volume testing, reliability testing, and failover testing.

Overall environment testing usually starts with scalability/load testing to establish system capacity. This method gradually increases the load on the target environment until some performance requirement, such as a maximum response time, is violated or a specific resource is exhausted. The purpose of these tests is to determine the upper limits of transaction and user capacity; they are often combined with other test methods to optimize system performance.

Performance testing is related to scalability/load testing: it determines whether the environment meets requirements under a given load and transaction mix by testing specific business scenarios (Figure 4).

Parallel to component-level configuration testing, system-level configuration testing provides comparative information about specific hardware and software settings, along with the metrics and other information needed for resource allocation.

Figure 4: Performance testing: can the system handle a specific user load?

Concurrency testing (Figure 5) analyzes the effects of multiple users accessing the same application code, the same module, or the same database record. It identifies and measures the level of locking and deadlocking in the system and the use of single-threaded code. Technically, concurrency testing can be classified as functional testing, but it is often performed together with scalability/load testing because it requires multiple users or virtual users to drive the system.
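The defect class concurrency testing hunts for, for example lost updates when several users hit the same record, can be demonstrated in miniature with threads. This is a sketch of the principle, not a database test: the "record" is just a dictionary, and the lock plays the role of the database's own locking.

```python
# Sketch of a concurrency check: several workers increment the same "record".
# With proper locking the final count is exact; without it, the unsafe
# read-modify-write can lose updates, which is the kind of defect
# concurrency testing is designed to expose.
import threading

def run_concurrent_updates(workers, increments, lock=None):
    record = {"count": 0}

    def update():
        for _ in range(increments):
            if lock is not None:
                with lock:
                    record["count"] += 1  # protected read-modify-write
            else:
                record["count"] += 1      # unsafe: updates may be lost

    threads = [threading.Thread(target=update) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return record["count"]

safe_total = run_concurrent_updates(8, 10_000, lock=threading.Lock())
```

A concurrency test asserts the invariant (here, the exact total) while multiple drivers run; a violated invariant points at locking or single-threaded-code problems.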

Figure 5: Concurrency testing can identify deadlocks and other concurrent-access problems

Stress testing (Figure 6) exercises the system when it has reached saturation of resources such as CPU and memory, to determine whether its behavior changes and whether it has adverse effects on the system, applications, or data. Volume testing is related to stress testing and scalability testing; it determines the transaction volume the system as a whole can process. Together, stress and volume testing show whether the system remains resilient when handling bursts of increased activity or sustained high-volume processing, without failures caused by problems such as memory leaks or overflowing queues.

Figure 6: Stress testing can determine the effects of high-volume use

Once the application environment is up and running and performance has been optimized, long-duration reliability tests at 75% to 90% of environment capacity can uncover any problems associated with extended run times. In environments with redundancy and load balancing, failover testing (Figure 7) analyzes the theoretical failover process, then tests and measures the actual failover process and its impact on end users. In essence, failover testing answers the question: "If a specific component fails while the system is running, can users continue to access and process data with minimal interruption?"

Figure 7: Failover testing: what happens if component X fails?

Finally, if the environment uses third-party software, or components provided by hosting vendors and other external sources, SLA (Service Level Agreement) testing can be used to verify the terms agreed by both parties, such as end-user response times and the volume of data flowing in and out. A typical agreement specifies the activity volume and a specific maximum response time within a given time range.
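Checking measured response times against such an agreement is mechanical once the timings are captured. A minimal sketch, assuming an illustrative SLA of "95th-percentile response time under 2 seconds" (the percentile, ceiling, and nearest-rank method are all assumptions, since real contracts vary):

```python
# Sketch of an SLA check: verify that the agreed percentile of measured
# response times stays under the contractual maximum.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_sla(samples, pct=95, max_ms=2000):
    return percentile(samples, pct) <= max_ms

# Response times (ms) captured during an SLA test run.
timings = [180, 220, 250, 300, 340, 400, 450, 600, 900, 1900]
ok = meets_sla(timings)
```

Run continuously against external sources, as the article suggests next, the same check becomes a monitor that flags SLA violations as they happen.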

Once external data or software is in place, continuously monitoring these sources is a wise practice, so that remedial action can be taken as soon as a problem occurs, minimizing the impact on end users.

As with component-level scalability testing, Rational Suite TestStudio, Rational TeamTest, and similar tools provide the advanced multi-user testing capabilities needed to perform most or all of the performance testing described above efficiently.

A practical example

Perhaps the best way to explain all this is with a concrete example. Consider the following situation:

An eRetailer builds a public Web bookstore that uses four Web services provided by the content layer. The first service provides the catalog, including book titles, descriptions, and authors. The second provides current inventory information for all products. The third is the price server, which provides product pricing, supplies shipping cost and tax information based on the purchaser's location, and completes the transaction. The last service stores user profiles and purchase histories.

The presentation layer converts user input from the graphical UI into XML and transmits it to the appropriate content server. The response XML is then converted back to HTML by the presentation layer and served to the user session. The content-layer services update one another as needed (see Figure 8). For example, when a user's purchase history changes, the price server must update the corresponding user profile service.

Figure 8: Access points of a typical eRetailer application

For such a system, the starting point of an end-to-end test strategy is to apply functional testing and scalability/load testing to each service of the content layer. XML requests are submitted to each content service, and the corresponding XML response documents are captured so that their data content and response times can be evaluated. As these content services are integrated into the system, functional and scalability/load testing can be applied to the integrated system by submitting transactions to the Web server. Transactions can then be verified across the whole site, both for functional testing (using SQL queries) and for scalability/load testing.
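For the price server in this example, "submit an XML request and evaluate the response content" might look like the sketch below. Everything here is assumed for illustration: the request/response document shapes, the unit price, the 8% tax for a hypothetical "NY" region, the flat shipping charge, and the `fake_price_service` stand-in for the real service endpoint.

```python
# Sketch of a functional test against the hypothetical price service:
# build an XML request, capture the XML response, and verify the returned
# total against the pricing rules the service is supposed to apply.
import xml.etree.ElementTree as ET

def build_price_request(sku, qty, region):
    return (f"<priceRequest><sku>{sku}</sku><qty>{qty}</qty>"
            f"<region>{region}</region></priceRequest>")

def fake_price_service(request_xml):
    # Stand-in for the real content service: assumed unit price 10.00,
    # 8% tax for region "NY", flat 4.00 shipping.
    req = ET.fromstring(request_xml)
    qty = int(req.findtext("qty"))
    subtotal = 10.00 * qty
    tax = round(subtotal * 0.08, 2) if req.findtext("region") == "NY" else 0.0
    total = round(subtotal + tax + 4.00, 2)
    return f"<priceResponse><total>{total}</total></priceResponse>"

def expected_total(qty, region):
    # The oracle: recompute what the service should have returned.
    subtotal = 10.00 * qty
    tax = round(subtotal * 0.08, 2) if region == "NY" else 0.0
    return round(subtotal + tax + 4.00, 2)

resp = fake_price_service(build_price_request("BK-1001", 2, "NY"))
returned = float(ET.fromstring(resp).findtext("total"))
```

Pointing the same harness at the Web server instead of the individual service turns this component test into the system-level transaction verification described above.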

During system development, the individual tests applied at all access points can be used to tune the various services so that they function properly within the overall system, both in terms of data content (functionality) and performance (scalability). When a problem is found at the front end (for example, through a browser), the test cases and data used to test the individual components help us quickly locate the source of the error.

The advantages of network modeling

If modeling is part of the design process, whether in the first test phase or before hardware acquisition, end-to-end testing can be extended with models of different network architectures, because modeling helps produce designs that are more efficient and less error-prone. Modeling the network infrastructure before deployment can help point out performance bottlenecks, as well as errors in routing tables and configurations. In addition, the application transaction profiles captured during testing can be fed into the model to identify and isolate potential problems both in the application's "chattiness" and in the infrastructure.

Conclusion

End-to-end testing tests and analyzes the computing environment from an overall quality perspective. The scalability and functionality of each component are tested both individually and as integrated, during development and in subsequent quality assessments. This provides diagnostic information during development, and a high degree of quality assurance for the release of the system. End-to-end testing provides a comprehensive and reliable solution for managing the complexity of today's architectures and distributed computing environments.

Of course, since it involves a great deal of testing and analysis, end-to-end testing requires considerable expertise and experience to organize, manage, and execute. From a business perspective, however, organizations that apply end-to-end testing gain high confidence in their application functionality, system performance, and reliability. In the end, these organizations benefit from the resulting improvement in quality: better customer relationships, lower operating costs, and strong revenue growth.

For the past six years, RTTS, one of IBM Rational's partners, has been developing and refining its end-to-end test methodology, working with hundreds of customers to ensure the functionality, reliability, scalability, and network performance of their applications. Visit the RTTS Web site at www.rttsweb.com.


About the authors

Jeffrey Bocarsly is Manager of the Automated Functional Testing Division at RTTS. Jonathan Harris is Manager of the Scalability Testing Division at RTTS. Bill Hayduk is Director of Professional Services at RTTS.

