Tomcat Test (reposted from TheServerSide)


http://www.theserverside.com/reviews/thread.tss?thread_id=18243

Jakarta Tomcat Performance Benchmark

Introduction

Jakarta Tomcat is an open-source application server produced by the Apache Software Foundation. Tomcat is the reference implementation for the Java Servlet and JavaServer Pages technologies. For more information, see http://jakarta.apache.org/tomcat. My company was using the Tomcat servlet engine for a pilot project and experiencing great success. The pilot project involved a web application that had 5-10 concurrent users, but we were uncertain whether Tomcat could scale to a larger number of concurrent users. Many of our other web applications are implemented using one of the most popular commercial application servers on the market (which we will call "Commercial J2EE App Server" for the remainder of this document). I was very interested to see how Tomcat would compare in a production environment. I searched the Internet for Tomcat performance or scalability benchmarks, but the results were very limited. Therefore, I decided to produce this benchmark. The purpose of this benchmark was to determine the production viability of Tomcat as a servlet engine. The benchmark results are compared with "Commercial J2EE App Server".

The goal was to measure the relative response times of the two application servers, rather than trying to obtain the best absolute response time. Only the servlet engine components are being compared (i.e., no plug-ins for HTTP servers, etc.). To determine production viability, we need to investigate both response times and scalability. The response time is what a single user would observe, and it is expected to be sub-second. Scalability is determined by how consistent the response times remain as additional concurrent users are added. As the number of concurrent users is increased, if the response time or the number of errors increases, then this indicates poor scalability. Response time can be defined as the length of time that a user must wait from the instant that they submit a request to the instant that they view the response from that request. Response time can be divided further into components such as network time and server processing time. We were interested in the full response time measurements for this benchmark, therefore none of the separate response time components were measured.

Configurations

Test Client
- Windows 2000
- Pentium 3 800 MHz
- 512 MB RAM
- The Grinder v2.8.3 (Sun JRE 1.3.1)

The test client was the machine that generated the requests to the servlet engine. The test client used The Grinder load-testing framework to create and execute test scripts, parameterize values, and simulate a variable number of concurrent users and test cycles. For more information about The Grinder, see http://grinder.sourceforge.net. The "grinder.properties" file that was used for this benchmark is included in Figure 1.

Figure 1 - grinder.properties file
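The Figure 1 image did not survive this repost. Purely as a stand-in, the following is a minimal sketch of a Grinder 2.x-style configuration for a test like this one; the thread counts, host name, and test URL are assumptions, not the file the author actually used.

    # Hypothetical grinder.properties sketch (Grinder 2.x style); all values assumed.
    # Worker processes, threads per process, and cycles per thread.
    grinder.processes=1
    grinder.threads=40
    grinder.cycles=10

    # Use the bundled HTTP plugin to issue the requests.
    grinder.plugin=net.grinder.plugin.http.HttpPlugin

    # A single test hitting the search servlet (URL is made up).
    grinder.test0.description=Search request
    grinder.test0.parameter.url=http://appserver:8080/search?q=tomcat

Multiplying grinder.processes by grinder.threads gives the number of simulated concurrent users, which is how a benchmark like this one steps the load from a handful of users up past the saturation point.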

The application that was used for testing in this benchmark was a search engine that is based on the Jakarta Lucene search API, and is composed of a servlet, JSPs, and XML configuration files (a minimal illustrative sketch of such a servlet appears at the end of this article).

Server Configuration 1
- Windows 2000
- Pentium 3 1000 MHz
- 512 MB RAM
- Commercial J2EE App Server (IBM JRE 1.3.0)

Server Configuration 2
- Windows 2000
- Pentium 3 1000 MHz
- 512 MB RAM
- Jakarta Tomcat v4.1.18 (Sun JRE 1.3.1)

Results

Response Time

The response time was sub-second and fairly consistent for both "Commercial J2EE App Server" and Tomcat, until a certain threshold of concurrent users was reached. The response time began to degrade at approximately 40 concurrent users. This number should not be considered a limitation of either "Commercial J2EE App Server" or Tomcat, except in this particular hardware and software configuration. The important facts to note are that both servlet engines provided similar, consistent sub-second response times until that threshold was reached (although "Commercial J2EE App Server" was slightly faster), and that both servlet engines began suffering from overloading at approximately the same concurrent user level. Figure 3 apparently demonstrates that the performance was consistent from 100 to 200 threads, but in fact the response times were only similar because there were a large number of errors, and the total number of requests that were serviced with 200 threads was less than with 100 threads.

Figure 2 - Response time table
Figure 3 - Response time chart

Errors

All of the errors reported were "Connection Refused" errors.
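The article does not say how either server's connectors were tuned, but "Connection refused" under load is typically what a client sees once a servlet engine's request-processing threads are saturated and the TCP accept backlog overflows. On Tomcat 4.1 those limits are set on the HTTP connector in server.xml; the fragment below shows the relevant attributes with illustrative stock values, not the benchmark's actual settings.

    <!-- Hypothetical Tomcat 4.1 server.xml fragment; values are illustrative.
         minProcessors: request-processing threads created at startup.
         maxProcessors: hard cap on request-processing threads.
         acceptCount:   TCP connection backlog once all processors are busy;
                        connections beyond this point are refused. -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
               port="8080" minProcessors="5" maxProcessors="75"
               acceptCount="100" connectionTimeout="20000"/>

Raising maxProcessors and acceptCount moves the point at which refusals begin, at the cost of more memory and thread-switching overhead; that both servers hit roughly the same wall on identical hardware suggests the bottleneck was the machine rather than either engine.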
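Finally, as promised above, here is a minimal sketch of a Lucene-based search servlet in the style of the era (Servlet 2.3, Lucene 1.x) to make the test workload concrete. The class name, index path, request parameter, and JSP path are invented for illustration; the benchmark's actual application was not published.

    import java.io.IOException;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.ParseException;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    // Hypothetical search servlet; names and paths are illustrative only.
    public class SearchServlet extends HttpServlet {

        private IndexSearcher searcher;

        public void init() throws ServletException {
            try {
                // Open the Lucene index once at startup; the path is assumed.
                searcher = new IndexSearcher("/data/lucene-index");
            } catch (IOException e) {
                throw new ServletException("Cannot open index: " + e);
            }
        }

        protected void doGet(HttpServletRequest request,
                             HttpServletResponse response)
                throws ServletException, IOException {
            try {
                // Parse the user's query against the "contents" field.
                Query query = QueryParser.parse(request.getParameter("q"),
                                                "contents",
                                                new StandardAnalyzer());
                Hits hits = searcher.search(query);

                // Hand the results to a JSP for rendering.
                request.setAttribute("hits", hits);
                getServletContext().getRequestDispatcher("/results.jsp")
                                   .forward(request, response);
            } catch (ParseException e) {
                throw new ServletException("Bad query: " + e);
            }
        }
    }

Each benchmark request therefore exercises the servlet container, a Lucene index search, and JSP rendering, which is a reasonable stand-in for a small production web application.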

