[ZT] Apusic Application Server Performance Tuning


From: Kingdee Middleware. Published: 2003-04-07.

As the runtime platform for enterprise applications, the performance of the Apusic Application Server is very important. When an application has demanding performance requirements, consider whether the system's default settings need to be changed to improve server performance. First, consider whether the hardware environment (CPU speed, memory size, disk speed, network transmission rate, and so on) can meet the application's needs. For complex, large-scale distributed enterprise applications, the hardware should not merely meet the Apusic Application Server's minimum requirements; the configuration should leave headroom for the application, for example keeping CPU and memory utilization below 80%. Beyond the hardware, adjusting the configuration of the software environment is also very significant. This article describes how to optimize the software environment configuration to improve system performance, in two parts: performance tuning of the Java Virtual Machine (JVM) and optimization of the Apusic Application Server configuration.

Performance tuning of the Java Virtual Machine (JVM)

The Java platform is composed of four distinct but interconnected parts: the language itself, the class file format, the Java API libraries, and the JVM. When a Java program is executed, source code written in the Java language is compiled into the class file format and run in the JVM; at the same time, the program calls the Java API libraries to access system resources. The JVM and the Java API libraries together form the compilation and runtime environment, commonly called the Java platform. The JVM is stack based, unlike register-based architectures such as most assembly languages: it is an abstract computer architecture built around a dynamic stack, providing push and pop operations to manipulate data. The main functions of the JVM are loading class files and executing bytecode.

The work of a Java platform implementation falls into four parts:
1. Bytecode execution: the JVM spends roughly half its time interpreting bytecode.
2. Garbage collection.
3. Thread management.
4. Dynamic operations: class loading, binding checks, security checks, dynamic class loading, exception handling, reflection, and native method calls.

Among these, garbage collection of objects takes up part of the run time and causes short pauses in the program.

The Apusic server is started from the command line, so command-line parameters can be set when launching it. The main purposes of these parameters are to select the JVM type and to control the JVM's heap allocation policy at run time.

Using HotSpot

The HotSpot JVM, an add-on module for the Java 2 SDK, uses state-of-the-art techniques to greatly improve system performance:
1. Adaptive compilation: the HotSpot JVM analyzes the program while it runs to find performance bottlenecks ("hot spots"), then compiles those performance-critical parts to native code.
2. Improved garbage collection.
3. Optimized thread synchronization.

The HotSpot JVM uses two machine words as an object's header rather than three as most JVMs do, which saves about 10% of memory and speeds up scanning of all objects. HotSpot also abandons the concept of handles: object references are implemented as direct pointers, which reduces memory use and improves processing speed. Accessing a variable is as efficient as in C.
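To make the stack-based execution model described above concrete, here is a small illustration (an added example, not from the original article): a trivial addition method, compiled with javac and disassembled with javap, whose bytecode works purely by pushing operands onto the operand stack and popping them again.

// Add.java
public class Add {
    int add(int a, int b) {
        return a + b;
    }
}

Compile and disassemble it:

javac Add.java
javap -c Add

For the add method, javap prints bytecode along these lines:

iload_1    // push the first argument onto the operand stack
iload_2    // push the second argument
iadd       // pop both values, push their sum
ireturn    // pop the sum and return it

There are no register operands; every instruction reads from and writes to the top of the operand stack, which is what "stack based" means in the description above.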

You can download the Java HotSpot Server VM 2.0 from http://java.sun.com/products/Hotspot/2.0/download.html; it needs to be installed separately for the JDK and the JRE. The HotSpot JVM comes in Client and Server editions, optimized for typical client applications and server applications respectively. JDK 1.3 already contains the Java HotSpot Client VM after installation; once the Java HotSpot Server VM is installed as well, you can choose the JVM on the command line:
· java -server: Java HotSpot Server VM
· java -hotspot: Java HotSpot Client VM
· java -classic: Java 2 Classic VM
The default is the HotSpot Client VM. You can run java -server -version and check the version information to confirm that the Server VM is installed correctly. Choose the Client or Server HotSpot VM according to the kind of application you run. For server-side applications, performance sometimes improves by about 20% simply by adding -server to the startup command line.

Garbage collection

The HotSpot JVM provides three garbage collection algorithms:
1. Copy/scavenge collection
2. Mark-compact collection
3. Incremental (train) collection
Their details are not explained here; see the relevant documentation if you are interested. The throughput of a JVM is the percentage of total execution time that is not spent in GC; a throughput of 80% therefore means GC consumes 20% of the JVM's processing time. While your application is running, each GC causes the program to pause.

The heap memory is divided into two parts, New and Old. The New section includes the area for newly created objects and two survivor areas (SS#1 and SS#2); newly created objects are allocated in New, and objects that survive long enough are moved to the Old section. Perm is a permanent area allocated for the JVM itself, whose size can be set with the command-line parameter -XX:MaxPermSize=64m. When New fills up, a "minor" GC is triggered and objects that have lived long enough are moved to Old. When Old also fills up, a "major" GC is triggered, which traverses all objects in the heap; clearly a major GC consumes much more time. A sufficiently large New section suits applications that create many short-lived objects, while an Old section that is too small will trigger major GCs and greatly reduce performance. So the task is to decide how large the heap should be and how to divide it between New and Old to suit the application. Minor GCs use the copy/scavenge algorithm; major GCs use mark-compact.

Heap allocation policy

Through command-line parameters we can set the size of the heap and how it is divided between New and Old. Some common parameters are listed below; for detailed parameter settings, see the related documentation.
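As a sketch of the standard HotSpot flags referred to in the recommendations that follow (the sizes shown are placeholders, not recommendations):

-Xms<size>              initial heap size, e.g. -Xms512m
-Xmx<size>              maximum heap size, e.g. -Xmx512m
-XX:NewSize=<size>      initial size of the New (young) generation
-XX:MaxNewSize=<size>   maximum size of the New generation
-XX:NewRatio=<n>        ratio of the Old generation to the New generation
-XX:SurvivorRatio=<n>   ratio of the object-creation area to each survivor area
-XX:MaxPermSize=<size>  maximum size of the permanent (Perm) area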

How should we plan the heap allocation strategy? There are no clear, detailed rules; it can only be adjusted for the specific application in order to optimize performance. This optimization method does not require programmers to change any code, yet the effect can sometimes be obvious. Some practical recommendations are summarized below:
· If GC has become a bottleneck, customize the heap allocation.
· Allocate as much memory to the JVM as you can. If the heap is too large, however, it will cause swapping between memory and disk and actually reduce performance; assigning about 80% of the available RAM to the JVM is reasonable.
· For a server-side application, add the -server parameter. With it, the default NewRatio is 2 and the default SurvivorRatio is 25, which suits most applications. You can also set the New section explicitly with NewSize and MaxNewSize.
· Set -Xms and -Xmx to the same size, to avoid resizing the heap after each GC.
· For the same reason, set NewSize and MaxNewSize to the same value.
· The size of New is best kept no larger than half of Old.

For example, the Apusic server can be launched with the following command line:

java -server -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8 -Xms512m -Xmx512m com.apusic.server.Main

Apusic Application Server configuration optimization

The following mainly describes the parameters in two Apusic configuration files (both located in the %APUSIC_HOME%/config directory) and the database-related settings.

apusic.conf
Change the following two parameters:
MaxClients: setting this parameter limits service traffic and thus helps prevent denial-of-service attacks. However, when many users access the server concurrently, a small value will hurt performance. Under normal circumstances, when preventing denial-of-service attacks is not a concern, set this parameter to -1, meaning service traffic is not limited.
MaxWaitingClients: serves a similar purpose to MaxClients; it is the maximum number of clients allowed to wait when many concurrent users access the server. In general, when preventing denial-of-service attacks is not a concern, set this parameter to a large number such as 10000. If it is set small, for example 50, then once more than 50 requests are queued waiting for a response, some requests will receive no response at all.

datasources.xml
min-spare-connections: the minimum capacity of the connection pool
max-spare-connections: the maximum capacity of the connection pool
stmt-cache-size: the statement cache capacity
resultset-cache-size: the result set cache capacity
resultset-cache-timeout: the result set cache timeout
The minimum capacity of the connection pool is also the number of initial connections. This value should not be set too small, or connections will constantly have to be established; it should not be set too large either, or resources are wasted. The maximum capacity of the connection pool is the largest number of connections the pool can hold; when all connections in the pool are in use, a new request has to wait for a used connection to be released. Set this value according to the actual situation of the application. The statement cache keeps executed statements so that when the same statement is executed again it does not need to be recompiled, which improves performance. This value should be set appropriately based on the size of the machine's memory. A sketch of a datasources.xml fragment using these parameters is shown below.
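The fragment below is only an illustrative sketch: the five tuning parameters are the ones named above, while the surrounding element layout, the data source name, and the values are assumptions rather than Apusic's documented format.

<datasources>
    <!-- Illustrative sketch: only the five tuning attributes come from the article;
         the element structure, name, and values are assumed for the example. -->
    <datasource name="exampleDataSource"
                min-spare-connections="10"
                max-spare-connections="50"
                stmt-cache-size="100"
                resultset-cache-size="50"
                resultset-cache-timeout="60">
        <!-- JDBC driver and connection settings would go here. -->
    </datasource>
</datasources>

In practice, check the attribute names and structure against the datasources.xml shipped in %APUSIC_HOME%/config.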

