A good beginning is half the battle. In J2EE development, decisions made during the architectural design phase have a profound impact on an application's performance and scalability.
When developing an application today, we pay more and more attention to performance and scalability. A performance problem is often more serious than a functional defect: the former affects all users, while the latter affects only those users who happen to use the broken feature.
Those responsible for an application system are constantly asked to do more with less: less hardware, less network bandwidth, and a growing workload. J2EE is currently a preferred platform because it provides a component model and common middleware services. To build a J2EE application with high performance and scalability, you need to follow some basic architectural strategies.
Caching
Simply put, a cache stores frequently accessed data for the lifetime of the application, either in memory or in some faster intermediate store. In practice, a typical deployment runs one cache instance per JVM in a distributed system, or one cache instance shared by multiple JVMs.
Caching improves performance by avoiding repeated trips to persistent storage, which would otherwise cause excessive disk access and overly frequent network transfers.
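The idea can be sketched as a minimal read-through cache. This is an illustrative example, not a J2EE API: the class name `SimpleCache` and the loader function are hypothetical, standing in for an expensive database or remote lookup.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through cache: check memory first, call the (expensive)
// loader only on a miss, then remember the result for later requests.
public class SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for a database/remote call

    public SimpleCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent invokes the loader only when the key is missing
        return store.computeIfAbsent(key, loader);
    }

    public int size() {
        return store.size();
    }
}
```

A real cache would also need an eviction policy and an invalidation strategy when the underlying data changes; this sketch only shows the hit/miss mechanics.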
Replication
Replication improves overall throughput by running multiple copies of a given application service on multiple physical machines. In theory, if a service is replicated into two instances, the system can handle twice as many requests.
Replication improves performance by running multiple instances of a single service, thereby reducing the load on each instance.
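Spreading requests across replicas requires some dispatching policy in front of them. The following round-robin dispatcher is a hypothetical sketch of that idea (in practice a hardware or software load balancer plays this role); the class name and generic replica type are assumptions for illustration.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin dispatcher: requests are spread evenly across
// several replicas of the same service, so each instance carries only a
// fraction of the total load.
public class RoundRobinDispatcher<T> {
    private final List<T> replicas;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinDispatcher(List<T> replicas) {
        this.replicas = replicas;
    }

    public T pick() {
        // Math.floorMod keeps the index valid even if the counter overflows
        return replicas.get(Math.floorMod(next.getAndIncrement(), replicas.size()));
    }
}
```

Round-robin assumes the replicas are interchangeable; stateful services additionally need session affinity or shared state.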
Parallel processing
Parallel processing breaks a task into simpler subtasks that can execute simultaneously in different threads.
Parallel processing improves performance by exploiting the multithreaded execution model of the J2EE layers and the multiple CPUs of modern servers. Compared with handling the whole task in a single thread or on a single CPU, processing subtasks in parallel lets the operating system schedule those subtasks across multiple threads or processors.
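As a concrete sketch of the split-and-join pattern, the example below sums an array by dividing it into chunks and submitting each chunk to a thread pool. The class name `ParallelSum` and the choice of summation as the task are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Split one task into independent subtasks, run them on a thread pool,
// and combine the partial results when all subtasks finish.
public class ParallelSum {
    public static long sum(int[] data, int parts) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        try {
            int chunk = (data.length + parts - 1) / parts;
            List<Future<Long>> futures = new ArrayList<>();
            for (int p = 0; p < parts; p++) {
                final int from = p * chunk;
                final int to = Math.min(data.length, from + chunk);
                futures.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get(); // join subresults
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```

The benefit depends on the subtasks being independent; coordination or shared mutable state can erase the gain from extra threads.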
Asynchronous processing
Application functions are typically designed to run synchronously, or serially. Asynchronous processing performs only the most essential part of a task before returning control to the caller immediately; the remaining parts of the task are executed later.
Asynchronous processing improves performance by shortening the time that must pass before control returns to the user. Although the same total work is done, users do not have to wait for the entire process to finish before issuing the next request.
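A minimal sketch of this pattern: validate synchronously, hand the slow part to a background executor, and return immediately. The class `AsyncOrderService` and the order-fulfilment scenario are hypothetical; in a real J2EE system the deferred work would typically go through a messaging service such as JMS rather than an in-process executor.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Perform only the essential part (validation) synchronously, defer the
// slow part to a background thread, and return to the caller at once.
public class AsyncOrderService {
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    // Returns as soon as the order is accepted; fulfilment runs later.
    public Future<String> placeOrder(String orderId) {
        if (orderId == null || orderId.isEmpty()) {
            throw new IllegalArgumentException("missing order id");
        }
        return background.submit(() -> {
            Thread.sleep(50); // stand-in for slow work (billing, shipping, ...)
            return "fulfilled:" + orderId;
        });
    }

    public void shutdown() {
        background.shutdown();
    }
}
```

The caller gets a `Future` it can check later, so the user-visible latency is only the validation step, not the whole workflow.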
Resource pooling
A resource pool is a set of resources created in advance and kept ready for use. These resources can be shared by all requests, replacing a 1:1 relationship between requests and resources.
Using a resource pool pays off only under certain conditions. You must weigh the cost of two approaches: A. maintaining a set of resources that can be shared by all requests; B. re-creating a resource for each request. When the cost of the former is less than that of the latter, using a resource pool is the more efficient choice.
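The mechanics can be sketched with a fixed-size pool backed by a blocking queue. This is an illustrative toy, not a production pool (the class name `ResourcePool` is an assumption); J2EE servers provide managed pools of this kind for JDBC connections and EJB instances.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Pre-create a fixed set of expensive resources (e.g. connections) and
// share them: callers borrow one, use it, and return it to the pool
// instead of re-creating a fresh resource for every request.
public class ResourcePool<R> {
    private final BlockingQueue<R> idle;

    public ResourcePool(int size, Supplier<R> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idle.add(factory.get());
    }

    public R borrow() throws InterruptedException {
        return idle.take(); // blocks until a resource is free
    }

    public void release(R resource) {
        idle.offer(resource); // make the resource available again
    }

    public int available() {
        return idle.size();
    }
}
```

The pool amortizes the creation cost across many requests, which is exactly the trade-off described above: it wins whenever creating a resource per request would cost more than sharing a pre-built set.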