A good beginning is half the battle. In J2EE development, decisions made during the architectural design phase have a profound impact on an application's performance and scalability. Performance and scalability now receive increasing attention in application projects, and a performance problem is often more serious than a functional one: the former affects all users, while the latter affects only those users who happen to use the broken feature. Those responsible for an application system are routinely asked to "do more with less": less hardware, less network bandwidth, more work. J2EE is currently a preferred platform because it provides a component model and common middleware services. To build a J2EE application with high performance and scalability, some basic architectural strategies should be followed.

Caching: Simply put, a cache stores frequently accessed data, either in memory or in persistent storage, for the lifetime of the application. In practice, a typical deployment has one cache instance per JVM in a distributed system, or one cache instance shared across multiple JVMs. Caching improves performance by avoiding trips to persistent storage that would otherwise cause excessive disk access and overly frequent network data transfers.

Replication: Replication improves overall throughput by creating multiple copies of a given application service on multiple physical machines. In theory, if a service is replicated into two instances, the system can handle twice the requests. Replication improves performance by spreading the load of a single service across multiple instances.
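The caching idea described above can be sketched as a minimal in-memory, read-through cache. The class and method names here are illustrative, not from any specific J2EE API; the loader stands in for an expensive database or network fetch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through cache: look up the value in memory first and
// fall back to the (expensive) loader only on a miss.
public class SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a database or network fetch

    public SimpleCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent invokes the loader only when the key is absent,
        // so repeated reads of the same key never touch persistent storage
        return store.computeIfAbsent(key, loader);
    }

    public int size() {
        return store.size();
    }

    public static void main(String[] args) {
        int[] loads = {0};
        SimpleCache<String, String> cache =
                new SimpleCache<>(k -> { loads[0]++; return "value-of-" + k; });
        cache.get("user:42"); // miss: loader runs once
        cache.get("user:42"); // hit: served from memory
        System.out.println(loads[0]); // prints 1
    }
}
```

A production cache would also need an eviction policy (size bound, TTL) so the in-memory copy does not grow without limit; that bookkeeping is omitted here.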
Parallel processing: Parallel processing decomposes a task into simpler subtasks that can be executed simultaneously in different threads. It improves performance by exploiting the layered execution model of J2EE and multi-CPU hardware: compared with handling the whole task in a single thread or on a single CPU, the operating system can schedule the subtasks across multiple threads or processors.

Asynchronous processing: Applications are typically designed to work synchronously, or serially. Asynchronous processing performs only the most important part of a task, returns to the caller immediately, and executes the remaining parts later. It improves performance by shortening the time that must elapse before control is returned to the user: although the same work is done, the user does not have to wait for the entire process to complete before issuing the next request.

Resource pooling: A resource pool is a set of ready-made resources that can be shared by all requests, instead of a 1:1 relationship between requests and resources. Pooling is worthwhile only under certain conditions; the following two costs must be weighed: (a) maintaining a set of resources shared by all requests, and (b) creating and destroying a resource for every request. When the former is lower than the latter, using a resource pool is the more efficient choice.
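The last three strategies above come together in the standard `java.util.concurrent` package: an `ExecutorService` is itself a pool of reusable threads, tasks submitted to it run in parallel across those threads, and `submit()` returns a `Future` immediately so the caller can continue and collect results later. A minimal sketch, with a made-up summation task standing in for real application work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    // Sum a range by splitting it into subtasks that run on a shared thread pool.
    public static long sum(long from, long to, int parts) throws Exception {
        // The fixed thread pool is the "resource pool": threads are created
        // once and reused, instead of one new thread per request.
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        try {
            long chunk = (to - from + 1) / parts;
            List<Future<Long>> futures = new ArrayList<>();
            for (int i = 0; i < parts; i++) {
                long lo = from + i * chunk;
                long hi = (i == parts - 1) ? to : lo + chunk - 1;
                // submit() returns immediately (asynchronous); the subtask
                // runs in parallel with the others on the pool's threads.
                futures.add(pool.submit(() -> {
                    long s = 0;
                    for (long n = lo; n <= hi; n++) s += n;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get(); // gather results
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sum(1, 100, 4)); // prints 5050
    }
}
```

In a real application the caller would do other useful work between `submit()` and `get()`; here the results are gathered immediately only to keep the example short.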