8. Use container-managed transactions.
Learn about J2EE's two-phase commit support for business transactions and use it, rather than rolling your own transaction management. The container is almost always better at transaction optimization.
Using container-managed transactions (CMT) provides two key advantages that are nearly impossible to obtain without container support: composable units of work and robust transactional behavior.
If your application code explicitly begins and ends transactions (perhaps using javax.jts.UserTransaction, or even resource-specific local transactions), then a future requirement to combine modules (perhaps as part of a refactoring) will often force you to change that transaction code. For example, suppose module A begins a transaction, updates the database, and then commits, and module B does the same. Consider what happens when you try to use both of them from module C. Module C performs what is logically a single action, but it actually invokes two independent transactions: if module B fails, module A's work is still committed, which is not the behavior we want. If instead modules A and B use CMT, module C can also start a container-managed transaction (usually declared in the deployment descriptor), and the work done in modules A and B implicitly becomes part of that same transaction, with no complicated rewriting of code. A minimal sketch of this composition follows.
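The sketch below assumes ModuleA and ModuleB are hypothetical local views of two other CMT beans; only the composition in module C is shown.

    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    // Minimal sketch: in ejb-jar.xml, performLogicalAction would be given the
    // transaction attribute "Required", so the container starts one transaction
    // and both nested calls run inside it.
    public class ModuleCBean implements SessionBean {

        // Hypothetical local views of modules A and B.
        public interface ModuleA { void updateDatabase(); }
        public interface ModuleB { void updateDatabase(); }

        public void performLogicalAction(ModuleA a, ModuleB b) {
            // No UserTransaction.begin()/commit() calls: the container manages the
            // transaction, so both updates commit together or roll back together.
            a.updateDatabase();
            b.updateDatabase();
        }

        // Standard EJB 2.x session bean lifecycle callbacks (no-ops here).
        public void setSessionContext(SessionContext ctx) {}
        public void ejbCreate() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
    }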
If your application accesses multiple resources within a single operation, use two-phase commit transactions. For example, suppose you remove a message from a JMS queue and then update a database record based on that message. It is essential that either both operations happen or neither does: if the message is deleted from the queue but the related database record is never updated, the system is left in an inconsistent state, and serious customer and business disputes grow out of inconsistent states.
We often see applications that try to implement their own solution to this, perhaps by "putting the message back" on the queue when the database update fails. We do not recommend this. It is far more complicated than it first appears, and there are many other failure cases to handle (imagine the application crashing in the middle of the operation). Instead, use two-phase commit. If you use CMT and access two-phase-commit-capable resources (such as JMS and most databases) within a single CMT transaction, WebSphere handles all of the hard work: it guarantees that the entire transaction either completes or does not, across system crashes, database crashes, and other failures, by recording its progress in a transaction log. We cannot overemphasize the need for CMT transactions whenever an application accesses multiple resources. The message-driven bean sketch below illustrates the pattern.
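A minimal sketch, assuming a CMT message-driven bean whose transaction attribute is set to Required in ejb-jar.xml; the ORDERS table, the jdbc/OrderDS resource reference, and the message format are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    // The JMS receive and the JDBC update are enlisted in one global transaction;
    // the container drives the two-phase commit, so the message leaves the queue
    // only if the database update also commits.
    public class OrderUpdateMDB implements MessageDrivenBean, MessageListener {

        private MessageDrivenContext ctx;

        public void onMessage(Message msg) {
            try {
                String orderId = ((TextMessage) msg).getText();
                DataSource ds = (DataSource)
                    new InitialContext().lookup("java:comp/env/jdbc/OrderDS");
                Connection con = ds.getConnection();
                try {
                    PreparedStatement ps = con.prepareStatement(
                        "UPDATE ORDERS SET STATUS = 'SHIPPED' WHERE ORDER_ID = ?");
                    ps.setString(1, orderId);
                    ps.executeUpdate();
                    ps.close();
                } finally {
                    con.close();
                }
            } catch (Exception e) {
                // Rolling back returns the message to the queue and undoes the update.
                ctx.setRollbackOnly();
            }
        }

        public void setMessageDrivenContext(MessageDrivenContext c) { this.ctx = c; }
        public void ejbCreate() {}
        public void ejbRemove() {}
    }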
9. Use JSPs as your preferred presentation technology.
Use XML/XSLT only when multiple types of presentation output must be produced from a single controller and back end.
We often hear arguments for choosing XML/XSLT instead of JSP as the presentation-layer technology. The claim is that JSP "lets you mix model and view" while XML/XSLT avoids that problem. Unfortunately, this is not entirely true, or at least it is not as black and white as it sounds. XSL and XPath are in fact programming languages; XSL is Turing-complete, even though it does not match most people's definition of a programming language, being rule-based and lacking the control constructs programmers are used to.
The problem is that, given this flexibility, developers will use it. Although everyone agrees that JSP makes it easy for developers to put "model-like" things into the view, the same can happen in XSL. While some things, such as accessing a database from XSL, are very difficult, we have seen extraordinarily complex XSLT stylesheets performing transformations that are really model code. The most basic reason to prefer JSP as your presentation technology, however, is that JSP is the most widely supported and most widely understood J2EE view technology. With the introduction of custom tags, JSTL, and JSP 2.0, it is easier than ever to create JSPs that contain no Java code and cleanly separate model and view. Development environments such as WebSphere Studio have added powerful support for JSP (including debugging), and many developers find JSP development simpler than XSL. Graphical tools that support JSP, particularly within the JSF framework, let developers build pages in a what-you-see-is-what-you-get fashion, something that is not always easy with XSL. A minimal JSTL example appears below.
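A minimal sketch, assuming JSP 2.0 and JSTL, where a controller servlet has already placed an "orders" collection in request scope; no Java code appears in the page.

    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
    <html>
      <body>
        <h2>Open orders</h2>
        <table>
          <c:forEach var="order" items="${orders}">
            <tr>
              <td><c:out value="${order.id}"/></td>
              <td><c:out value="${order.status}"/></td>
            </tr>
          </c:forEach>
        </table>
      </body>
    </html>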
A final reason to consider JSP first is speed. IBM has run performance tests comparing the relative speed of XSL and JSP, and in most cases JSP was several times faster than XSL at producing the same HTML, even when compiled XSL was used. While this is not an issue in most cases, it matters for applications with high performance requirements.
None of this means you should never use XSL. In some cases, XSL's ability to take a fixed representation of the data and render it in different ways is the best solution for presenting your views. But this is the exception rather than the rule. If you are only generating HTML for each page, then in most cases XSL is unnecessary technology, and it creates far more problems for your developers than it solves.
10. When using HttpSession, store only the state needed for the current business transaction and nothing more.
Enable session persistence.
HttpSession is very useful for storing application state, and its API is easy to use and understand. Unfortunately, developers often forget its purpose: to maintain temporary user state. It is not an arbitrary data cache. We have seen far too many systems that put enormous amounts of data (megabytes) into each user's session. If 1,000 users are logged in and each has 1 MB of session data, then a gigabyte or more of memory is tied up in sessions alone. Keep HTTP session data small, or your application's performance will suffer. A roughly appropriate amount is 2 KB to 4 KB of session data per user; this is not a hard rule, and 8 KB is still workable, but it will clearly be slower than 2 KB. Be careful not to let HttpSession become a dumping ground for data.
A common problem is using HttpSession to cache information that could easily be recreated if needed. Because sessions are persisted, this is a very costly decision that causes unnecessary serialization and writing of data. Instead, keep the cached data in an in-memory hash table and store only a key to it in the session; that way, if the user fails over to another application server, the data can simply be recreated there (see the sketch below). And speaking of session persistence, do not forget to enable it. If session persistence is not enabled and an application server stops for any reason (a failure or routine maintenance), the current sessions of every user on that server are lost. That is a very unpleasant experience: users have to log in again and redo whatever they were working on. With session persistence enabled, WebSphere automatically moves the user (and the session) to another application server, and the user never even knows it happened. We have seen production systems survive sudden crashes caused by truly nasty bugs (not in IBM code!) with this working smoothly.
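A minimal sketch of the pattern, assuming a hypothetical CatalogData object that is expensive to serialize but cheap to rebuild from the database; only a small key is kept in the session.

    import java.util.Hashtable;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    // The bulky, recreatable data lives in an in-memory hash table; the session
    // holds only a small key, so session persistence stays cheap.
    public class CatalogCache {

        private static final Hashtable cache = new Hashtable();

        public static CatalogData getCatalog(HttpServletRequest request) {
            HttpSession session = request.getSession();
            String key = (String) session.getAttribute("catalogKey");
            if (key == null) {
                key = session.getId();                    // a few bytes in the session...
                session.setAttribute("catalogKey", key);  // ...instead of megabytes of data
            }
            CatalogData data = (CatalogData) cache.get(key);
            if (data == null) {                           // first use, or after failover
                data = loadCatalogFromDatabase(key);      // recreate rather than persist
                cache.put(key, data);
            }
            return data;
        }

        private static CatalogData loadCatalogFromDatabase(String key) {
            return new CatalogData();                     // placeholder for the real query
        }

        public static class CatalogData implements java.io.Serializable {}
    }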
11. In WebSphere, take advantage of dynamic caching and use the WebSphere servlet caching mechanism.
These features can greatly improve system performance at little cost, and without affecting the programming model.
The performance benefits of caching are well known. Unfortunately, the current J2EE specifications do not include a mechanism for servlet/JSP caching. WebSphere, however, provides page and fragment caching through its dynamic cache function, without requiring any changes to the application. The caching policy is declarative and is configured through an XML descriptor, so your application is unaffected and remains compliant with and portable across the J2EE specification, while still gaining the performance benefits of WebSphere's servlet and JSP caching. An illustrative policy fragment follows.
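A minimal sketch of such a policy, assuming a hypothetical /newsSummary.jsp page whose output varies only by a "category" request parameter; the element names follow WebSphere's cachespec.xml format, but the path, parameter, and timeout are illustrative.

    <cache>
      <cache-entry>
        <class>servlet</class>
        <name>/newsSummary.jsp</name>
        <cache-id>
          <component id="category" type="parameter">
            <required>true</required>
          </component>
          <timeout>60</timeout> <!-- invalidate the cached fragment after 60 seconds -->
        </cache-id>
      </cache-entry>
    </cache>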
The performance improvement from servlet and JSP dynamic caching can be significant, depending on the characteristics of the application. Cox and Martin [Cox] show that the performance of an existing RDF (Resource Description Framework) Site Summary (RSS) servlet improves by about 10% when dynamic caching is used. Note that this experiment involved only a simple servlet, and the improvement may not be representative of a complex application.
For even greater performance improvements, the WebSphere servlet/JSP result cache is integrated with the ESI fragment processor in the WebSphere plug-in, with the IBM HTTP Server Fast Response Cache Accelerator (FRCA), and with the Edge Server caching features. For read-heavy workloads, these features provide substantial additional benefit.
12. To improve programmer productivity, use CMP entity beans as your preferred O/R mapping solution.
Optimize performance through the WebSphere features available for them (read-ahead, caching, isolation levels, and so on). Where appropriate, selectively apply patterns such as Fast Lane Reader [Marinescu] to improve performance.
Object/relational (O/R) mapping is fundamental to building enterprise applications in Java; almost every J2EE application requires some form of it. J2EE vendors provide an O/R mapping mechanism that is portable across vendors, efficient, and well supported by standards and tools: the container-managed persistence (CMP) portion of the EJB specification. Early CMP implementations were known for poor performance and for not supporting many SQL constructs, but with the arrival of the EJB 2.0 and 2.1 specifications, their adoption by vendors, and the emergence of IBM WebSphere Studio Application Developer, these issues are no longer a problem. A brief CMP bean sketch follows.
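A minimal sketch of a CMP 2.x entity bean, assuming a hypothetical Customer entity; the container generates the persistence code, so the bean class declares only abstract accessors for its container-managed fields, which are mapped to columns by the deployment tools (for example, WebSphere Studio's O/R mapping editor).

    import javax.ejb.CreateException;
    import javax.ejb.EntityBean;
    import javax.ejb.EntityContext;

    public abstract class CustomerBean implements EntityBean {

        // Container-managed persistent fields: no JDBC code appears in the bean.
        public abstract String getCustomerId();
        public abstract void setCustomerId(String id);
        public abstract String getName();
        public abstract void setName(String name);

        public String ejbCreate(String id, String name) throws CreateException {
            setCustomerId(id);
            setName(name);
            return null;   // for CMP, the container supplies the primary key
        }
        public void ejbPostCreate(String id, String name) {}

        // Standard entity bean callbacks (no-ops here).
        public void setEntityContext(EntityContext ctx) {}
        public void unsetEntityContext() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbLoad() {}
        public void ejbStore() {}
        public void ejbRemove() {}
    }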
CMP EJB components are now used in many high-performance applications. WebSphere includes optimizations that improve the performance of EJB components, including lifetime-in-cache caching and read-ahead. Both optimizations are configuration options and require no changes to the application.
Lifetime-in-cache keeps CMP state data in a cache and invalidates it based on time. The performance improvement comes from getting the caching benefit of commit option A while still preserving the scalability of your application. Read-ahead is used together with container-managed relationships: it reduces interactions with the database by optionally retrieving related data in the same query as the parent data, which improves performance when that related data would otherwise be fetched in subsequent queries. [Gunther] describes these features, and the performance improvements they provide, in detail.
In addition, to fully optimize your EJB components, pay particular attention to the isolation levels you specify. Use the lowest isolation level that still preserves the integrity of your data; lower isolation levels give better performance and reduce the risk of database deadlocks.
This is the most controversial of these best practices. Plenty of articles have praised CMP EJBs, and just as many have condemned them. The most basic problem, though, is that database development is hard. Before adopting any persistence solution, you need to understand the basics of queries and how the database handles locking. If you choose CMP EJBs, make sure you learn how to use them well, through books such as [Brown] and [Barcia]. There are subtle interactions around locking and contention that are difficult to understand, but you will master them if you invest the time and effort.
Conclusion
In this short summary, we have introduced you to the core patterns and best practices that make J2EE development a manageable process. We have not covered all of the details needed to put these practices to work, but we hope we have given you enough pointers and guidance to help you decide where to go next.