Introduction

Over the past five years, a great many books and articles on J2EE best practices have been published. There are now ten or more books and countless articles offering deep insight into how to build J2EE applications. In fact, there is so much material, and so many contradictions among these references, that the resulting confusion has itself become an obstacle to adopting J2EE. To provide some simple guidance for people caught in this confusion, we set out to compile a "top 10" list of the most important J2EE best practices. Unfortunately, ten items were not enough to cover everything, especially once you develop web services as part of J2EE, so we settled on a "top 12" list rather than a "top 10" list.
To keep things simple, we refer to it below as the "top 10 (plus 2)" list of J2EE best practices.
Best Practices
1. Always use the MVC framework.
2. Apply automated unit tests and test management at every layer.
3. Develop to the specification, not to the application server.
4. Plan J2EE security from the beginning.
5. Build what you know.
6. Always use session facades when using EJB components.
7. Use stateless session beans instead of stateful session beans.
8. Use container-managed transactions.
9. Use JSPs as the first choice for the presentation layer.
10. When using HttpSession, store only the state needed for the current business transaction, and nothing more.
11. In WebSphere, enable dynamic caching and use the WebSphere servlet caching mechanism.
12. To maximize programmer productivity, use CMP entity beans as the preferred solution for O/R mapping.
1. Always use the MVC framework.
The MVC pattern cleanly separates business logic (the model), controller logic (servlets/Struts actions), and the presentation layer (JSP, XML/XSLT). Good layering brings many benefits. MVC is so important to using J2EE successfully that no other best practice compares with it. Model-View-Controller (MVC) is the foundation of J2EE application design. It simply divides your code into the following parts:
- Code responsible for the business logic (the model), usually implemented with EJBs or plain Java objects.
- Code responsible for the user interface (the view), usually implemented with JSPs and tag libraries, and sometimes with XML and XSLT.
- Code responsible for the application flow (the controller), usually implemented with Java servlets or controller classes such as those in Struts.
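As a minimal sketch of this separation (the OrderController servlet, the OrderService and Order model classes, and the /orderDetail.jsp page are all hypothetical names used only for illustration), a controller servlet might look like this:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OrderController extends HttpServlet {
    // Model: a plain Java object (or session facade) that owns the business logic.
    // OrderService and Order are hypothetical classes assumed to exist elsewhere.
    private final OrderService orderService = new OrderService();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Controller: translate the HTTP request into a call on the model.
        String orderId = req.getParameter("orderId");
        Order order = orderService.findOrder(orderId);

        // Hand the model object to the view; the JSP only renders it.
        req.setAttribute("order", order);
        req.getRequestDispatcher("/orderDetail.jsp").forward(req, resp);
    }
}
```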
There are many excellent discussions of MVC; interested readers can refer to [Fowler] or [Brown] (see Resources) for a thorough and deep treatment of the pattern.
If you do not follow a basic MVC architecture, many problems surface during development. The most common is putting too much into the view layer, for example using JSP tags to perform database access or doing flow control inside JSPs, which is especially common in small-scale applications. As the application grows, this becomes a problem because the JSPs become steadily harder to maintain and debug.
Similarly, we often see view-layer concerns leaking into the business logic. For example, a common mistake is to apply the XML parsing techniques used to build a view directly in the business layer. The business layer should operate on business objects, not on a particular data representation bound to the view. However, having the right components does not by itself guarantee that your application is properly layered. We often see applications that contain all three layers, with servlets, JSPs, and EJB components, yet whose main business logic lives in the servlet layer, or whose application navigation is handled in the JSPs. You must perform rigorous code reviews and refactor your code to ensure that business logic is handled only in the model layer, application navigation only in the controller layer, and that views simply render the model objects passed to them as HTML and JavaScript.
2. Apply automated unit tests and test management at every layer of the application.
Don't just test your graphical user interface (GUI). Layered testing makes both testing and maintenance dramatically simpler. The past few years have seen considerable innovation in methodology; new agile (lightweight) methods such as Scrum [Schwaber] and Extreme Programming [Beck1] (see Resources) have been well received. A feature common to nearly all of these methods is that they advocate automated testing tools, which help developers spend less time on regression testing, help them avoid skimping on it, and therefore improve programmer productivity. In fact, there is an approach called Test-First Development [Beck2] in which the unit tests are written before the actual code. However, before you can test your code, you must split it into testable pieces. A "big ball of mud" is hard to test because it does not implement a single, simple, easily identified function. If your code implements several functions at once, it is hard to be sure each of them is completely correct.
One benefit of the MVC architecture (and of MVC implementations in J2EE) is that componentizing the elements makes it possible, and in fact quite simple, to unit-test your application. You can easily write test cases for entity beans, session beans, and JSPs independently, without involving the rest of the code. There are now many frameworks and tools for J2EE testing that make this easier. For example, JUnit (an open-source tool developed at junit.org) and Cactus (an open-source tool from the Apache Jakarta project) are both useful for testing J2EE components. [Hightower] explores how to use these tools with J2EE in detail.
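For instance, a plain JUnit test (JUnit 3 style, matching the tools of that era) can exercise a model class entirely outside the container; the Order class and its methods used here are hypothetical:

```java
import junit.framework.TestCase;

public class OrderTest extends TestCase {

    public void testTotalIncludesAllLineItems() {
        // Order is a hypothetical plain model object with the methods shown.
        Order order = new Order();
        order.addLineItem("widget", 2, 5.00);
        order.addLineItem("gadget", 1, 10.00);

        // The business rule under test lives in the model, not in a JSP or servlet,
        // so it can be verified without starting an application server.
        assertEquals(20.00, order.getTotal(), 0.001);
    }
}
```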
Despite all the detailed advice on how to test applications thoroughly, we still see people who believe that as long as they test the GUI (whether a web-based GUI or a standalone Java application), they have comprehensively tested the whole application. GUI testing is rarely sufficient, for several reasons. First, with GUI testing it is hard to exercise every path through the system; the GUI is only one way of affecting it, and there may also be batch jobs, scripts, and various other access points that need testing but have no GUI. Second, GUI-level tests are very coarse-grained: they only verify the system's behavior at the macro level, so when a problem is found, the whole subsystem involved must be examined, which makes defects very hard to locate. Third, GUI tests usually cannot be run well until late in the development cycle, because only then is the GUI fully defined, which means potential bugs are discovered late. Fourth, the average developer probably has no automated GUI testing tool, so when a developer changes some code there is no easy way to retest the affected subsystem; this actively discourages good testing. If developers have automated code-level unit tests, they can easily rerun them to make sure their changes have not broken existing function. Finally, if an automated build is in place, it is easy to add automated unit tests to it. Once this is set up, the whole system can be rebuilt regularly and the regression tests run with almost no human effort. In addition, we must emphasize that component-based development with EJBs and web services makes testing individual components essential. If there is no "GUI" to test, you have to test at a lower level. It is best to start testing this way from the outset, and save yourself the effort later when a distributed component or web service becomes part of your application.
In summary, automated unit testing finds defects quickly and makes them easier to locate, which makes testing more systematic and therefore improves overall quality.
3. Develop to the specification, not to the application server.
Know the specification thoroughly, and deviate from it only after careful consideration, because when you stray from it, what you are doing is usually not what you should be doing. Deviations tend to come back to haunt you later. We have seen developers dig into things that J2EE does not sanction, believing it will "slightly" improve performance, only to discover that it causes serious performance problems, or migration problems later, whether from one vendor to another or, more commonly, from one version to the next. In fact, such migration problems are serious enough that [Beaton] calls this principle the fundamental best practice for portability.
There are several places where failing to use J2EE as intended will definitely cause problems. A common example is developers replacing J2EE security with their own JAAS modules instead of using the built-in, specification-compliant mechanisms of the application server. Be careful not to stray from the authentication mechanisms defined by the J2EE specification; doing so is a major source of security holes and vendor-compatibility problems. Likewise, use the authorization mechanisms provided by the servlet and EJB specifications, and where you must go beyond them, make sure you build on the specification-defined APIs (such as getCallerPrincipal()) as the basis of your implementation. That way you can still take advantage of the strong security infrastructure provided by the vendor when business requirements call for complex authorization rules. Other common problems include using persistence mechanisms that do not follow the J2EE specification (which makes transaction management difficult), using inappropriate J2SE facilities (such as threads or singletons) inside J2EE programs, and rolling your own program-to-program communication instead of using J2EE's built-in support (such as JCA, JMS, or web services). Such design choices cause endless problems when you move to another J2EE-compliant server, or even to a new version of the same server. The only time to depart from the specification is when a problem simply cannot be solved within it. For example, scheduling timed business logic was such a problem before EJB 2.1 appeared. In cases like this, we recommend using vendor-supplied facilities where they exist (such as the Scheduler facility in WebSphere Application Server Enterprise), and third-party tools where they do not. If you use the vendor's solution, maintaining the application and migrating it to new specification versions becomes the vendor's problem, not yours.
Finally, be careful not to adopt new technologies too early. Being too eager to adopt technologies that have not yet been integrated into the J2EE specification, or into your vendor's product, is often disastrous. Support is critical: if your vendor does not directly support a technology proposed in a JSR, and that technology has not yet been accepted into J2EE, you should not adopt it. After all, most of us are in the business of solving business problems, not of advancing technology.
4. Plan J2EE security from the beginning.
Enable WebSphere security. That way your EJBs and URLs will, at a minimum, be accessible only to authenticated users. Don't ask why; just do it. We are constantly surprised by how few of the customers we work with intend to enable WebSphere J2EE security from the start; we estimate only about 50% plan to use it. For example, we have worked with several large financial institutions (banks, brokerages, and so on) that did not intend to enable security. Fortunately, this was caught and corrected before deployment.
Not using J2EE security is dangerous. Assuming your application requires security (and almost all do), you are betting that your developers can build a homegrown security system that is better than the one you bought from your J2EE vendor. That is not a good bet; securing distributed applications is extraordinarily difficult. For example, controlling access to EJBs requires secure handling of credentials across the network. In our experience, most homegrown security systems are insecure and have significant flaws that leave the production system dangerously exposed (see Chapter 18 of [Barcia]). Some of the reasons given for not using J2EE security include fear of a performance penalty, the belief that another security product, such as Netegrity SiteMinder, can replace J2EE security, or simply not knowing the security features and capabilities of WebSphere Application Server. Don't fall into these traps. In particular, although products like Netegrity SiteMinder provide excellent security features, they cannot protect an entire J2EE application on their own; they must be combined with the J2EE application server to secure your whole system.
Another common reason given for skipping J2EE security is that the role-based model does not provide sufficiently fine-grained access control to satisfy complex business rules. Even when that is true, it is not a reason to avoid J2EE security. Instead, combine J2EE authentication and J2EE roles with your own rule-specific extensions: where complex business rules are needed to make a security decision, write that code, but base its decisions directly on the trusted J2EE authentication information (the user ID and roles).
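A sketch of this combination, assuming a hypothetical AccountBean session bean and an "auditor" role defined in the deployment descriptor: the coarse role check and the caller identity come from the container through the standard EJBContext APIs, and the finer-grained rule is ordinary application code layered on top.

```java
import java.security.Principal;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class AccountBean implements SessionBean {
    private SessionContext ctx;

    public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }

    public String getAuditReport(String accountId) {
        // Coarse-grained check: the "auditor" role is declared in the J2EE
        // deployment descriptor and enforced by the container.
        if (!ctx.isCallerInRole("auditor")) {
            throw new SecurityException("Caller is not an auditor");
        }
        // Finer-grained business rule layered on top of the trusted J2EE identity.
        Principal caller = ctx.getCallerPrincipal();
        if (!isAssignedToAccount(caller.getName(), accountId)) {
            throw new SecurityException(caller.getName()
                    + " is not assigned to account " + accountId);
        }
        return "audit report for " + accountId; // placeholder result for the sketch
    }

    // Hypothetical rule lookup (for example, against a table of assignments).
    private boolean isAssignedToAccount(String userId, String accountId) {
        return true; // stand-in implementation for the sketch
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```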
5. Build what you know.
Iterative development lets you master the J2EE modules gradually. Start by building small, simple modules rather than tackling every module at once. We must admit that J2EE is big. If a development team is just starting out with it, it is very hard to master all at once; there are too many concepts and APIs to learn. The key to mastering J2EE is to start with simple steps.
The way to do this is to build small, simple pieces of your application first. If a development team builds a simple domain model with a back-end persistence mechanism (perhaps using JDBC), its confidence grows, and it can then use that domain model while mastering front-end development with servlets and JSPs. If the team finds it needs EJBs, it should likewise start with simple session facades over container-managed persistence entity beans, or with JDBC-based Data Access Objects (DAOs), rather than jumping straight to more complex constructs such as message-driven beans and JMS.
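As an illustration of such a simple first module, here is a minimal JDBC-based DAO sketch; the Customer class, the CUSTOMER table and its columns, and the jdbc/AppDS DataSource name are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CustomerDao {

    private DataSource lookupDataSource() throws NamingException {
        // A container-managed DataSource, looked up through JNDI.
        return (DataSource) new InitialContext().lookup("java:comp/env/jdbc/AppDS");
    }

    public Customer findById(String id) throws NamingException, SQLException {
        Connection con = lookupDataSource().getConnection();
        try {
            PreparedStatement ps =
                    con.prepareStatement("SELECT NAME, EMAIL FROM CUSTOMER WHERE ID = ?");
            ps.setString(1, id);
            ResultSet rs = ps.executeQuery();
            if (!rs.next()) {
                return null;
            }
            // Customer is a hypothetical plain domain object.
            return new Customer(id, rs.getString("NAME"), rs.getString("EMAIL"));
        } finally {
            con.close(); // return the connection to the container's pool
        }
    }
}
```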
This approach is nothing new, but very few development teams actually grow their skills this way. Instead, most teams, under schedule pressure, try to build everything at once: the view layer, the model layer, and the controllers of MVC all together. They should consider agile methods such as Extreme Programming (XP), which take an incremental approach to learning and development. XP has a practice called ModelFirst [wiki], which involves building the domain model first as a mechanism for organizing and implementing the user stories. Basically, you build the domain model as the primary piece of the first user stories you implement, and then build the user interface (UI) on top of the domain model as the outcome of later user stories. This approach is ideal because it lets a team learn one technology at a time rather than facing everything at once (or having to read a stack of books), which would overwhelm them. Also, iterating through each application layer encourages the adoption of appropriate patterns and best practices: if you start with patterns such as Data Access Objects and session facades, you are unlikely to end up with domain logic in your JSPs and other view objects.
Finally, once you have a few simple modules working, you can begin performance-testing your application early. If performance testing waits until late in development, the results are often disastrous, as [Joines] describes.
6. Always use session facades when using EJB components.
Never expose entity beans directly to any client type; expose entity beans only through local EJB interfaces. Using a session facade when you use EJB components is an undisputed best practice. In fact, this general practice applies to any distributed technology, including CORBA, EJB, and DCOM. Fundamentally, the smaller your application's distribution "cross-section", the less time is wasted on repeated network round trips for small pieces of work. The way to accomplish this is to create coarse-grained facade objects that wrap a logical subsystem, so that a single method call can accomplish a useful piece of business function. This approach not only reduces network overhead; within EJB it also greatly reduces the number of database accesses by creating a single transaction context for the whole business function ([Brown] discusses this in detail, [Alur] documents the pattern, and [Fowler] and [Marinescu] also describe it; see Resources).
Local EJB interfaces (introduced in the EJB 2.0 specification) provide a performance optimization for co-located EJBs. Local interfaces must be requested explicitly by your application, which means code changes if the EJBs are later deployed remotely. Since a session facade and the entity beans it wraps should be local to each other, we recommend using local interfaces for the entity beans behind the session facade. The facade itself, however (typically a stateless session bean), should be designed with a remote interface.
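A sketch of what such a facade method might look like, with a stateless session bean facade in front of hypothetical OrderLocalHome/OrderLocal and CustomerLocalHome/CustomerLocal entity bean local interfaces:

```java
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.naming.InitialContext;

public class OrderFacadeBean implements SessionBean {
    private SessionContext ctx;

    public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }

    // One coarse-grained remote call = one unit of business work = one
    // container-managed transaction spanning all the entity bean access below.
    public String placeOrder(String customerId, String[] skus) throws Exception {
        InitialContext ic = new InitialContext();
        CustomerLocalHome customerHome =
                (CustomerLocalHome) ic.lookup("java:comp/env/ejb/CustomerLocal");
        OrderLocalHome orderHome =
                (OrderLocalHome) ic.lookup("java:comp/env/ejb/OrderLocal");

        // All entity bean traffic stays inside the EJB container via local interfaces.
        CustomerLocal customer = customerHome.findByPrimaryKey(customerId);
        OrderLocal order = orderHome.create(customer);
        for (int i = 0; i < skus.length; i++) {
            order.addItem(skus[i]);
        }
        return order.getOrderNumber();
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```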
As a performance optimization you can add a local interface to the session facade as well, exploiting the fact that in most cases (at least in web applications) the EJB client and the EJB run in the same Java virtual machine (JVM). Alternatively, if the session facade is called locally, you can use your J2EE application server's configuration optimizations (such as "No Local Copies" in WebSphere). Note, however, that these alternatives change the interaction semantics from pass-by-value to pass-by-reference, which can introduce very subtle bugs into your code; consider carefully whether they are appropriate before using them. If your session facade keeps a remote interface (rather than a local one), you can also deploy the same facade as a web service in a J2EE 1.4-compatible way, because JSR 109 (web services deployment in J2EE 1.4) requires the remote interface of a stateless session bean as the interface between the EJB web service implementation and the EJB. Doing so is worthwhile, because it increases the number of client types that can use your business logic.
7. Use stateless session beans instead of stateful session beans.
Doing so makes your system more tolerant of failures. Use HttpSession to store user-related state. In our view, the stateful session bean concept is outdated. If you think about it, a stateful session bean is architecturally identical to a CORBA object: nothing more than an object instance bound to a single server and dependent on that server to manage its lifecycle. If the server goes down, the object is gone, and so is the client's information held in that bean.
The failover support that J2EE application servers provide for stateful session beans solves some of these problems, but stateful solutions do not scale as well as stateless ones. For example, in WebSphere Application Server, requests to stateless session beans are load-balanced across all the cluster members on which the stateless session bean is deployed. The J2EE application server cannot balance requests to stateful beans in the same way, which means the load across the servers in your cluster will be uneven. Moreover, using stateful session beans adds state to your application server itself, which is bad practice: it increases system complexity and complicates recovery when a failure occurs. A key principle of building robust distributed systems is to aim for stateless behavior.
We therefore recommend a stateless session bean approach for most applications. Any user-specific state needed to process a request should be passed into the EJB method as a parameter (and stored elsewhere with a mechanism such as HttpSession), or retrieved from the persistent back end (for example via entity beans) as part of the EJB transaction. Where appropriate, this information can be cached in memory, but be aware of the challenges of keeping such a cache consistent in a distributed environment. Caching is best suited to read-mostly data.
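For example, a stateless checkout bean might receive everything user-specific as parameters rather than holding it in fields (CartItem and PricingDao are hypothetical helper classes assumed to exist elsewhere):

```java
import java.util.List;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class CheckoutBean implements SessionBean {
    private SessionContext ctx;

    public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }

    // No fields hold per-user state, so any instance in the cluster can serve
    // any caller and requests can be load balanced freely.
    public double quoteTotal(String userId, List cartItems) {
        // Current prices come from the persistent back end inside this transaction.
        PricingDao pricing = new PricingDao();
        double total = 0.0;
        for (int i = 0; i < cartItems.size(); i++) {
            CartItem item = (CartItem) cartItems.get(i);
            total += pricing.currentPrice(item.getSku()) * item.getQuantity();
        }
        return total;
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```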
In short, make sure you think about scalability from the beginning. Examine every design decision and ask whether it will still work when your application runs on many servers. This rule applies not only to application code but also to MBeans and other administrative interfaces. Avoiding state is not just an IBM/WebSphere recommendation; it is a fundamental J2EE design principle. See [Jewell] for Tyler Jewell's criticism of stateful beans, whose views match those above.
8. Use container-managed transactions.
Learn about two-phase commit transactions in J2EE and use that approach rather than inventing your own transaction management. The container is almost always better at transaction optimization. Using container-managed transactions (CMT) provides two key benefits that are nearly impossible to obtain without container support: composable units of work and robust transactional behavior.
If your application code starts and ends transactions explicitly (perhaps using javax.transaction.UserTransaction, or even local resource transactions), then when a future requirement calls for composing several modules (perhaps as part of a refactoring), you will often have to change the transaction code. For example, suppose module A begins a database transaction, updates the database, and commits, and module B does the same. Consider what happens when module C tries to use both of them. Module C now performs what is logically a single action, but it actually invokes two separate transactions. If module B fails during execution, module A's transaction is still committed, which is not the behavior we want. If instead modules A and B use CMT, module C can also start a container-managed transaction (usually declared in the deployment descriptor), and the transactions in modules A and B will implicitly become part of that same transaction, with no complicated rewriting of code.
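A sketch of this composition under CMT, assuming a hypothetical LedgerLocal EJB (with its LedgerLocalHome) whose debit and credit methods are also deployed with the "Required" transaction attribute:

```java
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.naming.InitialContext;

public class TransferFacadeBean implements SessionBean {
    private SessionContext ctx;

    public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }

    // Declared in ejb-jar.xml with trans-attribute "Required": the container
    // begins a transaction before this method and commits (or rolls back) afterwards.
    public void transfer(String fromAccount, String toAccount, double amount)
            throws Exception {
        LedgerLocalHome home = (LedgerLocalHome)
                new InitialContext().lookup("java:comp/env/ejb/LedgerLocal");
        LedgerLocal ledger = home.create();

        ledger.debit(fromAccount, amount);  // like "module A": joins the caller's transaction
        ledger.credit(toAccount, amount);   // like "module B": joins the same transaction

        // If credit() throws, the container rolls the debit back as well,
        // without any hand-written compensation code.
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```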
If your application needs to access multiple resources within a single operation, you need two-phase commit transactions. For example, suppose you remove a message from a JMS queue and then update a record based on that message. It is critical that both operations happen, or neither does. If the message is deleted from the queue but the corresponding database record is not updated, the system is inconsistent, and inconsistent states lead to serious customer and business disputes.
We often see client applications that try to implement their own solution, perhaps "putting back" the message on the queue when the database update fails. Don't do this. It is far more complicated than it first appears, and there are many other failure cases (imagine the application crashing in the middle of the operation). Instead, use two-phase commit. If you use CMT and access two-phase-commit-capable resources (such as JMS and most databases) within a single CMT transaction, WebSphere handles all the hard work: it guarantees that the whole transaction either completes or does not, covering system crashes, database crashes, and other failures, by recording the transaction state in a transaction log. We cannot overemphasize the need for CMT transactions whenever an application accesses multiple resources.
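One common way to let the container coordinate this is sketched below with a message-driven bean and a hypothetical OrderDao that uses an XA-capable, container-managed DataSource: because the bean is deployed with a "Required" container-managed transaction, consuming the JMS message and updating the database commit or roll back as one unit.

```java
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderMessageBean implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }

    public void onMessage(Message message) {
        try {
            String orderId = ((TextMessage) message).getText();
            // Database update runs in the same global transaction as the
            // message consumption; OrderDao is a hypothetical DAO.
            new OrderDao().markShipped(orderId);
        } catch (Exception e) {
            // Mark the transaction for rollback: the message stays on the queue
            // and the database update is undone together.
            ctx.setRollbackOnly();
        }
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
}
```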
9. Use JSPs as the first choice for the presentation layer.
Use XML/XSLT only when your views must support multiple output types from a single controller and back end. We often hear arguments for choosing XML/XSLT over JSP as the presentation-layer technology. The claim is that JSP "lets you mix model and view", whereas XML/XSLT does not have this problem. Unfortunately that is not entirely true, or at least not as black and white as it sounds. XSL and XPath are programming languages; XSL is in fact Turing-complete, even though it does not match most people's definition of a programming language, since it is rule-based and lacks the control constructs programmers are used to. The problem is that given this flexibility, developers will use it. Although everyone agrees that JSP makes it easy for developers to slip "model-like" code into the view, the same thing can happen in XSL. Things like database access are admittedly very hard to do from XSL, but we have seen extraordinarily complex XSLT stylesheets performing elaborate transformations that are really model code.
The most basic reason to prefer JSP as your presentation technology, however, is that JSP is the most widely supported and most widely understood J2EE view technology. With the introduction of custom tags, JSTL, and JSP 2.0, creating JSPs has become easier, requires no Java code in the page, and cleanly separates model from view. Development environments such as WebSphere Studio have added strong JSP support (including debugging), and many developers find JSP development simpler than working with XSL. Graphical tools and other aids that support JSP (particularly within the JSF framework) let developers build pages in a WYSIWYG fashion, something that is often not as easy with XSL.
A final reason to be cautious about XSL is speed. Performance tests at IBM comparing XSL and JSP show that in most cases JSP is several times faster at producing the same HTML, even when compiled XSL is used. Although this is not an issue in most situations, it matters in applications with high performance requirements.
This is not to say you should never use XSL. In some cases, XSL's ability to take a fixed data representation and render it in different ways (see [Fowler]) makes it the best choice for building views. But that is the exception, not the rule. If you are only generating HTML for each page, then in most cases XSL is an unnecessary technology, and it gives your developers far more opportunity to create problems than it solves.
10. When using HttpSession, store only the state needed for the current business transaction, and nothing more.
Enable session persistence. HttpSession is very useful for storing application state; its API is easy to use and understand. Unfortunately, developers often forget what HttpSession is for: holding temporary user state. It is not an arbitrary data cache. We have seen far too many systems that put enormous amounts of data (megabytes) in each user's session. If 1,000 users are logged in and each has 1MB of session data, then a gigabyte or more of memory is tied up in sessions alone. Keep the HTTP session data small, or your application's performance will collapse. A reasonable amount is roughly 2KB to 4KB per user session; this is not a hard rule, and 8KB is still workable, just noticeably slower than 2KB. Be careful that HttpSession does not become a dumping ground for data. A common problem is using HttpSession to cache information that could easily be recreated when needed. Since sessions are persisted, this is a very expensive decision that forces needless serialization and writing of the data. Instead, cache such data in an in-memory hash table and keep only a key to it in the session; that way, if the user fails over to another application server, the data can be recreated. (See [Brown2].)
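A sketch of that approach, with hypothetical Report and ReportBuilder classes: the session carries only a small serializable key, and the bulky data is rebuilt on demand if the cache does not have it (for example after a failover).

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpSession;

public class ReportCache {
    // Simple per-JVM cache; entries can be recreated at any time, so losing
    // them (restart, failover) is harmless and they are never persisted.
    private static final Map CACHE = Collections.synchronizedMap(new HashMap());

    public static Report getReport(HttpSession session) {
        // Only a small serializable key lives in the session.
        String reportKey = (String) session.getAttribute("reportKey");
        Report report = (Report) CACHE.get(reportKey);
        if (report == null) {
            // Cache miss (first use, eviction, or failover to another server):
            // rebuild the bulky data from the back end instead of persisting it.
            report = new ReportBuilder().build(reportKey);
            CACHE.put(reportKey, report);
        }
        return report;
    }
}
```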
Speaking of session persistence, don't forget to enable it. If you do not, then when a server stops for any reason (a failure or normal maintenance), the current sessions of all users on that application server are lost. That is a very unpleasant experience: users have to log in again and redo whatever they were doing. If session persistence is enabled instead, WebSphere automatically moves the users (and their sessions) to another application server, and users never even know it happened. We have seen production systems that crashed unexpectedly because of a nasty bug (not in IBM code!) and still served their users well, thanks to this behavior.
11. In WebSphere, enable dynamic caching and use the WebSphere servlet caching mechanism.
These features can greatly improve system performance at little cost and without affecting the programming model. The performance benefit of caching is well known. Unfortunately, the current J2EE specification does not include a mechanism for servlet/JSP caching. WebSphere, however, provides page and fragment caching through its dynamic cache function, and it requires no changes to the application: the caching policy is declarative, configured through an XML configuration descriptor. Your application is therefore untouched and remains compliant with and portable across the J2EE specification, while still gaining the performance benefits of WebSphere's servlet and JSP caching.
The performance improvement from dynamic servlet and JSP caching can be significant, depending on the characteristics of the application. Cox and Martin [Cox] show that for an existing RDF (Resource Description Framework) Site Summary (RSS) servlet, performance increased by 10% when dynamic caching was used. Note that this experiment involved only a simple servlet, and the improvement may not reflect what a complex application would see.
To improve performance further, the WebSphere servlet/JSP result cache is integrated with the WebSphere plug-in ESI fragment processor, the IBM HTTP Server Fast Response Cache Accelerator (FRCA), and the Edge Server caching functions. For read-heavy workloads, these features provide substantial additional benefit (see [Willenborg] for performance details).

12. To maximize programmer productivity, use CMP entity beans as the preferred solution for O/R mapping.
Optimize performance with the facilities WebSphere provides (read-ahead, caching, isolation levels, and so on), and where possible selectively apply patterns that improve performance, such as the Fast Lane Reader [Marinescu]. Object/relational (O/R) mapping is fundamental to building enterprise applications in Java; nearly every J2EE application needs some form of it. The O/R mapping mechanism that J2EE vendors provide, and that is portable, efficient, and well supported by standards and by tools from different vendors, is the container-managed persistence (CMP) portion of the EJB specification.
Early CMP implementations were notorious for poor performance and for not supporting many SQL constructs. With the advent of the EJB 2.0 and 2.1 specifications, their adoption by vendors, and the arrival of tools such as IBM WebSphere Studio Application Developer, these issues are no longer a problem.
CMP EJB components are now widely used in many high-performance applications. WebSphere includes optimizations that improve the performance of CMP EJB components, including lifetime-in-cache caching and a read-ahead capability. Both optimizations are optional and require no changes to the application.
The lifetime-in-cache optimization caches CMP state data and provides time-based invalidation. The performance gain from caching state data can approach that of Option A caching while still allowing your application to scale. The read-ahead capability is used together with container-managed relationships: it reduces interaction with the database by retrieving related data in the same query as the parent data, which improves performance when the related data would otherwise be fetched in subsequent queries. [Gunther] gives a detailed description of these features and of the performance improvements they yield.
In addition, to tune your EJB components fully, pay special attention when specifying isolation levels. Use the lowest isolation level that still preserves the integrity of your data. Lower isolation levels give the best performance and reduce the risk of database deadlocks.
This is the most controversial of these best practices. Plenty of articles praise CMP EJBs, and just as many condemn them. The most basic issue here, though, is that database development is hard. Before adopting any persistence solution, you need to understand how queries work and how the database locks data. If you choose CMP EJBs, make sure you learn how to use them well, through books such as [Brown] and [Barcia]. There are subtle interactions between locking and contention that are difficult to understand, but with some investment of time and effort you will master them.
Conclusion

In this short summary we have introduced the core patterns and best practices that make J2EE development a manageable process. We have not given all the details necessary to apply these practices, but we hope we have provided enough pointers and guidance to help you decide where to go next.
References

[Alur] Deepak Alur, John Crupi, and Dan Malks, Core J2EE Patterns, Addison-Wesley, 2003.
[Bakalova] R. Bakalova et al., "WebSphere Dynamic Cache: Improving WebSphere Performance", IBM Systems Journal, Vol. 43, No. 2, 2004.
[Barcia] Roland Barcia et al., IBM WebSphere: Deployment and Advanced Configuration, IBM Press, 2004.
[Beaton] Wayne Beaton, "Migrating to IBM WebSphere Application Server, Part 1: Designing Software for Change", IBM developerWorks.
[Beck1] Kent Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley, 1999.
[Beck2] Kent Beck, Test-Driven Development: By Example, Addison-Wesley, 2002.
[Brown] Kyle Brown et al., Enterprise Java Programming with IBM WebSphere (Second Edition), Addison-Wesley, 2003.
[Brown2] Kyle Brown and Keys Botzum, "Improving HttpSession Performance with Smart Serialization", IBM developerWorks.
[Cox] J. Stan Cox and Brian K. Martin, "Exploiting Dynamic Caching in WAS 5.0", Part 1, e-Pro Magazine, July/August 2003.
[Fowler] Martin Fowler, Patterns of Enterprise Application Architecture, Addison-Wesley, 2002.
[Gunther] Harvey Gunther et al., "Optimizing WebSphere 5.0 Performance Using EJB 2.0 Caching and Read-Ahead Hints", WebSphere Developer's Journal, March 2003.
[Jewell] Tyler Jewell, "Stateful Session Beans: Beasts of Burden", ONJava.com.
[Joines] Stacy Joines, Ken Hygh, and Ruth Willenborg, Performance Analysis for Java Web Sites, Addison-Wesley, 2002.
[Marinescu] Floyd Marinescu, EJB Design Patterns, John Wiley & Sons.