Java theory and practice: Concurrency made simple (sort of)


Brian Goetz (Brian@quiotix.com), Principal Consultant, Quiotix Corp, 2003

Like many other pieces of application infrastructure, concurrency utility classes such as work queues and thread pools usually do not need to be rewritten for every project. This month, Brian Goetz introduces Doug Lea's util.concurrent package, a high-quality, widely used, open source collection of concurrency utilities. You can share your thoughts on this article with the author and other readers in the accompanying discussion forum. (You can also participate by clicking Discuss at the top or bottom of the article.)

When you need an XML parser, a text indexing and search engine, a regular expression compiler, an XSL processor, or a PDF generator, most of us would never consider writing our own. Whenever we need these facilities, we use a commercial or open source implementation, and for good reason: the existing implementations work well, they are easy to use, and writing our own would be a great deal of work for little or no benefit. As software engineers, we prefer to follow Isaac Newton's example and stand on the shoulders of giants. Sometimes that is the right instinct, but not always. (In his Turing Award lecture, Richard Hamming suggested that a measure of self-reliance can be more desirable for computer scientists.)

Yet when we look at some of the low-level application framework services that nearly every server application needs, such as logging, database connections, caching, and task scheduling, we see these basic infrastructure services being rewritten over and over again. Why does this happen? Is it because the existing choices are inadequate, or because a custom version would be better or a closer fit for the application at hand? I believe the rewriting is usually unnecessary. In fact, a custom version developed for a single application is often no better suited to that application than a widely available, general-purpose implementation, and it may well be worse. For example, even if you don't like log4j, it gets the job done. A homegrown logging system might have a few application-specific features that log4j lacks, but for most applications it is hard to argue that a complete custom logging package is worth the cost of writing it from scratch rather than using an existing, general implementation. Even so, many project teams end up writing their own logging, connection, or thread scheduling packages, again and again.

On the surface, one of the reasons we would never consider writing our own XSL processor is that it would be an enormous amount of work. But these low-level framework services look simple, so writing our own does not seem hard. In fact, they are difficult to get right, and nowhere near as simple as they first appear. The main reason these particular wheels keep getting reinvented is that the need for them in a given application often starts out very small, but grows as you run into the same problems countless other projects have already faced. The reasoning usually goes like this: "We don't need a full-blown logging/scheduling/caching package, just something simple, so we'll write one that does just what we need and tailor it to our specific requirements." But very often you quickly outgrow the simple tool you wrote and keep adding features, until you have written a full-blown infrastructure service. At that point, you are usually attached to the code you have written, whether it is good or bad.
You have already paid the full cost of building your own, so on top of the actual cost of migrating to a general-purpose implementation, you would also have to overcome the "sunk cost" barrier. The case for reusing concurrency building blocks is even stronger: scheduling and concurrency infrastructure classes are exactly the kind of code that rarely pays to write yourself.

The Java language provides a set of useful low-level synchronization primitives: wait(), notify(), and synchronized. But using these primitives well takes some skill; you have to weigh performance, deadlock, fairness, resource management, and the many ways thread safety can go wrong. Concurrent code is hard to write and even harder to test, and even experts sometimes get it wrong on the first try.

Doug Lea has written an excellent, free package of concurrency utilities, including lightweight tasks, efficient concurrent collections, atomic arithmetic operations, and other basic building blocks for concurrent applications. It is generally referred to as util.concurrent (its actual package name is much longer), and it will form the basis of the java.util.concurrent package being standardized through JSR 166 in the Java Community Process. In the meantime, util.concurrent has been well tested, and many server applications, including the JBoss J2EE application server, already use it.

Filling the gaps

The core Java class library omits a set of useful high-level synchronization tools, such as mutexes, semaphores, and blocking, thread-safe collection classes. The concurrency primitives of the Java language (synchronization, wait(), and notify()) are too low-level for the needs of most server applications. What if you want to try to acquire a lock, but give up if you cannot get it within a given period of time? What about abandoning the attempt if the thread is interrupted? Creating a lock that up to N threads can hold at once? Supporting multiple modes of locking, such as exclusive access for writers but shared access for readers? Or acquiring a lock in one method but releasing it in another? The built-in locking mechanism supports none of these scenarios directly, but all of them can be built on the basic concurrency primitives the Java language provides. Doing so, however, takes skill and is easy to get wrong.

Server application developers need simple facilities for mutual exclusion, synchronizing on events, communicating data across activities, and asynchronously scheduling tasks. For these jobs, the low-level primitives the Java language provides are awkward to use and easy to misuse. The purpose of the util.concurrent package is to fill this gap by providing classes for locking, blocking queues, and task scheduling, classes that handle many common error conditions and let you bound the resources consumed by task queues and by the tasks running in them.
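To make the preceding list concrete, here is a minimal sketch of a timed, interruptible lock acquisition using util.concurrent's Mutex class. The import and the attempt()/release() calls reflect Doug Lea's EDU.oswego.cs.dl.util.concurrent package as commonly documented; treat the exact signatures as assumptions of this sketch rather than a definitive reference, and note that the updateSharedState() helper is purely illustrative.

  import EDU.oswego.cs.dl.util.concurrent.Mutex;

  public class TimedLocking {
      private final Mutex lock = new Mutex();

      // Try to do some guarded work, but give up if the lock cannot be
      // acquired within one second; attempt() also aborts with an
      // InterruptedException if the calling thread is interrupted.
      public boolean updateWithTimeout() throws InterruptedException {
          if (!lock.attempt(1000)) {
              return false;               // could not get the lock in time
          }
          try {
              updateSharedState();        // illustrative guarded operation
              return true;
          } finally {
              lock.release();             // always release, even on exceptions
          }
      }

      private void updateSharedState() { /* ... */ }
  }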
Scheduling asynchronous tasks

The most widely used classes in util.concurrent are those that deal with scheduling asynchronous tasks. In the July installment of this column, we looked at thread pools and work queues, and at how many Java applications schedule small units of work using the "Runnable queue" pattern.

It is tempting to run a task in the background simply by spinning off a new thread for it:

  new Thread(new Runnable() { ... }).start();

This approach is attractive because it is so simple, but it has two major drawbacks. First, creating a new thread costs resources, so spawning many threads, each of which runs a short task and then exits, can mean that the JVM spends more work, and more resources, creating and destroying threads than it spends doing actual useful work. Second, and harder to solve even if the cost of creating and destroying a thread were zero: how do you limit the resources consumed when executing a given kind of task? What prevents a sudden flood of requests from spawning an enormous number of threads all at once? Real-world server applications need to manage their resources more carefully than that. You need to limit the number of tasks executing concurrently.
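As an aside, the "lock that up to N threads can hold" mentioned earlier, a counting semaphore, is one low-level way to cap concurrent work. The sketch below is illustrative only and still pays the thread-per-task cost; the thread pool described next is the better answer. Class and method names assume util.concurrent's EDU.oswego.cs.dl.util.concurrent.Semaphore as commonly documented.

  import EDU.oswego.cs.dl.util.concurrent.Semaphore;

  public class BoundedLauncher {
      // At most ten tasks may run at the same time; further submissions block.
      private final Semaphore permits = new Semaphore(10);

      public void launch(final Runnable task) throws InterruptedException {
          permits.acquire();                     // blocks once ten tasks are running
          new Thread(new Runnable() {
              public void run() {
                  try {
                      task.run();
                  } finally {
                      permits.release();         // free the permit when the task finishes
                  }
              }
          }).start();
      }
  }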

The thread pool solves both of these problems, with the added benefits of better scheduling efficiency and bounded resource usage. Although it is easy to write a work queue and a thread pool that runs Runnables in pool threads (the sample code in July's column does exactly that), writing an effective task scheduler takes much more than simply synchronizing access to a shared queue. A real-world task scheduler should be able to deal with pool threads that die, kill excess pool threads so they do not consume resources unnecessarily, manage the size of the pool dynamically according to load, and limit the number of queued tasks. The last item, limiting the number of queued tasks, matters because it keeps a server application from crashing with an out-of-memory error when requests arrive faster than they can be handled.

Limiting the task queue forces a policy decision: if the work queue overflows, what do you do with the overflow? Discard the newest task? Discard the oldest? Block the submitting thread until space becomes available in the queue? Run the new task within the submitting thread? There are a variety of workable overflow-management policies, and each is appropriate in some situations and inappropriate in others.

Executor

util.concurrent defines an Executor interface for executing Runnables asynchronously, along with several implementations of Executor that offer different scheduling characteristics. Queuing a task with an executor is very simple:

  Executor executor = new QueuedExecutor();
  ...
  Runnable runnable = ...;
  executor.execute(runnable);

The simplest implementation, ThreadedExecutor, creates a new thread for each Runnable and provides no resource management, much like the familiar new Thread(new Runnable() { ... }).start() idiom. It has one important benefit, though: by changing only the construction of the executor, you can move to a different execution model without having to hunt through the entire application source for every place a new thread is created.

QueuedExecutor uses a single background thread to process all tasks, much like the event thread in AWT and Swing. QueuedExecutor has two nice properties: tasks are executed in the order they were queued, and because they all run in a single thread, a task does not necessarily have to synchronize every access to data shared with other tasks.

PooledExecutor is a sophisticated thread pool implementation. It not only schedules tasks in a pool of worker threads, it also lets you tune the pool size flexibly and manages the thread life cycle. It can bound the number of tasks in the work queue, which prevents queued tasks from exhausting all available memory, and it offers a variety of shutdown and saturation policies (block, discard, throw, discard oldest, run in the caller, and so on).

All of the Executor implementations manage the creation and destruction of threads for you, including shutting down all threads when the executor is shut down, and they provide hooks into the thread-creation process so that an application can manage the threads it wants to manage. This lets you, for example, place all worker threads in a particular ThreadGroup or give them descriptive names.
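To make those PooledExecutor knobs concrete, here is a minimal configuration sketch with a bounded work queue and a run-in-the-caller saturation policy. The constructor and tuning methods shown (BoundedBuffer, setMinimumPoolSize(), setKeepAliveTime(), runWhenBlocked()) reflect the EDU.oswego.cs.dl.util.concurrent API as commonly documented; treat the exact names and values as assumptions of this sketch rather than a definitive reference.

  import EDU.oswego.cs.dl.util.concurrent.BoundedBuffer;
  import EDU.oswego.cs.dl.util.concurrent.PooledExecutor;

  public class RequestPoolSetup {
      public static PooledExecutor createRequestPool() {
          // At most 100 tasks may wait in the queue; at most 20 threads may run at once.
          PooledExecutor pool = new PooledExecutor(new BoundedBuffer(100), 20);
          pool.setMinimumPoolSize(4);         // keep a few threads ready even when idle
          pool.setKeepAliveTime(60 * 1000);   // let surplus idle threads exit after a minute
          pool.runWhenBlocked();              // saturation policy: the submitter runs the task itself
          return pool;
      }
  }

Submitting work is then just pool.execute(task), exactly as with the simpler executors above.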
FutureResult

Sometimes you want to start a computation asynchronously and use its result later, when you actually need it. The FutureResult utility class makes this easy. FutureResult represents a task that may take some time to execute and can be executed in another thread, and the FutureResult object serves as a handle to that execution.

Through it, you can find out whether the task has completed, wait for it to complete, and retrieve its result. FutureResult combines naturally with Executor: you can create a FutureResult, queue the corresponding command with an executor, and keep a reference to the FutureResult. Listing 1 shows a simple example that uses FutureResult and Executor together to start rendering an image asynchronously while other work proceeds:

Listing 1. FutureResult and Executor

  Executor executor = ...;
  ImageRenderer renderer = ...;

  FutureResult futureImage = new FutureResult();
  Runnable command = futureImage.setter(new Callable() {
      public Object call() {
          return renderer.render(rawImage);
      }
  });

  // start the rendering process
  executor.execute(command);

  // do other things while executing
  drawBorders();
  drawCaption();

  // retrieve the future result, blocking if necessary
  drawImage((Image) futureImage.get());  // use future

FutureResult can also be used to improve the concurrency of load-on-demand caches. By placing a FutureResult in the cache, rather than the result of the computation itself, you reduce the time you hold the write lock on the cache. While this does not speed things up for the first thread that has to put an item into the cache, it reduces the time the first thread blocks other threads from accessing the cache. It also makes the result available to other threads earlier, because they can retrieve the FutureResult from the cache. Listing 2 shows an example of using FutureResult with a cache:

Listing 2. Using FutureResult to improve a cache

  public class FileCache {
      private Map cache = new HashMap();
      private Executor executor = new PooledExecutor();

      public Object get(final String name)
              throws InterruptedException, InvocationTargetException {
          FutureResult result;

          synchronized (cache) {
              result = (FutureResult) cache.get(name);
              if (result == null) {
                  result = new FutureResult();
                  executor.execute(result.setter(new Callable() {
                      public Object call() {
                          return loadFile(name);
                      }
                  }));
                  cache.put(name, result);
              }
          }
          return result.get();
      }
  }

This approach lets the first thread get into and out of the synchronized block quickly, so that other threads can obtain the result of the first thread's computation as soon as the first thread does, and there is no chance of two threads both computing the same object.

Conclusion

The util.concurrent package contains many useful classes, some of which you may recognize as being as good as, or better than, versions you have already written yourself. They are high-performance implementations of the basic building blocks of many multithreaded applications, and they have undergone extensive testing.

