J2SE 5.0 New Features Thread


Author: Wang Hong (hongwang_001@163.com)

1.1. Process, thread and thread pool

A process is a relatively independent program executing in its own address space, and it is the cornerstone of the modern operating system. A modern multitasking operating system periodically divides CPU time among processes, which allows it to run more than one program at the same time.

A thread is a single sequential flow of control within a process, and one process can contain several threads running in parallel. A thread cannot exist on its own: it is attached to a process and can only be created from within one. If a process spawns two threads, those two threads share the process's global variables and code segment, but each thread has its own stack and therefore its own local variables.
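This can be seen directly in Java. In the illustrative sketch below (the class and field names are our own, not from any library), two threads started within the same process both update one shared static field, while each keeps its own copy of a local counter on its own stack:

```java
public class SharedVsLocal {
    // Shared by both threads, like a process-global variable
    static int shared;

    public static int runTwoThreads() {
        shared = 0;
        Runnable task = new Runnable() {
            public void run() {
                int local = 0;              // lives on this thread's own stack
                for (int i = 0; i < 1000; i++) {
                    local++;                // each thread increments its own copy
                }
                synchronized (SharedVsLocal.class) {
                    shared += local;        // both threads see the same field
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shared;                      // contributions from both threads
    }

    public static void main(String[] args) {
        System.out.println("shared = " + runTwoThreads());
    }
}
```

Each thread's `local` reaches 1000 independently, so the shared field ends up at 2000: the stacks are private, the field is not.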

With the concept of a thread in hand, we can get to the topic: what exactly is a thread pool? The principle is actually simple and resembles the buffer concept in operating systems. The processing flow is as follows: a number of threads are started and put into a sleeping state; when a client sends a new request, one sleeping thread in the pool is woken up to handle it; when the request has been processed, the thread returns to the sleeping state. This approach avoids the system overhead of repeatedly creating and destroying threads, leaving more CPU time and memory for the actual application logic.
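J2SE 5.0's Executors factory provides exactly this kind of pool of reusable worker threads. The following sketch (the pool size and request count are arbitrary) submits several "requests" to two workers instead of creating one thread per request:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSketch {
    public static int handleRequests(int requests) {
        // Two worker threads sleep until work arrives, handle it, then wait again
        ExecutorService pool = Executors.newFixedThreadPool(2);
        final AtomicInteger handled = new AtomicInteger(0);
        for (int i = 0; i < requests; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    handled.incrementAndGet();   // stands in for real request handling
                }
            });
        }
        pool.shutdown();                         // let the workers drain the queue and exit
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled.get();
    }

    public static void main(String[] args) {
        System.out.println("handled " + handleRequests(8) + " requests");
    }
}
```

All eight tasks are handled even though only two threads ever exist; the workers are reused rather than created and destroyed per request.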

1.2. Java thread overview

Early in the life of the Java platform, Doug Lea, a professor at the State University of New York at Oswego, decided to create a simple library to help developers build applications that handle multithreading better. This is not to say that it could not be done with the existing libraries, but, just as with the standard networking library, multithreading is much easier with a debugged, trusted library. Helped along by the Addison-Wesley book Concurrent Programming in Java: Design Principles and Patterns, the library became more and more popular. Eventually Doug Lea decided to try to make it a standard part of the Java platform, as JSR 166, and the library finally became the java.util.concurrent package in the Tiger release. The sections below introduce the new threading material in J2SE(TM) 5.0 in detail.

1.3. Extensions to the Collections Framework

1.3.1. Queue interface

The java.util package provides a new basic interface for collections: java.util.Queue. Although you can certainly use a java.util.List as a queue by adding at one end and removing at the other, the new Queue interface provides additional methods for inserting, removing, and examining the elements of a collection, as follows:

public boolean offer(Object element)
public Object remove()
public Object poll()
public Object element()
public Object peek()

For a queue with a size limit, the new offer() method is what you want when adding a new item to a queue that may be full: instead of the unchecked exception thrown by add(), you simply get false back from offer(). The remove() and poll() methods both remove the element at the head of the queue. remove() behaves like its counterpart in the original Collection interface, but the new poll() does not throw an exception when called on an empty collection; it just returns null. The new method is therefore better suited to situations where an empty queue is normal rather than exceptional. The last two methods, element() and peek(), examine the element at the head of the queue without removing it; as with remove(), element() throws an exception when the queue is empty, while peek() returns null. In J2SE(TM) 5.0, Queue has two families of implementations: classes implementing the new BlockingQueue interface and classes implementing Queue directly. Below is an example of using a LinkedList as a Queue.

1.3.1.1. A Queue implementation

Queue queue = new LinkedList();
queue.offer("1");
queue.offer("2");
queue.offer("3");
queue.offer("4");
System.out.println("Head of queue is: " + queue.poll());
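Continuing with a LinkedList-backed queue, the small sketch below (the class and method names are illustrative) confirms the contrast between the exception-throwing and null-returning forms on an empty queue:

```java
import java.util.LinkedList;
import java.util.NoSuchElementException;
import java.util.Queue;

public class EmptyQueueDemo {
    public static boolean removeThrows() {
        Queue empty = new LinkedList();
        try {
            empty.remove();            // Collection-style: throws on an empty queue
            return false;
        } catch (NoSuchElementException expected) {
            return true;
        }
    }

    public static boolean pollReturnsNull() {
        Queue empty = new LinkedList();
        return empty.poll() == null    // Queue-style: just returns null
            && empty.peek() == null;   // peek() behaves the same way
    }

    public static void main(String[] args) {
        System.out.println("remove() throws: " + removeThrows());
        System.out.println("poll()/peek() return null: " + pollReturnsNull());
    }
}
```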

More involved is the new java.util.AbstractQueue class. It plays the same role as the java.util.AbstractList and java.util.AbstractSet classes: when you create a custom collection, you don't implement the entire interface; you just extend the abstract implementation and fill in the details. With AbstractQueue you must provide implementations of offer(), poll(), and peek(). Methods such as add() and addAll() are implemented in terms of offer(), while clear() and remove() use poll(), and element() uses peek(). You can of course override these methods with optimized implementations in a subclass, but you are not required to. Nor do you have to create your own subclass: you can use one of the built-in implementations, two of which are the non-blocking queues PriorityQueue and ConcurrentLinkedQueue.
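As a sketch of that pattern (the class name and the LinkedList backing are our own choices, not from the platform), a subclass supplies offer(), poll(), and peek(), plus the size() and iterator() methods every AbstractCollection needs, and simply inherits add(), remove(), and element():

```java
import java.util.AbstractQueue;
import java.util.Iterator;
import java.util.LinkedList;

public class SimpleQueue extends AbstractQueue {
    private final LinkedList items = new LinkedList();

    public boolean offer(Object e) {     // required by AbstractQueue
        if (e == null) throw new NullPointerException();
        items.addLast(e);
        return true;
    }

    public Object poll() {               // required by AbstractQueue
        return items.isEmpty() ? null : items.removeFirst();
    }

    public Object peek() {               // required by AbstractQueue
        return items.isEmpty() ? null : items.getFirst();
    }

    public Iterator iterator() {         // required by AbstractCollection
        return items.iterator();
    }

    public int size() {                  // required by AbstractCollection
        return items.size();
    }

    public static void main(String[] args) {
        SimpleQueue q = new SimpleQueue();
        q.add("a");                      // inherited add() delegates to offer()
        q.add("b");
        System.out.println(q.element()); // inherited element() delegates to peek()
        System.out.println(q.poll());
    }
}
```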

The new java.util.concurrent package adds the BlockingQueue interface and five blocking queue classes to the concrete collection classes already available in the Collections Framework. The Javadoc for the BlockingQueue interface shows the basic usage of a blocking queue, reproduced below: the producer's put() operation blocks when there is no space available, while the consumer's take() operation blocks when there is nothing in the queue.

1.3.1.2. Use of BlockingQueue

class Producer implements Runnable {
    private final BlockingQueue queue;
    Producer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { queue.put(produce()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    Object produce() { ... }
}

class Consumer implements Runnable {
    private final BlockingQueue queue;
    Consumer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { consume(queue.take()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    void consume(Object x) { ... }
}

class Setup {
    void main() {
        BlockingQueue q = new SomeQueueImplementation();
        Producer p = new Producer(q);
        Consumer c1 = new Consumer(q);
        Consumer c2 = new Consumer(q);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}

The five queue classes differ as follows:

1. ArrayBlockingQueue: a bounded queue backed by an array.

2. LinkedBlockingQueue: an optionally bounded queue backed by linked nodes.

3. PriorityBlockingQueue: an unbounded priority queue backed by a priority heap.

4. DelayQueue: a time-based scheduling queue backed by a priority heap.

5. SynchronousQueue: a simple rendezvous mechanism that uses the BlockingQueue interface.
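The bounded case is easy to demonstrate with ArrayBlockingQueue (a small illustrative sketch): once the backing array is full, the non-blocking offer() returns false rather than blocking the way put() would:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedDemo {
    public static boolean[] fillPastCapacity() {
        BlockingQueue q = new ArrayBlockingQueue(2);  // capacity of exactly 2
        boolean first  = q.offer("1");   // accepted
        boolean second = q.offer("2");   // accepted; the queue is now full
        boolean third  = q.offer("3");   // rejected: returns false, does not block
        return new boolean[] { first, second, third };
    }

    public static void main(String[] args) {
        boolean[] r = fillPastCapacity();
        System.out.println(r[0] + " " + r[1] + " " + r[2]);
    }
}
```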

The first two classes, ArrayBlockingQueue and LinkedBlockingQueue, are nearly identical, differing only in their backing store, and LinkedBlockingQueue does not always have a capacity bound: a LinkedBlockingQueue created without a size bound never blocks when an element is added. The new DelayQueue implementation may be the most interesting one. Elements added to the queue must implement the new Delayed interface, which has only one method: long getDelay(java.util.concurrent.TimeUnit unit). Because the queue is unbounded, additions return immediately, but an element cannot be taken from the queue before its delay time has elapsed. If multiple elements have completed their delays, the one whose delay expired earliest is taken first. The implementation is actually not that complicated; the following program shows a concrete use of DelayQueue:

1.3.1.3. A DelayQueue example

import java.util.Random;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class Delay {

    static class NanoDelay implements Delayed {

        long trigger;

        NanoDelay(long i) {
            trigger = System.nanoTime() + i;
        }

        public int compareTo(Delayed other) {
            long i = trigger;
            long j = ((NanoDelay) other).trigger;
            if (i < j) return -1;
            if (i > j) return 1;
            return 0;
        }

        public boolean equals(Object other) {
            return ((NanoDelay) other).trigger == trigger;
        }

        public long getDelay(TimeUnit unit) {
            long n = trigger - System.nanoTime();
            return unit.convert(n, TimeUnit.NANOSECONDS);
        }

        public long getTriggerTime() {
            return trigger;
        }

        public String toString() {
            return String.valueOf(trigger);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Random random = new Random();
        DelayQueue queue = new DelayQueue();
        for (int i = 0; i < 5; i++) {
            queue.add(new NanoDelay(random.nextInt(1000)));
        }
        long last = 0;
        for (int i = 0; i < 5; i++) {
            NanoDelay delay = (NanoDelay) queue.take();
            long tt = delay.getTriggerTime();
            System.out.println("Trigger time: " + tt);
            if (i != 0) {
                System.out.println("Delta: " + (tt - last));
            }
            last = tt;
        }
    }
}

The example first defines an inner class, NanoDelay, which is essentially a delay measured in nanoseconds and makes use of the new System.nanoTime() method. The main() method then just puts NanoDelay objects into the queue and takes them out again. If you want queue items to do something more, you add methods to your Delayed implementation and call them after taking an item from the queue (try extending NanoDelay with extra methods to do something interesting). The program prints the time difference between successive takes from the queue; if the difference were ever negative, that would be an error, because take() never returns an element before its delay has expired. The SynchronousQueue class is the simplest of the five. It has no internal capacity and behaves like a hand-off between threads: a producer adding an element to the queue waits for a consumer in another thread, and when that consumer appears, the element is transferred directly between producer and consumer, never actually entering the blocking queue.
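That hand-off behavior can be sketched as follows (the class and method names are our own): the producing thread's put() does not complete until the consuming thread's take() meets it, and the element never sits in the queue:

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static Object handOff(final Object item) {
        final SynchronousQueue channel = new SynchronousQueue();
        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    channel.put(item);        // blocks until a consumer arrives
                } catch (InterruptedException ignored) {
                }
            }
        });
        producer.start();
        try {
            Object received = channel.take(); // rendezvous with the producer
            producer.join();                  // put() has now completed too
            return received;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("received: " + handOff("ping"));
    }
}
```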

1.3.2. The List, Set, and Map interfaces

The new java.util.concurrent.ConcurrentMap interface extends the earlier Map interface, and ConcurrentHashMap is its direct concrete implementation. The new interface adds a set of thread-safe compound operations: putIfAbsent(), remove(), and replace(). The putIfAbsent() method adds to the map: it takes the same key and value parameters as the ordinary put() method, but the pair is added only if the map does not already contain the key. If the map already contains the key, the key's existing value is kept. Like putIfAbsent(), the overloaded remove() method takes two parameters, a key and a value: when called, it removes the entry only if the key is currently mapped to the specified value. If the value does not match the key's current mapping, the entry is not removed and remove() returns false; if it matches, the entry is removed.

For the new CopyOnWriteArrayList and CopyOnWriteArraySet classes, every mutating operation first makes a copy of the backing array, changes the copy, and then swaps the copy in. This approach guarantees that iterating over the collection never throws ConcurrentModificationException: iteration proceeds over the original array, while later operations see the updated one. These new collections, CopyOnWriteArrayList and CopyOnWriteArraySet, are best suited to workloads where reads greatly outnumber writes.
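A small sketch (the keys and values are arbitrary) shows putIfAbsent() and the two-argument remove() behaving as described:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static boolean demo() {
        ConcurrentMap map = new ConcurrentHashMap();
        Object first = map.putIfAbsent("key", "first");    // key absent: added, returns null
        Object second = map.putIfAbsent("key", "second");  // key present: kept, returns "first"
        boolean wrongValue = map.remove("key", "second");  // value differs: nothing removed
        boolean rightValue = map.remove("key", "first");   // value matches: entry removed
        return first == null
                && "first".equals(second)
                && !wrongValue
                && rightValue
                && map.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println("conditional operations behaved as documented: " + demo());
    }
}
```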

1.4. Thread pool

When it comes to practical implementations, the term "thread pool" is somewhat misleading, because the "obvious" implementation of a thread pool does not, in most cases, produce the results we hope for. The term predates the Java platform, so it may be a product of a less object-oriented era, but it continues to be widely used.

What we usually want is a work queue combined with a fixed group of worker threads, using wait() and notify() to signal waiting threads that new work has arrived. The work queue is usually implemented as a linked list with an associated monitor object. The following code implements a work queue with a thread pool.

import java.util.LinkedList;

public class WorkQueue {

    private final int nThreads;
    private final PoolWorker[] threads;
    private final LinkedList queue;

    public WorkQueue(int nThreads) {
        this.nThreads = nThreads;
        queue = new LinkedList();
        threads = new PoolWorker[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new PoolWorker();
            threads[i].start();
        }
    }

    public void execute(Runnable r) {
        synchronized (queue) {
            queue.addLast(r);
            queue.notify();
        }
    }

    private class PoolWorker extends Thread {
        public void run() {
            Runnable r;
            while (true) {
                synchronized (queue) {
                    while (queue.isEmpty()) {
                        try {
                            queue.wait();
                        } catch (InterruptedException ignored) {
                        }
                    }
                    r = (Runnable) queue.removeFirst();
                }
                // If we don't catch RuntimeException,
                // the pool could leak threads
                try {
                    r.run();
                } catch (RuntimeException e) {
                    // You might want to log something here
                }
            }
        }
    }
}

Although the thread pool is a powerful mechanism for building multithreaded applications, using it is not without risk. Applications built with thread pools are subject to all the concurrency hazards that affect any other multithreaded application, such as synchronization errors and deadlock, and they are also vulnerable to a few hazards specific to pools, such as pool-related deadlock, resource exhaustion, and thread leakage.

Before J2SE(TM) 5.0, Doug Lea had already written an excellent open-source concurrency library, util.concurrent, which includes mutexes, semaphores, collection classes such as queues and hash tables that perform well under concurrent access, and several work queue implementations. The PooledExecutor class in this package is an efficient, widely used, correct thread pool implementation built around a work queue. util.concurrent defines an Executor interface for executing Runnables asynchronously, along with several implementations of Executor that have different scheduling characteristics. Queueing a task to an Executor is very simple:

Executor executor = new QueuedExecutor();
...
Runnable runnable = ...;
executor.execute(runnable);

PooledExecutor is a sophisticated thread pool implementation that not only schedules tasks onto a pool of worker threads, but also tunes the pool size flexibly and manages thread lifecycles. It can limit the number of tasks in the work queue, preventing queued tasks from exhausting all available memory, and it offers a variety of shutdown and saturation policies (block, discard, throw, discard oldest, run in the caller, and so on). All Executor implementations manage thread creation and destruction for you, including shutting down all threads when the Executor itself is shut down.
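In J2SE(TM) 5.0 itself, these PooledExecutor ideas were standardized as java.util.concurrent.ThreadPoolExecutor. The sketch below (all the sizes and the choice of saturation policy are illustrative) shows pool sizing, a bounded work queue, and a saturation policy being configured:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ConfiguredPool {
    public static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                2,                                  // core pool size
                4,                                  // maximum pool size
                30, TimeUnit.SECONDS,               // idle time before extra threads exit
                new ArrayBlockingQueue(10),         // bounded work queue: at most 10 waiting tasks
                new ThreadPoolExecutor.CallerRunsPolicy()); // saturation policy: run in the caller
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        pool.execute(new Runnable() {
            public void run() {
                System.out.println("task ran in the pool");
            }
        });
        pool.shutdown();                            // the pool destroys its threads on shutdown
    }
}
```

With CallerRunsPolicy, a task submitted when both the queue and the pool are full simply runs in the submitting thread, which throttles producers instead of dropping work.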

Sometimes you want to start a computation asynchronously and use its result later, when you actually need it. The FutureResult utility class makes this easy. FutureResult represents a task that may take some time to execute and can run in another thread, and the FutureResult object serves as a handle to that computation. Through it, you can find out whether the task has completed, wait for it to complete, and retrieve the result. FutureResult can be combined with an Executor: you create a FutureResult and queue it to the Executor while keeping a reference to the FutureResult. The following example shows a simple use of FutureResult together with an Executor: it starts rendering an image asynchronously and carries on with other processing in the meantime:

1.4.1. A FutureResult example

Executor executor = ...;
ImageRenderer renderer = ...;

FutureResult futureImage = new FutureResult();
Runnable command = futureImage.setter(new Callable() {
    public Object call() {
        return renderer.render(rawImage);
    }
});

// Start the rendering process
executor.execute(command);

// Do other things while executing
drawBorders();
drawCaption();

// Retrieve the future result, blocking if necessary
drawImage((Image) futureImage.get()); // use future

FutureResult can also be used to increase the concurrency of a demand-loaded cache. By placing a FutureResult in the cache, rather than the result of the computation itself, you reduce the time you hold the write lock on the cache. While this does not make the first thread's computation finish any sooner, it shortens the time during which the first thread blocks other threads from accessing the cache. It also makes the result available to other threads earlier, because they can retrieve the FutureResult from the cache. Here is an example of a cache that uses FutureResult:

1.4.2. Using FutureResult to improve a cache

public class FileCache {

    private Map cache = new HashMap();
    private Executor executor = new PooledExecutor();

    public Object get(final String name)
            throws InterruptedException, InvocationTargetException {
        FutureResult result;
        synchronized (cache) {
            result = (FutureResult) cache.get(name);
            if (result == null) {
                result = new FutureResult();
                executor.execute(result.setter(new Callable() {
                    public Object call() {
                        return loadFile(name);
                    }
                }));
                cache.put(name, result);
            }
        }
        return result.get();
    }
}

This approach lets the first thread enter and exit the synchronized block quickly, so that other threads can obtain the first thread's result as soon as it is available, and two threads can never end up computing the same object.
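The same caching idea can be written against the J2SE(TM) 5.0 classes that absorbed util.concurrent, using FutureTask in place of FutureResult (a sketch; loadFile here is a stand-in that just fabricates a string):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class TaskCache {
    private final Map cache = new HashMap();

    public Object get(final String name) {
        FutureTask task;
        boolean mustRun = false;
        synchronized (cache) {                // short critical section: no loading inside
            task = (FutureTask) cache.get(name);
            if (task == null) {
                task = new FutureTask(new Callable() {
                    public Object call() {
                        return loadFile(name);
                    }
                });
                cache.put(name, task);
                mustRun = true;
            }
        }
        if (mustRun) {
            task.run();                       // the first caller computes outside the lock
        }
        try {
            return task.get();                // later callers block only until it is set
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    Object loadFile(String name) {            // stand-in for the expensive file load
        return "contents of " + name;
    }

    public static void main(String[] args) {
        TaskCache cache = new TaskCache();
        System.out.println(cache.get("a.txt"));
    }
}
```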

1.5. Conclusion

Threads occupy a pivotal position in all kinds of applications, and their complexity in practice goes well beyond what can be covered here. This article has introduced only the new threading ingredients of J2SE(TM) 5.0; readers who need more depth should consult the relevant specialist books.

Please credit the original source when reprinting: https://www.9cbs.com/read-61938.html
