The new locking classes improve on synchronization, but that is no reason to abandon synchronized just yet.
JDK 5.0 gives developers some powerful new options for building high-performance concurrent applications. For example, the ReentrantLock class in java.util.concurrent.lock is intended as a replacement for the synchronized functionality of the Java language: it has the same memory semantics and the same locking behavior, but offers better performance under contention, along with features that synchronized does not provide. Does this mean we should forget about synchronized and use only ReentrantLock? Concurrency expert Brian Goetz, just back from his summer vacation, provides the answer.
Multithreading and concurrency are nothing new, but one of the innovations of the Java language design was that it was the first mainstream language to integrate a cross-platform threading model and a formal memory model directly into the language. The core class library includes a Thread class for creating, starting, and manipulating threads, and the language includes constructs for communicating concurrency constraints across threads: synchronized and volatile. While this simplifies the development of platform-independent concurrent classes, it by no means makes writing concurrent code trivial; it just makes it easier.
A quick review of synchronized
Declaring a block of code synchronized has two important consequences, usually referred to as atomicity and visibility. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), preventing multiple threads from colliding with each other when updating shared state. Visibility is more subtle; it deals with the vagaries of memory caching and compiler optimizations. Ordinarily, threads are free to cache the values of variables in a way that is not necessarily immediately visible to other threads (whether in registers, in processor-specific caches, or through instruction reordering and other compiler optimizations). But if the developer uses synchronization, as shown below, the runtime ensures that updates to variables made by one thread before it exits a synchronized block become immediately visible to any other thread when it enters a synchronized block protected by the same monitor (lock). A similar rule exists for volatile variables. (For more on synchronization and the Java memory model, see Resources.)
synchronized (lockObject) {
    // update object state
}
So synchronization takes care of everything needed to update multiple shared variables without a race condition and without corrupting data (assuming the boundaries of synchronization are placed correctly), while ensuring that other correctly synchronized threads see the most recent values of those variables. By defining a clear, cross-platform memory model (which was modified in JDK 5.0 to fix some errors in the original definition), it becomes possible to build "Write Once, Run Anywhere" concurrent classes by following this simple rule: whenever you write a variable that may next be read by another thread, or read a variable that may last have been written by another thread, you must synchronize.
But nothing is free. In recent JVMs, the performance cost of uncontended synchronization (when one thread owns a lock and no other thread is attempting to acquire it) is still quite low. (It was not always so; synchronization in early JVMs was not yet optimized, which gave rise to the then-true but now mythical belief that synchronization, contended or not, carries a large performance cost.)
If synchronized is so good, why did the JSR 166 group spend so much time developing the java.util.concurrent.lock framework? The answer is simple: synchronization is good, but it is not perfect. It has some functional limitations. It is not possible to interrupt a thread that is waiting to acquire a lock, nor is it possible to poll for a lock or to attempt to acquire a lock without being willing to wait for it indefinitely. Synchronization also requires that locks be released in the same stack frame in which they were acquired; most of the time this is the right thing (and it interacts nicely with exception handling), but there are cases where non-block-structured locking is more appropriate.
The ReentrantLock class
The Lock framework in java.util.concurrent.lock is an abstraction for locking that allows lock implementations to be provided as Java classes rather than as a language feature. This leaves room for multiple implementations of Lock, which may have different scheduling algorithms, performance characteristics, or locking semantics. The ReentrantLock class implements Lock and has the same concurrency and memory semantics as synchronized, but adds features such as lock polling, timed lock waits, and interruptible lock waits. It also offers better performance under heavy contention. (In other words, when many threads are all trying to access a shared resource, the JVM can spend less time scheduling threads and more time executing them.)
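The article doesn't show these extra features in code, so here is a minimal sketch of timed and interruptible lock acquisition. The class name, method names, and the 50 ms timeout are illustrative assumptions, not taken from the article.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockFeaturesSketch {
    private final Lock lock = new ReentrantLock();

    // Timed lock wait: give up if the lock cannot be acquired within 50 ms.
    public boolean tryUpdate() throws InterruptedException {
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                // update object state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;   // lock not acquired; the caller can back off and retry
    }

    // Interruptible lock wait: another thread can interrupt us while we block here.
    public void updateInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // update object state
        } finally {
            lock.unlock();
        }
    }
}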
What does it mean for a lock to be reentrant? Simply put, the lock has an acquisition count associated with it. If a thread that already holds the lock acquires it again, the count is incremented, and the lock then has to be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor the thread already owns, the thread is allowed to proceed, and the lock is not released when the thread exits the second (or subsequent) synchronized block, but only when it exits the first synchronized block protected by that monitor.
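A hedged sketch of what reentrancy looks like in practice (the class and method names here are illustrative, not from the article):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancySketch {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();            // hold count goes from 0 to 1
        try {
            inner();            // the same thread reacquires the lock it already holds
        } finally {
            lock.unlock();      // hold count back to 0; the lock is actually released here
        }
    }

    private void inner() {
        lock.lock();            // hold count goes to 2; no self-deadlock
        try {
            // update object state
        } finally {
            lock.unlock();      // hold count drops to 1; this thread still owns the lock
        }
    }
}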
One difference between Lock and synchronized jumps out from the code example in Listing 1: the lock must be released in a finally block. Otherwise, if the protected code throws an exception, the lock might never be released! This may not sound like a big deal, but it is extremely important. Forget to release the lock in a finally block and you have planted a ticking time bomb in your program; when it finally goes off, you will spend a lot of effort tracking down the source. With synchronization, the JVM guarantees that the lock is released automatically.
Listing 1. Protecting a block of code with ReentrantLock
Lock lock = new ReentrantLock();
lock.lock();
try {
    // update object state
}
finally {
    lock.unlock();
}
In addition, the ReentrantLock implementation is far more scalable under contention than the current synchronized implementation. (It is likely that the contended performance of synchronized will improve in future JVM versions.) This means that when many threads are all contending for the same lock, total throughput is generally better with ReentrantLock than with synchronized.
Comparing the scalability of ReentrantLock and synchronized
Tim Peierls built a simple benchmark around a simple linear congruential pseudorandom number generator (PRNG) to measure the relative scalability of synchronized and Lock. This example is good because the PRNG does real work each time it is called, so the benchmark measures a reasonable, real-world application of synchronized and Lock, rather than measuring purely contrived or do-nothing code (as so many so-called benchmarks do).
In the benchmark, there is a PseudoRandom interface with a single method, nextRandom(int bound), very similar in functionality to the java.util.Random class. Because a PRNG uses the most recently generated number as input when generating the next one, and maintains that last generated number as an instance variable, the code segment that updates this state must not be preempted by another thread, so some form of locking is needed to ensure this. (The java.util.Random class does the same.) We built two implementations of PseudoRandom: one using synchronized, the other using ReentrantLock. The driver spawns a large number of threads, each of which madly contends for the lock, and then measures how many rounds per second they can execute. Figures 1 and 2 summarize the results for different numbers of threads. The benchmark is not perfect, and it was run on only two systems (a dual hyperthreaded Xeon running Linux and a single-processor Windows system), but it should be enough to show the scalability advantage ReentrantLock has over synchronized.
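The benchmark code itself is not shown in the article; the sketch below illustrates roughly what the two PseudoRandom variants could look like. The interface and method names come from the description above, but the PRNG constants and everything else are illustrative assumptions, not Tim Peierls' actual code.

import java.util.concurrent.locks.ReentrantLock;

interface PseudoRandom {
    int nextRandom(int bound);
}

// Variant guarded by synchronized
class SyncPseudoRandom implements PseudoRandom {
    private int seed = 1;

    public synchronized int nextRandom(int bound) {
        seed = seed * 1103515245 + 12345;    // advance the PRNG state (illustrative LCG step)
        return Math.abs(seed % bound);       // fold the result into [0, bound)
    }
}

// Variant guarded by ReentrantLock
class LockPseudoRandom implements PseudoRandom {
    private final ReentrantLock lock = new ReentrantLock();
    private int seed = 1;

    public int nextRandom(int bound) {
        lock.lock();
        try {
            seed = seed * 1103515245 + 12345;
            return Math.abs(seed % bound);
        } finally {
            lock.unlock();
        }
    }
}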
Figure 1. Throughput of synchronized and Lock, single CPU
Figure 2. Throughput of synchronized and Lock (normalized), 4 CPUs
The graphs in Figures 1 and 2 show throughput in calls per second, with the different implementations normalized to the 1-thread synchronized case. Each implementation converges relatively quickly on a steady state, which typically requires that the processor be fully utilized and that most of the processor time be spent doing the actual work (computing random numbers), with only a small fraction spent on thread scheduling overhead. You will notice that the synchronized version performs quite poorly in the presence of any kind of contention, while the Lock version spends considerably less time on scheduling overhead, leaving room for higher throughput and more effective CPU utilization.
Condition variables
The root class Object contains some special methods for communicating between threads: wait(), notify(), and notifyAll(). These are advanced concurrency features that many developers never use, which may be a good thing, since they are quite subtle and easy to use incorrectly. Fortunately, with the introduction of java.util.concurrent in JDK 5.0, there are almost no situations left where developers need to use these methods.
There is an interaction between notification and locking: in order to wait() or notify() on an object, you must hold that object's lock. Just as Lock is a generalization of synchronization, the Lock framework includes a generalization of wait and notify called Condition. A Lock object acts as a factory for condition variables bound to that lock, and unlike the standard wait and notify methods, a given Lock can have more than one condition variable associated with it. This simplifies the development of many concurrent algorithms. For example, the Javadoc for Condition shows an example of a bounded buffer implementation that uses two condition variables, "not full" and "not empty", which is more readable (and more efficient) than an implementation with only a single wait set per lock. The Condition methods analogous to wait, notify, and notifyAll are named await, signal, and signalAll, because they cannot override the corresponding methods on Object.
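The bounded-buffer example in the Condition Javadoc looks roughly like the following condensed sketch; the capacity and field names here are illustrative choices of mine.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();  // signaled when space becomes available
    private final Condition notEmpty = lock.newCondition();  // signaled when an item becomes available

    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();                  // producers wait only on "not full"
            items[putIndex] = item;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();                    // wake one consumer, not every waiter
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();                 // consumers wait only on "not empty"
            T item = (T) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();                     // wake one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}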
It's not fair
If you look at the Javadoc, you will see that one of the constructor arguments for ReentrantLock is a boolean that lets you choose between a fair lock and an unfair one. A fair lock lets threads acquire the lock in the order in which it was requested; an unfair lock allows barging, where a thread can sometimes acquire the lock before another thread that requested it first.
Why wouldn't we make all locks fair? After all, fairness is good and unfairness is bad, right? (Whenever children want a decision made, they cry "that's not fair." We think fairness is pretty important, and children know it too.) In reality, the fairness guarantee is a very strong one to make for a lock, and it carries a significant performance cost. The bookkeeping and synchronization needed to enforce fairness mean that a contended fair lock has much lower throughput than an unfair one. As a default, fairness should be set to false unless it truly matters to your algorithm that threads acquire the lock in exactly the order they queued up.
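For reference, fairness is selected when the lock is constructed; a minimal sketch in the style of Listing 1:

Lock unfairLock = new ReentrantLock();       // default: barging (unfair) lock, higher throughput
Lock fairLock   = new ReentrantLock(true);   // fair lock: threads acquire roughly in arrival order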
So what about synchronization? Are built-in monitor locks fair? The answer surprises many people: they are unfair, and always have been. But nobody complains about thread starvation, because the JVM guarantees that all threads will eventually get the lock they are waiting for. This statistical fairness is sufficient for most situations, and it costs far less than an absolute fairness guarantee. So the fact that ReentrantLock is "unfair" by default simply makes explicit what has always been true of synchronization. If you don't mind it for synchronization, don't worry about it for ReentrantLock. Figures 3 and 4 contain the same data as Figures 1 and 2, plus an additional data set for the random-number benchmark using a fair lock instead of the default barging lock. As you can see, fairness has a cost. Pay it if you need it, but don't make it your default.
Figure 3. Relative throughput of synchronization, barging locks, and fair locks, 4 CPUs
Figure 4. Relative throughput of synchronization, barging locks, and fair locks, 1 CPU
Better everywhere?
It sounds as if ReentrantLock is better than synchronized in every way: it can do everything synchronized does, it has the same memory and concurrency semantics, it adds features synchronized doesn't have, and it performs better under load. So should we forget about synchronized, treating it as a good idea that has since been superseded? Or even rewrite our existing synchronized code with ReentrantLock? In fact, several introductory Java programming books take this approach in their multithreading chapters, using Lock for all their examples and mentioning synchronized only as history. But I think that is taking a good thing too far.
Don't abandon synchronized just yet
As appealing as ReentrantLock is, and as important as its advantages are, I believe that rushing to treat synchronized as obsolete would be a serious mistake. The lock classes in java.util.concurrent.lock are tools for advanced users and advanced situations. In general, you should stick with synchronized unless you have a concrete need for one of Lock's advanced features, or clear evidence (not just a suspicion) that synchronization has become a scalability bottleneck.
Why advocate conservatism in the face of an obviously "better" implementation? Because synchronized still has a few advantages over the lock classes in java.util.concurrent.lock. For one, with synchronized you cannot forget to release the lock; the JVM does it for you when you exit the synchronized block. With Lock, it is easy to forget the finally block, which is terrible for your program: it passes its tests but deadlocks in production, and the cause is very hard to pin down. (This is also a good reason not to let junior developers use Lock at all.)
Another reason is that when locks are acquired and released through synchronized, the JVM can include locking information in thread dumps. This is extremely valuable for debugging, because it can identify the source of deadlocks or other anomalous behavior. The Lock classes are just ordinary classes, and the JVM does not know which thread owns which Lock object. Moreover, almost every developer is familiar with synchronized, and it works on every JVM version. Until JDK 5.0 becomes the standard (which may take a couple of years), using the Lock classes means relying on features that are not available on every JVM and not familiar to every developer. So when should you choose ReentrantLock instead of synchronized? The answer is simple: when you actually need something synchronized doesn't offer, such as timed lock waits, interruptible lock waits, non-block-structured locking, multiple condition variables, or lock polling. ReentrantLock also has scalability benefits, and you should use it if you actually have a situation with high contention, but remember that the vast majority of synchronized blocks hardly ever see contention, let alone high contention. I would advise developing with synchronized until it proves inadequate, rather than simply assuming that "the performance will be better" with ReentrantLock. Remember, these are advanced tools for advanced users. (And truly advanced users tend to prefer the simplest tools they can find until they are convinced the simple tools are inadequate.) As always, make it right first, and then worry about whether you have to make it faster.
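As one illustration of those capabilities, here is a hedged sketch of lock polling (my own example, not from the article): both locks are acquired with tryLock(), and if the second one is not immediately available we back out instead of blocking, avoiding a potential deadlock.

import java.util.concurrent.locks.Lock;

public class PollingSketch {
    // Attempt an action that needs both locks; return false rather than blocking forever.
    public static boolean withBothLocks(Lock first, Lock second, Runnable action) {
        if (first.tryLock()) {                // poll: acquire only if the lock is free right now
            try {
                if (second.tryLock()) {
                    try {
                        action.run();         // both locks held; do the combined update
                        return true;
                    } finally {
                        second.unlock();
                    }
                }
            } finally {
                first.unlock();
            }
        }
        return false;                         // caller can back off and retry later
    }
}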
Conclusion
The Lock framework is a compatible replacement for synchronization that offers many features synchronized does not, and its implementations offer better performance under contention. But these obvious benefits are not reason enough to replace synchronized with ReentrantLock everywhere. Instead, make the choice based on whether you actually need ReentrantLock's capabilities. In the great majority of cases you will not: synchronized works well, works on all JVMs, is understood by more developers, and is less error-prone. Save Lock for when you really need it. In those cases, you will be glad you have it.
Resources