Thinking through the usage of synchronized

xiaoxiao2021-03-06  100

There were parts of synchronized I did not understand when I first ran into it, so I looked up some material that helped me a great deal. Here is my understanding of how synchronized is used:

First: where synchronized applies, what it locks, why it is needed, and its side effects

Scenario: multiple threads concurrently access a shared resource (a variable, a data structure, a file, and so on). Side effect: synchronization makes threads wait, so do not use it outside a multi-threaded environment; the keyword guarantees safety, but it also reduces efficiency. A simple example: several clients (JSPs or servlets) access one shared global object and update it:

Object xxx = ...getApplicationObject();
synchronized (xxx) {
    // update the shared object
}

Second: some containers are already synchronized internally; Vector and Hashtable, for example, use the synchronized keyword on their methods.

Third: when filling a large array (say 1,000 elements) with values, use synchronized, because writing the values takes a while and the array should not be read by other threads during that period.

-----------------

The synchronized keyword has two usages: synchronized methods and synchronized blocks.

1. Synchronized methods: declare a method synchronized by adding the keyword to its declaration, for example: public synchronized void accessVal(int newVal); A synchronized method controls access to the class's member variables. Each class instance has one lock, and every synchronized method must acquire the lock of the instance it is invoked on before it can execute; otherwise the calling thread blocks. Once inside the method, the thread holds the lock exclusively and releases it only when the method returns; only then can a blocked thread acquire the lock and resume. This mechanism guarantees that, at any moment, at most one of the synchronized member methods of a given instance is executing (because at most one thread can hold that instance's lock), which effectively prevents conflicting access to member variables, as long as every method that may touch those variables is declared synchronized. In Java, not only every instance but also every class has a lock, so static methods can be declared synchronized as well, to control access to the class's static member variables.

The drawback of synchronized methods: declaring a large method synchronized can hurt performance badly. Typically, if a thread's run() method were declared synchronized, then, since it runs for the thread's entire lifetime, no call to any other synchronized method of that object would ever succeed. This can be worked around by moving the code that accesses the member variables into a separate synchronized method that the large method calls, but Java offers a better solution: the synchronized block.

2. Synchronized blocks: declare a block with the synchronized keyword, using the syntax

synchronized (syncObject) {
    // the code that needs access control
}

A synchronized block must acquire the lock of syncObject (which, as described above, can be a class instance or a class) before its code can execute; the mechanism is the same as for synchronized methods. Because any block of code can be guarded and any object can be named as the lock, this form offers more flexibility. A short example follows.
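A minimal sketch of the two forms side by side (the Counter class and its field are invented for illustration); both methods take the same lock, the Counter instance itself:

public class Counter {
    private int value = 0;

    // Synchronized method: the lock is the Counter instance (this).
    public synchronized void increment() {
        value++;
    }

    // Equivalent synchronized block: the lock object is named explicitly,
    // so any object (this, a private lock field, or Counter.class) could be used.
    public void incrementWithBlock() {
        synchronized (this) {
            value++;
        }
    }

    public synchronized int get() {
        return value;
    }
}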

### The static method synchronized problem ###

In Java, threads are synchronized with the synchronized keyword. A method marked synchronized automatically becomes a synchronized method: entering it amounts to locking on the current object, which acts as the semaphore, and the lock is released on exit. Java also provides the wait(), notify(), and notifyAll() methods to assist with synchronization. wait() releases the lock the thread currently holds and puts the thread into the blocked state, where it stays until some other thread calls notify() or notifyAll() on the same object to wake it up.
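A minimal wait()/notify() sketch (the MailBox class and its field are invented for illustration): the consumer must hold the lock before calling wait(), releases the lock while waiting, and re-acquires it after being notified.

public class MailBox {
    private String message;              // shared data, guarded by "this"

    public synchronized void put(String m) {
        message = m;
        notifyAll();                     // wake any thread blocked in take()
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) {        // guard against spurious wake-ups
            wait();                      // releases the lock and blocks in the wait pool
        }
        String m = message;
        message = null;
        return m;
    }
}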

A thread is in different states at different times. When it occupies the CPU and is executing, we say it is in the running state. Threads in other states cannot run at that moment; there may be many of them, and the reasons they cannot run differ, so there is a "pool" for each kind of non-runnable thread. Threads that are merely waiting for the virtual machine's scheduler sit in the runnable pool: everything is ready, they just need a CPU. Threads blocked trying to enter synchronized code wait in the object's lock pool. Threads that have called wait() inside synchronized code wait in the object's wait pool. Three things deserve attention here: first, wait() can only be executed by the thread that is currently running, which means the thread necessarily holds the object's lock; second, when wait() puts the thread into the blocked state, the thread releases the object's lock; third, when notify() or notifyAll() wakes the thread, it moves into the object's lock pool to re-acquire the lock before it can run again. The state diagram looks roughly like this:

                 schedule
   (Runnable) <-----------> (Running)
       ^                     |       |
       | acquire lock        |       |  wait() - must hold lock;
       |      synchronized,  |       |  releases it
       |      lock busy      v       v
   (Blocked in object's   <------- (Object's
    lock pool)            notify()  wait pool)

------------------------

For one special resource, the memory inside an object, Java provides a built-in mechanism to prevent conflicts. Since we usually declare a class's data elements private and touch that memory only through methods, conflicts can be prevented effectively by declaring specific methods synchronized. At any moment only one thread can be inside any of the synchronized methods of a particular object (although that thread may be calling synchronized methods of several different objects). Two trivially simple synchronized methods:

synchronized void f() { /* ... */ }
synchronized void g() { /* ... */ }

Every object contains a single lock (also called a "monitor") that is automatically part of the object; you do not have to write any special code for it. When any synchronized method is called, the object is locked, and no other synchronized method of that object can be called until the first one has finished its work and released the lock. In the example above, if f() is called on an object, g() cannot be called on the same object until f() completes and unlocks it. So all synchronized methods of a particular object share one lock, and that lock prevents several threads from writing to common memory at the same time. Each class also has its own lock (it is part of the class's Class object), so synchronized static methods lock the whole class and can be used to protect static data. In short: a synchronized static method locks the class; an ordinary synchronized method locks the object.

-------------

import java.net.*;
import java.io.*;

public class SyncTest extends Thread {
    int whichfunc = 0;

    public static void main(String[] args) throws Exception {
        SyncTest syn1 = new SyncTest(1);
        SyncTest syn2 = new SyncTest(2);
        syn1.join();
        syn2.join();
    }

    public SyncTest(int which) {
        whichfunc = which;
        start();
    }

    public void run() {
        try {
            if (whichfunc == 1) {
                func1();
            } else if (whichfunc == 2) {
                func2();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static int order = 0;

    private synchronized static void func1() throws Exception {
        System.out.println("this is func1, value is " + order);
        Thread.sleep(2000);
        System.out.println("end of func1, value is " + order);
    }

    private synchronized static void func2() throws Exception {
        System.out.println("this is func2, value is " + order);
        Thread.sleep(2000);
        System.out.println("end of func2, value is " + order);
    }
}

Comparing func1 and func2: if both are synchronized static, the two calls are synchronized with each other. If both are merely static but not synchronized, they are not synchronized. If both are merely synchronized but not static, they are not synchronized with each other here either, because the two calls run against different objects.

If both are synchronized, but one is static and the other is not, they are not synchronized with each other either.

So my conclusion is: synchronized in front of a static method synchronizes globally, across all instances of the class; synchronized in front of a non-static method synchronizes only on a particular object. See http://www.javaworld.com/javaworld/jw-04-1999/JW-04-Toolbox.html. A non-static method can still simulate the static behaviour; for example, func1 in the example above can be changed to:

private void func1() throws Exception {
    synchronized (this.getClass()) {
        System.out.println("this is func1, value is " + order);
        Thread.sleep(2000);
        System.out.println("end of func1, value is " + order);
    }
}

The effect is the same. A non-static method must be invoked on an object, while a static method semantically requires no object. Synchronization is effective across the entire JVM; synchronized is really about acquiring and releasing a lock. For a non-static method, the scope of synchronized is only the object it is invoked on, not the whole program; for a static method, the scope is the class's Class object, and since that object is globally unique, only one thread at a time can execute any synchronized static method of the class. In this sense we can view a static method as a method of that unique Class object, so for static methods and blocks, synchronized really does mean that only one thread is executing them at a time. When threads on two different object instances call the same synchronized non-static method, however, synchronized does not exclude them from each other, in theory and in practice.

--------------------------------------------------------------------------

Writing multithreaded Java applications

How to avoid the most common problems in concurrent programming

Alex Roetter (aroetter@cs.stanford.edu), Software Engineer, Teton Data Systems, February 2001

The Java thread API lets programmers write applications that take advantage of multiprocessing while remaining responsive to the user as work is done in the background. Alex Roetter introduces the Java thread API, gives an overview of multithreading issues, and offers solutions to the common problems.

Almost every GUI program written with AWT or Swing needs multithreading, but multithreaded programs cause many difficulties; developers new to them often find themselves tormented by problems such as incorrect program behaviour and deadlock. In this article we explore the problems that come with multithreading and propose solutions to the common pitfalls.

What is a thread? A program or process can contain multiple threads, each executing instructions according to the program's code. Multiple threads appear to do their work in parallel, much like several processes running on one computer; on a multiprocessor machine they actually do run in parallel. Unlike processes, however, threads share one address space: several threads can read and write the same variables and data structures, so when writing a multithreaded program you must take care that no thread interferes with another.

Think of the program as an office. If the staff never had to share office resources or communicate with one another, everyone would work independently. If one worker wants to talk to another, that only works if the other is listening and both speak the same language. And a worker can only use the photocopier when it is idle and in a usable state (not halfway through someone else's job, not jammed). In this article you will see how threads that cooperate in a Java program resemble staff working in a well-organized office.

In a multithreaded program, threads are taken from the ready queue and run on whatever CPUs are available. The operating system can move a thread from the processor back to the ready queue or to a blocking queue, in which case we say the processor has "suspended" the thread. Similarly, the Java virtual machine (JVM) controls thread movement, under either a cooperative or a preemptive model, from the ready queue onto a processor, where the thread starts executing its code.

In the cooperative threading model, a thread decides for itself when to give up the processor so that other threads can run. The program developer determines exactly when a thread will yield to other threads, allowing them to cooperate with one another. The disadvantage is that a malicious or badly written thread can consume all available CPU time and starve the other threads.

In the preemptive threading model, the operating system may interrupt a thread at any time, usually after it has run for a fixed period (the so-called time slice). As a result, no thread can unfairly hog the processor for long. However, being interruptible at any point creates other trouble for the developer. Using the office metaphor again: suppose a worker is using the copier but is pre-empted before the job finishes; the next person then finds the previous worker's half-finished job lying on the copier. The preemptive model requires threads to share resources correctly; the cooperative model requires threads to share execution time. Because the JVM specification does not mandate a particular threading model, Java developers must write programs that run correctly under both. After looking a little more at threads and inter-thread communication, we will see how to design programs for both models.
Threads and the Java language. To create a thread in the Java language, you instantiate an object of class Thread (or a subclass) and send it the start() message. (Alternatively, you can construct the Thread from any object that implements the Runnable interface and then send start() to that Thread.) The behaviour of each thread is defined in the run() method of the thread object; run() is the equivalent of main() in a conventional program, and the thread keeps running until run() returns, at which point the thread dies. A minimal sketch appears below. Most applications need their threads to communicate with one another to synchronize their actions, and the simplest way to achieve synchronization in a Java program is with locks: to prevent simultaneous access to a shared resource, a thread locks the resource before using it and unlocks it afterwards.
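A minimal sketch of creating and starting threads in the two ways just described (the class name and printed text are arbitrary):

public class HelloThreads {
    public static void main(String[] args) throws InterruptedException {
        // 1. Pass a Runnable to a Thread and send the Thread a start() message.
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                System.out.println("running in t1");
            }
        });

        // 2. Subclass Thread (here anonymously) and override run().
        Thread t2 = new Thread() {
            public void run() {
                System.out.println("running in t2");
            }
        };

        t1.start();
        t2.start();
        t1.join();   // wait until each thread's run() has returned
        t2.join();
    }
}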

Think of a lock on the copy machine: only one staff member holds the key at any time, and nobody can use the copier without it. Locking shared variables lets Java threads communicate and synchronize quickly and easily. If a thread holds the lock on an object, it knows that no other thread is accessing that object; even under the preemptive model, other threads cannot touch the object until the locking thread is resumed, finishes its work, and releases the lock. Threads that try to access a locked object normally go to sleep until the locking thread releases it; once the lock is free, these sleeping threads are woken and moved to the ready queue.

In Java programming, every object has a lock, and a thread acquires it by using the synchronized keyword. A synchronized method or block of a given class instance can be executed by only one thread at a time, because the code must acquire the object's lock before it can run. Continuing the copier metaphor, to avoid copying conflicts we simply synchronize the copier resource so that only one worker uses it at a time, as in the following example. The method that operates the copier (in the CopyMachine object) is a synchronized method; only one thread can execute synchronized code of a given CopyMachine object, so workers who need the copier must wait in line.

class CopyMachine {

    public synchronized void makeCopies(Document d, int nCopies) {
        // only one thread executes this at a time
    }

    public void loadPaper() {
        // multiple threads could access this at once!
        synchronized (this) {
            // only one thread accesses this at a time
            // feel free to use shared resources, overwrite members, etc.
        }
    }
}

Fine-grained locks. An object-level lock is often rather coarse. Why lock the whole object, denying other threads access through the object's other synchronized methods, when a thread only needs some of its resources? If an object has several independent resources, there is no need to lock out every other thread just so one thread can use part of the object. Because every object has a lock, we can use dummy objects purely as locks, as follows:

class FineGrainLock {

    MyMemberClass x, y;
    Object xLock = new Object(), yLock = new Object();

    public void foo() {
        synchronized (xLock) {
            // access x here
        }

        // do something here - but don't use shared resources

        synchronized (yLock) {
            // access y here
        }
    }

    public void bar() {
        synchronized (this) {
            // access both x and y here
        }

        // do something here - but don't use shared resources
    }
}

These methods need not be declared synchronized at the method level; they are using member locks, not the object-wide lock that a synchronized method acquires.

Semaphores. Often there is a pool of resources that several threads need to share, and only a limited number are available. For example, a server may run many threads that answer client requests, and all of them need connections to the same database, but only a fixed number of database connections may be open at any time. How do you hand out a fixed number of connections to a large number of threads efficiently? One way to control access to a pool of resources (beyond a simple all-or-nothing lock) is the well-known counting semaphore. A counting semaphore encapsulates the number of available resources. It is implemented on top of a simple lock and amounts to a counter, visible to the threads, initialized to the number of available resources. For instance, we could initialize a semaphore to the number of available database connections. Each time a thread acquires the semaphore, the number of available connections drops by one; when the thread has consumed the resource and releases it, the counter goes up by one. When all the resources governed by the semaphore are in use, a thread that tries to acquire it blocks until some resource is released.

The most common use of semaphores is to solve the consumer-producer problem, which arises when one thread does work that another thread consumes through a shared variable: the consumer thread may only access the data after the producer thread has produced it. To solve this with a semaphore, create one initialized to zero and have the consumer block on it. Whenever the producer completes a unit of work, it signals the semaphore (releases a resource); whenever the consumer has used up a unit and needs new data, it tries to acquire the semaphore again. The value of the semaphore is therefore always the number of produced units not yet consumed. This approach is far more efficient than having the consumer poll for new data, because a consumer that wakes up and finds nothing available just goes back to sleep, and that operating-system overhead is expensive. Although semaphores are not supported directly by the Java language, they are easy to implement on top of object locks. A simple implementation looks like this:

class Semaphore {

    private int count;

    public Semaphore(int n) {
        this.count = n;
    }

    public synchronized void acquire() {
        while (count == 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                // keep trying
            }
        }
        count--;
    }

    public synchronized void release() {
        count++;
        notify();  // alert a thread that's blocking on this semaphore
    }
}
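A sketch of how this Semaphore might gate access to a fixed set of database connections, as described above (the ConnectionPool class, its method, and the pool size are invented for illustration; acquire() and release() are the methods defined in the listing):

class ConnectionPool {
    private final Semaphore available = new Semaphore(5);  // five connections in total

    public void runQuery(String sql) {
        available.acquire();       // blocks until one of the five slots is free
        try {
            // borrow a real database connection and execute sql here
        } finally {
            available.release();   // give the slot back and wake one waiter
        }
    }
}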

Common locking problems. Unfortunately, using locks brings other problems with it. Let us look at some common problems and their solutions:

Deadlock. Deadlock is the classic multithreading problem: no work can be completed because different threads are waiting for locks that will never be released. Imagine two threads, representing two hungry people who must share one knife and one fork and take turns eating. Each needs to acquire two locks, the lock on the shared knife and the lock on the shared fork. Suppose thread A gets the knife and thread B gets the fork: thread A blocks waiting for the fork, while thread B blocks waiting for the knife held by A. This example is contrived, but this kind of situation occurs often, even though it is hard to detect at run time. And although it is difficult to detect or predict every case in which it could happen, deadlock can be avoided by designing the system according to rules like the following (a sketch of the first rule appears after this list):

Have all threads acquire a given set of locks in the same order. This eliminates the situation where the owners of X and Y each wait for the other's resource.

Group several locks together under a single lock. In the deadlock example, create a lock for a "silverware" object; the silverware lock must be acquired before either the knife or the fork.

Mark resources with variables that can be checked without blocking. After a thread acquires the silverware lock, it can check those variables to see whether every lock in the silverware set is available; if so, it takes the locks it needs, otherwise it releases the silverware lock and tries again later.

Most important of all, design the whole system carefully before writing any code. Multithreading is difficult, and a detailed design helps you discover possible deadlocks before you start programming.
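A sketch of the first rule (the Diner class and its lock fields are invented for illustration): every thread that needs both the knife and the fork takes the locks in the same fixed order, so the circular wait can never form.

class Diner {
    private final Object knife = new Object();
    private final Object fork  = new Object();

    // Every thread acquires the locks in the same order: knife first, fork second.
    public void eat() {
        synchronized (knife) {
            synchronized (fork) {
                // use both shared resources here
            }
        }
    }
}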

Volatile variables. The volatile keyword exists to deal with optimizing compilers. Consider the following code:

class VolatileTest {
    boolean flag = false;

    public void foo() {
        flag = false;
        if (flag) {
            // this could happen
        }
    }
}

An optimizing compiler may decide that the body of the if statement can never execute and not compile that code at all. But if this class is used by multiple threads, flag, having just been set earlier in the method, may be changed by another thread before the if statement tests it. Declaring the variable with the volatile keyword tells the compiler not to optimize away code by predicting the variable's value at compile time.
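With the keyword added, the same class would look like this; the compiler may no longer assume that the value it just wrote is still current when the if statement runs:

class VolatileTest {
    volatile boolean flag = false;   // may be changed by another thread at any moment

    public void foo() {
        flag = false;
        if (flag) {
            // another thread may have set flag between the two statements
        }
    }
}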

Inaccessible threads. Sometimes a thread blocks for a reason other than waiting for an object's lock, and blocking I/O is the best example of this in Java programming. While a thread is blocked on an I/O call inside an object, the object should remain accessible to other threads, because that object is usually the one responsible for cancelling the blocking I/O operation. The blocked thread, however, typically defeats this: if the object's other methods are also synchronized, the object is effectively frozen while the thread is blocked, and other threads cannot send it messages (for example, to cancel the I/O operation) because they cannot obtain the object's lock. Make sure blocking calls are not placed inside synchronized code, or at least ensure that an object containing synchronized blocking code also has non-synchronized methods. This takes some care to keep the resulting code thread safe, but it lets the object respond to other threads even while one of its own threads is blocked. A sketch follows.
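A sketch of that advice (the SocketReader class and its methods are invented for illustration): the blocking read happens outside any synchronized code, so a second thread can always get in to call cancel() and close the socket, which breaks the first thread out of the blocking call.

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class SocketReader {
    private final Socket socket;

    SocketReader(Socket socket) {
        this.socket = socket;
    }

    public void readLoop() throws IOException {
        InputStream in = socket.getInputStream();
        byte[] buffer = new byte[1024];
        int n;
        // The blocking read is NOT inside a synchronized block, so this
        // object is not frozen while the thread waits for data.
        while ((n = in.read(buffer)) != -1) {
            process(buffer, n);
        }
    }

    // Other threads can always call this, even while readLoop() is blocked.
    public void cancel() throws IOException {
        socket.close();   // makes the blocked read() fail with an exception
    }

    private synchronized void process(byte[] buffer, int length) {
        // only the short, non-blocking work is synchronized
    }
}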

Design decisions for the two threading models. Whether the threading model is preemptive or cooperative depends on the virtual machine implementation and differs from one implementation to another, so Java developers must write programs that work under both. As mentioned earlier, under the preemptive model a thread can be interrupted at any point in its code, except within an atomic block; once an atomic operation has begun, it runs to completion before the thread can be switched out. In Java programming, an assignment to a variable of 32 bits or less is atomic, while assignments to the 64-bit types double and long are not. Using locks to synchronize access to shared resources correctly is enough to make a multithreaded program work properly under the preemptive model.

Under the cooperative model, it is entirely up to the programmer to make sure threads give up the processor regularly so that other threads are not robbed of execution time. Calling yield() moves the current thread off the processor and back onto the ready queue; calling sleep() makes the thread give up the processor and sleep for the interval specified in the call. As you might expect, scattering such calls through the code does not by itself guarantee correct behaviour. If a thread holds a lock (because it is inside a synchronized method or block), it does not release the lock when it calls yield(), so even though the thread has been suspended, other threads waiting for that lock still cannot run. To ease this problem, avoid calling yield() inside synchronized methods: put the code that needs synchronizing into small synchronized blocks inside non-synchronized methods, and call yield() outside those blocks. Another solution is to call wait(), which does make the thread give up the lock it holds on that object. This works well when the object is synchronized at the method level, because then only that one lock is involved; with fine-grained locks, wait() does not release them all. Also, a thread blocked in wait() only wakes up when another thread calls notify() or notifyAll().

Threads and AWT/Swing. In Java programs built with Swing and/or the AWT, the AWT event handler runs in its own thread. Developers must take care not to tie up this GUI thread with long computations, because it is responsible for handling user events and redrawing the graphical interface; as soon as the GUI thread is busy, the whole program appears unresponsive. Swing notifies callbacks (such as mouse listeners and action listeners) by calling the appropriate methods on the Swing thread. The implication is that a listener callback should spawn another thread to do any substantial work, so that the callback returns quickly and the Swing thread can get back to responding to other events. If the Swing thread runs asynchronously with respect to the rest of the program, responding to events and redrawing output, how can other threads modify Swing state safely? As just mentioned, Swing callbacks run on the Swing thread, so they may modify Swing data and draw to the screen. But what about changes that are not triggered by a Swing callback? Modifying Swing data from a non-Swing thread is unsafe.
Swing provides two methods to solve this problem: invokeLater() and invokeAndWait().
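A minimal sketch of the first of these (the StatusUpdater class and the label text are invented for illustration):

import javax.swing.JLabel;
import javax.swing.SwingUtilities;

class StatusUpdater {
    private final JLabel statusLabel;

    StatusUpdater(JLabel statusLabel) {
        this.statusLabel = statusLabel;
    }

    // Called from a worker thread; the Swing component is touched only
    // inside the Runnable, which Swing queues and later runs on its own thread.
    public void showDone() {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                statusLabel.setText("done");
            }
        });
    }
}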

To modify Swing state, you simply call one of these methods and hand the work to a Runnable object. Because Runnable objects are usually associated with their own threads, you might think these objects are executed as separate threads, but that would actually be unsafe. In fact, Swing places the object on its queue and executes its run() method at some later moment on the event thread, which is what makes it safe to modify Swing state there.

Summary. The Java language is designed so that multithreading is necessary for almost all applets and applications. In particular, I/O and GUI programming need multiple threads to give the user a smooth experience. If you follow the basic rules mentioned in this article, and above all design the system carefully before you start programming, including its access to shared resources, you can avoid many of the common and hard-to-find threading traps.

Resources

Refer to the Java 2 Platform API specification (version 1.3): the Java 2 API documentation. For more about how the JVM handles threads and locks, see the Java Virtual Machine Specification. Allen Holub's Taming Java Threads (Apress, June 2000) is an excellent reference; you may also want to read Allen's article on what he considers the weakest part of an otherwise great language, its thread model.

About the author

Alex Roetter has several years of experience writing multithreaded applications in Java and other programming languages, and holds a bachelor's degree in computer science from Stanford University. You can contact Alex at aroetter@cs.stanford.edu.

==========================

Threading lightly: Synchronization is not the enemy

When do we need to synchronize, and what does it really cost?

Brian Goetz (brian@quiotix.com), Software Consultant, Quiotix, July 2001

Unlike many other programming languages, the Java language specification includes explicit support for threading and concurrency. Having the language itself support concurrency makes it simpler to specify and manage constraints on shared data and on the timing of operations across threads, but it does not make the complexities of concurrent programming any easier to understand. The purpose of this three-part series is to help programmers understand some of the main issues of multithreaded programming in the Java language, in particular the impact of thread safety on the performance of Java programs.


Most programming languages say nothing about threads and concurrency; those issues have been left to the platform or the operating system. The Java Language Specification (JLS), however, explicitly includes a threading model and provides several language elements that developers can use to make their programs thread safe.

Explicit support for threads has both advantages and disadvantages. It makes it easier for us to use threads when writing programs, but it also means we have to pay attention to the thread safety of everything we write, because any class may well end up being used in a multithreaded environment. Many users first discover that they have to understand threads not because they are writing programs that create and manage threads, but because they are using a multithreaded tool or framework. Any developer who has used the Swing GUI framework, or has written a servlet or JSP page, has been exposed to the complexity of threads, whether aware of it or not.

The Java designers wanted to create a language that would run well on modern hardware, including multiprocessor systems. To achieve this, the job of coordinating threads was largely handed to the software developer: the programmer must specify where data is shared between threads. The main tool for managing coordination between threads in Java programs is the synchronized keyword. In the absence of synchronization, the JVM is free to reorder and interleave operations performed in different threads. Most of the time this is what we want, because it improves performance, but it places an extra burden on programmers, who must identify when such performance improvements would endanger the correctness of the program.

What does synchronized really mean? Most Java programmers think of a synchronized block or method entirely in terms of a mutex (mutual exclusion lock) or a critical section (a block of code that only one thread may execute at a time). Although mutual exclusion and atomicity are part of the semantics of synchronized, the full story is more complicated and involves what happens when a monitor is entered and when it is exited. The semantics of synchronized do guarantee that only one thread at a time can access the protected section, but they also include rules about how the synchronizing thread interacts with main memory.

A good way to think about the Java Memory Model (JMM) is to imagine each thread running on its own processor, with all processors sharing one main memory and each processor having its own cache, which may not always be in step with main memory. In the absence of synchronization, the JMM allows two threads to see different values at the same memory location. When a thread synchronizes on a monitor (a lock), the JMM requires the thread's cache to be invalidated right after the lock is acquired and to be flushed (writing modified memory locations back to main memory) before the lock is released. It is not hard to see why synchronization can affect program performance; flushing the cache frequently is expensive.

Walking a fine line. If synchronization is omitted where it is needed, the consequences are serious: data corruption and race conditions that lead to crashes, incorrect results, or unpredictable behaviour. Worse, these conditions may occur only rarely and sporadically, which makes the problems hard to detect and reproduce.
If the test environment differs significantly from the production environment, whether in load or in hardware, these problems may never appear during testing, leading to the false conclusion that the program is correct, when in reality the problems simply have not surfaced yet. A race condition, to define it precisely, is a situation in which two or more threads or processes read or write shared data and the final result depends on how the threads happen to be scheduled. Race conditions can produce unpredictable results and hidden program bugs.
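A small sketch of the stale-value problem the memory model allows (the VisibilityDemo class is invented for illustration): without synchronization, and without declaring the field volatile, the reader thread may keep seeing its cached copy of ready indefinitely.

class VisibilityDemo {
    private boolean ready = false;   // shared, but neither volatile nor guarded by a lock

    public void writer() {           // called by one thread
        ready = true;                // may sit in that thread's cache indefinitely
    }

    public void reader() {           // called by another thread
        while (!ready) {
            // The JMM permits this loop never to observe the write above.
            // Synchronizing both methods on the same lock, or declaring
            // ready volatile, forces the value to be re-read from main memory.
        }
    }
}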

On the other hand, improper or excessive use of synchronization causes other problems, such as poor performance and deadlock. Poor performance is, of course, less serious than data corruption, but it is still a serious problem and should not be ignored either. Writing good multithreaded programs means walking a fine line: synchronizing enough that your data is never corrupted, but not so much that you risk deadlock or slow the program down unnecessarily.

How much does synchronization cost? Because of the cache flushing and invalidation involved, synchronized blocks in the Java language are usually more expensive than the critical-section facilities offered by many platforms, which are typically implemented with an atomic "test and set" machine instruction. Even if a program contains only a single thread running on a single processor, a synchronized method call is still slower than an unsynchronized one. If the synchronization actually has to contend for the lock, the cost is much higher, because several thread switches and system calls are needed.

Fortunately, the JVM has improved steadily with each release, raising the overall performance of Java programs and reducing the cost of synchronization along the way, and further improvements are likely. Moreover, the performance cost of synchronization is often exaggerated. One well-known source has stated that a synchronized method call is 50 times slower than an unsynchronized one. That statement may well be true, but it is misleading, and it has led many developers to avoid synchronization even where it is needed.

Expressing the performance loss of synchronization strictly as a percentage makes little sense, because an uncontended synchronization imposes a fixed cost on a block or method. The percentage slowdown caused by this fixed delay depends on how much work is done inside the synchronized block. A synchronized call to an empty method may be 20 times slower than an unsynchronized call to an empty method, but how often do we call empty methods? When we measure the cost of synchronization on more representative small methods, the percentage quickly drops to a tolerable level.

Table 1 puts some of this data together. It lists the cost of a synchronized method call relative to an unsynchronized one for several different case studies, platforms, and JVMs. For each case, I ran a simple program that measured the run time of a loop of 10,000,000 method calls; I called both a synchronized and an unsynchronized version and compared the results. The figures in the table are the ratio of the synchronized version's run time to the unsynchronized version's, and they show the performance cost of synchronization. Each run calls one of the simple methods in Listing 1.

Table 1 shows only the relative performance of a synchronized method call versus an unsynchronized one; to measure the performance cost in absolute terms you would also have to account for the speed improvements in each JVM, which are not reflected in this data. In most of the tests, the newer version of each JVM improved the JVM's overall performance substantially, and the 1.4 Java virtual machine will probably improve it further.

Table 1. The performance cost of synchronization

JDK | staticEmpty | empty | fetch | hashmapGet | singleton | create
--- | --- | --- | --- | --- | --- | ---
Linux / JDK 1.1 | 9.2 | 2.4 | 2.5 | n/a | 2.0 | 1.42
Linux / IBM Java SDK 1.1 | 33.9 | 18.4 | 14.1 | n/a | 6.9 | 1.2
Linux / JDK 1.2 | 2.5 | 2.2 | 2.2 | 1.64 | 2.2 | 1.4
Linux / JDK 1.3 (no JIT) | 2.52 | 2.58 | 2.02 | 1.44 | 1.4 | 1.1
Linux / JDK 1.3 -server | 28.9 | 21.0 | 39.0 | 1.87 | 9.0 | 2.3
Linux / JDK 1.3 -client | 21.2 | 4.2 | 4.3 | 1.7 | 5.2 | 2.1
Linux / IBM Java SDK 1.3 | 8.2 | 33.4 | 33.4 | 1.7 | 20.7 | 35.3
Linux / gcj 3.0 | 2.1 | 3.6 | 3.3 | 1.2 | 2.4 | 2.1
Solaris / JDK 1.1 | 38.6 | 20.1 | 12.8 | n/a | 11.8 | 2.1
Solaris / JDK 1.2 | 39.2 | 8.6 | 5.0 | 1.4 | 3.1 | 3.1
Solaris / JDK 1.3 (no JIT) | 2.0 | 1.8 | 1.8 | 1.0 | 1.2 | 1.1
Solaris / JDK 1.3 -client | 19.8 | 1.5 | 1.1 | 1.3 | 2.1 | 1.7
Solaris / JDK 1.3 -server | 1.8 | 2.35 | 3.0 | 1.3 | 4.2 | 3.2

Listing 1. The simple methods used in the benchmark

public static void staticEmpty() {}

public void empty() {}

public Object fetch() { return field; }

public Object singleton() {
    if (singletonField == null)
        singletonField = new Object();
    return singletonField;
}

public Object hashmapGet() {
    return hashMap.get("this");
}

public Object create() {
    return new Object();
}

These small benchmarks also illustrate how hard it is to interpret performance results in the presence of a dynamic compiler. The huge differences in the numbers for the 1.3 JDK with and without the JIT need some explanation. For the very simple methods (empty and fetch), the nature of the benchmark (it does nothing but execute a tight loop that accomplishes no real work) allows the JIT to compile the entire loop dynamically, squeezing the run time down to almost nothing. Whether a JIT could do that in a real program depends on many factors, so the no-JIT timings are probably more useful for a fair comparison. In any case, for the more substantial methods (create and hashmapGet) the JIT cannot make the unsynchronized case vanish the way it can for the trivial ones, and the JVM may also be able to optimize away significant parts of the test. Similarly, the differences between the comparable IBM and Sun JDKs reflect that the IBM Java SDK optimizes the unsynchronized loop more aggressively, not that synchronization costs more there; this can be seen in the raw timing data, not shown here.

From these numbers we can draw the following conclusion: although uncontended synchronization has a real cost, that cost falls to a reasonable level as soon as the method does any meaningful work; in most cases it lies somewhere between 10% and 200% of the cost of a relatively cheap operation. So, while synchronizing every method indiscriminately is unwise (it also increases the likelihood of deadlock), we do not need to be so afraid of synchronization. The simple tests here suggest that an uncontended synchronization costs less than creating an object or looking something up in a HashMap.

Because early books and articles suggested that synchronization carried a huge cost, many programmers have gone to great lengths to avoid it. This fear has led to many problematic techniques, such as double-checked locking (DCL). DCL is recommended by many books and articles on Java programming and looks like a clever way to avoid synchronizing unnecessarily, but in reality it does not work and should be avoided. The reasons it fails are quite subtle and beyond the scope of this article (see the links in Resources for the details).

Don't fight contention. Assuming synchronization is used correctly, its real performance impact is felt where threads actually contend for a lock. The cost gap between uncontended and contended synchronization is very large; a simple test program suggests that contended synchronization is 50 times slower than uncontended synchronization. Combining that with the observations above, a contended synchronization costs roughly as much as creating at least 50 objects. So, when tuning an application's use of synchronization, we should work hard to reduce the amount of actual contention, not simply try to avoid synchronization altogether. Part 2 of this series will focus on techniques for reducing contention, including reducing lock granularity, shrinking the size of synchronized blocks, and reducing the amount of data shared between threads.

When do you need synchronization? To make your programs thread safe, you must first identify which data will be shared between threads. If data is being written by one thread and may later be read by another, or is being read and may have been written by another thread, then that data is shared and must be synchronized when accessed. Some programmers are surprised to discover that these rules apply even when simply checking whether a shared reference is non-null, and many people find these definitions surprisingly strict.
There is a common view that no lock is needed if you are only reading an object's fields, especially since the JLS guarantees the atomicity of 32-bit reads and writes. Unfortunately, this view is wrong.

Unless a field is declared volatile, the JMM does not require the underlying platform to provide cache coherence or sequential consistency across processors, so on some platforms it is entirely possible, in the absence of synchronization, to read a stale value of shared data. (See Resources for more detail.) After determining which data will be shared, you have to decide how to protect it. In simple cases, declaring the fields volatile may be enough; in other cases you must acquire a lock before reading or writing the shared field, and a good practice is to state explicitly which lock protects a given field or object and to record that in your code. It is also worth noting that simply synchronizing accessor methods (or declaring the fields volatile) may not be sufficient to protect a shared field. Consider the following example:

...

private int foo;

public synchronized int getFoo() { return foo; }

public synchronized void setFoo(int f) { foo = f; }

If a caller wants to increment the foo property, the following code for doing so is not thread safe:

...

setFoo(getFoo() + 1);
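As the next paragraph explains, this read-modify-write is a race. For comparison, a minimal sketch of a caller that makes the increment atomic by holding the holder object's lock across both calls (FooHolder and Caller are invented names; it assumes the holder object itself is the documented lock guarding foo):

class FooHolder {
    private int foo;

    public synchronized int getFoo() { return foo; }
    public synchronized void setFoo(int f) { foo = f; }
}

class Caller {
    static void increment(FooHolder holder) {
        // Holding the holder's lock across the get and the set makes the
        // compound operation atomic; the nested synchronized methods still
        // work because Java locks are reentrant.
        synchronized (holder) {
            holder.setFoo(holder.getFoo() + 1);
        }
    }
}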

If two threads attempt to increment foo at the same time, the result may be that its value goes up by 1 or by 2, depending on the timing. The caller needs to hold a lock across the whole operation to prevent this race; a good approach is for the class's Javadoc to specify which lock to use, so that callers do not have to guess. This situation is a good example of why we have to think about data integrity at more than one level of granularity: synchronized accessor methods ensure that callers see a consistent and up-to-date value of the property, but if the property's next value must be consistent with its current value, or several properties must be consistent with one another, we must also synchronize the compound operation, possibly with a coarser-grained lock.

If in doubt, consider a synchronized wrapper. Sometimes, when writing a class, we do not know whether it will be used in a shared context. We want the class to be usable in a thread-safe way, but we do not want to burden a class that will always be used from a single thread with synchronization, and we may not know what lock granularity will be right for the code that eventually uses the class. Fortunately, we can achieve both goals by providing synchronized wrappers. The Collections classes are a good example of this technique: they are unsynchronized, but for each interface defined in the framework there is a synchronized wrapper (for example, Collections.synchronizedMap()) that wraps every method with a synchronized version.

Conclusion. Although the JLS gives us tools with which we can make our programs thread safe, thread safety does not fall into our lap. Using synchronization costs performance, and using it improperly exposes us to data corruption, inconsistent results, or deadlock. Fortunately, the JVM has improved greatly over the past few years, substantially reducing the performance penalty associated with using synchronization correctly. By analyzing carefully how data is shared between threads and synchronizing operations on shared data appropriately, you can make your programs thread safe without taking on too much of a performance burden.

Resources

Join the discussion of "Java threads: tricks, traps, tips, and techniques" in Brian Goetz's forum.
Java Performance Tuning by Jack Shirazi (O'Reilly & Associates, 2000) offers guidance on performance issues on the Java platform; the resource listing that accompanies the book is a good source of performance-tuning tips.
Dov Bulka's Java Performance and Scalability, Volume 1: Server-Side Programming Techniques (Addison-Wesley, 2000) provides many design techniques and tricks for improving application performance.
Java Platform Performance: Strategies and Tactics by Steve Wilson and Jeff Kesselman (Addison-Wesley, 2000) gives experienced Java programmers techniques for producing fast, efficient Java code.
Brian Goetz's recent article "Double-checked locking: Clever, but broken" (JavaWorld, February 2001) examines the JMM in detail and describes the surprising consequences of failing to synchronize in one particular case.
Allen Holub, a recognized multithreading authority, shows in his JavaWorld article (February 2001) that most of the techniques used to reduce the cost of synchronization do not work.
Peter Haggar describes how to acquire multiple locks in a fixed order to avoid deadlock (developerWorks, September 2000).
In "Writing multithreaded Java applications" (developerWorks, February 2001), Alex Roetter introduces the Java thread API, outlines the issues involved in multithreading, and offers solutions to common problems.
Doug Lea's Concurrent Programming in Java, Second Edition (Addison-Wesley, 1999) is the authoritative book on multithreaded programming in the Java language; "Synchronization and the Java Memory Model", excerpted from it, discusses what synchronized really means.
Bill Pugh's Java Memory Model pages are a good starting point for learning about the JMM.
The "Double-Checked Locking Is Broken" declaration explains why DCL does not work when implemented in the Java language.
Chapter 17 of The Java Language Specification, Second Edition, by Bill Joy, Guy Steele, James Gosling, and Gilad Bracha (Addison-Wesley, 2000) covers the gory details of the Java memory model.
An IBM article describes how locking is optimized in WebSphere so that different transactions can read the same state while data integrity is checked on update.
IBM's T.J. Watson Research Center has an entire project group devoted to performance management.
Find more resources in the developerWorks Java technology zone.

About the author. Brian Goetz is a software consultant and has been a professional software developer for the past 15 years. He is a principal consultant at Quiotix, a software development and consulting firm located in Los Altos, California. Contact Brian at brian@quiotix.com.

===============================================

If I were king: A proposal for fixing the Java programming language's threading problems

Allen Holub, Freelancer, October 2000

Allen Holub argues that the threading model of the Java programming language may be the weakest part of the language: it is inadequate for realistically complex programs and it is not in the least object-oriented. This article proposes significant modifications and additions to the Java language to address these problems.

The threading model of the Java language may be the weakest part of the language. Although it is a good thing that thread support is built into the language itself, the syntactic and library-level support for threads is too thin and copes only with trivial threading applications. Most books on Java threading point out the flaws of the Java thread model and provide a first-aid kit (a Band-Aid) of library classes to work around them. I call these classes first-aid kits because the problems they solve are ones the Java language itself ought to handle. In the long run, the syntactic approach, rather than the class-library approach, will produce more efficient code, because the compiler and the Java virtual machine (JVM) can cooperate to optimize the program in ways that are difficult or impossible for library code.

In my book Taming Java Threads, and in this article, I go further and recommend changes to the Java programming language itself so that it can truly solve these thread programming problems. The main difference between the article and the book is that I have thought about the issues more while writing the article, so the proposals here improve on the ones in the book. These proposals are only tentative, just my own thoughts on the matter, and turning them into reality would require a lot of work and peer review. But it is a start, and I intend to form a working group to tackle these problems; if you are interested, send e-mail to threading@holub.com, and I will send you a notice once things really get started.

The proposals made here are quite bold. Some people have recommended subtle, minimal modifications to the Java Language Specification (JLS; see Resources) to fix currently ambiguous JVM behaviour, but I want more thorough improvement. In practical terms, many of my proposals involve introducing new keywords into the language. Although one normally must not break existing, correct code, a language has to be able to introduce new keywords if it is not to stand still. To keep the new keywords from colliding with existing identifiers, I use a dollar sign ($), a character that is illegal in existing identifiers (for example, $task rather than task). A compiler command-line switch would be needed to enable these keywords instead of ignoring the dollar sign.

The task concept. The fundamental problem with the Java thread model is that it is not in the least object-oriented. Object-oriented (OO) designers do not think in terms of threads at all; they think in terms of synchronous and asynchronous messages (a synchronous message is processed immediately, and the call does not return until processing is complete; an asynchronous message is processed in the background over some period of time, and the call returns well before the processing has finished). The Toolkit.getImage() method in the Java programming language is a good example of an asynchronous message: getImage() returns immediately, without waiting for the entire image to be fetched by a background thread. That is the object-oriented way of looking at things. As mentioned earlier, however, Java's thread model is not object-oriented: a Java thread is really just a run() procedure that calls other procedures. There are no objects, no asynchronous or synchronous messages, anywhere in the picture.
One solution to this problem, discussed in depth in my book, is an active object. An active object is an object that can receive asynchronous requests, which it processes in the background some time after they are received. In the Java programming language, a request can be encapsulated in an object: for example, you can pass the active object an instance of a class that implements the Runnable interface, whose run() method encapsulates the work to be done.

The Runnable object is placed on a queue by the active object, and when its turn comes, the active object executes it on a background thread. Asynchronous messages sent to a given active object are effectively synchronous with respect to one another, because they are removed from the queue and executed one at a time by a single service thread. Using an active object therefore eliminates most synchronization problems in a model that is friendlier to object-oriented thinking. In a sense, the entire Swing/AWT subsystem of the Java programming language is an active object: the only safe way to send a message to the Swing queue is to call a method such as SwingUtilities.invokeLater(), which puts a Runnable object on the Swing event queue, to be processed by the Swing event-handling thread when its turn comes.

My first proposal, then, is to add the concept of a task to the Java programming language, folding the active object into the language itself. (The task concept is borrowed from Intel's RMX operating system and from the Ada programming language; most real-time operating systems support something similar.) A task has a built-in active-object dispatcher and automatically manages all the machinery needed to handle asynchronous messages. You define a task in essentially the same way you define a class, except that you add an asynchronous modifier to the methods the task should execute in the background. Compare the class-based approach from Chapter 9 of my book with the File_io_task class below, which uses the Active_object class discussed in Taming Java Threads to implement asynchronous writes:

interface Exception_handler {
    void handle_exception(Throwable e);
}

class File_io_task {
    Active_object dispatcher = new Active_object();

    final OutputStream file;
    final Exception_handler handler;

    File_io_task(String file_name, Exception_handler handler)
                                                    throws IOException {
        file = new FileOutputStream(file_name);
        this.handler = handler;
    }

    public void write(final byte[] bytes) {
        // The following call asks the active-object dispatcher
        // to enqueue the Runnable object on its request
        // queue. A thread associated with the active object
        // dequeues the Runnable objects and executes them
        // one at a time.
        dispatcher.dispatch(
            new Runnable() {
                public void run() {
                    try {
                        byte[] copy = new byte[bytes.length];
                        System.arraycopy(bytes, 0,
                                         copy, 0,
                                         bytes.length);
                        file.write(copy);
                    } catch (Throwable problem) {
                        handler.handle_exception(problem);
                    }
                }
            }
        );
    }
}

All write requests are queued on the active object's input queue by the dispatch() call. Any exception that occurs while one of these asynchronous messages is being processed in the background is handled by an Exception_handler object, which is passed in to the File_io_task constructor. When you want to write to the file, the code looks like this:

File_io_task io = new File_io_task(
    "foo.txt",
    new Exception_handler() {
        public void handle_exception(Throwable e) {
            e.printStackTrace();
        }
    }
);

// ...

io.write(some_bytes);

This class-based approach is too complicated; the code is far too cluttered for such a simple operation. After introducing the $task and $asynchronous keywords into the Java language, the code above could be rewritten as follows:

$task File_io $error{ $.printStackTrace(); }
{
    OutputStream file;

    File_io(String file_name) throws IOException {
        file = new FileOutputStream(file_name);
    }

    $asynchronous public write(byte[] bytes) {
        file.write(bytes);
    }
}

Note that the asynchronous method does not specify a return value, because the call returns immediately, without waiting for the requested operation to be processed, so no return value would be meaningful. With respect to derivation, the $task keyword behaves like class: a $task can implement interfaces, extend classes, and extend other tasks. Methods marked with the $asynchronous keyword are executed by the $task in the background; all other methods run synchronously, just as they do in a class. The $task keyword can be qualified with an optional $error clause (as shown above), which designates a default handler for any exception that the asynchronous methods do not catch themselves; I use $ to stand for the thrown exception object. If no $error clause is specified, a reasonable error message (most likely a stack trace) is printed. Note that, to guarantee thread safety, the arguments to asynchronous methods must be immutable; the runtime system should guarantee that immutability by whatever means are necessary (a simple copy is usually not sufficient). All task objects must also support a few pseudo-messages, for example:

some_task.close(): any asynchronous message sent after this call causes a TaskClosedException, but messages already waiting on the active object's queue will still be serviced. some_task.join(): the caller blocks until the task is closed and all outstanding requests have been processed.

In addition to the usual modifiers (public and so on), the $task keyword should accept a $pooled(n) modifier, which makes the task run asynchronous requests on a thread pool rather than on a single thread. The n argument specifies the desired pool size; the pool may grow if necessary, but it should shrink back to its original size when the extra threads are no longer needed. The pseudo-field $pool_size returns the n that was specified in $pooled(n).

In Chapter 8 of Taming Java Threads I present a server-side socket handler as a thread-pool example, and it is a good illustration of a task that would use a pool. The basic idea is to spin off an independent object whose job is to monitor a server-side socket. Whenever a client connects to the server, the server object grabs a pre-created, sleeping thread from the pool and sets it to serving that client. The socket server creates additional client-service threads if it has to, but the extra threads are discarded when their connections close. The proposed syntax for the socket server looks like this:

public $pooled(10) $task Client_handler
{
    PrintWriter log = new PrintWriter( System.out );

    public asynchronous void handle( Socket connection_to_the_client )
    {
        log.println( "Writing" );
        // Client-handling code goes here. Every call to
        // handle() is executed on its own thread, but 10
        // threads are pre-created for this purpose. Additional
        // threads are created on an as-needed basis, but are
        // discarded when handle() returns.
    }
}

$task Socket_server
{
    ServerSocket   server;
    Client_handler client_handlers = new Client_handler();

    public Socket_server( int port_number )
    {   server = new ServerSocket( port_number );
    }

    public asynchronous listen()
    {
        // This method is executed on its own thread.
        while( true )
        {   client_handlers.handle( server.accept() );
        }
    }
}

// ...

Socket_server server = new Socket_server( the_port_number );
server.listen();
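(For comparison, a fixed thread pool from java.util.concurrent gives roughly the behavior that $pooled(10) describes, minus the grow-and-shrink policy. The sketch below is only an approximation with invented names, not the proposed syntax.)

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Stand-in for the $pooled(10) $task proposal: a pool of 10 threads
// services client connections, and the accept loop runs on its own thread.
// A ThreadPoolExecutor with a core size of 10 and a larger maximum would
// come closer to the grow-then-shrink behavior described above.
class PooledSocketServer
{
    private final ExecutorService handlers = Executors.newFixedThreadPool( 10 );
    private final ServerSocket    server;

    PooledSocketServer( int port_number ) throws IOException
    {   server = new ServerSocket( port_number );
    }

    public void listen()
    {   new Thread( () ->
        {   try
            {   while( true )
                {   final Socket client = server.accept();
                    handlers.submit( () -> handle( client ) );  // one pooled thread per connection
                }
            }
            catch( IOException e )
            {   e.printStackTrace();
            }
        } ).start();
    }

    private void handle( Socket connection_to_the_client )
    {   // Client-handling code goes here.
    }
}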

The Socket_server object uses a single background thread to process the asynchronous listen() request, which encapsulates the socket's accept loop. Each time a client connects, listen() asks a Client_handler to take care of it by calling handle(). Each handle() request executes on its own thread, because Client_handler is a $pooled task. Note that every asynchronous message sent to a $pooled $task is effectively handled on its own thread. Since a $pooled $task is typically used to implement a self-contained operation, the best way to avoid synchronization problems on its state variables is to give each request its own copy of them: when an asynchronous request is dispatched to a $pooled $task, a clone() operation is performed and the method's this pointer references the clone. Communication between the threads can then go through synchronized access to static fields.

### Improving synchronized ###

Although $task eliminates the need for synchronization in many situations, not every multithreaded system will be built from tasks, so the existing threading model also needs improvement. The synchronized keyword has several shortcomings: you cannot specify a timeout value; you cannot interrupt a thread that is waiting to acquire a lock; and you cannot safely acquire multiple locks (multiple locks can only be acquired safely in a fixed order). My proposed cure is to extend the syntax of synchronized so that it accepts multiple arguments and a timeout specification (given in the square brackets below). Here is the syntax I want:

synchronized( x && y && z )      acquires the locks on objects x, y, and z.
synchronized( x || y || z )      acquires the lock on object x, y, or z.
synchronized( (x && y) || z )    a reasonable extension of the preceding.
synchronized( ... )[1000]        sets a one-second timeout on acquiring the lock.
synchronized[1000] f(){ ... }    acquires this object's lock when f() is called, with a one-second timeout.

A TimeoutException, derived from RuntimeException, is thrown when the wait times out. Timeouts are necessary but not sufficient to make the code robust; you also need to be able to break a lock wait from outside the waiting thread. Consequently, sending an interrupt() to a thread that is waiting on a lock should cause that thread to stop waiting and a SynchronizationException to be thrown in it. This exception should also derive from RuntimeException, so it need not be handled explicitly. The main problem with these proposed changes to synchronized is that they require changes at the byte-code level, which currently implements synchronized with enter-monitor and exit-monitor instructions. Those instructions take no arguments, so the byte-code definition would have to be extended to support acquiring multiple locks. This modification is no more drastic than the changes made to the virtual machine for Java 2, however, and it would remain backward compatible with existing Java code. The other big problem worth solving is the most common deadlock scenario, in which two threads each wait for the other to do something. Consider the following (admittedly contrived) example:

class Broken
{
    Object lock1 = new Object();
    Object lock2 = new Object();

    void a()
    {   synchronized( lock1 )
        {   synchronized( lock2 )
            {   // do something
            }
        }
    }

    void b()
    {   synchronized( lock2 )
        {   synchronized( lock1 )
            {   // do something
            }
        }
    }
}

Imagine a thread that calls a() and acquires lock1, but is preempted before it can acquire lock2. A second thread now runs, calls b(), acquires lock2, and then blocks waiting for lock1, which the first thread holds. The first thread wakes up and tries to acquire lock2, but cannot, because the second thread holds it. Deadlock. The synchronize-on-multiple-objects syntax below solves the problem:

// ...

void a()
{   synchronized( lock1 && lock2 )
    {
    }
}

void b()
{   synchronized( lock2 && lock1 )
    {
    }
}

The compiler (or virtual machine) would rearrange the lock acquisitions so that lock1 is always acquired first, thereby eliminating the deadlock. This approach does not always work in multithreaded code, however, so some way of breaking a deadlock automatically is also needed. A simple approach is to periodically release the locks you have already acquired while waiting for the next one. That is, instead of waiting forever, you would write something like this:

while( true )
{   try
    {   synchronized( some_lock )[10]
        {   // do the work here.
            break;
        }
    }
    catch( TimeoutException e )
    {   continue;
    }
}

As long as each thread waiting for the lock uses a different timeout value, the deadlock is eventually broken and one of the threads can run. I propose replacing the foregoing code with the following syntax:

synchronized( some_lock )[]
{   // do the work here.
}

This synchronized statement waits forever, but it periodically gives up any locks it has already acquired in order to break potential deadlocks. Ideally, each successive wait would use a random timeout value longer than the previous one.
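(java.util.concurrent.locks, added later in Java 5, gets part of the way there by hand: ReentrantLock.tryLock() accepts a timeout, and acquiring the locks in one fixed global order avoids the deadlock. The sketch below illustrates that pattern with invented names; the bookkeeping it needs is exactly what the proposed syntax would hide.)

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TimedLocking
{
    private final ReentrantLock lock1 = new ReentrantLock();
    private final ReentrantLock lock2 = new ReentrantLock();

    // Roughly synchronized( lock1 && lock2 )[1000]: try both locks,
    // give up on a timeout, always acquiring in the same order.
    void a() throws InterruptedException
    {
        while( true )
        {
            if( lock1.tryLock( 1000, TimeUnit.MILLISECONDS ) )
            {   try
                {   if( lock2.tryLock( 1000, TimeUnit.MILLISECONDS ) )
                    {   try
                        {   // do the work here
                            return;
                        }
                        finally { lock2.unlock(); }
                    }
                }
                finally { lock1.unlock(); }
            }
            // Timed out on one of the locks: back off briefly, then retry.
            Thread.sleep( (long)( Math.random() * 50 ) );
        }
    }
}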

### Improving wait() and notify() ###

The wait()/notify() mechanism also has problems: there is no way to detect whether wait() returned normally or because it timed out; a traditional condition variable that stays in a "signaled" state cannot be implemented; and it is too easy to fall into nested-monitor lockout. The timeout-detection problem can be solved by redefining wait() to return a boolean rather than void: a true return value would mean a normal return, false a timeout. The notion of a state-based condition variable is important: if the variable is in the false state, waiting threads block until it enters the true state, and a thread that waits on a condition variable whose state is already true is released immediately (its wait() call does not block). This feature could be supported by extending the semantics of notify() as follows:

notify();         releases all waiting threads without changing the state of the underlying condition variable.
notify( true );   sets the condition variable's state to true and releases any waiting threads; subsequent calls to wait() will not block.
notify( false );  sets the condition variable's state to false; subsequent calls to wait() will block.
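(A state-based condition of this kind can be built by hand today. The small class below is only a sketch with invented names; it hides a boolean behind wait()/notifyAll(), which is essentially what notify(true)/notify(false) would standardize. For the one-shot "set true once" case, java.util.concurrent.CountDownLatch already behaves this way.)

// A minimal "stateful condition": wait_for_true() does not block if the
// condition has already been set, which plain wait()/notify() cannot express.
class Condition_Variable
{
    private boolean state;

    Condition_Variable( boolean initial_state )
    {   state = initial_state;
    }

    public synchronized void set_true()       // like the proposed notify(true)
    {   state = true;
        notifyAll();
    }

    public synchronized void set_false()      // like the proposed notify(false)
    {   state = false;
    }

    public synchronized void wait_for_true() throws InterruptedException
    {   while( !state )
            wait();
    }
}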

Nested-monitor lockout is a nastier problem, and I do not have a simple solution for it. It is a form of deadlock that occurs when a thread is suspended while holding a lock that it never releases. Here is an example of the problem (again contrived, though real-world examples abound):

class Stack
{
    LinkedList list = new LinkedList();

    public synchronized void push( Object x )
    {   synchronized( list )
        {   list.addLast( x );
            notify();
        }
    }

    public synchronized Object pop() throws InterruptedException
    {   synchronized( list )
        {   if( list.size() <= 0 )
                wait();
            return list.removeLast();
        }
    }
}

Two locks are involved here: the one on the Stack object itself and the one on the LinkedList. Consider what happens when a thread tries to pop() from an empty stack. That thread acquires both locks and then calls wait(), which releases the lock on the Stack but not the one on the list. If a second thread now tries to push an object onto the stack, it hangs forever at the synchronized(list) statement and never gets to add anything. Since the first thread is waiting for a non-empty stack, we have a deadlock: the first thread can never return from wait(), because the second thread, which holds the lock it needs, can never reach the notify(). In this contrived example there are obvious ways to fix the problem (for example, synchronizing every method on only one of the two locks), but in the real world the fix is usually not so simple. One possible approach would be for wait() to release every lock the calling thread has acquired and then reacquire them, in the original order, once the awaited condition is satisfied; but I can imagine that code written this way would be nearly impossible to follow, so I do not think it is really workable. If you have a good solution, please send me an e-mail. I would also like to be able to wait on complex conditions, for example:

(a && (b || c)).wait();

where a, b, and c are arbitrary objects.

### Modifying the Thread class ###

Some server applications need both preemptive and cooperative threads to achieve the best possible performance. I think the Java programming language went too far in the direction of simplicity with its threading model; it should support the POSIX/Solaris notions of a "green thread" and a "lightweight process" (discussed in Chapter 1 of Taming Java Threads). That is, some Java virtual machines (the VMs on NT, for example) should simulate cooperative threads internally, and others should simulate preemptive threads; it would be easy to add such extensions to a Java virtual machine. A Java Thread should always be preemptive; that is, a Java programming-language thread should work much like a Solaris lightweight process. The Runnable interface could then be used to define a Solaris-style green thread, one that must explicitly yield control to other green threads running on the same lightweight process. For example, with the current syntax:

class My_thread implements Runnable
{   public void run(){ /*...*/ }
}

new Thread( new My_thread() );

this would effectively create a green thread for the Runnable object and bind that green thread to the lightweight process represented by the Thread object. The implementation would be transparent to existing code, because the effect would be exactly the same as it is today. By allowing several Runnable objects to be passed to a Thread constructor, the existing syntax of the Java programming language could then be extended to support multiple green threads on a single lightweight process. (The green threads would share that lightweight process cooperatively among themselves, but they could be preempted by green threads running on other lightweight processes, that is, other Thread objects.) For example, the following code would create a green thread for each Runnable object, all of them sharing the lightweight process represented by the Thread object:

new Thread( new My_runnable_object(), new My_other_runnable_object() );

The existing idiom of subclassing Thread and overriding run() would continue to work, but it would map to a green thread bound to a dedicated lightweight process. (The default run() method in the Thread class would effectively create a second Runnable object internally.)

### Coordinating threads ###

More facilities should be added to support communication between threads. The PipedInputStream and PipedOutputStream classes can be used for this purpose today, but they are not adequate for most applications. I propose adding the following to the Thread class:

Add a wait_for_start() method that blocks until a thread's run() method has started. (It would not matter if the waiting thread were released just before run() is actually called.) With this method, one thread could create one or more auxiliary threads and be sure that they were running before the creating thread continued with its own work. Add (to the Object class) $send(Object o) and Object o = $receive() methods, which would use an internal blocking queue to pass objects between threads. The queue would be created automatically as a side effect of the first $send() call: a $send() call enqueues the object, and a $receive() call blocks until an object has been enqueued, then returns it. Variants of these methods should support timeouts on both the enqueue and the dequeue operation: $send(Object o, long timeout) and $receive(long timeout).
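(Much of this can be assembled from java.util.concurrent today, though not as methods on Thread or Object themselves: a CountDownLatch counted down at the top of run() gives the effect of wait_for_start(), and a LinkedBlockingQueue gives $send()/$receive() with timeouts. The sketch below uses invented names.)

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class HandoffDemo
{
    public static void main( String[] args ) throws InterruptedException
    {
        final CountDownLatch        started = new CountDownLatch( 1 );        // wait_for_start()
        final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();    // $send()/$receive()

        Thread worker = new Thread( () ->
        {   started.countDown();                          // announce that run() has begun
            try
            {   Object message = mailbox.poll( 1000, TimeUnit.MILLISECONDS ); // $receive(timeout)
                System.out.println( "received: " + message );
            }
            catch( InterruptedException e )
            {   Thread.currentThread().interrupt();
            }
        } );

        worker.start();
        started.await();                                  // block until the worker is really running
        mailbox.offer( "hello", 1000, TimeUnit.MILLISECONDS );                // $send(o, timeout)
        worker.join();
    }
}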

### Reader/writer locks ###

The notion of a reader/writer lock should be built into the Java programming language. Reader/writer locks are discussed in detail in Taming Java Threads (and elsewhere); to summarize, a reader/writer lock lets multiple threads access an object simultaneously for reading, but only one thread at a time may modify the object, and no modification may proceed while any read is in progress. The syntax for reader/writer locks can borrow from that of synchronized:

static Object global_resource;

// ...

public void a()
{
    $reading( global_resource )
    {   // While in this block, other threads requesting read
        // access to global_resource will get it, but threads
        // requesting write access will block.
    }
}

public void b()
{
    $writing( global_resource )
    {   // Blocks until all ongoing read or write operations on
        // global_resource are complete. No read or write
        // operation on global_resource can be initiated while
        // within this block.
    }
}

public $reading void c()
{   // Just like $reading(this)...
}

public $writing void d()
{   // Just like $writing(this)...
}

For a given object, multiple threads may be inside $reading blocks only when no thread is in a $writing block. While reads are in progress, a thread attempting to enter a $writing block blocks until the readers have exited their $reading blocks. While a thread is inside a $writing block, threads attempting to enter either a $reading or a $writing block are blocked until the writer exits. If both readers and writers are waiting, the readers go first by default; that policy can be changed by adding a $writer_priority attribute to the class definition. For example:

$writer_priority class IO
{
    $writing write( byte[] data )
    {   // ...
    }

    $reading byte[] read()
    {   // ...
    }
}
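(Java 5 later added java.util.concurrent.locks.ReentrantReadWriteLock, which provides essentially this behavior as a library class, at the cost of explicit lock()/unlock() calls instead of the block syntax proposed here; a constructor flag selects a fair ordering policy. A minimal sketch, with invented names:)

import java.util.concurrent.locks.ReentrantReadWriteLock;

class GuardedResource
{
    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private static Object global_resource;            // the shared state being protected

    public Object read()                              // roughly: public $reading Object read()
    {   lock.readLock().lock();                       // many readers may hold this at once
        try
        {   return global_resource;
        }
        finally { lock.readLock().unlock(); }
    }

    public void write( Object new_value )             // roughly: public $writing void write(...)
    {   lock.writeLock().lock();                      // exclusive; waits for readers and writers
        try
        {   global_resource = new_value;
        }
        finally { lock.writeLock().unlock(); }
    }
}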

### Access to partially constructed objects should be illegal ###

The JLS currently permits access to a partially constructed object. For example, a thread created inside a constructor can access the object under construction even though that object may not be fully constructed yet. The result of the following code is undefined:

class Broken
{
    private long x;

    Broken()
    {   new Thread()
        {   public void run()
            {   x = -1;
            }
        }.start();

        x = 0;
    }
}

The thread that sets x to -1 can run in parallel with the thread that sets x to 0, so the value of x afterward cannot be predicted. One solution is to require that a thread created inside a constructor not start running until the constructor returns, even if its priority is higher than that of the thread calling new; that is, the start() request would have to be deferred until the constructor returns. In addition, the Java programming language should allow synchronized constructors. In other words, the following code (illegal today) should work as expected:

class Illegal
{
    private long x;

    synchronized Illegal()
    {   new Thread()
        {   public void run()
            {   synchronized( Illegal.this )
                {   x = -1;
                }
            }
        }.start();

        x = 0;
    }
}
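(Under the current rules, the usual workaround is to keep the constructor free of thread starts and expose a static factory that starts the helper thread only after construction has completed. The sketch below uses invented names; the field is volatile only so that later reads of it are well behaved.)

class Safe
{
    private volatile long x;
    private final Thread  helper;

    private Safe()                       // the constructor starts no threads
    {   helper = new Thread( () -> { x = -1; } );
        x = 0;
    }

    static Safe create()                 // the object is fully constructed before the thread runs
    {   Safe instance = new Safe();
        instance.helper.start();
        return instance;
    }
}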

Of the two proposals, I think the first (deferring the thread start until the constructor returns) is cleaner than the second, though it is harder to implement.

### The volatile keyword should work as expected ###

The JLS requires that operations on volatile fields be performed in the order in which they were requested. Most Java virtual machines simply ignore this part of the specification, and they should not. There are also related memory-ordering problems on multiprocessor machines, but those too should be addressed by the JLS. If this area interests you, Bill Pugh of the University of Maryland is working on it.

### Access problems ###

The lack of good access control makes thread programming much harder than it needs to be. Often, if you can guarantee that a method will only ever be called from a synchronized subsystem, you do not need to make the method itself thread safe. I would tighten up the Java programming language's notion of access as follows:

A package keyword should be required to specify package access. I consider default behavior of any kind a flaw in a programming language, and I am troubled that the default access privilege is "package" rather than "private"; in other respects the Java programming language provides no equivalent of a default keyword. Requiring an explicit package keyword would break existing code, but it would make code more readable and would eliminate a whole class of potential errors (for example, when package access results from an access specifier simply being forgotten rather than omitted deliberately). Private protected should be reinstated; it should work like today's protected, but without granting package access. A "private private" syntax should be allowed, meaning "private to the implementation": inaccessible to all outside objects, even objects of the same class, so that the only reference permitted to the left of the dot (implicit or explicit) is this. The syntax of public should be extended so that access can be granted to specific classes. For example, the following code should allow objects of class Fred to call some_method(), while the method remains private with respect to objects of every other class:

public( Fred ) void some_method()
{
}

This proposal differs from the C++ "friend" mechanism, which grants one class access to all the private parts of another class. Here I am suggesting tightly controlled access to a limited set of methods; in this way one class can define an interface for another class that is invisible to the rest of the system. An obvious extension is:

public( Fred, Wilma ) void some_method()
{
}

All field definitions should be private unless the field references an immutable object or is a static final field of a primitive type. Direct access to a class's fields violates two basic rules of object-oriented design, abstraction and encapsulation, and from a threading perspective it simply makes unsynchronized access easier to write. A $property keyword should also be added: fields and methods marked with it would be accessible to a "bean box" application that uses the introspection APIs defined in the Class class, but would otherwise behave as private. The $property attribute could be applied to both fields and methods, so that existing JavaBean getter/setter methods could easily be turned into properties.

### Immutability ###

Because immutable objects never need to be synchronized, the notion of immutability (an object whose value cannot change once it has been created) is invaluable in multithreaded situations. In the Java programming language there are two problems with implementing immutability:

First, it is possible for an immutable object to be accessed before it is fully created, and that access can yield incorrect values for some fields. Second, the definition of immutable (all fields of the class are final) is too loose: an object referenced by a final field can still change state, even though the reference itself cannot. The first problem can be solved by not allowing threads to start running from inside a constructor (or by deferring any start() request until the constructor returns). The second can be solved by strengthening the meaning of final so that it reaches referenced objects: an object is truly immutable only if all of its fields are final and every object they reference is immutable as well. So as not to break existing code, the compiler could enforce this stronger definition only when a class is explicitly declared immutable, as follows:

$immutable public class Fred
{
    // All fields in this class must be final, and if a
    // field is a reference, all fields in the referenced
    // class must be final as well (recursively).

    static int x = 0;   // Use of `final` is optional when $immutable
                        // is present.
}

With the $immutable attribute in place, the final modifier on the field definitions becomes optional.
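(Until something like $immutable exists, deep immutability has to be enforced by hand: every field final, and every referenced mutable object defensively copied. The class below is only a sketch of the discipline the keyword would automate; the names are invented.)

import java.util.Arrays;

// Deeply immutable by construction: final fields, and the mutable
// array is copied on the way in and on the way out.
final class Snapshot
{
    private final int    id;
    private final byte[] payload;

    Snapshot( int id, byte[] payload )
    {   this.id      = id;
        this.payload = Arrays.copyOf( payload, payload.length );   // defensive copy in
    }

    int id()
    {   return id;
    }

    byte[] payload()
    {   return Arrays.copyOf( payload, payload.length );           // defensive copy out
    }
}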

"Blank Final Variable 'Name' May Not Have Been Initialized.

IT Must Be Assigned a value in an inTILALIZER, OR in Every Constructor. Employment of empty final has initialized in each constructor, or there will be this error message. Since the introduction of internal classes in version 1.1, the compiler There is always this error. In this release (three years later), this error still exists. Now, it is time to correct this error. There is a problem with the instance level accesses of the class level, there is a problem, That is, class (static) methods and instances (non-static) methods can directly access class (static) domains. This access is very dangerous, because the synchronization of instance methods does not get the lock lock, so a synchronized static Methods and a SYNCHRONIZED method or can also access the domain of the class at the same time. An obvious way to correct this problem is that only the Static access method can be accessed only in an instance method to access non-invariant static domain. Of course, this requirement needs to be compiled Under the runtime check. Under this regulation, the following code is illegal:

class Broken
{
    static long x;

    synchronized static void f()
    {   x = 0;
    }

    synchronized void g()
    {   x = -1;
    }
}

Because f() and g() can run in parallel, they can modify x simultaneously, with unpredictable results. Remember that two locks are involved here: the static method acquires the lock belonging to the Class object, while the non-static method acquires the lock belonging to the instance. When a non-immutable static field is accessed from an instance method, the compiler should require one of the following two structures:

class Broken
{
    static long x;

    synchronized private static void accessor( long value )
    {   x = value;
    }

    synchronized static void f()
    {   x = 0;
    }

    synchronized void g()
    {   accessor( -1 );
    }
}

Or the compiler should acquire a reader/writer lock on the field:

class Broken
{
    static long x;

    synchronized static void f()
    {   $writing( x ){ x = 0; }
    }

    synchronized void g()
    {   $writing( x ){ x = -1; }
    }
}
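(As an aside, for a single field like this one, current Java can sidestep the question with java.util.concurrent.atomic: an AtomicLong makes the individual writes atomic without either lock, though compound invariants spanning several fields would still need locking. A minimal sketch, with invented names:)

import java.util.concurrent.atomic.AtomicLong;

class Fixed
{
    static final AtomicLong x = new AtomicLong();

    static void f()
    {   x.set( 0 );      // atomic; no class lock needed for this write
    }

    void g()
    {   x.set( -1 );     // atomic; no instance-versus-class lock confusion
    }
}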

Better still, and to my mind the ideal approach, the compiler could automatically synchronize access to non-immutable static fields with a reader/writer lock, so programmers would not have to think about the problem at all.

### Abrupt shutdown of daemon threads ###

Daemon threads are shut down abruptly as soon as all the non-daemon threads terminate. This is a problem when a daemon has created a global resource (a database connection or a temporary file, say) that is never closed or deleted when the daemon dies. I propose a rule that the Java virtual machine not shut down the application while either of the following is true:

any non-daemon thread is still running, or any daemon thread is currently executing a synchronized method or a synchronized block of code. A daemon thread could then be shut down safely as soon as it leaves the synchronized block or method.

### Bring back stop(), suspend(), and resume() ###

This may not be practical, but I would like the stop() methods (in Thread and ThreadGroup) not to be deprecated, though I would change the semantics of stop() so that calling it does not break existing code. It is worth remembering why stop() was deprecated: a stopped thread releases all of the locks it holds as it dies, which can leave the objects it was working on in an unstable, partially modified state, and because those locks have been released, other threads are free to access the damaged objects. The fix is to redefine stop() so that it terminates the thread immediately only if the thread holds no locks; if it does hold locks, the thread should be terminated just after it releases the last one. This behavior could be implemented with a mechanism much like exceptions: the to-be-stopped thread would set a flag and test it immediately after exiting every synchronized block; if the flag were set, an implicit exception would be thrown, one that could not be caught and that would produce no output when the thread died. Note that Microsoft's NT operating system does not handle an abrupt external stop gracefully (it does not deliver the stop message to dynamically linked libraries, so system-level resource leaks are possible), which is why I recommend an exception-like approach that simply causes run() to return. The practical problem with this approach is that code to test the "stopped" flag must be inserted after every synchronized block, and that extra code both slows the system down and makes the generated code larger. Another possibility I have considered is to make stop() lazy, terminating the thread the next time it calls wait() or yield(). I would also add isStopped() and stopped() methods to Thread (these would work much like isInterrupted() and interrupted(), but would detect the "stop requested" state). This approach is not as general as the first, but it is workable and adds no overhead. As for suspend() and resume(), they should simply be put back; they are useful, and I do not appreciate being treated like a kindergartner. Yes, they are potentially dangerous (a suspended thread can hold a lock), but that is not a reason to take them away; let me decide whether to use them. Failing that, Sun could make calling suspend() on a thread that holds a lock a runtime exception, or better yet, defer the actual suspension until the thread has released all of its locks.

### Blocking I/O should work correctly ###

It should be possible to interrupt any blocking operation, not just wait() and sleep(). I discussed this in the socket section of Chapter 2 of Taming Java Threads. Today, the only way to interrupt a blocking operation on a socket is to close the socket, and there is no way at all to interrupt a blocking file I/O operation: once a read request starts to block, the thread stays blocked until it actually reads something, and you cannot even close the file handle to break the read.
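(For what it is worth, the java.nio channel classes added in J2SE 1.4, after this was written, address part of the complaint: channels are interruptible, so a read that is blocked on a channel can be broken by interrupting the thread. The demonstration below is only a sketch; the Pipe and the 500 ms pause are just a convenient way to force a read to block.)

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

public class InterruptibleReadDemo
{
    public static void main( String[] args ) throws Exception
    {
        Pipe pipe = Pipe.open();                 // nothing is ever written, so the read blocks

        Thread reader = new Thread( () ->
        {   ByteBuffer buffer = ByteBuffer.allocate( 128 );
            try
            {   pipe.source().read( buffer );    // blocks here
            }
            catch( ClosedByInterruptException e )
            {   System.out.println( "read interrupted; channel closed" );
            }
            catch( IOException e )
            {   e.printStackTrace();
            }
        } );

        reader.start();
        Thread.sleep( 500 );                     // give the reader time to block (timing assumption)
        reader.interrupt();                      // unblocks the read with ClosedByInterruptException
        reader.join();
    }
}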

Blocking I/O operations should also support timeouts. Every class whose methods can block (InputStream, for example) should support a method like this:

InputStream s = ...;
s.set_timeout( 1000 );

This is effectively the Socket class's setSoTimeout(time) method. Similarly, it should be possible to pass a timeout as an argument to a blocking call.

### The ThreadGroup class ###

ThreadGroup should implement all of the methods of Thread that change a thread's state. In particular, I want it to implement join(), so that I can wait for every thread in the group to terminate.

### Wrapping up ###

Those are my proposals. As I said in the title: if I were king... I hope that these changes (or something equivalent) will eventually find their way into the Java language. I really do think Java is a great programming language, but I also think its threading model is not yet good enough, and that is unfortunate. The Java programming language is evolving, though, so there is hope for improvement.
