Concurrency of threads
The next issue relates to the OS platform (although it is also a problem for Java programs that is not platform-specific): we need to pin down the definitions of concurrency and parallelism. A concurrent multithreaded system always gives the impression that its tasks are running at the same time, but in fact those tasks are chopped into many small blocks that execute interleaved with one another. In a parallel system, two tasks really do run simultaneously (truly simultaneously, not the illusion of parallelism created by fast interleaving), and this requires multiple CPUs. See Figure 1.1:
Figure 1.1 Concurrency vs Parallelism
Multithreading by itself does not make a program faster. If your program does not spend much time waiting for I/O operations, a multithreaded version will actually run slower than a single-threaded one, because of the overhead of managing the threads. On a multi-CPU system, however, the situation is reversed.
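As a minimal sketch of this distinction (the class and method names here are my own illustration, not from the original text): the same compute-bound summation can be done in one thread or split across two. On a single CPU the two threads merely interleave (concurrency); on a multi-CPU machine they can truly run at once (parallelism). Either way the result is identical; only the wall-clock time differs.

```java
// Sketch: split a compute-bound summation across two threads.
// Names (SumDemo, rangeSum, twoThreadSum) are illustrative only.
public class SumDemo {

    // Sum of i for i in [lo, hi), single-threaded.
    static long rangeSum(long lo, long hi) {
        long s = 0;
        for (long i = lo; i < hi; i++) s += i;
        return s;
    }

    // Same work split across two threads; on one CPU they interleave,
    // on two or more CPUs they can run simultaneously.
    static long twoThreadSum(long n) {
        long[] part = new long[2];
        Thread a = new Thread(() -> part[0] = rangeSum(0, n / 2));
        Thread b = new Thread(() -> part[1] = rangeSum(n / 2, n));
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return part[0] + part[1];
    }

    public static void main(String[] args) {
        System.out.println("CPUs: " + Runtime.getRuntime().availableProcessors());
        // Both approaches compute the same sum.
        System.out.println(rangeSum(0, 1_000_000) == twoThreadSum(1_000_000));
    }
}
```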
The main reason the Java thread system is not platform independent is that truly concurrent threads are impossible to achieve without using the thread model provided by the OS. In theory the JVM is allowed to simulate the entire thread system itself, thereby avoiding the kernel-level time overhead I mentioned in the previous article (Taming Java Threads, part 2). However, this also rules out any parallelism in the program: if no operating-system-level threads are used (in order to preserve platform independence), the OS treats the JVM instance as a single-threaded program and assigns it a single CPU. The result is that on a multi-CPU machine a single JVM instance runs on just one processor, and two Java threads can never truly run in parallel (making full use of two CPUs).
Therefore, to get parallel execution this way you would have to run two JVM instances with different programs. It is better to have the JVM map each Java thread onto an OS-level thread (one Java thread per system thread) and let the system do the scheduling, which exploits the system's full processing capability, so the everything-on-one-CPU problem disappears. Unfortunately, the thread mechanisms implemented by different operating systems differ, and these differences have reached the point where they cannot be ignored.
Problems due to platforms
Below, I will illustrate the problems just mentioned by comparing the thread mechanisms of Solaris and Windows NT.
Java, in theory, supports at least 10 thread priority levels (if two or more threads are in the ready state, the one with the higher priority runs first). Solaris supports 2³¹ priorities, so mapping Java's 10 levels onto it is no problem at all.
NT, however, provides only 7 priority levels, onto which Java's 10 must be mapped. There are many ways to do this (for example, Java priorities 1 and 2 might both map to NT priority 1, and Java priorities 8, 9, and 10 might all map to NT priority 7; many other mappings are possible). Relying on priority to schedule threads is therefore quite problematic on NT.
Worse is yet to come: thread priorities under NT are not even fixed! NT provides a mechanism called priority boosting. A programmer can turn it off with a C-language system call (on Windows NT/2000/XP you can disable the priority-boosting feature by calling the SetProcessPriorityBoost or SetThreadPriorityBoost function; to determine whether it has been disabled, call GetProcessPriorityBoost or GetThreadPriorityBoost), but a Java program cannot. When priority boosting is enabled, NT raises a thread's priority for an unpredictable amount of time whenever the thread issues certain I/O-related system calls. In practice, this means a thread's priority may be higher than you imagine, simply because that thread happened to perform I/O at a busy moment. The purpose of the priority-boosting mechanism is to prevent a background process (or thread) from interfering with the UI of the foreground process; other operating systems have similarly complex algorithms for lowering the priority of background processes. A serious side effect is that we can never be sure which thread is ready to run, and in that situation things tend to go from bad to worse.
On Solaris, as on all UNIX systems, indeed on all contemporary operating systems except Microsoft's, processes have priorities as well as threads. A high-priority process cannot be interrupted by a low-priority one; moreover, the priority of a process can be capped by the administrator, so that no user process can preempt a core OS process or service. NT supports none of this. An NT process is just an address space in memory: it has no priority of its own and is not scheduled. The system schedules threads; if a scheduled thread belongs to a process that is not currently in memory, that process is swapped in. Thread priorities in NT fall into several priority classes distributed across a range of actual priorities, and, as noted, the actual priorities are not fixed but are adjusted by the system. See Figure 1.2:
Figure 1.2 Windows NT Priority Architecture
The columns in the figure represent thread priorities; only 22 of them are available to ordinary programs (the rest are reserved for NT itself). The rows represent the priority classes mentioned earlier.
A thread in a process of the IDLE priority class can use only priorities 1 through 6 and 15, seven levels in all; which one it gets also depends on the thread's own setting. A thread in a NORMAL-class process that does not have the input focus uses priority 1, 6 through 10, or 15. If the process has the focus and is still NORMAL class, its threads run at 1, 7 through 11, or 15. This means a high-priority thread in an IDLE-class process can be preempted by a low-priority thread in a NORMAL-class process, though only when the IDLE-class process runs in the background. Note also that a process in the HIGH priority class has only six priority levels available, whereas the other classes have seven.
NT places no restrictions at all on a process's priority class. A thread in any process can, through the priority-boosting mechanism, end up dominating the entire system, and the OS kernel has no defense against it. Solaris, on the other hand, fully supports process-level priorities. You might, for example, lower your screen saver's priority so it cannot hinder important system processes: on a critical server, a screen saver has no business taking CPU time away from high-priority threads. In this respect Microsoft's operating systems are ill-suited to high-reliability servers.

So how do we work around all this when programming? Given NT's unrestricted priority classes and its uncontrollable priority-boosting mechanism (uncontrollable from a Java program, at least), there is no absolutely safe way for a Java program to rely on priorities to schedule threads. A reasonable compromise: when calling setPriority(), use only Thread.MAX_PRIORITY, Thread.MIN_PRIORITY, and Thread.NORM_PRIORITY rather than arbitrary numeric priorities. This restriction at least sidesteps the problem of mapping 10 levels onto 7. You could also check the os.name system property to detect NT and, if found, call a native function to turn off priority boosting, though this will not work for Java programs running in Internet Explorer without Sun's JVM plug-in (Microsoft's JVM uses a nonstandard, native implementation). Finally, I recommend setting most threads to NORM_PRIORITY and relying on other scheduling mechanisms instead of priority (I will return to this problem later).
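A minimal sketch of that compromise (the helper class and method names are my own illustration, not from the article): clamp every requested priority to the three symbolic constants, and use os.name only to detect whether you are on a Microsoft platform at all.

```java
// Sketch: portable priority handling -- use only the three symbolic
// constants, never raw numbers like 8 or 9 that NT cannot map faithfully.
public class PortablePriority {

    // Clamp any requested priority to the three portable values.
    static int portable(int requested) {
        if (requested > Thread.NORM_PRIORITY) return Thread.MAX_PRIORITY;
        if (requested < Thread.NORM_PRIORITY) return Thread.MIN_PRIORITY;
        return Thread.NORM_PRIORITY;
    }

    // os.name is "Windows NT", "Windows XP", etc. on Microsoft platforms.
    static boolean isWindows() {
        return System.getProperty("os.name", "").toLowerCase().startsWith("windows");
    }

    public static void main(String[] args) {
        Thread t = new Thread(() -> {});
        t.setPriority(portable(8));          // 8 is clamped to MAX_PRIORITY (10)
        System.out.println(t.getPriority());
        System.out.println("On Windows? " + isWindows());
    }
}
```

The clamp deliberately loses information: three levels is all you can count on across platforms, so pretending to have ten buys nothing.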
Cooperate!
Generally speaking, there are two threading models: cooperative and preemptive.
The cooperative multithreading model
In a cooperative system, a thread keeps control of the processor until it decides to give it up (and it may never give it up). The threads must cooperate with one another; otherwise only one thread runs and the others starve. In most cooperative systems thread scheduling is still determined by priority: when the current thread yields control, the highest-priority waiting thread receives it. (One exception is Windows 3.x, which is a cooperative system but does little to arbitrate which thread runs next; the window with the focus gets control.)
A major advantage of cooperative systems over preemptive ones is that they are fast and cheap. For example, a context switch, the transfer of control from one thread to another, can be performed entirely by a user-mode subroutine library without ever entering the kernel (on NT, entering the kernel costs roughly 600 machine-instruction times). A user-mode context switch in a cooperative system is about as expensive as a C-style setjmp/longjmp call. A large number of cooperative threads can run without noticeably hurting performance. And since everything is in the programmer's hands, there is little need to worry about synchronization: the programmer simply ensures that a thread does not give up the CPU before finishing what it is doing. But the world is not perfect, and life is always full of regrets. The cooperative model has its own serious flaws:
1. Under the cooperative model, programming is very troublesome (in effect, the system's work is shifted onto the user). Splitting a long operation into many small pieces is something that must be done with great care.
2. Cooperative threads cannot execute in parallel.
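To see what item 1 means in practice, here is a sketch of cooperative-style code written in Java (the class is my own illustration): a long summation is hand-split into small chunks, with an explicit yield between chunks. On a truly cooperative scheduler, forgetting that yield() would starve every other thread.

```java
// Sketch: splitting a long operation into small cooperative chunks.
public class ChunkedWorker implements Runnable {
    static final int TOTAL = 100_000;  // total amount of work
    static final int CHUNK = 1_000;    // work done between yields
    volatile long sum = 0;

    public void run() {
        for (int start = 0; start < TOTAL; start += CHUNK) {
            long s = 0;
            for (int i = start; i < start + CHUNK; i++) s += i;  // one small chunk
            sum += s;
            Thread.yield();  // politely give other threads a chance to run
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ChunkedWorker w = new ChunkedWorker();
        Thread t = new Thread(w);
        t.start();
        t.join();
        System.out.println(w.sum);  // sum of 0..99999
    }
}
```

Notice how the chunk size is an arbitrary tuning decision the programmer is now responsible for, which is exactly the burden the article is complaining about.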
The preemptive multithreading model
The alternative is the preemptive model. In this model the system effectively has a clock whose ticks trigger thread switches: the system can take the CPU away from a thread at any moment and hand control to another thread. The interval between two switches is called a time slice. A preemptive system is less efficient than a cooperative one, because the OS kernel must manage the threads, but the programmer no longer has to worry about so many issues; work is simplified and programs are more reliable, since thread starvation is no longer a concern. The most important advantage is that a preemptive model can be parallel. As explained earlier, cooperative thread scheduling is done by user-mode subroutines, not the OS, so the best it can give your program is concurrency (as in Figure 1.1). To achieve true parallelism, the operating system must be involved: four threads running on four CPUs get more done than four threads time-slicing on one.
Some operating systems, like Windows 3.1, support only the cooperative model; others, like NT, support only the preemptive one (though you can simulate cooperative threads on NT with a user-mode library; NT provides one called "fibers", but unfortunately fibers are full of bugs and are not thoroughly integrated into the underlying system). Solaris offers what may be the best (or the worst) threading model in the world: it supports both cooperative and preemptive threads.
Mapping kernel threads to user processes
The last issue to address is how kernel threads are mapped to user processes. NT uses a one-to-one mapping, shown in Figure 1.3.
Figure 1.3 NT thread model
NT's user threads are effectively kernel threads. They are mapped directly by the OS onto processors, and they are always preemptive. All thread operations and synchronization go through kernel calls. This is a very straightforward model, but it is neither flexible nor efficient.
The Solaris model (Figure 1.4) is more interesting. Solaris adds a concept called the lightweight process (LWP). An LWP is a schedulable unit on which one or more threads can run. Parallelism is achieved only at the LWP level. Normally LWPs sit in a pool and are assigned to processors on demand. An LWP can also be bound to a specific processor for some dedicated task, preventing other LWPs from using that processor.
From the user's perspective, the result is a threading model that is both cooperative and preemptive. In the simplest case, a process has a single LWP shared by all the threads it contains. Each thread must yield control for the other threads to run (cooperation), but that single LWP can itself be preempted by the LWP of another process. This achieves parallelism at the process level, while the threads inside a process remain cooperative.
A process is not limited to a single LWP; its threads can share a whole pool of them. A thread can be attached to an LWP in two ways:
1. By binding one or more threads to a particular LWP programmatically. In this case, the threads sharing one LWP must cooperate with each other, but they can preempt the threads running on other LWPs. This means that if you bind exactly one thread to each LWP, you get NT's preemptive threading system.
2. By letting the user-level scheduler bind them automatically. From a programming standpoint this is the more confusing situation, because you cannot assume the environment is either cooperative or preemptive.
The Solaris thread system gives the user maximum flexibility: you can have a fast but merely concurrent cooperative system, a slower but truly parallel preemptive system, or any compromise between the two. But is the world of Solaris really perfect? (Here comes my usual refrain again, heh!) All that flexibility means nothing to a Java programmer, because you cannot determine which thread model the JVM uses. For example, early Solaris JVMs used a strictly cooperative mechanism: the JVM owned a single LWP that all Java threads shared. Current Solaris JVMs use a thoroughly preemptive model in which every thread has its own LWP. So what are we poor programmers to do? We are so small in this world that we cannot even find out which thread mechanism the JVM is using. To write platform-independent code, you must make two assumptions that appear contradictory on the surface:
1. A thread can be preempted at any moment by another thread. Therefore you must use the synchronized keyword carefully, to ensure that non-atomic operations execute correctly.
2. A thread will never be preempted unless it gives up control. Therefore you must occasionally perform operations that relinquish control so that other threads can run: call yield() and sleep() at appropriate points, or use blocking I/O calls. For example, after your thread has gone around a loop a hundred times, or after a fairly long compute-intensive operation, consider sleeping for a few hundred milliseconds to give lower-priority threads a chance to run. Note that yield() gives up control only to threads of priority equal to or higher than your own.
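A sketch that obeys both rules at once (the class and its names are my own illustration): the shared counter is guarded by synchronized in case the scheduler is preemptive, and each worker yields periodically in case it is cooperative.

```java
// Sketch: code written for both worst cases at once.
public class BothWorlds {
    private long counter = 0;

    // Rule 1: assume preemption can happen anywhere,
    // so guard the non-atomic ++ with synchronized.
    synchronized void increment() { counter++; }

    synchronized long value() { return counter; }

    void work(int n) {
        for (int i = 0; i < n; i++) {
            increment();
            // Rule 2: assume nothing preempts us, so yield now and then.
            if (i % 100 == 99) Thread.yield();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BothWorlds b = new BothWorlds();
        Thread t1 = new Thread(() -> b.work(10_000));
        Thread t2 = new Thread(() -> b.work(10_000));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(b.value());  // 20000 on any scheduler
    }
}
```

Without the synchronized methods the final count could be wrong on a preemptive scheduler; without the yield() the second thread could starve on a cooperative one. The code assumes neither.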
Figure 1.4 Solaris thread model
To sum up
Many OS-level factors conspire to make writing thoroughly platform-independent multithreaded Java programs a constant source of trouble (I would love to vent about this, but I will restrain myself!). We can only program for the worst case on every front. For example, you must assume your thread can be preempted at any time, so you must use synchronized; and you must also assume your thread will never be preempted unless it gives up control itself, so you must call yield() and sleep() or use blocking I/O operations at appropriate points. And remember what I said at the beginning: never trust thread priorities if you want platform independence!