.NET Multi-threaded Programming (1): Multitasking and Multithreading
In this series on .NET multi-threaded programming, we will cover all aspects of multi-threaded programming. In this first article I will introduce the concept of a thread and the basic knowledge needed for multi-threaded programming; in the following articles I will work through the multi-threaded programming facilities of the .NET platform one by one, such as the important classes and methods of the System.Threading namespace, together with example programs.
Introduction
Early computer hardware was very complex, but operating systems were simple and limited in function. At that time an operating system could execute only one task at any point in time, which meant only one program could run at a time; to execute multiple tasks, programs had to wait in a queue and run one after another. As computers developed, systems were required to do more, and the concept of time-sharing appeared: each running program is given a certain amount of processor time, and when that time is used up, the next program in the waiting queue is put into operation. Note that a program is usually not finished after a single slice of processor time; it may need to be scheduled again, once or many times. Seen at this level, the programs are clearly executed in an interleaved rather than truly parallel fashion, but at the macro level we feel that multiple tasks are executing at the same time. This is how the concept of multitasking was born.

Each running program has its own memory space, its own stack, and its own environment variable settings. Each program corresponds to a process, which represents one large task. A process can start another process; the started process is called a child process. The parent process and the child process are related only logically, and their execution is independent of each other. However, a large program (a large task) can often be split into many small tasks. For functional reasons, or to speed up execution, you may need to carry out several of these tasks at the same time, assigning a thread to each task to do the corresponding work. For example, while you are reading some excellent articles in your web browser, you may want to download one of them and at the same time print another one on your printer. Here the browser downloads an article in HTML format while simultaneously printing another article: one program executes multiple tasks at the same time, with each task assigned to a thread. So we can see that a program's ability to perform multiple tasks at once is achieved through multithreading.
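As a concrete illustration of this idea, here is a minimal sketch in C#. The DownloadArticle and PrintArticle methods are hypothetical placeholders standing in for the two tasks described above; the point is only that each task gets its own thread and the two run concurrently.

using System;
using System.Threading;

class BrowserExample
{
    // Hypothetical placeholder for downloading an article in HTML format.
    static void DownloadArticle()
    {
        Console.WriteLine("Downloading article...");
        Thread.Sleep(1000);               // simulate download work
        Console.WriteLine("Download finished.");
    }

    // Hypothetical placeholder for sending an article to the printer.
    static void PrintArticle()
    {
        Console.WriteLine("Printing article...");
        Thread.Sleep(1000);               // simulate printing work
        Console.WriteLine("Printing finished.");
    }

    static void Main()
    {
        // Each task gets its own thread, so both proceed at the same time.
        Thread downloader = new Thread(DownloadArticle);
        Thread printer = new Thread(PrintArticle);

        downloader.Start();
        printer.Start();

        // Wait for both tasks to complete before the program exits.
        downloader.Join();
        printer.Join();
    }
}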
Multithreading vs. multitasking
As mentioned above, multitasking is a property of the operating system: it refers to the ability to run several programs at the same time. In reality, however, with only one CPU it is impossible to truly run two or more programs simultaneously. The CPU switches between the programs at high speed, so that each program gets a small slice of CPU time within a short period; from the user's point of view it looks as if several programs are executing at once. Multithreading, in contrast, refers to the ability to execute different parts of the same program at the same time, where each executing part is a thread. When writing an application we therefore have to design it carefully so that the threads do not interfere with one another when they run. This helps us build robust programs to which we can add threads whenever we need them.
Thread concept
A thread can be described as a miniature process: it has a starting point, an execution sequence, and an end point. It maintains its own stack, which is used for exception handling, priority scheduling, and the other information needed to resume execution of the thread. Described this way, there seems to be no difference between a thread and a process, but there are in fact real differences between them:
A complete process has its own independent memory space and data, whereas the threads inside a single process share that process's memory space and data. A process corresponds to a program and is made up of threads that run independently within that program. A thread is sometimes called a lightweight process running in parallel inside a program; it is called lightweight because it relies on the execution context provided by the process and uses the process's resources.
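The following is a minimal sketch of this sharing, using a simple counter field as an assumed example. Two threads in the same process update the same static field (with a lock so the increments do not interfere), something separate processes could not do without explicit inter-process communication.

using System;
using System.Threading;

class SharedMemoryExample
{
    // This field lives in the process's memory and is visible to every thread in it.
    static int counter = 0;
    static readonly object sync = new object();

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
        {
            lock (sync)          // serialize access so the two threads do not interfere
            {
                counter++;
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // Both threads updated the same variable: prints 200000.
        Console.WriteLine(counter);
    }
}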
Within a single process, threads can be scheduled in either a preemptive or a non-preemptive mode.
In the preemptive mode, the operating system is responsible for allocating CPU time to each process, and once the current process has used up its share, the operating system decides which thread gets the CPU next. The operating system therefore periodically interrupts the currently executing thread and gives the CPU to the next thread in the waiting queue, so no thread can monopolize the CPU. How much CPU time each thread gets depends on the process and the operating system. The time allocated to each thread is very short, so that all threads appear to run simultaneously; in practice a system might run a thread for about 2 milliseconds and then schedule another, cycling through all the threads and handing each a tiny amount of CPU time. The switching and scheduling of threads is so fast that all threads seem to execute at the same time. What does scheduling mean here? Scheduling means that the processor saves the state of the thread whose CPU time has expired and later restores that state so its execution can continue. This approach has a weakness, however: one thread can interrupt the execution of another thread at any moment. Suppose one thread is writing to a file when another thread interrupts it and writes to the same file. Windows 95/NT and UNIX use this kind of thread scheduling.
In the non-preemptive scheduling mode, each thread can hold the CPU for as long as it likes. In this mode it is possible for one thread with a long execution time to "starve" all the other threads that need the CPU. When a process is idle, that is, when it is not using the CPU, the system can allow other processes to use the CPU temporarily. The thread that occupies the CPU keeps control of it, and other threads can use the CPU only after it voluntarily releases it. Some I/O subsystems and Windows 3.x use this scheduling policy.
Some operating systems use both scheduling strategies: non-preemptive scheduling is usually applied to threads running at lower priorities, while high-priority threads are more often scheduled preemptively. If you are not sure which scheduling policy the system uses, it is safest to assume that preemptive scheduling is not available. When designing an application we should assume that a thread which has occupied the CPU for a certain interval will release control of it; at that point the system checks whether any thread in the waiting queue has the same or a higher priority than the currently running thread, so that such threads can use the CPU. If the system finds such a thread, the currently executing thread is suspended and the qualifying thread is activated. If no thread of the same or higher priority is found, the current thread keeps the CPU. When an executing thread wants to hand control of the CPU to a lower-priority thread, the current thread goes to sleep and the lower-priority thread takes the CPU.
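The sketch below is a minimal illustration of these ideas in .NET, not a recipe; the work being done is just a hypothetical busy loop. It shows how a thread's priority can be set through Thread.Priority and how a running thread can voluntarily give up the rest of its time slice with Thread.Sleep(0).

using System;
using System.Threading;

class PriorityExample
{
    static void BusyWork(string name)
    {
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine("{0}: step {1}", name, i);
            // Voluntarily give up the rest of the time slice so that
            // waiting threads of equal or higher priority can run.
            Thread.Sleep(0);
        }
    }

    static void Main()
    {
        Thread high = new Thread(() => BusyWork("high-priority"));
        Thread low = new Thread(() => BusyWork("low-priority"));

        // These are hints to the scheduler; the operating system still
        // decides exactly when each thread actually runs.
        high.Priority = ThreadPriority.AboveNormal;
        low.Priority = ThreadPriority.BelowNormal;

        high.Start();
        low.Start();
        high.Join();
        low.Join();
    }
}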
In a multiprocessor system, the operating system can assign independent threads to different processors, which greatly speeds up execution. The efficiency of thread execution improves markedly because the threads, instead of sharing a single processor, are spread across several processors. Using multiple processors this way is very useful in areas such as three-dimensional modeling and graphics processing.
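As a small sketch under the same assumptions (the per-core work is only a placeholder), a program can ask how many processors are available with Environment.ProcessorCount and start one worker thread per processor; which processor each thread ends up on is still the operating system's decision.

using System;
using System.Threading;

class MultiProcessorExample
{
    static void Worker(object id)
    {
        // Placeholder for CPU-bound work, such as part of a 3D modeling job.
        Console.WriteLine("Worker {0} running; {1} processors available",
                          id, Environment.ProcessorCount);
    }

    static void Main()
    {
        int count = Environment.ProcessorCount;
        Thread[] workers = new Thread[count];

        // One thread per processor; the operating system assigns
        // the threads to the physical processors.
        for (int i = 0; i < count; i++)
        {
            workers[i] = new Thread(Worker);
            workers[i].Start(i);
        }
        foreach (Thread t in workers)
            t.Join();
    }
}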
Why do we need multithreading?
Suppose we send a print command and the printer starts working, but the computer stops responding until printing is done: wouldn't we be stuck waiting for this slow printer to finish? Fortunately this does not happen; we can listen to music or draw while the printer works, because these tasks are carried out on independent threads. You may also be surprised that many users can access a database or a web server at the same time. How does that work? It works because a separate thread is created for each user who connects to the database or web server, and that thread maintains the user's state. If a program runs strictly sequentially, a problem in one step can block everything else and may even cause the entire program to crash. If the program can instead be divided into independent tasks running on separate threads, then even if one task fails it has no effect on the others and will not bring the whole program down.

There is no doubt that writing multithreaded programs gives you a tool that single-threaded programs cannot match, but multithreading can also become a burden, or at least carry a cost. Used improperly it brings more trouble than benefit: if one program has a great many threads, the threads of other programs get less CPU time; a lot of CPU time is spent on thread scheduling; and the operating system also needs enough memory to maintain the context information of every thread. A large number of threads therefore reduces the efficiency of the whole system. So if you use multiple threads, the program's multithreading must be designed well, otherwise the drawbacks will far outweigh the benefits, and we must handle the creation, scheduling, and release of these threads carefully.
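Here is a minimal sketch of the isolation idea just described; the task bodies are hypothetical placeholders. Each task runs on its own thread and catches its own exceptions, so a failure in one task does not crash the others or the program. (Note that in .NET an exception left unhandled inside a thread would terminate the process, which is why the catch is placed inside the thread body.)

using System;
using System.Threading;

class IsolationExample
{
    // Runs a task on its own thread and keeps any failure inside that thread.
    static Thread StartIsolated(string name, Action task)
    {
        Thread t = new Thread(() =>
        {
            try
            {
                task();
                Console.WriteLine("{0} finished", name);
            }
            catch (Exception ex)
            {
                // The failure is reported but does not take the program down.
                Console.WriteLine("{0} failed: {1}", name, ex.Message);
            }
        });
        t.Start();
        return t;
    }

    static void Main()
    {
        // Hypothetical tasks: one succeeds, one throws.
        Thread ok = StartIsolated("download", () => Thread.Sleep(200));
        Thread bad = StartIsolated("print",
            () => { throw new InvalidOperationException("printer offline"); });

        ok.Join();
        bad.Join();
        Console.WriteLine("Program is still running.");
    }
}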
Multi-threaded program design tips
There are many ways to design a multithreaded application. In the following articles I will give detailed programming examples; through them you will be able to understand multithreading much better. Threads can have different priorities. For example, suppose our application draws graphics or performs a large number of calculations while also receiving user input. The user's input clearly needs an immediate response, whereas the drawing or computation takes a lot of time and a small delay does not matter much, so the user-input thread should be given a high priority while the graphics or computation thread gets a low one. These threads are independent of each other and do not affect each other.
In the example above, the drawing or heavy computation clearly needs much more CPU time, but during that time the user should not have to wait before entering information. So we design the program as two independent threads: one responsible for user input and one responsible for the time-consuming task. This makes the program more flexible and quicker to respond, and it also lets the user cancel the task at any point while it is running. In this drawing example the program must also keep receiving messages from the system; if the program is busy with a task, the window may go blank, and the program clearly needs to handle such an event. So there should always be a thread responsible for handling these messages, for example to trigger the work of repainting the screen.
We should keep one principle in mind: tasks that need an immediate response should be given a high priority, while other threads should run at a priority lower than theirs. The thread that listens for client requests should always have a high priority; a task that interacts with the user through the user interface also needs a first-time response, so its priority should likewise be high.
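As a sketch of this principle under simplified assumptions (console input stands in for the user interface, and the long calculation is a hypothetical placeholder), the main thread keeps responding to the user while a background thread with a lowered priority does the heavy work and can be cancelled at any time.

using System;
using System.Threading;

class ResponsiveExample
{
    static volatile bool cancel = false;

    // Hypothetical long-running calculation done on a low-priority thread.
    static void HeavyCalculation()
    {
        for (int i = 0; i < 100 && !cancel; i++)
        {
            Thread.Sleep(100);   // stand-in for a slice of real work
        }
        Console.WriteLine(cancel ? "Calculation cancelled." : "Calculation finished.");
    }

    static void Main()
    {
        Thread worker = new Thread(HeavyCalculation);
        worker.IsBackground = true;
        worker.Priority = ThreadPriority.BelowNormal;  // keep the input thread responsive
        worker.Start();

        // The main thread answers the user immediately while the worker runs.
        Console.WriteLine("Press Enter to cancel the calculation...");
        Console.ReadLine();
        cancel = true;
        worker.Join();
    }
}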