.NET Multi-threaded Programming (1): Multitasking and Multithreading
In this series on .NET multi-threaded programming, we will cover all aspects of writing multithreaded programs. In this first article I will introduce the concept of a thread and the basics of multithreaded programming; in the following articles I will walk through the threading facilities of the .NET platform one by one, such as the important classes and methods of the System.Threading namespace, together with example programs.
Introduction
Early computer hardware was very complex, but the operating systems of the time were simple and limited in function: at any point in time they could execute only one task, that is, run only one program at a time. To run multiple programs, each had to wait its turn in a queue. As computers developed, more powerful system behavior was demanded, and the concept of time-sharing appeared: each running program gets a slice of processor time, and when that slice is used up, the program waits in the queue while the next program gets the processor. Note that a program is usually not finished after one time slice; it may need to be scheduled again, once or many times, before it completes. At the micro level, then, the programs run one after another, but at the macro level we perceive multiple tasks executing at the same time; this is where the concept of multitasking was born. Each running program has its own memory space, its own stack, and its own environment variables; each program corresponds to a process, representing one large task. A process can start another process, called a child process. Parent and child are related only logically; their execution is independent of each other. A large program (one big task) can, however, be split into many small tasks, and for functional reasons, or to speed things up, it may need to execute several of those tasks at the same time, assigning each task a thread to carry it out. For example, while you are reading articles in your web browser, you may want to download an interesting article you wish to keep, and at the same time use your printer to print another of these online articles.
Here the browser downloads one article in HTML format while printing another: one program executes multiple tasks at the same time, with each task assigned a thread to complete it. So we can see that a program's ability to perform multiple tasks at once is achieved through multithreading.
Multithreading vs. Multitasking
As mentioned above, multitasking is a property of the operating system: the ability to run multiple programs at the same time. In fact, with only one CPU it is impossible to truly run two or more programs simultaneously. The CPU switches among the programs at high speed, so that every program gets a small amount of CPU time within a short period; from the user's perspective, it looks as if multiple programs are executing at the same time. Multithreading, relative to the operating system, is the ability to execute different parts of the same program at the same time, each executing part being a thread. When writing an application we therefore have to design carefully to avoid interference between threads. This helps us build robust programs to which we can add threads at any time.
The Thread Concept
A thread can be described as a miniature process: it has a starting point, an order of execution, and an end point. It maintains its own stack, which is used for exception handling, priority scheduling, and other information needed to resume the thread's execution. From this description threads might seem no different from processes, but there is in fact a difference:
A full process has its own independent memory space and data, while the threads inside one process share that process's memory space and data. A process corresponds to a program and consists of threads running independently within that same program. A thread is sometimes called a lightweight process running in parallel inside a program; it is called lightweight because it relies on the context provided by the process and uses the process's resources.
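As a minimal sketch of this difference (all names below are illustrative, not from the article), the program has a worker thread write to a field that the main thread then reads: because both threads live in the same process, they see the same memory.

```csharp
using System;
using System.Threading;

public static class SharedMemoryDemo
{
    // Both threads read and write this same field: threads in one
    // process share the process's memory space.
    private static int sharedValue = 0;

    public static int Run()
    {
        Thread writer = new Thread(() => { sharedValue = 42; });
        writer.Start();
        writer.Join();            // wait for the writer to finish
        return sharedValue;       // the main thread sees the worker's write
    }

    public static void Main()
    {
        Console.WriteLine(Run()); // prints 42
    }
}
```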
Within a process, thread scheduling follows either a preemptive or a non-preemptive model.
In the preemptive model, the operating system is responsible for allocating CPU time to each process; once the current process has used up its CPU time, the operating system decides which thread occupies the CPU next. The operating system therefore periodically interrupts the currently executing thread and hands the CPU to the next thread in the waiting queue, so no thread can monopolize the CPU. How much CPU time each thread gets depends on the process and the operating system, and each allocation is very short, so we perceive all threads as executing simultaneously. In practice the system runs each thread for around two milliseconds and then schedules another; it cycles through all the threads, allocating each a tiny amount of CPU time. Thread switching and scheduling are so fast that all threads appear to execute at once. What does scheduling mean here? Scheduling means the processor saves the state of the thread whose CPU time has expired and loads the state of the thread that will run next. This approach has a drawback, however: one thread can interrupt the execution of another at any moment. Suppose one thread is writing to a file when another thread interrupts it and writes to the same file. Windows 95/NT and UNIX use this thread-scheduling model.
In the non-preemptive scheduling model, each thread itself decides how long it occupies the CPU. Under this model it is possible for one long-running thread to starve all other threads that need the CPU. When a process is idle, that is, not using the CPU, the system may let other processes use it temporarily. The thread occupying the CPU has full control of it, and the CPU becomes available to other threads only when the occupying thread voluntarily releases it. Windows 3.x, for example, used this scheduling policy.
Some operating systems use both scheduling strategies: non-preemptive scheduling is usually applied to threads running at high priority, while other threads are scheduled preemptively. If you are not sure which policy the system uses, it is safest to assume preemptive scheduling is not available. When designing an application, we assume that a thread which has occupied the CPU for some interval will release control; at that point the system looks through the waiting queue for threads whose priority equals or exceeds that of the currently running thread, so that those threads can use the CPU. If the system finds such a thread, it switches from the currently executing thread to the qualifying one; if no thread of the same or higher priority is found, the current thread keeps the CPU. When an executing thread wants to yield control of the CPU to a lower-priority thread, it moves itself to the sleeping state so the lower-priority thread can take the CPU.
On multiprocessor systems, the operating system can assign independent threads to different processors, which greatly speeds up execution. Thread efficiency improves dramatically because threads that would share a single processor are distributed across several processors. Such multiprocessing is very useful in 3D modeling and graphics processing.
Do We Need Multithreading?
Suppose we send a print command and, while the printer works, the computer stops responding: wouldn't we be stuck waiting for the slow printer to finish? Fortunately that does not happen; we can listen to music or draw while the printer works, because independent threads carry out these tasks. You may also wonder how a database or web server can serve many users at the same time. It works because a separate thread is created for each user connecting to the database or web server, and that thread maintains the user's state. If a program runs strictly sequentially, a problem in one step can stall or even crash the entire program. If the program is divided into independent tasks running on multiple threads, the failure of one task has no effect on the others and will not bring the whole program down. Writing multithreaded programs undoubtedly gives you a tool that single-threaded programs cannot match, but multithreading can also become a burden or carry a non-trivial cost, and misuse brings more trouble than benefit. If one program has many threads, the threads of other programs necessarily get less CPU time; a great deal of CPU time is spent on thread scheduling; and the operating system needs enough memory to maintain each thread's context information. A large number of threads therefore lowers overall system efficiency. So if we use multiple threads, the program's threading must be designed well, otherwise the drawbacks will far outweigh the benefits, and we must handle the creation, scheduling, and release of threads with care.
Tips for Designing Multithreaded Programs
There are many ways to design a multithreaded application. In the following articles I will give detailed programming examples, through which you will be able to understand multithreading better. Threads can have different priorities. For example, suppose our application draws graphics or performs heavy computation while also receiving user input. Clearly the user's input needs an immediate response, while the drawing or computation takes a long time and a small delay matters little; so the input thread should be given a high priority and the drawing or computation thread a low one. These threads are independent of each other and do not interfere.
In the example above, the drawing or heavy computation clearly needs most of the CPU time, yet the user should not have to wait for it before entering information. So we design the program as two independent threads: one responsible for user input, and one responsible for the time-consuming tasks. This makes the program more flexible and quick to respond, and also lets the user cancel the task at any point while it runs. In this drawing example, the program must also keep receiving messages from the system; if it is busy with one task, the screen may go blank, and the program obviously needs to handle such an event. So there must be a thread responsible for handling these messages, for instance to trigger the work of redrawing the screen.
We should follow one principle: tasks that need an immediate response get high priority, while the priority of other threads should be lower. A thread that listens for client requests should always have high priority. A task that interacts with the user through the interface needs the quickest possible response, so its priority should be high as well.
.NET Multi-threaded Programming (2): The System.Threading.Thread Class
In this article I will introduce the thread API in .NET: how to create threads, start and stop them, and set their priority and state. A .NET program's main thread is started by the Main() method, and the .NET languages have automatic garbage collection; the garbage collector runs on another thread, all in the background, so we do not notice it happening. By default a single thread performs all of the program's work, but as we discussed in the first article, we can add more threads as needed so the program coordinates its work better. In our example, a program that draws graphics or performs heavy computation while accepting user input, we must add a thread so the user gets a timely response, since input and response are time-critical, while another thread handles the drawing or computation.
The System.Threading namespace of the .NET base class library provides a large number of classes and interfaces that support multithreading. It contains many classes; here we will focus on the Thread class.
The System.Threading.Thread class creates and controls threads, sets their priority, and gets their state; it is the most commonly used threading class. It has many methods; here we will introduce the more common and important ones:
Thread.Start(): starts the execution of the thread;
Thread.Suspend(): suspends the thread, or has no effect if the thread is already suspended;
Thread.Resume(): resumes a thread that has been suspended;
Thread.Interrupt(): interrupts a thread that is in the WaitSleepJoin thread state;
Thread.Join(): blocks the calling thread until the thread terminates;
Thread.Sleep(): blocks the current thread for the specified number of milliseconds;
Thread.Abort(): begins the process of terminating the thread. A terminated thread cannot be restarted with Thread.Start().
A thread can be suspended or blocked by calling Thread.Sleep, Thread.Suspend, or Thread.Join. Calling the Sleep() and Suspend() methods means the thread will no longer receive CPU time. The two ways of pausing a thread differ: Sleep() makes the thread stop executing immediately, whereas with Suspend() the thread does not stop until the common language runtime reaches a safe point. One thread cannot call Sleep() on another thread, but it can call Suspend() to make another thread pause. Calling Thread.Resume() on a suspended thread lets it continue; no matter how many times Suspend() was called to block a thread, a single call to Resume() resumes its execution. Threads that have terminated and threads that have not yet started cannot be suspended. Thread.Sleep(int x) blocks the thread for x milliseconds; it can be woken earlier only if another thread calls Thread.Interrupt() or Thread.Abort() on it. Calling Thread.Interrupt() on a blocked thread throws a ThreadInterruptedException in that thread; the thread can catch the exception and respond, or ignore it and let the runtime stop the thread. Within the wait period, Thread.Interrupt() and Thread.Abort() both wake a thread immediately. Now let us see how to stop one thread from another. With Thread.Abort() we can destroy a thread permanently; a ThreadAbortException is thrown in the aborted thread. The aborted thread can catch the exception, but recovery is hard to control: the only way is to call Thread.ResetAbort() to cancel the abort just requested, and that works only from within the thread being aborted. Thread A can therefore call Thread.Abort() on thread B, but thread A cannot call Thread.ResetAbort() to cancel that Thread.Abort() operation; only thread B itself can.
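The Sleep/Interrupt interaction described above can be sketched as follows. This is a minimal example with made-up names; it deliberately uses Thread.Interrupt rather than Thread.Abort or Thread.Suspend, since the latter are discouraged and are not supported on newer .NET runtimes.

```csharp
using System;
using System.Threading;

public static class InterruptDemo
{
    // Returns true if the sleeping thread was woken early by Interrupt().
    public static bool Run()
    {
        bool interrupted = false;
        Thread sleeper = new Thread(() =>
        {
            try
            {
                Thread.Sleep(30000);          // block for up to 30 seconds
            }
            catch (ThreadInterruptedException)
            {
                interrupted = true;           // woken early by Interrupt()
            }
        });
        sleeper.Start();
        Thread.Sleep(100);                    // give the sleeper time to block
        sleeper.Interrupt();                  // throws inside the sleeping thread
        sleeper.Join();                       // wait for it to finish
        return interrupted;
    }

    public static void Main()
    {
        Console.WriteLine(Run());             // prints True
    }
}
```

Even if Interrupt() happens to run before the worker reaches Sleep(), the interrupt stays pending and is raised at the next blocking call, so the outcome is the same.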
The Thread.Abort() method lets the system destroy a thread quietly without notifying the user. Once aborted, the thread cannot be restarted. Abort() does not mean the thread is destroyed immediately, so to be sure it has finished we can call Thread.Join(), a blocking call that does not return until the thread has really terminated. Another thread can, however, call Thread.Interrupt() to break out of a thread that is blocked waiting for a Thread.Join() call to return.
Do not use the Suspend() method to suspend a blocked thread, because that easily causes deadlocks. Suppose you suspend a thread whose resources are exactly what other threads need: what happens then? For this reason we should use Thread.Suspend() as little as possible and prefer assigning different priorities to different threads instead.
The Thread class also has many properties; these important properties are something multithreaded programmers must master.
Thread.IsAlive property: gets a value indicating the execution status of the current thread; true if the thread has been started and has not terminated or aborted, otherwise false.
Thread.Name property: gets or sets the name of the thread.
Thread.Priority property: gets or sets a value indicating the scheduling priority of the thread.
Thread.ThreadState property: gets a value containing the state of the current thread.
In the following example we will see how to set these properties; in later examples we will discuss them in detail.
To create a thread, first instantiate the Thread class, passing a ThreadStart delegate to its constructor. The delegate identifies where the thread begins executing; when the thread is started, the Start() method launches the new thread. Here is an example program.
using System;
using System.Threading;
namespace LearnThreads
{
    class Thread_App
    {
        public static void first_thread()
        {
            Console.WriteLine("First thread created");
            Thread current_thread = Thread.CurrentThread;
            string thread_details = "Thread name: " + current_thread.Name
                + "\r\nThread state: " + current_thread.ThreadState.ToString()
                + "\r\nThread priority level: " + current_thread.Priority.ToString();
            Console.WriteLine("The details of the thread are: " + thread_details);
            Console.WriteLine("First thread terminated");
        }

        public static void Main()
        {
            ThreadStart thr_start_func = new ThreadStart(first_thread);
            Console.WriteLine("Creating the first thread");
            Thread fThread = new Thread(thr_start_func);
            fThread.Name = "first_thread";
            fThread.Start(); // starting the thread
        }
    }
}
In this example an fThread thread object is created; it is responsible for carrying out the work inside the first_thread() method. The ThreadStart delegate containing the address of first_thread() is invoked when the thread's Start() method is called.
Thread States
The System.Threading.Thread.ThreadState property defines the state a thread is in while executing. From creation until termination, a thread is always in at least one of these states. When a thread is created it is in the Unstarted state; the Start() method of the Thread class moves it into the Running state, and it stays there unless we call a method to suspend, block, or destroy it, or it terminates naturally. If the thread is suspended it is in the Suspended state until we call Resume(), at which point it returns to the Running state. Once a thread is destroyed or terminated it is in the Stopped state, and such a thread will not run again; nor, once started, can a thread ever return to the Unstarted state. There is also a Background state, which indicates whether the thread runs in the foreground or in the background. At any given time a thread may be in more than one state. For example, if a thread is blocked in a call to Sleep and another thread calls Abort on it, the thread will be in both the WaitSleepJoin and the AbortRequested states. As soon as the thread returns from the Sleep or responds to the abort, a ThreadAbortException is thrown to destroy it.
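Some of these transitions can be observed directly through the ThreadState property. The small program below uses illustrative names of my own:

```csharp
using System;
using System.Threading;

public static class ThreadStateDemo
{
    public static void Main()
    {
        Thread worker = new Thread(() => Thread.Sleep(200));

        Console.WriteLine(worker.ThreadState);  // Unstarted
        worker.Start();
        Console.WriteLine(worker.ThreadState);  // Running or WaitSleepJoin,
                                                // depending on timing
        worker.Join();                          // wait for it to finish
        Console.WriteLine(worker.ThreadState);  // Stopped
    }
}
```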
Thread Priority
System.Threading.Thread.Priority enumerates a thread's priority, which determines how much CPU time the thread can get. High-priority threads typically get more CPU time than threads of normal priority, and if there is more than one high-priority thread, the operating system distributes the CPU time among them. Low-priority threads get relatively little CPU time; when no high-priority thread is ready, the operating system picks the next low-priority thread to execute. As soon as a low-priority thread meets a high-priority one, it yields the CPU to the high-priority thread. Newly created threads have normal priority, and we can set a thread's priority to any of the following values:
Highest
AboveNormal
Normal
BelowNormal
Lowest
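Reading and setting these values through the Thread.Priority property can be sketched as follows (names here are illustrative):

```csharp
using System;
using System.Threading;

public static class PriorityDemo
{
    public static void Main()
    {
        Thread worker = new Thread(() => Thread.Sleep(100));

        // A newly created thread starts at Normal priority;
        // we can raise or lower it before or after starting it.
        Console.WriteLine(worker.Priority);        // Normal
        worker.Priority = ThreadPriority.BelowNormal;
        Console.WriteLine(worker.Priority);        // BelowNormal

        worker.Start();
        worker.Join();
    }
}
```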
Conclusion: in this part we discussed the creation and priority of threads. The System.Threading namespace also includes advanced features such as thread locking, thread synchronization, classes for managing multiple threads, and deadlock resolution; we will continue to discuss these topics in the following parts.
.NET Multi-threaded Programming (3): Thread Synchronization
As we learn more about multithreading, you will find you need to understand the issues around threads sharing resources. The .NET Framework provides many classes and data types for controlling access to shared resources.
Consider a situation we often meet: there are global variables and shared class variables that need to be updated from different threads. Such tasks can be accomplished with the System.Threading.Interlocked class, which provides atomic update operations for variables shared among threads.
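As a sketch of what such an atomic update looks like (all names below are illustrative), several threads increment one shared counter with Interlocked.Increment; a plain `counter++` from many threads could lose updates, while the interlocked version cannot:

```csharp
using System;
using System.Threading;

public static class InterlockedDemo
{
    private static int counter = 0;

    // Increment the shared counter from several threads without a lock;
    // Interlocked.Increment makes each read-modify-write atomic.
    public static int Run(int threadCount, int incrementsPerThread)
    {
        counter = 0;
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < incrementsPerThread; j++)
                    Interlocked.Increment(ref counter);
            });
            threads[i].Start();
        }
        foreach (Thread t in threads)
            t.Join();                 // wait for every thread to finish
        return counter;
    }

    public static void Main()
    {
        Console.WriteLine(Run(4, 100000)); // always prints 400000
    }
}
```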
You can also lock a section of code on an object with the System.Threading.Monitor class, making it temporarily inaccessible to other threads.
Instances of the System.Threading.WaitHandle class can be used to wrap an operating-system-specific object that waits for exclusive access to a shared resource; this is especially useful for interoperating with unmanaged code.
System.Threading.Mutex is used to synchronize multiple complex threads; it grants exclusive, single-threaded access and also works across processes.
Synchronization event classes such as ManualResetEvent and AutoResetEvent let one thread notify other threads that an event has occurred.
To skip the subject of thread synchronization would be to say very little about multithreaded programming, so we must discuss it. When using thread synchronization we must be able to determine correctly, in advance, which objects and methods could cause deadlock (a deadlock is the situation where all threads stop responding, each waiting for the others to release resources). There is also the problem of data races (inconsistency caused by multiple threads accessing the same data at the same time), which is easy to overlook. Say there are two threads, X and Y: thread X reads data from a file and writes it into a data structure, while thread Y reads from that data structure and sends the data to another computer. Suppose Y reads while X is still writing: the data Y reads will obviously be inconsistent with the data actually stored. This is clearly a situation we must prevent. Using fewer threads lowers the probability of such problems, as does good synchronization of shared resources. The CLR of the .NET Framework provides three ways to synchronize access to shared resources such as global variables and fields, specific code sections, and static and instance methods and fields:
(1) Synchronized code regions: use the Monitor class to synchronize all or part of a static or instance method. Synchronized static fields are not supported. In an instance method, the this pointer is used for the lock; in a static method, the type is used, which will be described later.
(2) Manual synchronization: use the various synchronization classes (such as WaitHandle, Mutex, ReaderWriterLock, ManualResetEvent, AutoResetEvent, and Interlocked) to create your own synchronization mechanism. This mode requires you to synchronize the different regions and methods manually; it can also be used for synchronization between processes and for avoiding deadlocks caused by waiting on shared resources.
(3) Context synchronization: use SynchronizationAttribute to create simple, automatic synchronization for ContextBoundObject objects. This mode is used only for instance methods and fields; all objects in the same context domain share the same lock.
Monitor Class
When only one thread at a time may execute a given code section, the Monitor class is well suited for thread synchronization. The methods of this class are static, so the class is never instantiated. The static methods below provide a mechanism for synchronizing access to objects, which guards against deadlocks and maintains data consistency.
Monitor.Enter method: acquires an exclusive lock on the specified object.
Monitor.TryEnter method: attempts to acquire an exclusive lock on the specified object.
Monitor.Exit method: releases the exclusive lock on the specified object.
Monitor.Wait method: releases the lock on the object and blocks the current thread until it reacquires the lock.
Monitor.Pulse method: notifies a thread in the waiting queue of a change in the locked object's state.
Monitor.PulseAll method: notifies all waiting threads of a change in the object's state.
You synchronize a code section by locking and unlocking a specified object. Monitor.Enter, Monitor.TryEnter, and Monitor.Exit are used to lock and unlock. Once one thread has obtained (by calling Monitor.Enter) the lock of the object guarding a code section, no other thread can obtain that lock. Say thread X gets an object lock: the lock can be released by calling Monitor.Exit(object) or Monitor.Wait. When the lock is released, the Monitor.Pulse and Monitor.PulseAll methods notify the next thread in the ready queue, and the other threads in the ready queue then get the chance to acquire the lock. If thread X calls Monitor.Wait, it releases the lock and enters the waiting queue while thread Y acquires the lock. When the thread currently holding the lock (thread Y) calls Pulse or PulseAll, the threads in the waiting queue move into the ready queue; thread X returns from Wait when it reacquires the lock. If the thread holding the lock (thread Y) never calls Pulse or PulseAll, the waiting method may block indefinitely. Pulse, PulseAll, and Wait must be called from within a synchronized code section. For each synchronized object, the runtime keeps a reference to the thread currently holding the lock, a ready queue, and a waiting queue (containing the threads that need to be notified of changes in the object's state). You may ask what happens when two threads call Monitor.Enter at the same moment: no matter how close together the calls are, one is in fact first and the other second, so there is never a time when both hold the object lock. Since Monitor.Enter is an atomic operation, the CPU cannot favor one thread over the other midway through. For better performance, you should acquire a lock as late as possible and release it as soon as possible. Locking private and internal objects is fine, but locking external objects may cause deadlock, because unrelated code may lock the same object for different purposes.
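The Enter/Wait/Pulse interplay described above is easiest to see in a small producer-consumer sketch. The names are mine, and the C# lock statement is used as shorthand for Monitor.Enter/Monitor.Exit:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public static class WaitPulseDemo
{
    public static int Run(int itemCount)
    {
        Queue<int> queue = new Queue<int>();
        object gate = new object();
        int consumed = 0;

        Thread consumer = new Thread(() =>
        {
            for (int i = 0; i < itemCount; i++)
            {
                lock (gate)
                {
                    while (queue.Count == 0)
                        Monitor.Wait(gate);  // release the lock and wait for Pulse
                    queue.Dequeue();
                    consumed++;
                }
            }
        });
        consumer.Start();

        for (int i = 0; i < itemCount; i++)
        {
            lock (gate)
            {
                queue.Enqueue(i);
                Monitor.Pulse(gate);         // move the waiting consumer to the ready queue
            }
        }
        consumer.Join();
        return consumed;
    }

    public static void Main()
    {
        Console.WriteLine(Run(5));           // prints 5
    }
}
```

Note the `while` around Monitor.Wait: the consumer re-checks the condition after being woken, which is the standard guard against spurious or stale wakeups.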
If you want to lock a code section, it is best to place the lock-acquiring statement inside a try block and put Monitor.Exit in the finally block. To lock an entire method, you can instead use the MethodImplAttribute class (in the System.Runtime.CompilerServices namespace), setting the Synchronized value in its constructor; with this alternative, the lock is released when the method returns. If you need the lock released sooner, use the Monitor class or the C# lock statement instead.
Let us look at code that uses the Monitor class:
public void some_method()
{
    int a = 100;
    int b = 0;
    Monitor.Enter(this);
    // say we do something here
    int c = a / b;
    Monitor.Exit(this);
}
The code above has a problem. When execution reaches int c = a / b; an exception is thrown, and Monitor.Exit is never reached, so the lock is never released. The program will hang, and other threads will be locked out indefinitely. There are two ways to fix this. The first is to put the code inside try...finally and call Monitor.Exit in the finally block, so that the lock is always released. The second is to use the C# lock() statement, which has the same effect as calling Monitor.Enter, except that the lock is released automatically as soon as execution leaves the scope of the block. See the code below:
public void some_method()
{
    int a = 100;
    int b = 0;
    lock (this)
    {
        // say we do something here
        int c = a / b;
    }
}
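The first fix, try...finally, can be sketched as follows. The gate field is an illustrative stand-in for this (it is exposed as a public field only so the release can be verified from outside; in practice a private lock object is preferable to locking on this):

```csharp
using System;
using System.Threading;

public class SafeLocking
{
    // Public only so outside code can verify the lock was released.
    public readonly object gate = new object();

    public void some_method()
    {
        int a = 100;
        int b = 0;
        Monitor.Enter(gate);
        try
        {
            // say we do something here
            int c = a / b;            // still throws DivideByZeroException, but...
        }
        finally
        {
            Monitor.Exit(gate);       // ...the lock is now released regardless
        }
    }
}
```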
The C# lock statement provides the same feature as Monitor.Enter followed by Monitor.Exit; it is used around code sections that must not be entered by another independent thread.
WaitHandle Class
The WaitHandle class serves as a base class that allows multiple wait operations. It encapsulates the Win32 synchronization handles. A WaitHandle object signals to other threads that it needs exclusive access to a resource; the other threads must then wait until the WaitHandle no longer uses the resource and the wait handle is free. The following classes derive from it:
Mutex class: a synchronization primitive that can also be used for inter-process synchronization.
AutoResetEvent: notifies one or more waiting threads that an event has occurred. This class cannot be inherited.
ManualResetEvent: notifies one or more waiting threads that an event has occurred. This class cannot be inherited.
These classes define a signaling mechanism for taking and releasing exclusive access to a resource. They have two states: signaled and nonsignaled. A wait handle in the signaled state does not belong to any thread; in the nonsignaled state it does. A thread that owns a wait handle calls the Set method when it no longer needs it, and other threads can call Reset to change the state back, or wait on the handle with one of the WaitHandle methods below:
WaitAll: waits for all the elements in the specified array to receive a signal.
WaitAny: waits for any one element in the specified array to receive a signal.
WaitOne: when overridden in a derived class, blocks the current thread until the current WaitHandle receives a signal.
These wait methods block the thread until one or more of the synchronization objects receive a signal.
A WaitHandle object wraps an operating-system object that waits for exclusive access to a shared resource, whether from managed or unmanaged code. But it is not as lightweight as Monitor: Monitor is fully managed code and is very efficient in its use of operating-system resources.
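A small sketch (names are hypothetical) of WaitHandle.WaitAll blocking until every handle in an array has been signaled:

```csharp
using System;
using System.Threading;

public static class WaitAllDemo
{
    public static bool Run()
    {
        // Two events that two worker threads will signal when done.
        ManualResetEvent first = new ManualResetEvent(false);
        ManualResetEvent second = new ManualResetEvent(false);

        new Thread(() => { Thread.Sleep(50); first.Set(); }).Start();
        new Thread(() => { Thread.Sleep(100); second.Set(); }).Start();

        // Blocks until both wait handles have received their signal.
        return WaitHandle.WaitAll(new WaitHandle[] { first, second });
    }

    public static void Main()
    {
        Console.WriteLine(Run());   // prints True
    }
}
```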
Mutex Class
Mutex is another way of achieving synchronization among threads, and it also provides synchronization between processes. It allows one thread exclusive use of a shared resource while blocking access from other threads and processes. The name Mutex describes well its owner's mutually exclusive possession of the resource: once one thread owns the mutex, other threads that want it are suspended until the owner releases it. The Mutex.ReleaseMutex method releases the mutex; a thread may call a wait method several times to request the same mutex, but it must then call Mutex.ReleaseMutex the same number of times to release it. If no thread owns the mutex, its state is signaled; otherwise it is nonsignaled. Once the mutex's state becomes signaled, the next thread in the waiting queue acquires it. The Mutex class corresponds to the Win32 CreateMutex call, and creating a Mutex object is very simple, commonly done as follows: a thread obtains ownership of a mutex by calling WaitHandle.WaitOne, WaitHandle.WaitAny, or WaitHandle.WaitAll. If the mutex does not belong to any thread, such a call gives the calling thread ownership, and WaitOne returns immediately. But if another thread owns the mutex, WaitOne waits indefinitely until the mutex is acquired; you can pass a timeout parameter to WaitOne to avoid waiting for the mutex indefinitely. Calling Close on a mutex releases the handle. Once a mutex has been created, its handle can be used with the WaitHandle.WaitAny or WaitHandle.WaitAll methods.
Here is an example:
public void SomeMethod()
{
    int a = 100;
    int b = 20;
    // Create the Mutex without initially owning it
    Mutex firstMutex = new Mutex(false);
    firstMutex.WaitOne();
    // Some kind of processing can be done here.
    int x = a / b;
    firstMutex.Close();
}
In the above example, the thread creates the Mutex without initially owning it, and then claims ownership by calling the WaitOne method.
Synchronization Events
Synchronization events are wait handles used to notify other threads that something has happened or that a resource is available. They have two states: signaled and nonsignaled. AutoResetEvent and ManualResetEvent are the two kinds of synchronization event.
AutoResetEvent Class
This class can notify one or more waiting threads. When a waiting thread is released, the event's state converts to signaled. Use the Set method to set an instance's state to signaled. However, as soon as a waiting thread is notified that the event is signaled, the state automatically returns to nonsignaled. If no thread is listening for the event, the state remains signaled. This class cannot be inherited.
ManualResetEvent Class
This class is also used to notify one or more threads that an event has occurred. Its state can be set and reset manually. A manual-reset event keeps the signaled state until ManualResetEvent.Reset sets its state to nonsignaled, and keeps the nonsignaled state until ManualResetEvent.Set sets its state to signaled. This class cannot be inherited.
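The contrast between the two event classes can be sketched as follows (a minimal example of my own; the names are illustrative, not from the article):

```csharp
using System;
using System.Threading;

class EventDemo
{
    // A manual-reset event stays signaled until Reset() is called,
    // so a single Set() releases every waiting thread.
    static ManualResetEvent ready = new ManualResetEvent(false);

    static void Worker()
    {
        ready.WaitOne(); // block until the event is signaled
        Console.WriteLine("Worker released.");
    }

    static void Main()
    {
        new Thread(new ThreadStart(Worker)).Start();
        new Thread(new ThreadStart(Worker)).Start();
        Thread.Sleep(100); // let both workers reach WaitOne
        ready.Set();       // both workers are released
        // With an AutoResetEvent, Set() would release only one waiter,
        // and the state would flip back to nonsignaled automatically.
    }
}
```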
Interlocked Class
It provides synchronization for variables shared between threads; its operations are atomic. You can increment or decrement a shared variable with Interlocked.Increment or Interlocked.Decrement. The point of an atomic operation is that these methods increment or decrement the variable by a constant and return the new value as a single, indivisible step. You can also use the class to set a variable to a specified value, or to check whether two variables are equal and, if they are, replace the value of one of them with a specified value.
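As a small sketch of these atomic operations (my own example; the variable and class names are illustrative):

```csharp
using System;
using System.Threading;

class InterlockedDemo
{
    static int counter = 0;

    static void Main()
    {
        // Increment the shared variable atomically from several threads.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 1000; j++)
                    Interlocked.Increment(ref counter); // atomic ++
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        // CompareExchange: if counter equals 4000, replace it with 0
        // and return the original value.
        int original = Interlocked.CompareExchange(ref counter, 0, 4000);
        Console.WriteLine("counter was {0}", original); // prints 4000
    }
}
```

Without Interlocked, a plain `counter++` from four threads could lose updates, because the read-modify-write is not a single step.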
ReaderWriterLock Class
It defines a lock that provides a single-writer/multiple-reader mechanism for synchronizing read and write access. Any number of threads can read the data concurrently, but the lock is needed exclusively when a thread updates the data. Reader threads can acquire the lock only when no writer thread holds it; a writer thread can acquire the lock only when there are no reader threads and no other writer thread. Therefore, once a writer lock is requested, no reader threads can read the data until the writer thread's access completes. It supports timeouts, which help avoid deadlock. It also supports nested read/write locks.
The method that acquires a (possibly nested) reader lock is ReaderWriterLock.AcquireReaderLock; if another thread holds the writer lock, the calling thread is suspended. The method that acquires a writer lock is ReaderWriterLock.AcquireWriterLock; if another thread holds a reader lock, the calling thread is suspended, and requesting a writer lock while holding a reader lock can easily deadlock. The safe approach is to use the ReaderWriterLock.UpgradeToWriterLock method, which upgrades a reader to a writer. You can use the ReaderWriterLock.DowngradeFromWriterLock method to downgrade a writer back to a reader. Calling ReaderWriterLock.ReleaseLock releases the lock, and ReaderWriterLock.RestoreLock restores the lock state to what it was before ReaderWriterLock.ReleaseLock was called.
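A minimal reader/writer sketch (my own illustration, using the methods described above; the shared data and thread bodies are hypothetical):

```csharp
using System;
using System.Threading;

class RwDemo
{
    static ReaderWriterLock rwl = new ReaderWriterLock();
    static int sharedData = 0;

    static void Reader()
    {
        // Many readers may hold the reader lock at the same time.
        rwl.AcquireReaderLock(Timeout.Infinite);
        try
        {
            Console.WriteLine("read {0}", sharedData);
        }
        finally
        {
            rwl.ReleaseReaderLock();
        }
    }

    static void Writer()
    {
        // Exclusive: waits until all readers and writers have released.
        rwl.AcquireWriterLock(Timeout.Infinite);
        try
        {
            sharedData++;
        }
        finally
        {
            rwl.ReleaseWriterLock();
        }
    }

    static void Main()
    {
        new Thread(new ThreadStart(Writer)).Start();
        new Thread(new ThreadStart(Reader)).Start();
        new Thread(new ThreadStart(Reader)).Start();
    }
}
```

Releasing in a finally block is the usual pattern, so the lock is freed even if the protected code throws.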
In conclusion:
This part has covered thread synchronization on the .NET platform. Later in the series I will give some examples to further illustrate these methods and techniques. Although thread synchronization brings great value to our programs, we had better use these methods carefully; otherwise, instead of benefiting, we will see performance drop or even the program crash. Only plenty of practice and experience will let you master these skills. Keep synchronized code blocks as small as possible, and do not perform unbounded or unpredictable blocking work inside them, especially I/O operations; use local variables as much as possible instead of global variables; apply synchronization only to those pieces of code and state that are accessed and shared by multiple threads and processes; arrange your code so that each piece of state is under the exact control of one thread; and do not assume that code shared between threads is thread-safe. In the next article, we will learn about the thread pool.
.NET Multi-threaded Programming (4): Thread Pool and Asynchronous Programming
If you have read the three preceding articles carefully, I believe you now know the basic thread knowledge, the multi-threaded programming facilities of the System.Threading.Thread class provided by the .NET Framework, and something about thread synchronization. Here we will further discuss some more .NET classes and the roles they play in multi-threaded programming. They are: the System.Threading.ThreadPool class
the System.Threading.Timer class
If the number of threads is small and you want to control the details of each thread, such as thread priority, then using Thread directly is appropriate; but if there are a large number of threads, you should consider using a thread pool, which provides an efficient thread-management mechanism for handling multiple tasks. For regularly scheduled tasks, the Timer class is appropriate; and delegates are the first choice for asynchronous method calls.
System.Threading.ThreadPool Class
When you create an application, you should recognize that most of your threads spend their time idle, waiting for some event (such as a key press or activity on a socket). There is no doubt you will agree that this is an absolute waste of resources.
If there are many tasks, each needing its own thread, you should consider using a thread pool to manage your resources more effectively and benefit from it. A thread pool is a collection of threads available to execute work: it lets you add tasks to a queue, from which automatically created and started threads pick them up. Using the thread pool lets the system optimize thread time slices on the CPU. But remember that at any specific point in time, each thread in the pool runs at most one task, and each process has only one thread pool. This class lets the threads that make up the pool be managed for you, so your main energy can be concentrated on the logic of the workflow rather than on the management of threads. The thread pool is created the first time the ThreadPool class is used. It has a default upper limit of 25 threads per processor, but this upper limit can be changed. This keeps the processor from sitting idle: if one of the threads is waiting on an event, the thread pool initializes another thread and gives it to the processor. The thread pool keeps creating threads and assigning queued tasks to threads that have no work; the only limit is that the number of worker threads cannot exceed the maximum allowed. Each thread runs at the default priority, with the default stack size, and belongs to the multithreaded apartment. Once a work item is added to the queue, it cannot be cancelled.
To request that the thread pool process a task or work item, call the QueueUserWorkItem method. This method takes a parameter of the WaitCallback delegate type, which wraps the task you want performed. The pool automatically assigns a thread to each task and releases the thread when the task completes.
The following code illustrates how to create a thread pool and how to add a task:
public void AFunction(object o)
{
    // Do whatever the function is supposed to do.
}
// Thread entry code
{
    // Create an instance of WaitCallback
    WaitCallback myCallback = new WaitCallback(AFunction);
    // Add this to the thread pool / queue a task
    ThreadPool.QueueUserWorkItem(myCallback);
}
You can also pass a System.Threading.WaitHandle by calling the ThreadPool.RegisterWaitForSingleObject method; when the handle is signaled or the timeout elapses, the method wrapped by a System.Threading.WaitOrTimerCallback delegate is called.
Thread pools and the event-based programming model make it very simple to register WaitHandles with the thread pool and have the appropriate WaitOrTimerCallback delegate method called when the WaitHandle is released. The mechanics are actually very simple: the thread pool constantly watches the queue of pending wait operations, and once a wait completes, a thread is dispatched to execute the corresponding task. In this way, a thread is put to work as the trigger event occurs.
Let us see how to tie a thread-pool task to an event; it is actually very simple. We only need to create an instance of the ManualResetEvent class and a WaitOrTimerCallback delegate; then we need an object to carry status for the delegate, and we must decide the timeout interval and whether to execute only once. We add all of the above to the thread pool and raise the event:
// Note: a WaitOrTimerCallback method takes (object state, bool timedOut)
public void AFunction(object state, bool timedOut)
{
    // Do whatever the function is supposed to do.
}
// Object that will carry the status info
public class AnObject
{
}
// Thread entry code
{
    // Create an event object
    ManualResetEvent aEvent = new ManualResetEvent(false);
    // Create an instance of WaitOrTimerCallback
    WaitOrTimerCallback threadMethod = new WaitOrTimerCallback(AFunction);
    // Create an instance of AnObject
    AnObject myObj = new AnObject();
    // Decide how the thread will perform
    int timeoutInterval = 100; // timeout in milliseconds
    bool onetimeExec = true;   // execute only once
    // Add all this to the thread pool
    ThreadPool.RegisterWaitForSingleObject(aEvent, threadMethod, myObj, timeoutInterval, onetimeExec);
    // Raise the event
    aEvent.Set();
}
In the QueueUserWorkItem and RegisterWaitForSingleObject methods, the thread pool creates a background thread for the callback. When the thread pool begins to execute a task, both methods merge the caller's stack into the stack of the thread-pool thread. If this security check consumes too much time and burdens the system, you can avoid it by using the corresponding unsafe methods, ThreadPool.UnsafeRegisterWaitForSingleObject and ThreadPool.UnsafeQueueUserWorkItem.
You can also queue tasks that are not related to a wait operation. Timer-queue timers and registered wait operations also use the thread pool; their callback methods are likewise placed in the thread pool queue.
The thread pool is very useful and widely used: on the .NET platform it underlies asynchronous programming, registered wait operations, process timers, and asynchronous I/O. For small, short tasks, the mechanism provided by the thread pool is also a very convenient form of multithreading. Thread pools are very convenient for completing many independent tasks without having to set thread attributes one by one. However, you should also be very clear that there are many cases where other approaches should replace the thread pool. For example, if you want a scheduled task, or want to give each thread specific properties, or need to put the thread into a single-threaded apartment (thread-pool threads live in the multithreaded apartment), or if a specific task is very lengthy, you had better consider whether an explicit, safe approach is a better choice than using the thread pool.
System.Threading.Timer Class
The Timer class is very effective for executing tasks on a thread periodically; it cannot be inherited.
This class is especially useful for developing console applications, where System.Windows.Forms.Timer is unavailable — for example, to back up files or check the consistency of a database.
When you create a Timer object, you specify the due time before the first delegate call and the period between later successive calls. You can call the Timer's Change method to change these settings or to disable the Timer. When the Timer is no longer needed, you should call the Dispose method to release its resources.
The TimerCallback delegate specifies the method associated with the Timer object (that is, the task to be performed each cycle) and its state. The method is called once at the due time and then periodically, once per period, until the Dispose method is called. The system automatically assigns a separate thread for it.
Let's look at some code to see how to create a Timer object and use it. First we create the TimerCallback delegate for the method that will be used later. Next, if needed, we create a status object carrying information for the method called by the delegate; to keep things simple, we pass null. We then instantiate a Timer object, use the Change method to change the Timer settings, and finally call the Dispose method to release the resources.
// Class that will be called by the timer
public class WorkOnTimerReq
{
    // A TimerCallback method takes one object parameter (the state)
    public void ATimerCallMethod(object state)
    {
        // Does some work
    }
}
// Timer creation block
{
    // Instantiate the class that gets called by the timer
    WorkOnTimerReq anObj = new WorkOnTimerReq();
    // Callback delegate
    TimerCallback tCallback = new TimerCallback(anObj.ATimerCallMethod);
    // Define the dueTime and period
    long dueTime = 20; // wait before the first tick (in ms)
    long period = 150; // interval between subsequent invocations (in ms)
    // Instantiate the Timer object
    Timer aTimer = new Timer(tCallback, null, dueTime, period);
    // Do something with the Timer object
    ...
    // Change the dueTime and period of the timer
    dueTime = 100;
    period = 300;
    aTimer.Change(dueTime, period);
    // Do something
    ...
    aTimer.Dispose();
    ...
}
Asynchronous programming
Asynchronous programming is a big topic in its own right, and this section covers only a small part of it. I do not plan to discuss it in detail here; we just need to know about it, because a treatment of multi-threaded programming that ignored asynchronous calls would obviously be incomplete. Asynchronous programming is another multi-threaded technique your programs may use.
In the previous article we spent a lot of space introducing thread synchronization and how to achieve it, but it has an inherent shortcoming that you may have noticed: each synchronized thread must wait for the others to finish, or else it blocks. Of course, in some cases this is exactly what is needed, namely where operations logically depend on one another. Asynchronous programming allows more flexibility: a thread can issue an asynchronous call and continue without waiting, treating the call as just another task whose result it collects later. This offers a better answer for enterprise-level systems that face a huge number of requests and cannot afford the price of making callers wait.
The .NET platform provides a consistent asynchronous programming mechanism across ASP.NET, I/O, Web Services, networking, messaging, and so on.
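As a taste of this mechanism, here is a minimal sketch of the classic asynchronous delegate pattern (BeginInvoke/EndInvoke). The delegate and method names are my own illustration, and note that this pattern belongs to the .NET Framework of the article's era; it is not supported on .NET Core:

```csharp
using System;

class AsyncDelegateDemo
{
    // A delegate describing the long-running work to call asynchronously.
    delegate int AddDelegate(int a, int b);

    static int Add(int a, int b)
    {
        return a + b;
    }

    static void Main()
    {
        AddDelegate d = new AddDelegate(Add);

        // BeginInvoke queues Add on a thread-pool thread and returns at once.
        IAsyncResult ar = d.BeginInvoke(2, 3, null, null);

        // The caller is free to do other work here instead of blocking.

        // EndInvoke blocks if necessary and retrieves the result.
        int sum = d.EndInvoke(ar);
        Console.WriteLine(sum); // prints 5
    }
}
```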
Postscript
Because it is difficult to find Chinese material on this subject, I had to learn from English sources, and because my level is not high, I may inevitably have distorted the original meaning in translation. I hope everyone will point out my mistakes, and I hope this material gives some reference and help to everyone learning this topic; if it helps even a little, I will be very gratified.
.NET Multi-threaded Programming (5): Case Study of Multithreading
In the previous articles of this multi-threaded programming series, we covered the basic knowledge that must be mastered for multi-threaded programming in .NET. After reading them you may still be vague about what to actually do, probably because there was too much theory and not enough practical reference material, so the harvest was limited. Therefore, in this article I will give several typical multi-threaded programming examples so that everyone can get a clearer understanding.
Case 1 - No Synchronization
In our first example there are two kinds of threads: two are reader threads and one is a writer thread. The threads run in parallel and need to access the same shared resource. The reader threads begin reading before the writer thread has set the value of the shared variable; I use Thread.Sleep to arrange this. The extracted code is as follows:
Thread t0 = new Thread(new ThreadStart(WriteThread));
Thread t1 = new Thread(new ThreadStart(ReadThread10));
Thread t2 = new Thread(new ThreadStart(ReadThread20));
t0.IsBackground = true;
t1.IsBackground = true;
t2.IsBackground = true;
t0.Start();
t1.Start();
t2.Start();
As you can see, the two read threads start immediately after the write thread starts. The following is the code executed by the two read threads and the write thread.
public void WriteThread()
{
    Thread.Sleep(1000);
    m_x = 3;
}
public void ReadThread10()
{
    int a = 10;
    for (int y = 0; y < 5; y++)
    {
        string s = "ReadThread10";
        s = s + " # Multiplier = ";
        s = s + Convert.ToString(a) + " # ";
        s = s + a * m_x;
        listBox1.Items.Add(s);
        Thread.Sleep(1000);
    }
}
public void ReadThread20()
{
    int a = 20;
    for (int y = 0; y < 5; y++)
    {
        string s = "ReadThread20";
        s = s + " # Multiplier = ";
        s = s + Convert.ToString(a) + " # ";
        s = s + a * m_x;
        listBox1.Items.Add(s);
        Thread.Sleep(1000);
    }
}
The result of the run is as follows. From these results we can clearly see that we did not get the results we expected: the read threads ran before the write thread had set the value, which is exactly the thing we are trying to avoid.
Case 2 - Synchronization [One WriteThread - Many ReadThreads]
Below I will use ManualResetEvent to solve the problem encountered above and synchronize the threads. The only difference is that we use safe wrappers when starting the read threads and the write thread.
Thread t0 = new Thread(new ThreadStart(SafeWriteThread));
Thread t1 = new Thread(new ThreadStart(SafeReadThread10));
Thread t2 = new Thread(new ThreadStart(SafeReadThread20));
t0.IsBackground = true;
t1.IsBackground = true;
t2.IsBackground = true;
t0.Start();
t1.Start();
t2.Start();
Add a ManualResetEvent:
m_mre = new ManualResetEvent(false);
Take a look at the code of SafeWriteThread:
public void SafeWriteThread()
{
    m_mre.Reset();
    WriteThread();
    m_mre.Set();
}
Reset sets the state of the ManualResetEvent to nonsignaled, which means the event has not yet happened. Then we call the WriteThread method. We could actually skip the Reset, because we already set the state to nonsignaled in the ManualResetEvent constructor. Once the WriteThread call returns, we call the Set method to set the state of the ManualResetEvent to signaled. Now let's look at the two SafeReadThread methods:
public void SafeReadThread10()
{
    m_mre.WaitOne();
    ReadThread10();
}
public void SafeReadThread20()
{
    m_mre.WaitOne();
    ReadThread20();
}
The WaitOne method blocks the current thread until the state of the ManualResetEvent is set to signaled. Here, both of our read threads block until SafeWriteThread completes its task and calls the Set method. This way we ensure that the two read threads run only after the write thread has finished accessing the shared resource. Below is the result of the run.
Case 3 - Synchronization [Many WriteThreads - Many ReadThreads]
Below we will simulate a more complicated situation. In the following program there are multiple write threads and multiple read threads; the read threads may access the shared resource only after all the write threads have completed their tasks. In a real case the write threads might run in parallel, but for the sake of simplicity I impose an order on the write threads: the second write thread starts only after the previous write thread has completed. Here I add another ManualResetEvent object and an array of ManualResetEvent objects:
public ManualResetEvent m_mreB;
public ManualResetEvent[] m_mre_array;
Add initialization code:
m_mreB = new ManualResetEvent(false);
m_mre_array = new ManualResetEvent[2];
m_mre_array[0] = m_mre;
m_mre_array[1] = m_mreB;
Start four threads:
Thread t0 = new Thread(new ThreadStart(SafeWriteThread));
Thread t0b = new Thread(new ThreadStart(SafeWriteThreadB));
Thread t1 = new Thread(new ThreadStart(SafeReadThread10B));
Thread t2 = new Thread(new ThreadStart(SafeReadThread20B));
t0.IsBackground = true;
t0b.IsBackground = true;
t1.IsBackground = true;
t2.IsBackground = true;
t0.Start();
t0b.Start();
t1.Start();
t2.Start();
There are two write threads and two read threads; let's take a look at the write threads' execution:
public void SafeWriteThread()
{
    m_mre.Reset();
    WriteThread();
    m_mre.Set();
}
public void SafeWriteThreadB()
{
    m_mreB.Reset();
    m_mre.WaitOne();
    Thread.Sleep(1000);
    m_x = 3;
    m_mreB.Set();
}
I used another event object for the second write thread, which waits for the first write thread to finish its work before starting its own.
public void SafeReadThread10B()
{
    WaitHandle.WaitAll(m_mre_array);
    ReadThread10();
}
public void SafeReadThread20B()
{
    WaitHandle.WaitAll(m_mre_array);
    ReadThread20();
}
Here we use the WaitAll method, a static method of ManualResetEvent's base class WaitHandle, passing it the ManualResetEvent array defined earlier. It blocks the current thread until every ManualResetEvent object in the array has its state set to signaled — that is, it waits for all the write threads to complete their tasks.