CLR thread pool


Microsoft has always tried to improve the performance of its platforms and applications. Years ago, Microsoft studied how application developers used threads to see what could be done to make threads more useful. The study produced one very important discovery: developers frequently create a new thread to perform a task and then, when the task completes, terminate the thread.

This pattern is extremely common in server applications: a client sends a request, the server creates a thread to handle that request, and when the request has been serviced, the thread is terminated. Compared with processes, threads are faster to create and destroy and consume fewer operating system resources. But creating and destroying threads is, of course, not free.

To create a thread, a kernel object must be allocated and initialized, the thread's stack space must be allocated and initialized, and Windows® sends a DLL_THREAD_ATTACH notification to every DLL in the process, causing pages to be faulted in from disk so the notification code can execute. When a thread terminates, a DLL_THREAD_DETACH notification is sent to every DLL, the thread's stack is freed, and the kernel object is freed (if its usage count drops to 0). So much of the overhead associated with creating and destroying a thread has nothing to do with the work the thread was created to perform.

The Birth of the Thread Pool

This research prompted Microsoft to implement a thread pool, which first appeared in Windows 2000. When the Microsoft® .NET Framework team designed and built the common language runtime (CLR), they decided to implement a thread pool inside the CLR itself. That way, any managed application can take advantage of the thread pool even when running on a version of Windows that predates Windows 2000 (such as Windows 98).

When the CLR initializes, the thread pool contains no threads. When an application wants a thread to perform a task, it should ask the thread pool to perform the task; the pool then creates an initial thread. This new thread is initialized just like any other thread, but when the task completes, the thread does not destroy itself. Instead, it returns to the pool in a suspended state. If the application issues another request to the pool, the suspended thread wakes up and performs the task; no new thread is created. This saves a lot of overhead. As long as the application queues tasks to the pool no faster than one thread can process them, the same thread can be reused over and over, saving considerable overhead over the lifetime of the application.

If, however, the application queues tasks faster than a single thread can process them, the thread pool creates additional threads. Creating a new thread does incur some overhead, of course, but the application is likely to need only a few threads over its entire lifetime to process all the tasks thrown at it. So, on the whole, using the thread pool improves the application's performance.

You may now be wondering what happens if the pool contains many threads and the application's workload drops off. In that case, the pool would hold a number of long-suspended threads, wasting operating system resources. Microsoft thought about this too. When a pool thread suspends itself, it waits for 40 seconds. If no task arrives within those 40 seconds, the thread wakes up and destroys itself, releasing all the operating system resources it was using (stack, kernel object, and so on). Waking up and self-destructing is unlikely to hurt the application's performance, because the application can't have much work to do; otherwise the thread would be executing a task. By the way, although I said that a pool thread wakes itself after 40 seconds, this value is not documented and is subject to change.

A wonderful feature of the thread pool is that it is heuristic. If your application needs to perform many tasks, the pool creates more threads. If your application's workload gradually diminishes, the pool threads kill themselves off. The pool's algorithm ensures that it contains only as many threads as the workload requires! I hope you now understand the basic idea behind the thread pool and the performance benefits it can offer. Now let's look at some code that uses it. First, you should know that the thread pool offers four features:

• Calling methods asynchronously
• Calling methods at timed intervals
• Calling methods when a single kernel object becomes signaled
• Calling methods when asynchronous I/O requests complete

The first three features are extremely useful, and I'll explain them in this column. Application developers rarely use the fourth feature, so I won't cover it here; perhaps I'll address it in a future column.

Feature 1: Calling Methods Asynchronously

If your application has code that creates a new thread to perform a task, I suggest you replace it with code that queues the task to the thread pool. In fact, you'll usually find it easier to have the thread pool perform a task than to have a new, dedicated thread perform it.

To queue a task to the thread pool, you use the ThreadPool class defined in the System.Threading namespace. The ThreadPool class offers only static methods; you cannot construct an instance of it. To have a pool thread perform a task, your code must call one of ThreadPool's overloaded QueueUserWorkItem methods, shown here:

public static Boolean QueueUserWorkItem(WaitCallback wc, Object state);

public static Boolean QueueUserWorkItem(WaitCallback wc);

These methods queue a "work item" (and optional state data) to a thread in the pool and return immediately. A work item is simply a method (identified by the wc parameter) that will be called and passed a single parameter: state (the state data). The version of QueueUserWorkItem without a state parameter passes null to the callback method. Eventually, some thread in the pool will call your method, processing the work item. The callback method you write must match the System.Threading.WaitCallback delegate type, which is defined as follows:

public delegate void WaitCallback(Object state);

Note that you never call any method to create a thread yourself; the CLR's thread pool automatically creates a thread if it needs to, and reuses an existing thread if it can. Also, the pool thread is not destroyed immediately after it processes the callback method; it returns to the pool, ready to process any other work items in the queue. Using QueueUserWorkItem makes your application more efficient because you don't create and destroy a thread for each client request.

The code in Figure 1 shows how to have a thread pool thread call a method asynchronously.
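Figure 1 itself is not reproduced in this repost. The following is a minimal sketch of what such code might look like; the method name ComputeBoundOp and the state value 5 are my own illustrative choices, not taken from the original figure.

```csharp
using System;
using System.Threading;

class App {
    // Hypothetical callback; it must match the WaitCallback
    // delegate signature: void (Object state).
    static void ComputeBoundOp(Object state) {
        Console.WriteLine("In ComputeBoundOp: state={0}", state);
    }

    static void Main() {
        // Queue the work item; some pool thread will eventually
        // call ComputeBoundOp, passing it the state object (5).
        ThreadPool.QueueUserWorkItem(new WaitCallback(ComputeBoundOp), 5);

        // Give the pool thread a chance to run before the process exits.
        // (A real program would synchronize properly instead of sleeping.)
        Thread.Sleep(1000);
        Console.WriteLine("Main thread exiting.");
    }
}
```

Note that Main never creates a thread explicitly; the pool supplies (or reuses) one, exactly as described above.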

Feature 2: Calling Methods at Timed Intervals

If your application needs to perform a task at a certain time, or needs to call a method periodically, the thread pool is again your best choice. The System.Threading namespace defines the Timer class. When you construct an instance of the Timer class, you are telling the thread pool that you want one of your methods called back at a time you specify. The Timer class offers four constructors:

public Timer(TimerCallback callback, Object state, Int32 dueTime, Int32 period);

public Timer(TimerCallback callback, Object state, UInt32 dueTime, UInt32 period);

public Timer(TimerCallback callback, Object state, Int64 dueTime, Int64 period);

public Timer(TimerCallback callback, Object state, TimeSpan dueTime, TimeSpan period);

All four constructors construct the Timer object identically. The callback parameter identifies the method that you want called back by a thread pool thread. The callback method you write must match the System.Threading.TimerCallback delegate type, which is defined as follows:

public delegate void TimerCallback(Object state);

The constructor's state parameter lets you pass state data to the callback method; you can pass null if you have no state data to pass. The dueTime parameter tells the thread pool how many milliseconds to wait before calling your callback method for the first time. You can specify the milliseconds using a signed or unsigned 32-bit value, a signed 64-bit value, or a TimeSpan value. If you want the callback method called immediately, specify 0 for dueTime. The last parameter, period, lets you specify how long, in milliseconds, to wait between each successive call. If you pass 0 for this parameter, the thread pool will call the callback method just once.

After the Timer object is constructed, the thread pool knows what to do and monitors the time automatically. However, the Timer class also offers a few additional methods that allow you to communicate with the thread pool to change when (or whether) your method should be called back. Specifically, the Timer class offers several Change and Dispose methods:

public Boolean Change(Int32 dueTime, Int32 period);

public Boolean Change(UInt32 dueTime, UInt32 period);

public Boolean Change(Int64 dueTime, Int64 period);

public Boolean Change(TimeSpan dueTime, TimeSpan period);

public void Dispose();

public Boolean Dispose(WaitHandle notifyObject);

The Change methods let you change the Timer object's dueTime and period. The Dispose method lets you cancel all callbacks entirely and, optionally, have the kernel object identified by the notifyObject parameter signaled once all pending callbacks have completed.

The code in Figure 2 shows how to have a thread pool thread call a method immediately and then once every 2000 milliseconds (two seconds) thereafter.
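Figure 2 is likewise not reproduced here. A minimal sketch of such a timer follows; the callback name CheckStatus, the tick counter, and the five-second run time are my own illustrative choices.

```csharp
using System;
using System.Threading;

class TimerDemo {
    static int s_count = 0;

    // Hypothetical callback; it must match the TimerCallback
    // delegate signature: void (Object state).
    static void CheckStatus(Object state) {
        Console.WriteLine("Tick {0}", Interlocked.Increment(ref s_count));
    }

    static void Main() {
        // First callback immediately (dueTime = 0), then every 2000 ms.
        Timer t = new Timer(new TimerCallback(CheckStatus), null, 0, 2000);

        // Let the timer fire a few times (roughly at 0s, 2s, and 4s).
        Thread.Sleep(5000);

        // Cancel further callbacks and release the timer's resources.
        t.Dispose();
        Console.WriteLine("Timer disposed.");
    }
}
```

A longer-lived program might instead keep the Timer around and call Change to adjust dueTime and period as conditions change.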

Feature 3: Calling Methods When a Single Kernel Object Becomes Signaled

While doing their performance research, Microsoft's researchers discovered that many applications spawn threads just to wait for a single kernel object to become signaled. Once the object is signaled, the thread posts a notification to another thread and then loops back, waiting for the object to be signaled again. Some developers even write code in which several threads each wait on a single object. This is an enormous waste of system resources. So if your application currently has threads that wait for a single kernel object to become signaled, the thread pool is again the best resource for improving your application's performance.

To have a thread pool thread call your callback method when a kernel object becomes signaled, you once again use static methods defined in the System.Threading.ThreadPool class. Your code must call one of the overloaded RegisterWaitForSingleObject methods, shown in Figure 3. When you call one of these methods, the h parameter identifies the kernel object you want the thread pool to wait on. Since this parameter is typed as the abstract base class System.Threading.WaitHandle, you can specify any class derived from it. Specifically, you can pass a reference to an AutoResetEvent, ManualResetEvent, or Mutex object. The second parameter, callback, identifies the method you want the thread pool thread to call. The callback method you implement must match the System.Threading.WaitOrTimerCallback delegate type, which is defined as shown in the following line of code:

public delegate void WaitOrTimerCallback(Object state, Boolean timedOut);

The third parameter, state, lets you specify state data that should be passed to the callback method; pass null if you have no special state data. The fourth parameter, milliseconds, lets you tell the thread pool how long to wait for the kernel object to become signaled. You will usually pass -1 here to indicate an infinite timeout. If the last parameter, executeOnlyOnce, is true, a thread pool thread will execute the callback method just once. But if executeOnlyOnce is false, a thread pool thread will execute the callback method every time the kernel object becomes signaled. This is most useful with an AutoResetEvent object.

When the callback method is called, it is passed the state data and a Boolean value, timedOut. If timedOut is false, the method knows it is being called because the kernel object became signaled. If timedOut is true, the method knows it is being called because the kernel object did not become signaled within the specified time. The callback method should perform whatever action is appropriate in each case.

In the prototypes shown earlier, you'll notice that the RegisterWaitForSingleObject method returns a RegisteredWaitHandle object. This object identifies the kernel object that the thread pool is waiting on. If, for some reason, your application wants to tell the thread pool to stop monitoring the registered wait handle, it can call RegisteredWaitHandle's Unregister method:

public Boolean Unregister(WaitHandle waitObject);

The waitObject parameter indicates how you want to be signaled when all queued work items have executed. Pass null for this parameter if you don't want to be signaled. If you pass a valid reference to a WaitHandle-derived object, the thread pool will signal that object once all pending work items for the registered wait handle have executed.

The code in Figure 4 shows how to have a thread pool thread call a method when a kernel object becomes signaled.
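Figure 4 is not reproduced in this repost either. The sketch below shows one plausible shape for such code, using an AutoResetEvent with executeOnlyOnce set to false; the callback name EventSignaled and the "some state" string are my own illustrative choices.

```csharp
using System;
using System.Threading;

class WaitDemo {
    // Hypothetical callback; it must match the WaitOrTimerCallback
    // delegate signature: void (Object state, Boolean timedOut).
    static void EventSignaled(Object state, Boolean timedOut) {
        if (timedOut)
            Console.WriteLine("Timed out; state={0}", state);
        else
            Console.WriteLine("Event signaled; state={0}", state);
    }

    static void Main() {
        AutoResetEvent are = new AutoResetEvent(false);

        // Ask the pool to call EventSignaled every time 'are' becomes
        // signaled (executeOnlyOnce = false), with an infinite timeout (-1).
        RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
            are, new WaitOrTimerCallback(EventSignaled), "some state", -1, false);

        // Signal the event twice; the callback should run once per signal,
        // since the AutoResetEvent resets after each satisfied wait.
        are.Set();
        Thread.Sleep(250);
        are.Set();
        Thread.Sleep(250);

        // Stop monitoring; pass null since we don't need a completion signal.
        rwh.Unregister(null);
        Console.WriteLine("Done.");
    }
}
```

No dedicated waiting thread exists anywhere in this program; the pool performs the wait on the application's behalf.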

Summary

In this column, I explained the need for a thread pool and described the various features offered by the CLR's thread pool. You should now appreciate the value the thread pool offers: it can improve your application's performance and simplify your code.

You can send your questions and comments to Jeff at dot-net@microsoft.com.

