Summary: In the design of business solutions, the efficiency with which a server processes tasks is an important criterion for judging the quality of a design. Using multi-threading to process tasks concurrently is the primary means of improving server efficiency, but frequent thread creation, destruction, and task allocation also reduce system efficiency. This article presents the design of a general-purpose thread pool whose parameters can be tuned to the characteristics of the tasks handled by a particular server, so as to maximize system performance.
Keywords: thread pool, multi-threading, task, virtual function, exception
Overview
In the design of business solutions, the efficiency with which a server processes tasks often determines the success or failure of the design. Multi-threaded task processing is the main means of improving server efficiency: it raises the utilization of server resources and allows tasks to be processed concurrently. However, if the tasks a server processes are lightweight and arrive at a high rate, threads will be created and destroyed very frequently, and the time the system spends creating and destroying threads will account for a considerable proportion of the total, which in turn reduces system efficiency. Thread pool technology reduces the performance loss caused by frequent thread creation and destruction.
A thread pool is a technique for creating threads in advance. Before any tasks arrive, the pool creates a certain number (N1) of threads and places them in an idle queue. These threads are suspended; they consume no CPU time and occupy only a small amount of memory. When a task arrives, the pool selects an idle thread and hands the task to it for execution. When all N1 threads are busy processing tasks, the pool automatically creates a certain number of new threads to handle additional tasks. When the system becomes idle and most threads are suspended, the pool automatically destroys some threads and reclaims system resources.
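For illustration, the following is a minimal sketch of how such a pool might be used from client code. The class name ThreadPool, its constructor parameter, the AddTask method, and the MyTask class are hypothetical names chosen for this sketch; they are not part of the design described in this article.
// Hypothetical client-side usage of a pre-created thread pool (all names illustrative only)
ThreadPool pool(N1);                 // pre-create N1 suspended worker threads
Task* pTask = new MyTask(/* ... */); // MyTask derives from the Task interface defined below
pool.AddTask(pTask);                 // the pool picks an idle thread and runs the task in it
// the pool deletes the task once Task::run() reports completion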
Designing a general-purpose thread pool is therefore not only necessary; the design must also consider portability, so as to reduce repeated development. The main points considered in the design are:
the versatility of task objects; the thread creation and destruction policy; and the task allocation strategy.
Analysis and design
1. The versatility of the task object
Different business solutions have their own unique task-processing methods, and the way tasks are divided varies widely. To achieve a degree of generality when handling task objects, the design of the task object must be completely independent of the processing logic of the actual task. From the point of view of task execution, a task is simply a processing flow that is executed once or several times, so the task interface can be defined as:
class Task
{
public:
    Task();
    virtual ~Task();
    virtual bool run() = 0;
};
The Task class is the base class of all task classes. The pure virtual function run() is the entry point of the task's processing flow; when a worker thread executes the task, it starts the processing flow from here. To design a new task, you only need to inherit from the Task interface, and the new task can then be put into the thread pool.
Creation, execution, and destruction of tasks:
(1) A task is created when it is needed. The task's creator dynamically creates a concrete task object with the new operator and passes it to the thread pool; the pool then automatically assigns a thread to execute the task.
(2) Whether a task has finished executing is determined by the task itself. Since the tasks to be executed are unknown in advance, completion cannot be predicted by the pool and must be reported by the task itself. This policy is implemented through the return value of Task::run(). When a worker thread executes a task, a return value of true means the task has finished, and the thread destroys the task with the delete operator; a return value of false means the task is not yet complete, and the thread continues to execute it.
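As an illustration of this policy, the following is a minimal sketch of a concrete task class. The class name CountdownTask and its members are hypothetical and serve only to show how run() reports completion through its return value.
// Hypothetical example task: run() returns false until the work is done, then true
class CountdownTask : public Task
{
public:
    CountdownTask(int steps) : m_remaining(steps) {}   // initialize resources in the constructor
    virtual ~CountdownTask() {}                        // release resources in the destructor
    virtual bool run()
    {
        DoOneStep();                 // hypothetical helper: one slice of the actual work
        --m_remaining;
        return m_remaining <= 0;     // true: finished, the worker thread will delete this task
    }
private:
    int m_remaining;
    void DoOneStep() { /* ... */ }
};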
With this strategy, when designing a new task class you only need to initialize the necessary resources in its constructor, release them in its destructor, and implement the main processing logic in the run() method; the new task class can then be executed in the thread pool.
2. Thread creation and destruction
The number of threads in the thread pool should be determined according to the demands of the tasks.
When the pool is first created, it already contains a certain number (N1) of threads, so that new tasks can be executed promptly. For example, when a client sends a login request to the server, such a request usually causes the server to create several related tasks; that is, one interaction between a client and the server typically produces a certain number of tasks. Based on the services a particular server handles, one can estimate the average number of tasks, N2, generated by a single business transaction. N1 should then be an integer multiple of N2, N1 = k1 × N2 (k1 a positive integer), which reduces the probability that threads must be created for lack of idle threads while the server is in the initial stage of business processing.
When all threads in the pool are busy, the pool creates a number of new threads, denoted N3. From the above analysis, in order to reduce the probability of having to create threads for lack of idle threads, N3 should also be an integer multiple of N2, N3 = k2 × N2.
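The relationship between these parameters can be captured in a small configuration structure. The following is a minimal sketch under the assumptions of this article; the structure and field names are hypothetical.
// Hypothetical pool configuration tying N1 and N3 to the per-transaction task count N2
struct ThreadPoolConfig
{
    int n2;   // average number of tasks generated by one business transaction
    int k1;   // multiplier for the initial thread count
    int k2;   // multiplier for each batch of newly created threads
    int initialThreads() const { return k1 * n2; }   // N1 = k1 * N2
    int growthStep()     const { return k2 * n2; }   // N3 = k2 * N2
};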
When the server's workload decreases and a large number of threads become idle, some threads should be destroyed. Clearly, a timeout policy should be used here: when some threads have remained idle for longer than a time T, N4 of the idle threads are destroyed. To reduce the probability of having to create threads for lack of idle threads, N4 should also be an integer multiple of N2, N4 = k3 × N2. Of course, so that new tasks can still be handled promptly, N1 threads should be retained even when the server remains idle.
3. Task allocation strategy
During business processing there will be a wide variety of task objects, which also differ in the system resources they occupy. Regardless of a task's space complexity, from the point of view of the thread executing it, the main concern is its time complexity.
When the thread pool receives a task, it first looks for an idle thread, passes the new task to it, executes the task, and finally deletes the task and returns the thread to the idle queue. Finding an idle thread, passing in the task, and the final cleanup are all overhead beyond the task itself. If most of the tasks being executed are lightweight, the resource waste caused by this overhead becomes very noticeable. To solve this problem, N5 lightweight tasks can be passed to a single thread at once, as sketched below. The thread executes the N5 lightweight tasks in sequence; since they are completed in a short time, this does not affect the responsiveness of task processing. Obviously, N5 ≥ 1.
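The following is a minimal sketch of this batching idea. The Thread type and the HasPendingTask()/TakePendingTask() helpers are hypothetical; GetIdleThread(), AddTaskToThread(), and TaskList follow the naming of the pseudocode in the implementation section below.
// Hypothetical sketch: hand a batch of N5 lightweight tasks to one idle worker thread
TaskList tl;                                   // task container used by the pool
for (int i = 0; i < N5 && HasPendingTask(); ++i)
    tl.push_back(TakePendingTask());           // collect up to N5 pending lightweight tasks
Thread* pThread = GetIdleThread();             // same call as in the main loop below
if (pThread != NULL)
    AddTaskToThread(pThread, tl);              // the worker runs each task in turn, then deletes it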
Implementation
Because of the length of the source code, it is not possible to list all of it; pseudocode is given here for thread creation and destruction in the pool, task allocation, and the task execution process.
(1) Main loop of task allocation in the thread pool (itself running in a thread)
Besides the task allocation algorithm, this loop also contains part of the thread creation and destruction logic.
for (;;) {
    pThread = GetIdleThread();            // check the idle thread queue
    if (pThread != NULL) {
        if (CheckNewTask()) {             // there is a new task
            TaskList tl;
            GetTask(tl);                  // get a certain number of tasks
            AddTaskToThread(pThread, tl); // hand the tasks to the idle thread
            continue;                     // continue the loop
        }
    }
    if (pThread == NULL && nThread < THREAD_MAX) {
        IncrIdleThread();                 // all threads are busy: create a number of new threads
        continue;                         // continue the loop
    }
    // no tasks to process, or the thread count has reached its upper limit: wait with a timeout
    if (WaitForTaskOrThreadTimeout()) {
        if (IncrIdleTime() > IDLE_MAX) {
            // the system has been idle for some time; destroy a certain number of idle threads
            DecrIdleThread();
        }
    } else
        return 0;                         // terminate this thread
}
(2) Task execution process of a worker thread
for (;;) {
    // check whether the task queue has a task to run
    if (!CheckTaskQueue()) {              // there is no task in the queue
        pPool->OnTaskIdle(this);          // notify the thread pool that this thread is idle
        if (WaitForTask())
            continue;                     // continue the loop
        else
            return 0;                     // terminate this thread
    } else {                              // there are tasks to run
        pTask = GetTask();                // get a new task
        try {
            while (!pTask->run()) {
                // the loop body is empty; keep running until the task reports completion
            }
        } catch (...) {
            WriteLog(...);                // an exception occurred while executing the task; record it in the log
        }
        delete pTask;                     // the task has finished; delete it
    }
}
At the core of task execution, a try-catch block is used to capture exceptions. Although exception handling has a slight impact on speed, the tasks to be executed are unknown, so there is no guarantee that every task will execute normally, and it is absolutely unacceptable for the server process to terminate because of an exception in a single task. Catching exceptions not only ensures that the server process keeps running smoothly, but also writes the exception information to a log file, making it possible to track down errors.
Performance Testing
To verify the performance of this thread pool and to analyze the effect of different parameter configurations, the pool was exercised with a test program. The test results are shown in Figure 1: the abscissa is the number of tasks; the ordinate is the time consumed, in seconds (s).
Parameter set 1: N2 = 1, N5 = 1; parameter set 2: N2 = 5, N5 = 1; parameter set 3: N2 = 5, N5 = 5.
In the tests, the total number of threads in the system was limited to 500, and each task took 5 ms. Only N2 and N5 were varied: N2 is the number of tasks the system adds to the thread pool at a time, and N5 is the number of tasks handed to each thread at a time. When the number of tasks is relatively small, the resource consumption of the three configurations is roughly equal. However, when the number of tasks is very large, parameter set 1 is slightly more efficient than parameter set 2, and the execution efficiency of parameter set 3 is almost twice that of the first two. Because the tasks are lightweight, changing N2 has little impact on system efficiency, while the effect of N5 is significant.
Conclusion
The tests show that simply using a thread pool in a server does not by itself guarantee that system performance will improve. The tasks of different systems have their own characteristics, so some key parameters of the pool must be further adjusted according to the characteristics of the server's tasks in order to maximize system efficiency. These parameters are the N1, N2, N3, N4, and N5 discussed in the analysis above.