This article discusses thread synchronization in Windows application-level programming. In practice we often run into synchronization issues, for example when a multithreaded program accesses a shared resource. If all the threads only read the resource, the problem discussed below does not arise. But if one thread modifies the resource while other threads are reading or writing it, the resource can end up in an inconsistent, unpredictable state. To avoid this and preserve the integrity of the resource, we use thread synchronization. Windows provides kernel objects such as the event (Event), semaphore (Semaphore), and mutex (Mutex) to synchronize threads, and also provides a user-mode mechanism, the critical section (CRITICAL_SECTION). Both approaches are discussed below.
Kernel-object mode: The kernel objects that Windows can use for thread synchronization are those that can be signaled, called dispatcher objects in Windows. They include processes, threads, events, semaphores, mutexes, timers, and so on. A thread acquires the right to use such an object with WaitForSingleObject() or WaitForMultipleObjects(). When the object is in the signaled state, the call returns immediately and the thread owns the object. When the object is non-signaled, i.e. occupied by another thread, the waiting thread is placed on the kernel object's wait queue; when the object becomes signaled, the first thread in the queue acquires it, is moved by the kernel to the thread ready queue, and is scheduled for CPU time again. After a thread is done with the object, it must release it so that other threads waiting on the object can run. The APIs for operating on these objects are described in detail in MSDN and are not repeated here. Let us instead look at how the Windows kernel implements this:
Related structures:
typedef struct _DISPATCHER_HEADER {
    UCHAR Type;
    UCHAR Absolute;
    UCHAR Size;
    UCHAR Inserted;
    LONG SignalState;
    LIST_ENTRY WaitListHead;
} DISPATCHER_HEADER;
typedef struct _KWAIT_BLOCK {
    LIST_ENTRY WaitListEntry;
    struct _KTHREAD *RESTRICTED_POINTER Thread;
    PVOID Object;
    struct _KWAIT_BLOCK *RESTRICTED_POINTER NextWaitBlock;
    USHORT WaitKey;
    USHORT WaitType;
} KWAIT_BLOCK, *PKWAIT_BLOCK, *RESTRICTED_POINTER PRKWAIT_BLOCK;
(Note: these two structures are declared in NTDDK.H, which is public.)
In figure (1) below, thread 1 is waiting on object B, and thread 2 is waiting on both objects A and B. If object A becomes signaled, the kernel detects that thread 2 is waiting on it, but since thread 2 is still waiting on object B, thread 2 cannot yet be placed on the thread ready queue to get CPU time again. If object B becomes signaled, thread 1, which is waiting on no other object, is placed on the ready queue by the kernel and executed; thread 2 must then wait for thread 1 to release object B before it can be scheduled. The kernel does not treat all kernel objects the same way when processing the threads waiting on them: for kernel objects such as processes and threads, when the object becomes signaled the kernel grants the right of use to all waiting threads at once. Readers who want to learn more about this can consult "Inside Windows 2000".
User mode: Windows also provides the critical section (CRITICAL_SECTION), a user-mode thread synchronization mechanism. It is faster than using a kernel object, because every acquisition of a kernel object forces the system to switch from user mode to kernel mode, which costs a significant amount of time. The critical section, by contrast, performs a simple check in user mode to see whether it is already held by another thread, saving the user/kernel-mode switch that the kernel objects require. When the amount of code to be protected is not very large, the critical section is a good solution: it improves the efficiency of the code and reduces contention between threads. When a conflict occurs, i.e. a thread detects that the critical section is held by another thread, the thread enters a wait state. This wait has a timeout: if the timeout elapses and the thread still has not acquired the critical section, an exception is raised. So take care to release the critical section once the protected code has finished. The timeout value is recorded in the CriticalSectionTimeout value under the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager key of the registry. The default is 2,592,000 seconds, about 30 days. We can adjust the timeout by changing this value; since it is system-wide, it is recommended not to set it below about 3 seconds, to avoid affecting other applications in the system that legitimately wait on a critical section for more than 3 seconds. The APIs for operating on critical sections are described in detail in MSDN and are not repeated here. Let us instead discuss the principle behind the implementation of thread synchronization:
The operating system's ability to synchronize threads ultimately rests on atomic operations. An atomic operation cannot be interrupted; it is normally provided by the processor architecture, i.e. supported in hardware. Below, imitating two Linux kernel functions, the author has written two Windows functions for the x86 architecture: TestAndSetBit() atomically sets the specified bit to 1, and TestAndClearBit() atomically clears the specified bit to 0 — that is, lock and unlock operations. Each returns a nonzero value if the bit was already set.
static int TestAndSetBit(int nOrder, volatile void *var)
{
    int nOldBit = 0;
    _asm {
        mov eax, var
        mov ebx, nOrder
        lock bts [eax], ebx    ; atomically set the bit; its old value goes into CF
        mov eax, nOldBit       ; eax = 0 (mov does not affect CF)
        sbb nOldBit, eax       ; nOldBit = 0 - 0 - CF: nonzero iff the bit was set
    }
    return nOldBit;
}
static int TestAndClearBit(int nOrder, volatile void *var)
{
    int nOldBit = 0;
    _asm {
        mov eax, var
        mov ebx, nOrder
        lock btr [eax], ebx    ; atomically clear the bit; its old value goes into CF
        mov eax, nOldBit       ; eax = 0 (mov does not affect CF)
        sbb nOldBit, eax       ; nOldBit = 0 - 0 - CF: nonzero iff the bit was set
    }
    return nOldBit;
}
(Note: the LOCK prefix locks the bus on multiprocessor systems. BTS is the x86 bit test-and-set instruction; BTR is the x86 bit test-and-reset instruction.)
Below, a short piece of code uses the functions above to implement thread synchronization:
DWORD g_dwLock = 0;
...
while (TestAndSetBit(0, &g_dwLock))
    Sleep(1000);
/*
 * code to be protected
 */
TestAndClearBit(0, &g_dwLock);
This is thread synchronization applied in user mode. When a conflict occurs, the current thread sleeps so that the thread holding the lock can run the protected code and unlock when it is done. When the waiting thread wakes up and tries again, it takes the lock and runs. In kernel mode the principle is the same; only the way the conflict is handled differs.
Which synchronization method to use in a given program depends on the program's specific needs, and the author will not enumerate the cases here. Finally, the author apologizes: this article is somewhat disorganized, some places may be unclear and some may be wrong; corrections are welcome.
Bibliography:
"Inside Windows 2000"
"Windows Kernel Programming"
Colorknight
2003/4/3