Synchronization between processes and threads


This section requires some patience, because there are several ways to synchronize processes and threads and each serves a different purpose. This section covers synchronization using critical sections, mutexes, semaphores, and events.

Since processes and threads run concurrently, synchronizing access to shared data becomes a problem. Consider the following code:

int iCounter = 0; // global variable

DWORD ThreadA(void *pD)
{
    for (int i = 0; i < 100; i++)
    {
        int iCopy = iCounter;
        // Sleep(1000);
        iCopy++;
        // Sleep(1000);
        iCounter = iCopy;
    }
    return 0;
}

Now suppose two threads, ThreadA1 and ThreadA2, run this function at the same time. What is the value of iCounter afterwards? Is it 200? Not necessarily. If we uncomment the Sleep(1000) calls, the problem becomes easy to see: iCounter may be modified by the other thread before this thread has written its result back. The example merely magnifies what happens at the machine-code level: even the read/modify/write of a single variable inside the CPU can be interrupted partway through, and another thread can run in between.

If iCounter is read by the first thread, then written by the second thread, and then overwritten by the first thread, the second thread's update is lost and what it read was effectively wrong data. This is called a dirty read. The same problem applies to files and other shared resources.

So how can we avoid this problem? Suppose that before using iCounter a thread first asks the other threads: is anyone using it? If not, it can operate on the variable immediately; otherwise it must wait until the thread that currently controls the variable finishes with it and releases it. The modified pseudocode looks like this:

int iCounter = 0; // global variable

DWORD ThreadA(void *pD)
{
    for (int i = 0; i < 100; i++)
    {
        // ask to lock iCounter
        // wait until any other thread releases the lock
        // lock acquired
        {
            int iCopy = iCounter;
            // Sleep(1000);
            iCopy++;
            iCounter = iCopy;
        }
        // release the lock on iCounter
    }
    return 0;
}

Fortunately, the OS provides several kinds of synchronization objects and manages their locking and unlocking for us. All we need to do is create a synchronization object for each resource that requires synchronized access, request the lock before using the resource, and release it afterwards. Let's introduce the synchronization objects one by one.

Critical section: the critical section is the simplest synchronization object and can only be used within a single process. Its role is to guarantee that only one thread at a time can acquire it; in the example above we could use a critical section to do the synchronization. The related API functions are:

VOID InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
// create (initialize) the critical section

VOID DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
// delete the critical section

VOID EnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
// enter the critical section, equivalent to requesting the lock; if the critical section
// is being used by another thread, the function waits until that thread releases it

BOOL TryEnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
// enter the critical section, equivalent to requesting the lock; unlike EnterCriticalSection,
// if the critical section is in use by another thread the function returns FALSE immediately
// instead of waiting

VOID LeaveCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
// leave the critical section, equivalent to releasing the lock

The following example code demonstrates how to use a critical section to synchronize access to data:

// global variables
int iCounter = 0;
CRITICAL_SECTION criCounter;

DWORD ThreadA(void *pD)
{
    int iID = (int)pD;
    for (int i = 0; i < 8; i++)
    {
        EnterCriticalSection(&criCounter);
        int iCopy = iCounter;
        Sleep(100);
        iCounter = iCopy + 1;
        printf("thread %d: %d\n", iID, iCounter);
        LeaveCriticalSection(&criCounter);
    }
    return 0;
}

// in the main function
{
    // create the critical section
    InitializeCriticalSection(&criCounter);
    // create the threads
    HANDLE hThread[3];
    CWinThread *pT1 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)1);
    CWinThread *pT2 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)2);
    CWinThread *pT3 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)3);
    hThread[0] = pT1->m_hThread;
    hThread[1] = pT2->m_hThread;
    hThread[2] = pT3->m_hThread;
    // wait for the threads to end
    // (the usage of WaitForMultipleObjects is explained later)
    WaitForMultipleObjects(3, hThread, TRUE, INFINITE);
    // delete the critical section
    DeleteCriticalSection(&criCounter);
    printf("\nover\n");
}

Next, the mutex. A mutex plays much the same role as a critical section, but a mutex can be named, which means it can be used across processes. Creating a mutex requires more system resources than creating a critical section, so if the synchronization is only needed inside one process, using a critical section saves resources and is faster. Because a mutex is created with a name, once one process has created it, other processes and threads can open it by that name. The mutex-related API functions are described below.

Create a mutex:

HANDLE CreateMutex(
    LPSECURITY_ATTRIBUTES lpMutexAttributes, // security attributes
    BOOL bInitialOwner,   // initial state; if TRUE, the creating thread owns the mutex
                          // immediately without having to request it
    LPCTSTR lpName        // name; may be NULL, but then other threads/processes cannot open it by name
);

Open an existing mutex:

HANDLE OpenMutex(
    DWORD dwDesiredAccess,  // access rights
    BOOL bInheritHandle,    // whether the handle can be inherited
    LPCTSTR lpName          // name
);

Release ownership of the mutex; only the thread that currently owns the mutex may call this:

BOOL ReleaseMutex(   // effect is like LeaveCriticalSection
    HANDLE hMutex    // handle
);

Close the mutex handle:

BOOL CloseHandle(
    HANDLE hObject   // handle
);

You may ask why there is no function named EnterMutex that, like EnterCriticalSection, acquires ownership. There isn't one; ownership of a mutex is obtained with the wait functions instead:

DWORD WaitForSingleObject(
    HANDLE hHandle,        // handle of the object to wait on
    DWORD dwMilliseconds   // wait time in milliseconds; INFINITE means wait indefinitely
);

Return values:

WAIT_ABANDONED: the object is a mutex that was abandoned (its owning thread ended without releasing it), and it therefore became signaled
WAIT_OBJECT_0: ownership (the signaled state) was obtained
WAIT_TIMEOUT: the time specified by dwMilliseconds elapsed

After a thread calls WaitForSingleObject, if it does not obtain ownership it is suspended until either the timeout expires or ownership is obtained.

Now we need to look a little deeper. The word "object" in WaitForSingleObject refers to an object that has a signal state, and such an object is always in one of two states: signaled or non-signaled. "Waiting" means waiting for the object to become signaled. For a mutex, being owned (in use) means non-signaled, and being released means signaled. When the wait succeeds, WaitForSingleObject sets the mutex back to non-signaled, so other threads cannot obtain ownership and must keep waiting. WaitForSingleObject also queues waiters, guaranteeing that the thread that requested ownership first obtains it first. The following code demonstrates how to use a mutex for synchronization; it again increments a counter, and the output shows that the thread that issues its request first gets ownership first:

int iCounter = 0;

DWORD ThreadA(void *pD)
{
    int iID = (int)pD;
    // reopen the mutex inside the thread
    HANDLE hCounterIn = OpenMutex(MUTEX_ALL_ACCESS, FALSE, "sam sp 44");
    for (int i = 0; i < 8; i++)
    {
        printf("%d wait for object\n", iID);
        WaitForSingleObject(hCounterIn, INFINITE);
        int iCopy = iCounter;
        Sleep(100);
        iCounter = iCopy + 1;
        printf("\t\tthread %d: %d\n", iID, iCounter);
        ReleaseMutex(hCounterIn);
    }
    CloseHandle(hCounterIn);
    return 0;
}

// in the main function
{
    // create the mutex
    HANDLE hCounter = NULL;
    if ((hCounter = OpenMutex(MUTEX_ALL_ACCESS, FALSE, "sam sp 44")) == NULL)
    {
        // if no other process has created this mutex yet, create it
        hCounter = CreateMutex(NULL, FALSE, "sam sp 44");
    }
    // create the threads
    HANDLE hThread[3];
    CWinThread *pT1 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)1);
    CWinThread *pT2 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)2);
    CWinThread *pT3 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)3);
    hThread[0] = pT1->m_hThread;
    hThread[1] = pT2->m_hThread;
    hThread[2] = pT3->m_hThread;
    // wait for the threads to end
    WaitForMultipleObjects(3, hThread, TRUE, INFINITE);
    // close the handle
    CloseHandle(hCounter);
}

Here I did not use a global variable to hold the mutex handle. That is not because it cannot be done, but to demonstrate how another piece of code can open a mutex that has already been created elsewhere. Strictly speaking this example is logically a little off: iCounter is not shared across processes, so a mutex is not really needed and a critical section would do. But suppose a group of processes needs to use the same file, and we want to guarantee that only one process uses it at a time (relying only on the OS file access control would require much more error-handling code); then a named mutex is the right tool. Several mutexes can also be used together to schedule access to a set of resources.
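As a minimal sketch of that idea (the mutex name and file name below are invented for illustration, and error handling is reduced to the bare minimum), each cooperating process could guard its file access like this:

HANDLE hFileMutex = CreateMutex(NULL, FALSE, "MyAppSharedFileMutex"); // hypothetical name
if (hFileMutex != NULL)  // CreateMutex simply opens the existing mutex if another process created it first
{
    WaitForSingleObject(hFileMutex, INFINITE);  // wait until no other process is using the file
    FILE *fp = fopen("shared.dat", "a");        // hypothetical shared file
    if (fp != NULL)
    {
        fputs("one record written under the mutex\n", fp);
        fclose(fp);
    }
    ReleaseMutex(hFileMutex);                   // let the next process in
    CloseHandle(hFileMutex);
}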

Now let's look more closely at WaitForSingleObject. From the previous example we saw that WaitForSingleObject waits for an object to become signaled. Which kinds of objects have a signal state? Here is a partial list:

mutex, event, semaphore, job, process, thread, waitable timer, console input

Mutexes, semaphores, and events can be used across processes to synchronize data operations, while the other objects have nothing to do with data synchronization. For processes and threads, however, the object is non-signaled while the process or thread is running and becomes signaled when it exits, so we can use WaitForSingleObject to wait for a process or thread to finish. (Semaphores and events are covered below.) In the previous example we used WaitForMultipleObjects; as the name suggests, it works like WaitForSingleObject but waits for several objects to become signaled. The prototype is as follows:

DWORD WaitForMultipleObjects(

    DWORD nCount,             // number of objects to wait for
    CONST HANDLE *lpHandles,  // pointer to an array of object handles
    BOOL fWaitAll,            // wait mode: TRUE means return only when all objects are signaled,
                              // FALSE means return as soon as any one object becomes signaled
    DWORD dwMilliseconds      // timeout in milliseconds; INFINITE means wait indefinitely
);

Return values:

WAIT_OBJECT_0 to (WAIT_OBJECT_0 + nCount - 1): if fWaitAll is TRUE, all objects became signaled; if fWaitAll is FALSE, subtract WAIT_OBJECT_0 from the return value to get the array index of the object that became signaled.

WAIT_ABANDONED_0 to (WAIT_ABANDONED_0 + nCount - 1): as above, except that the object involved is an abandoned mutex that became signaled because its owning thread ended without releasing it; if fWaitAll is FALSE, subtract WAIT_ABANDONED_0 from the return value to get the index of that object.

WAIT_TIMEOUT: the specified time elapsed.

The following code from the previous example waits for the three thread objects to become signaled, that is, for all three threads to end.

HANDLE hThread[3];
CWinThread *pT1 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)1);
CWinThread *pT2 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)2);
CWinThread *pT3 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)3);
hThread[0] = pT1->m_hThread;
hThread[1] = pT2->m_hThread;
hThread[2] = pT3->m_hThread;
// wait for the threads to end
WaitForMultipleObjects(3, hThread, TRUE, INFINITE);

The same technique is also useful when a program launches a child process and needs to wait for that process to end.
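For example, a minimal sketch of launching a child process and waiting for it to exit (the command line "notepad.exe" is only a placeholder):

char szCmd[] = "notepad.exe";          // placeholder command line
STARTUPINFO si = { sizeof(si) };
PROCESS_INFORMATION pi;
if (CreateProcess(NULL, szCmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
{
    // the process handle is non-signaled while the process runs
    // and becomes signaled when the process exits
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}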

With a mutex we can make a resource exclusively owned, but some situations cannot be handled by a mutex alone. For example, suppose a customer has bought a database license that allows three concurrent connections, and your boss asks you to limit, based on the number of licenses purchased, how many threads/processes may perform database operations at the same time. A mutex cannot express this requirement, but a semaphore can: a semaphore is essentially a resource counter. The pseudocode for semaphore operations looks roughly like this:

Semaphore sem = 3;

DWORD ThreadA(void *)
{
    while (sem <= 0)
    {
        // equivalent to WaitForSingleObject
        wait ...
    }
    // sem > 0
    // lock the semaphore
    sem--;
    do work ...
    // release the semaphore
    sem++;
    return 0;
}

The semaphore has an initial value indicating how many processes/threads may enter. When the semaphore's count is greater than 0 it is signaled; when the count is 0 it is non-signaled, so it can be used with WaitForSingleObject. When WaitForSingleObject succeeds, the count is decreased by 1, and it increases again when the semaphore is released. The semaphore API functions are the following:

Create a semaphore:

HANDLE CreateSemaphore(
    LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, // security attributes; NULL means use the default security descriptor
    LONG lInitialCount,   // initial count
    LONG lMaximumCount,   // maximum count
    LPCTSTR lpName        // name
);

Open an existing semaphore:

HANDLE OpenSemaphore(
    DWORD dwDesiredAccess,  // access rights
    BOOL bInheritHandle,    // whether the handle can be inherited
    LPCTSTR lpName          // name
);

Release the semaphore:

BOOL ReleaseSemaphore(
    HANDLE hSemaphore,      // handle
    LONG lReleaseCount,     // amount to add to the count
    LPLONG lpPreviousCount  // receives the previous count; may be NULL
);

Close the semaphore:

BOOL CloseHandle(
    HANDLE hObject   // handle
);

As you can see, using a semaphore is very similar to using a mutex. The following code uses a semaphore with an initial count of 2 to guarantee that only two threads can work at the same time:

DWORD ThreadA(void *pD)
{
    int iID = (int)pD;
    // reopen the semaphore inside the thread
    HANDLE hCounterIn = OpenSemaphore(SEMAPHORE_ALL_ACCESS, FALSE, "sam sp 44");
    for (int i = 0; i < 3; i++)
    {
        printf("%d wait for object\n", iID);
        WaitForSingleObject(hCounterIn, INFINITE);
        printf("\t\tthread %d: do database access call\n", iID);
        Sleep(100);
        printf("\tthread %d: do database access call end\n", iID);
        ReleaseSemaphore(hCounterIn, 1, NULL);
    }
    CloseHandle(hCounterIn);
    return 0;
}

// in the main function
{
    // create the semaphore
    HANDLE hCounter = NULL;
    if ((hCounter = OpenSemaphore(SEMAPHORE_ALL_ACCESS, FALSE, "sam sp 44")) == NULL)
    {
        // if no other process has created this semaphore yet, create it
        hCounter = CreateSemaphore(NULL, 2, 2, "sam sp 44");
    }
    // create the threads
    HANDLE hThread[3];
    CWinThread *pT1 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)1);
    CWinThread *pT2 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)2);
    CWinThread *pT3 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)3);
    hThread[0] = pT1->m_hThread;
    hThread[1] = pT2->m_hThread;
    hThread[2] = pT3->m_hThread;
    // wait for the threads to end
    WaitForMultipleObjects(3, hThread, TRUE, INFINITE);
    // close the handle
    CloseHandle(hCounter);
}

A semaphore is sometimes used as a counter. In that case its initial count is usually set to 0; ReleaseSemaphore is called first to increase the count, and WaitForSingleObject is called to decrease it. Unfortunately there is no direct way to read the current count, but you can test whether the count is currently 0 by calling WaitForSingleObject with a wait time of 0.
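A small sketch of that test, assuming hCounter is a semaphore handle obtained as in the example above:

DWORD dwResult = WaitForSingleObject(hCounter, 0);  // 0 ms timeout: probe without blocking
if (dwResult == WAIT_OBJECT_0)
{
    // the count was greater than 0; the successful wait has just decremented it,
    // so give the slot back if we only wanted to test the count
    ReleaseSemaphore(hCounter, 1, NULL);
}
else if (dwResult == WAIT_TIMEOUT)
{
    // the count is currently 0
}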

Next comes the last synchronization object: the event. The semaphore and mutex described above control how a resource is allocated and used, whereas an event is used to notify other processes/threads that something has been completed. For example, suppose there are three threads, ThreadA, ThreadB, and ThreadC, and part of their work must be performed in order: a certain step of ThreadB may only run after a certain step of ThreadA, and a step of ThreadC may only run after that step of ThreadB. You might think the following code meets the requirement:

Requirement: B2 must execute after A1, and C3 after B2; assume each task takes one unit of time and that tasks in different threads may otherwise run concurrently.

Scheme 1:

DWORD ThreadA(void *)
{
    do something A1;
    create ThreadB;
    do something A2;
    do something A3;
}

DWORD ThreadB(void *)
{
    do something B1;
    do something B2;
    create ThreadC;
    do something B3;
}

DWORD ThreadC(void *)
{
    do something C1;
    do something C2;
    do something C3;
}

Scheme 2:

DWORD ThreadA(void *)
{
    do something A1;
    do something A2;
    do something A3;
}

DWORD ThreadB(void *)
{
    do something B1;
    wait for ThreadA to end;
    do something B2;
    do something B3;
}

DWORD ThreadC(void *)
{
    do something C1;
    do something C2;
    wait for ThreadB to end;
    do something C3;
}

main()
{
    create ThreadA;
    create ThreadB;
    create ThreadC;
}

Scheme 3:

DWORD ThreadA(void *)
{
    do something A1;
    release event1;
    do something A2;
    do something A3;
}

DWORD ThreadB(void *)
{
    do something B1;
    wait for event1 to be released;
    do something B2;
    release event2;
    do something B3;
}

DWORD ThreadC(void *)
{
    do something C1;
    do something C2;
    wait for event2 to be released;
    do something C3;
}

main()
{
    create ThreadA;
    create ThreadB;
    create ThreadC;
}

Compare the execution timelines of the three schemes (one row per time step):

         Scheme 1                        Scheme 2                        Scheme 3
Time     ThreadA  ThreadB  ThreadC       ThreadA  ThreadB  ThreadC       ThreadA  ThreadB  ThreadC
1        A1       -        -             A1       B1       C1            A1       B1       C1
2        A2       B1       -             A2       -        C2            A2       B2       C2
3        A3       B2       -             A3       -        -             A3       B3       C3
4        -        B3       C1            -        B2       -
5        -        -        C2            -        B3       -
6        -        -        C3            -        -        C3

As you can see, Scheme 3 finishes in the shortest time. Of course this example is extreme, but it shows that the event object is there to notify other processes/threads that a particular step has been completed; some orderings simply cannot be achieved by waiting for a whole thread or process in another process to end, as Scheme 2 tries to do. I also hope this example suggests a simple way of analyzing thread execution efficiency.
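As a sketch of what Scheme 3 looks like with real event objects (the event names and thread skeletons are illustrative; the two events are assumed to be created in main with CreateEvent(NULL, FALSE, FALSE, NULL) before the threads start, and error handling is omitted):

HANDLE g_hEvent1, g_hEvent2;   // assumed created in main as auto-reset, initially non-signaled

DWORD ThreadA(void *)
{
    /* do something A1 */
    SetEvent(g_hEvent1);                       // signal that A1 is done
    /* do something A2, A3 */
    return 0;
}

DWORD ThreadB(void *)
{
    /* do something B1 */
    WaitForSingleObject(g_hEvent1, INFINITE);  // wait until A1 is done
    /* do something B2 */
    SetEvent(g_hEvent2);                       // signal that B2 is done
    /* do something B3 */
    return 0;
}

DWORD ThreadC(void *)
{
    /* do something C1, C2 */
    WaitForSingleObject(g_hEvent2, INFINITE);  // wait until B2 is done
    /* do something C3 */
    return 0;
}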

An event object can be created in one of two modes. With auto-reset, once a thread waiting with WaitForSingleObject sees the event become signaled, the event automatically returns to the non-signaled state. With manual reset, the event's state does not change after a successful wait. For example, if several threads are all waiting for one thread to finish its work, we can use a manual-reset event and set it to signaled when that thread finishes; then all the waiting threads succeed, because the event is not reset automatically. The event-related API functions are as follows:

Create an event object:

HANDLE CreateEvent(
    LPSECURITY_ATTRIBUTES lpEventAttributes, // security attributes; NULL means use the default security descriptor
    BOOL bManualReset,   // manual reset or not
    BOOL bInitialState,  // whether the initial state is signaled
    LPCTSTR lpName       // name
);

Open an existing event object:

HANDLE OpenEvent(
    DWORD dwDesiredAccess,  // access rights
    BOOL bInheritHandle,    // whether the handle can be inherited
    LPCTSTR lpName          // name
);

Set the event to the non-signaled state:

BOOL ResetEvent(
    HANDLE hEvent   // handle
);

Set the event to the signaled state:

BOOL SetEvent(
    HANDLE hEvent   // handle
);

Close the event object:

BOOL CloseHandle(
    HANDLE hObject   // handle
);

The following code demonstrates the different behavior of auto-reset and manual-reset events:

DWORD ThreadA(void *pD)
{
    int iID = (int)pD;
    // reopen the event inside the thread
    HANDLE hCounterIn = OpenEvent(EVENT_ALL_ACCESS, FALSE, "sam sp 44");
    printf("\tthread %d begin\n", iID);
    // set the event to the signaled state
    Sleep(1000);
    SetEvent(hCounterIn);
    Sleep(1000);
    printf("\tthread %d end\n", iID);
    CloseHandle(hCounterIn);
    return 0;
}

DWORD ThreadB(void *pD)
{   // wait for ThreadA before continuing
    int iID = (int)pD;
    // reopen the event inside the thread
    HANDLE hCounterIn = OpenEvent(EVENT_ALL_ACCESS, FALSE, "sam sp 44");
    if (WAIT_TIMEOUT == WaitForSingleObject(hCounterIn, 10 * 1000))
    {
        printf("\t\tthread %d wait time out\n", iID);
    }
    else
    {
        printf("\t\tthread %d wait ok\n", iID);
    }
    CloseHandle(hCounterIn);
    return 0;
}

// in the main function
{
    HANDLE hCounter = NULL;
    if ((hCounter = OpenEvent(EVENT_ALL_ACCESS, FALSE, "sam sp 44")) == NULL)
    {
        // if no other process has created this event yet, create it as a manual-reset event
        hCounter = CreateEvent(NULL, TRUE, FALSE, "sam sp 44");
    }
    // create the threads
    HANDLE hThread[3];
    printf("test of manual-reset event\n");
    CWinThread *pT1 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)1);
    CWinThread *pT2 = AfxBeginThread((AFX_THREADPROC)ThreadB, (void *)2);
    CWinThread *pT3 = AfxBeginThread((AFX_THREADPROC)ThreadB, (void *)3);
    hThread[0] = pT1->m_hThread;
    hThread[1] = pT2->m_hThread;
    hThread[2] = pT3->m_hThread;
    // wait for the threads to end
    WaitForMultipleObjects(3, hThread, TRUE, INFINITE);
    // close the handle
    CloseHandle(hCounter);

    if ((hCounter = OpenEvent(EVENT_ALL_ACCESS, FALSE, "sam sp 44")) == NULL)
    {
        // if no other process has created this event yet, create it as an auto-reset event
        hCounter = CreateEvent(NULL, FALSE, FALSE, "sam sp 44");
    }
    // create the threads
    printf("test of auto-reset event\n");
    pT1 = AfxBeginThread((AFX_THREADPROC)ThreadA, (void *)1);
    pT2 = AfxBeginThread((AFX_THREADPROC)ThreadB, (void *)2);
    pT3 = AfxBeginThread((AFX_THREADPROC)ThreadB, (void *)3);
    hThread[0] = pT1->m_hThread;
    hThread[1] = pT2->m_hThread;
    hThread[2] = pT3->m_hThread;
    // wait for the threads to end
    WaitForMultipleObjects(3, hThread, TRUE, INFINITE);
    // close the handle
    CloseHandle(hCounter);
}

From the output you can see that in the first run (manual-reset event) both ThreadB instances succeed in waiting for the event released by ThreadA, while in the second run (auto-reset event) only one of the ThreadB instances succeeds.

Be careful to avoid deadlocks when handling multi-process/thread synchronization. For example, suppose there are two mutexes A and B and two threads TA and TB, and each thread needs both mutexes before it can proceed. Now this happens: TA owns mutex A, TB owns mutex B, and each is waiting for the mutex the other holds; clearly neither can ever obtain what it is waiting for. This situation, in which each side holds a resource while waiting for the resource held by the other, is called a deadlock. For a more detailed treatment of this problem, please consult other references.
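A minimal sketch of the usual way out, assuming hMutexA and hMutexB are handles to the two mutexes created elsewhere: if every thread acquires them in the same fixed order, the circular wait described above cannot arise.

// both TA and TB acquire mutex A first and mutex B second, so neither can
// end up holding one mutex while waiting for the one the other already holds
DWORD WorkerThread(void *)
{
    WaitForSingleObject(hMutexA, INFINITE);  // always lock A first
    WaitForSingleObject(hMutexB, INFINITE);  // then B
    // ... use both resources ...
    ReleaseMutex(hMutexB);                   // release in the reverse order
    ReleaseMutex(hMutexA);
    return 0;
}

Another option is to request both mutexes in a single WaitForMultipleObjects call with fWaitAll set to TRUE, so that a thread never holds one mutex while blocked on the other.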

In MFC, every synchronization object described here has a corresponding class (CCriticalSection, CMutex, CSemaphore, CEvent) that wraps creation, opening, control, and deletion. To use the wait functions, however, you need two additional classes, CSingleLock and CMultiLock, which wrap WaitForSingleObject and WaitForMultipleObjects. If you need them, look up their definitions; with the introduction above they should be easy to understand, although for object synchronization I personally find the API functions more direct and convenient.
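A small sketch of the MFC wrappers, protecting a counter with CCriticalSection and CSingleLock (the counter and thread function are only illustrative; the class and member names are MFC's):

#include <afxmt.h>            // MFC synchronization classes

CCriticalSection g_csCounter; // wraps a CRITICAL_SECTION
int g_iCounter = 0;

UINT CounterThread(LPVOID /*p*/)
{
    for (int i = 0; i < 8; i++)
    {
        CSingleLock lock(&g_csCounter);
        lock.Lock();          // for a CCriticalSection this ends up in EnterCriticalSection
        g_iCounter++;
        lock.Unlock();        // also released automatically when lock goes out of scope
    }
    return 0;
}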

Download the demonstration code for this section (25 KB)
