(One)
Delphi provides the TThread class for multi-threaded programming. Most Delphi books mention it, but they usually give only a brief introduction to a few members of the class, explain how to implement Execute and how to call Synchronize, and leave it at that. That is far from all there is to multi-threaded programming, so I have written this article as a supplement.
A thread is essentially a single path of execution through the code of a process. Every process contains at least one thread, the so-called main thread, and it may also contain several sub-threads; when a process uses more than one thread, we speak of "multi-threading".
So how is this "path of code" defined? In Delphi it is simply a function or a procedure.
If you create a thread with the Windows API, you use the API function CreateThread, which is declared as follows:
HANDLE CreateThread(
  LPSECURITY_ATTRIBUTES lpThreadAttributes,
  DWORD dwStackSize,
  LPTHREAD_START_ROUTINE lpStartAddress,
  LPVOID lpParameter,
  DWORD dwCreationFlags,
  LPDWORD lpThreadId
);
The parameters are, in order: the thread attributes (used for the thread's security attributes under NT, ignored under 9x), the stack size, the start address, a parameter for the thread function, the creation flags (used to set the thread's state at creation) and the thread ID; the function returns the thread handle. The start address is the entry point of the thread function: the thread runs until the thread function returns, and then the thread is over.
The overall flow is therefore: CreateThread starts a new thread at the given entry point, the thread function runs in parallel with its creator, and when the thread function returns the thread ends.
Because CreateThread takes many parameters and is a Windows-specific API, the C Runtime Library provides a more general thread function (in theory usable on any OS that supports threads):
unsigned long _beginthread(void(_USERENTRY *__start)(void *), unsigned __stksize, void *__arg);
Delphi also provides a similar function for the same purpose:
function BeginThread(SecurityAttributes: Pointer; StackSize: LongWord;
  ThreadFunc: TThreadFunc; Parameter: Pointer; CreationFlags: LongWord;
  var ThreadId: LongWord): Integer;
These three functions do essentially the same thing: they run the code contained in a thread function in a separate thread. The biggest difference between a thread function and an ordinary function is this: as soon as the thread function starts, the thread-creation function returns and the main thread continues on its way, while the thread function executes in a separate thread; how long it will run, and when it will return, is something the main thread does not know.
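For illustration, here is a minimal sketch of how BeginThread might be used directly; the program and identifier names (MyThreadFunc, ThreadHandle) are my own, not part of the RTL:

program BeginThreadDemo;

{$APPTYPE CONSOLE}

uses
  Windows;

function MyThreadFunc(Parameter: Pointer): Integer;
begin
  // this code runs in the newly created thread
  Writeln('Hello from thread ', GetCurrentThreadId);
  Result := 0;
end;

var
  ThreadId: LongWord;
  ThreadHandle: Integer;
begin
  // BeginThread returns immediately; MyThreadFunc runs in parallel with us
  ThreadHandle := BeginThread(nil, 0, MyThreadFunc, nil, 0, ThreadId);
  // the main thread does not know when MyThreadFunc will finish,
  // so it waits explicitly before exiting
  WaitForSingleObject(ThreadHandle, INFINITE);
  CloseHandle(ThreadHandle);
  Writeln('The sub-thread has finished.');
end.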
Normally a thread terminates when its thread function returns, but there are other ways to end it explicitly:
Windows API:
VOID ExitThread(DWORD dwExitCode);
C Runtime Library:
void _endthread(void);
Delphi Runtime Library:
procedure EndThread(ExitCode: Integer);
In order to record the necessary per-thread data (state, attributes and so on), the operating system creates an internal object for each thread; the Handle in Windows, for example, is the handle of that internal object, which is why this object must be released when the thread ends. Although multi-threaded programming with the API or the RTL (Runtime Library) is not difficult, it still leaves many details for you to handle yourself, and this is where Delphi's Classes unit provides a better wrapper: the VCL thread class, TThread.
Using this class is also very simple, and the basic usage described in most Delphi books is: first derive your own thread class from TThread (TThread is an abstract class, so no instance of it can be created), then override the abstract method Execute (this is the thread function, i.e. the code that executes in the thread); if you need to work with visual VCL objects, that must be done through the Synchronize method. The details are not repeated here; please refer to the relevant books.
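As a reminder of that basic usage, here is a minimal sketch of such a TThread descendant; the unit name and the identifiers TCounterThread and ShowCount are mine, chosen only for illustration:

unit CounterThread;   // illustrative unit name

interface

uses
  Classes;

type
  TCounterThread = class(TThread)
  private
    FCount: Integer;
    procedure ShowCount;          // will be run in the main thread
  protected
    procedure Execute; override;  // the code that runs in the new thread
  end;

implementation

uses
  SysUtils, Dialogs;

procedure TCounterThread.ShowCount;
begin
  // safe to touch the VCL here: Synchronize runs this in the main thread
  ShowMessage('Counted to ' + IntToStr(FCount));
end;

procedure TCounterThread.Execute;
var
  I: Integer;
begin
  FCount := 0;
  for I := 1 to 1000000 do
    Inc(FCount);
  Synchronize(ShowCount);         // hand the VCL work over to the main thread
end;

end.

Such a thread is then started with something like TCounterThread.Create(False); passing True instead creates it suspended, so that properties such as FreeOnTerminate can be set before Resume is called.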
What this article will discuss is how the TThread class wraps a thread, that is, the implementation of TThread itself, because only by truly understanding it can you use it well.
The following is the declaration of the TThread class in Delphi 7 (this article only discusses the implementation on the Windows platform, so all code for the Linux part has been removed):
  TThread = class
  private
    FHandle: THandle;
    FThreadID: THandle;
    FCreateSuspended: Boolean;
    FTerminated: Boolean;
    FSuspended: Boolean;
    FFreeOnTerminate: Boolean;
    FFinished: Boolean;
    FReturnValue: Integer;
    FOnTerminate: TNotifyEvent;
    FSynchronize: TSynchronizeRecord;
    FFatalException: TObject;
    procedure CallOnTerminate;
    class procedure Synchronize(ASyncRec: PSynchronizeRecord); overload;
    function GetPriority: TThreadPriority;
    procedure SetPriority(Value: TThreadPriority);
    procedure SetSuspended(Value: Boolean);
  protected
    procedure CheckThreadError(ErrCode: Integer); overload;
    procedure CheckThreadError(Success: Boolean); overload;
    procedure DoTerminate; virtual;
    procedure Execute; virtual; abstract;
    procedure Synchronize(Method: TThreadMethod); overload;
    property ReturnValue: Integer read FReturnValue write FReturnValue;
    property Terminated: Boolean read FTerminated;
  public
    constructor Create(CreateSuspended: Boolean);
    destructor Destroy; override;
    procedure AfterConstruction; override;
    procedure Resume;
    procedure Suspend;
    procedure Terminate;
    function WaitFor: LongWord;
    class procedure Synchronize(AThread: TThread; AMethod: TThreadMethod); overload;
    class procedure StaticSynchronize(AThread: TThread; AMethod: TThreadMethod);
    property FatalException: TObject read FFatalException;
    property FreeOnTerminate: Boolean read FFreeOnTerminate write FFreeOnTerminate;
    property Handle: THandle read FHandle;
    property Priority: TThreadPriority read GetPriority write SetPriority;
    property Suspended: Boolean read FSuspended write SetSuspended;
    property ThreadID: THandle read FThreadID;
    property OnTerminate: TNotifyEvent read FOnTerminate write FOnTerminate;
  end;
TThread is a relatively simple class in Delphi's RTL: it does not have many members, and its properties are easy to understand. This article will therefore concentrate only on a few of the important members and on its one event, OnTerminate.
(Two)

First, the constructor:

constructor TThread.Create(CreateSuspended: Boolean);
begin
  inherited Create;
  AddThread;
  FSuspended := CreateSuspended;
  FCreateSuspended := CreateSuspended;
  FHandle := BeginThread(nil, 0, @ThreadProc, Pointer(Self), CREATE_SUSPENDED, FThreadID);
  if FHandle = 0 then
    raise EThread.CreateResFmt(@SThreadCreateError, [SysErrorMessage(GetLastError)]);
end;

Although the constructor does not contain much code, it is an important member, because the thread is created here. After calling TObject.Create through inherited, the first thing it does is call the procedure AddThread, whose source is:

procedure AddThread;
begin
  InterlockedIncrement(ThreadCount);
end;

There is also a corresponding RemoveThread:

procedure RemoveThread;
begin
  InterlockedDecrement(ThreadCount);
end;

Their job is simple: they count the threads in the process in a global variable. Note that the counter is not incremented and decremented with the usual Inc/Dec procedures but with InterlockedIncrement/InterlockedDecrement. These do exactly the same thing, adding one to or subtracting one from a variable, but with one big difference: InterlockedIncrement/InterlockedDecrement are thread-safe. That is, they guarantee a correct result when executed from multiple threads, which Inc/Dec cannot. In operating-system terminology, they are a pair of "atomic" (primitive) operations.

Take incrementing as an example to see how the two differ. In general, adding one to a value in memory breaks down into three steps:

1. read the value from memory
2. add one to the value
3. store the value back to memory

Now suppose two threads in an application use Inc to perform the increment; the following interleaving is possible:

1. thread A reads the value from memory (say it is 3)
2. thread B reads the value from memory (also 3)
3. thread A adds one (now 4)
4. thread B adds one (now 4)
5. thread A stores its result (the value in memory is now 4)
6. thread B also stores its result (the value in memory is still 4, yet both threads incremented it, so it should be 5: the result is wrong)

With InterlockedIncrement there is no such problem, because an atomic operation cannot be interrupted: the operating system guarantees that no thread switch will occur before the operation has completed. In the example above, thread B can only start its own read-add-store sequence after thread A has finished storing its result, so even with multiple threads the result is always correct.
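To make the difference concrete, here is a small sketch of a demo program (all identifiers are mine) in which two threads each increment two counters one hundred thousand times, once with Inc and once with InterlockedIncrement; on a multi-processor machine the Inc counter usually ends up short:

program IncVersusInterlocked;

{$APPTYPE CONSOLE}

uses
  Windows, Classes;

var
  UnsafeCounter: Integer = 0;   // incremented with Inc (not thread-safe)
  SafeCounter: Integer = 0;     // incremented with InterlockedIncrement (atomic)

type
  TIncThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TIncThread.Execute;
var
  I: Integer;
begin
  for I := 1 to 100000 do
  begin
    Inc(UnsafeCounter);                  // read / add one / write back: may interleave
    InterlockedIncrement(SafeCounter);   // cannot be interrupted half-way
  end;
end;

var
  A, B: TIncThread;
begin
  A := TIncThread.Create(False);
  B := TIncThread.Create(False);
  A.WaitFor;
  B.WaitFor;
  A.Free;
  B.Free;
  Writeln('Inc result:                  ', UnsafeCounter);  // often less than 200000
  Writeln('InterlockedIncrement result: ', SafeCounter);    // always 200000
end.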
The example above also illustrates a case of a "thread access conflict", which is exactly why threads need to be "synchronized"; this will be discussed in detail when we get to Synchronize. Speaking of synchronization, a side remark: Professor Li Ming of the University of Waterloo in Canada has pointed out that rendering Synchronize as "同步" ("happening at the same time") in the term "thread synchronization" is really a mistranslation, and his point is quite reasonable. In Chinese, "同步" means "occurring simultaneously", whereas the purpose of "thread synchronization" is precisely to avoid things happening at the same time. In English, synchronize has two meanings: the traditional one, "to occur at the same time", and another, "to operate in unison". In "thread synchronization" the word is clearly used in the latter sense, i.e. "to ensure that multiple threads work together in a coordinated way and avoid errors". There are many such loosely translated terms in the IT field; since this one has become customary, this article will keep using it, and I point it out here only because software development is meticulous work that demands clarity and cannot tolerate ambiguity.
That was a digression; back to TThread's constructor. The next most important line is this one:

FHandle := BeginThread(nil, 0, @ThreadProc, Pointer(Self), CREATE_SUSPENDED, FThreadID);

Here it is the Delphi RTL function BeginThread, mentioned earlier, that is used. It has many parameters, but the key ones are the third and the fourth. The third parameter is the thread function mentioned before, i.e. the code that executes in the thread. The fourth parameter is the parameter passed to the thread function, which here is the thread object being created (i.e. Self). Of the remaining parameters, the fifth makes the thread start in the suspended state instead of running immediately (whether the thread is then started is decided in AfterConstruction according to the CreateSuspended flag), and the sixth returns the thread ID.

Now we come to the core of TThread: the thread function ThreadProc. Interestingly, the core of the thread class is not a member of the class but a global function (because BeginThread can only take a non-method function). Here is its code:

function ThreadProc(Thread: TThread): Integer;
var
  FreeThread: Boolean;
begin
  try
    if not Thread.Terminated then
    try
      Thread.Execute;
    except
      Thread.FFatalException := AcquireExceptionObject;
    end;
  finally
    FreeThread := Thread.FFreeOnTerminate;
    Result := Thread.FReturnValue;
    Thread.DoTerminate;
    Thread.FFinished := True;
    SignalSyncEvent;
    if FreeThread then Thread.Free;
    EndThread(Result);
  end;
end;

Although it is not much code, it is the most important part of the whole TThread class, because this is the code that really executes in the thread. Going through it line by line: first the Terminated flag of the thread class is checked; if the thread has not been flagged as terminated, the thread class's Execute method is called to run the thread code. Since TThread is an abstract class and Execute is an abstract method, what actually runs is the Execute of the derived class. So Execute is the thread function inside the thread class, and every rule that applies to thread code (for example, avoiding access conflicts) applies to it. If Execute raises an exception, the exception object is captured with AcquireExceptionObject and stored in the thread class's FFatalException member.

Finally comes the clean-up performed before the thread ends. The local variable FreeThread records the setting of the thread class's FreeOnTerminate property, and the thread's return value is set from the thread class's ReturnValue property. Then the thread class's DoTerminate method is executed.
The code of the DoTerminate method is as follows:

procedure TThread.DoTerminate;
begin
  if Assigned(FOnTerminate) then Synchronize(CallOnTerminate);
end;

It simply calls CallOnTerminate through the Synchronize method; the code of CallOnTerminate is shown below, and it simply fires the OnTerminate event:

procedure TThread.CallOnTerminate;
begin
  if Assigned(FOnTerminate) then FOnTerminate(Self);
end;

Because the OnTerminate event is executed inside Synchronize, it is not thread code but main-thread code (see the analysis of Synchronize later).

After OnTerminate has executed, the thread class's FFinished flag is set to True. Next the SignalSyncEvent procedure is executed; its code is:

procedure SignalSyncEvent;
begin
  SetEvent(SyncEvent);
end;

It, too, is very simple: it performs a Set operation on a global event, SyncEvent. Events will be described in detail later in this article, and the use of SyncEvent will be explained with the WaitFor method. Then, depending on the FreeOnTerminate setting saved in FreeThread, the thread object may be freed; what happens then is covered with the destructor below. Finally EndThread is called to end the thread, returning the thread's return value. At this point the thread is completely over.

(Three)

Having finished with the constructor, let us look at the destructor:
destructor TThread.Destroy;
begin
  if (FThreadID <> 0) and not FFinished then
  begin
    Terminate;
    if FCreateSuspended then
      Resume;
    WaitFor;
  end;
  if FHandle <> 0 then CloseHandle(FHandle);
  inherited Destroy;
  FFatalException.Free;
  RemoveThread;
end;
Before the thread object is released, we first check whether the thread is still executing; if it is (the thread ID is not 0 and the finished flag is not set), the Terminate procedure is called to end it. The Terminate procedure merely sets the thread class's Terminated flag, as follows:
procedure TThread.Terminate;
begin
  FTerminated := True;
end;
So the thread still has to keep executing until it ends normally; it is not terminated immediately, and this is worth noting.
A small aside here: many people have asked me how to terminate a thread "immediately" (meaning, of course, a thread created with TThread). The answer is that you cannot! The only way to end the thread is to let the Execute method run to completion, so in general, if you want your thread to be able to terminate as quickly as possible, you must check the Terminated flag at sufficiently short intervals inside Execute and exit promptly when it is set. This is a very important principle when designing thread code!
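A minimal sketch of an Execute method written along these lines follows; TPollingThread and the body of the loop are illustrative only, and the fragment assumes a unit whose uses clause includes Classes and Windows:

type
  TPollingThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TPollingThread.Execute;
begin
  while not Terminated do
  begin
    // do one small unit of work here, then fall through and test
    // Terminated again; avoid long blocking calls that would keep
    // the flag from being checked
    Sleep(50);
  end;
  // when Terminate is called, the loop condition fails and Execute
  // returns, which is the normal way for the thread to end
end;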
Of course, if you really must exit a thread "immediately", TThread is not a good choice, because forcibly killing the thread with an API call (such as TerminateThread) would leave the TThread object unable to be released properly, and an Access Violation would occur when the object is destroyed. In that situation you can only create the thread with the API or the RTL functions. Back in the destructor: if the thread was created suspended and is still suspended, it is switched to the running state, and then WaitFor is called, whose job is to wait until the thread has finished executing. The implementation of WaitFor is covered later.
After the thread has finished, the thread handle is closed (the handle exists whenever the thread was created normally), which releases the thread object created by the operating system.
Then TObject.Destroy is called through inherited to release the object, the captured exception object (if any) is freed, and finally RemoveThread is called to decrement the process's thread count.
Other aspects, such as Suspend/Resume and the thread-priority settings, are not the focus of this article and will not be described. The other two focal points of this article are discussed next: Synchronize and WaitFor.
But before introducing those two methods, two other thread-synchronization techniques need to be introduced: events and critical sections.
An event (Event) here is not the same thing as an event in Delphi. In essence, an Event is equivalent to a global Boolean variable. It has two assignment operations, Set and Reset, equivalent to setting it to True or False, and its value is examined with a WaitFor operation. On the Windows platform these correspond to three API functions: SetEvent, ResetEvent and WaitForSingleObject (several APIs implement the WaitFor functionality; this is the simplest one).
These three operations are atomic, which is why an Event can achieve in a multi-threaded program what an ordinary Boolean variable cannot. The functions of Set and Reset have already been described; as for WaitFor:
The WaitFor function checks whether the event is in the Set state (equivalent to True). If it is, WaitFor returns immediately; if not, it waits for the event to change to the Set state, and while it waits, the thread that called WaitFor is suspended. WaitFor also has a timeout parameter: if it is 0, WaitFor does not wait at all but simply returns the event's current state; if it is INFINITE, it waits indefinitely until the Set state occurs; if it is a finite value, it waits at most that many milliseconds and then returns with the event's state.
When the event changes from the Reset state to the Set state, the threads suspended in a WaitFor on that event are woken up, and this is why it is called an "event": the "event" is the state transition, and through the Event this "state transition" information can be passed between threads.
Of course, a similar effect could be achieved with a properly protected Boolean variable (see the critical section below) by replacing WaitFor with a loop that keeps checking the Boolean. Functionally this works, but in practice such busy-waiting consumes a lot of CPU time, degrades system performance and slows down other threads, so it is uneconomical and can sometimes even cause problems. It is therefore not recommended.
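As a concrete illustration, here is a small sketch (all identifiers are mine, and the fragment assumes Windows in the uses clause) of passing a "data is ready" notification between threads with an event created through the API:

var
  DataReady: THandle;

procedure SetUpEvent;
begin
  // manual-reset event, initially in the Reset (False) state
  DataReady := CreateEvent(nil, True, False, nil);
end;

function ProducerFunc(Parameter: Pointer): Integer;
begin
  // ... prepare the shared data ...
  SetEvent(DataReady);              // switch the event to the Set state
  Result := 0;
end;

function ConsumerFunc(Parameter: Pointer): Integer;
begin
  // suspends this thread until the producer has called SetEvent
  WaitForSingleObject(DataReady, INFINITE);
  // ... the shared data can be used from here on ...
  Result := 0;
end;

procedure TearDownEvent;
begin
  CloseHandle(DataReady);           // events are OS objects; release them afterwards
end;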
(Four)
A critical section (CriticalSection) is a technique for protecting access to shared data. It, too, is essentially equivalent to a global Boolean variable, but it has only two operations, Enter and Leave, and its two states can likewise be regarded as True and False, indicating whether execution is currently inside the critical section. These two operations are also atomic, so the critical section can be used in a multi-threaded application to protect shared data against access conflicts. Using a critical section to protect shared data is simple: call Enter to set the "inside the critical section" flag before every access to the shared data, operate on the data, and finally call Leave to leave the critical section.

The protection works like this: after one thread has entered the critical section, if another thread tries to access the same data, it discovers, when it calls Enter, that a thread is already inside the critical section, and it is suspended; when the thread inside calls Leave, the waiting thread is woken up, sets the critical-section flag itself and begins to operate on the data. In this way access conflicts are prevented. Using the earlier InterlockedIncrement example, we can implement it ourselves with a critical section (Windows API):

var
  InterlockedCrit: TRTLCriticalSection;

procedure InterlockedIncrement(var aValue: Integer);
begin
  EnterCriticalSection(InterlockedCrit);
  Inc(aValue);
  LeaveCriticalSection(InterlockedCrit);
end;

Now look at the earlier scenario again:

1. thread A enters the critical section (suppose the data is 3)
2. thread B tries to enter the critical section; because A is already inside, B is suspended
3. thread A adds one to the data (now 4)
4. thread A leaves the critical section and wakes thread B (the data in memory is now 4)
5. thread B is woken up and adds one to the data (now 5)
6. thread B leaves the critical section; the data is now correct

That is how a critical section protects access to shared data. Regarding its use, one point deserves attention: exceptions during the data access. If an exception occurs while the data is being operated on, the Leave operation is never executed; as a result the threads that should have been woken up are never woken up, and the program may stop responding. In general, therefore, the correct pattern is the following (a concrete sketch appears after this paragraph):

EnterCriticalSection
try
  // operate on the data in the critical section
finally
  LeaveCriticalSection
end;
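Putting the pieces together, a minimal sketch of protecting a shared counter with a critical section via the API might look like this; the identifiers are mine, and the fragment assumes Windows in the uses clause:

var
  CounterLock: TRTLCriticalSection;
  SharedCounter: Integer = 0;

procedure InitLock;
begin
  InitializeCriticalSection(CounterLock);   // must be created before first use
end;

procedure SafeAddToCounter(Delta: Integer);
begin
  EnterCriticalSection(CounterLock);
  try
    Inc(SharedCounter, Delta);              // only one thread at a time gets here
  finally
    LeaveCriticalSection(CounterLock);      // executed even if an exception occurs
  end;
end;

procedure DoneLock;
begin
  DeleteCriticalSection(CounterLock);       // release the OS resource afterwards
end;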
Finally it should be said that both the Event and the CriticalSection are operating-system resources: they must be created before use and released afterwards. For example, the global Event used by TThread, SyncEvent, and the global CriticalSection, ThreadLock, are created and released in InitThreadSynchronization and DoneThreadSynchronization, which in turn are called from the initialization and finalization sections of the Classes unit.

Since TThread uses the API for its events and critical sections, the API has been used in the examples here as well; in fact Delphi provides wrappers for both in the SyncObjs unit, namely the TEvent class and the TCriticalSection class, and their usage is much the same as using the API directly. Because TEvent's constructor takes several parameters, Delphi also provides an event class initialized with default parameters: TSimpleEvent.

By the way, let me mention another class used for thread synchronization: TMultiReadExclusiveWriteSynchronizer, defined in the SysUtils unit. As far as I know, it is the longest class name defined in the Delphi RTL, but it has a short alias, TMREWSync. As for its use, the name says it all, so I will say no more. A short sketch of the SyncObjs wrappers follows.
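A minimal sketch of the same two techniques done through the wrapper classes; again, the identifiers are mine, and the fragment assumes Windows and SyncObjs in the uses clause:

var
  Lock: TCriticalSection;
  Ready: TSimpleEvent;

procedure Setup;
begin
  Lock := TCriticalSection.Create;
  Ready := TSimpleEvent.Create;     // a TEvent constructed with default parameters
end;

procedure ProducerSide;
begin
  Lock.Enter;                       // same Enter/Leave pairing as the API version
  try
    // ... modify the shared data ...
  finally
    Lock.Leave;
  end;
  Ready.SetEvent;                   // tell the waiting thread the data is ready
end;

procedure ConsumerSide;
begin
  if Ready.WaitFor(INFINITE) = wrSignaled then
  begin
    // ... use the shared data ...
  end;
end;

procedure Cleanup;
begin
  Ready.Free;
  Lock.Free;
end;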
With this background on events and critical sections, we can now formally discuss Synchronize and WaitFor.

We know that Synchronize works by having a piece of code executed in the main thread, which is possible because each process has only one main thread. Let us first look at the implementation of Synchronize:

procedure TThread.Synchronize(Method: TThreadMethod);
begin
  FSynchronize.FThread := Self;
  FSynchronize.FSynchronizeException := nil;
  FSynchronize.FMethod := Method;
  Synchronize(@FSynchronize);
end;

where FSynchronize is a record of this type:

  PSynchronizeRecord = ^TSynchronizeRecord;
  TSynchronizeRecord = record
    FThread: TObject;
    FMethod: TThreadMethod;
    FSynchronizeException: TObject;
  end;

It is used to exchange data between the thread and the main thread: it carries the thread object, the synchronization method and any exception that occurred.

Synchronize then calls its overloaded version, and this overload is rather special: it is a "class method". A class method is a special kind of member method that can be called without creating an instance of the class, through the class name, much like a constructor. The reason a class method is used here is so that it can be called even when no thread object has been created. In fact there is another overloaded class-method version of Synchronize, and a further class method, StaticSynchronize, built on the same mechanism.
Here is the code of that class method:

class procedure TThread.Synchronize(ASyncRec: PSynchronizeRecord);
var
  SyncProc: TSyncProc;
begin
  if GetCurrentThreadID = MainThreadID then
    ASyncRec.FMethod
  else
  begin
    SyncProc.Signal := CreateEvent(nil, True, False, nil);
    try
      EnterCriticalSection(ThreadLock);
      try
        if SyncList = nil then
          SyncList := TList.Create;
        SyncProc.SyncRec := ASyncRec;
        SyncList.Add(@SyncProc);
        SignalSyncEvent;
        if Assigned(WakeMainThread) then
          WakeMainThread(SyncProc.SyncRec.FThread);
        LeaveCriticalSection(ThreadLock);
        try
          WaitForSingleObject(SyncProc.Signal, INFINITE);
        finally
          EnterCriticalSection(ThreadLock);
        end;
      finally
        LeaveCriticalSection(ThreadLock);
      end;
    finally
      CloseHandle(SyncProc.Signal);
    end;
    if Assigned(ASyncRec.FSynchronizeException) then
      raise ASyncRec.FSynchronizeException;
  end;
end;

This is a little more code, but it is not complicated. First it determines whether the current thread is the main thread; if it is, it simply executes the synchronization method and returns. If it is not the main thread, the synchronization process begins. The data (parameters) exchanged between the two threads, together with an Event handle, are recorded in a local variable of the following record type:

  TSyncProc = record
    SyncRec: PSynchronizeRecord;
    Signal: THandle;
  end;

Then an Event is created, the critical section is entered (through the global variable ThreadLock; this can be a global variable because only one thread at a time may be inside Synchronize), and the record is stored in the list SyncList (which is created here if it does not yet exist). As you can see, the ThreadLock critical section protects access to SyncList; we shall meet it again when CheckSynchronize is introduced. Next SignalSyncEvent is called; its code was shown with the TThread constructor, and all it does is perform a Set operation on SyncEvent, whose purpose will be described in detail with WaitFor. Now comes the most important part: the WakeMainThread event is called to carry out the synchronization.
WakeMainThread is a global event of type TNotifyEvent. It is needed here because the Synchronize mechanism works through messages and executes the method to be synchronized in the main thread; if messages were used directly, Synchronize could not be used in an application without a message loop, so this event is used to decouple the two. The Application object responds to this event; the following two methods (from the Forms unit) hook and unhook the WakeMainThread handler:

procedure TApplication.HookSynchronizeWakeup;
begin
  Classes.WakeMainThread := WakeMainThread;
end;
procedure TApplication.UnhookSynchronizeWakeup;
begin
  Classes.WakeMainThread := nil;
end;

These two methods are called from the constructor and the destructor of the TApplication class, respectively. In the Application object's handler of this event a message is posted; it uses an empty message:

procedure TApplication.WakeMainThread(Sender: TObject);
begin
  PostMessage(Handle, WM_NULL, 0, 0);
end;

The response to this message is also in the Application object; see the following code (the unrelated parts have been removed):

procedure TApplication.WndProc(var Message: TMessage);
...
begin
  try
    ...
    with Message do
      case Msg of
        ...
        WM_NULL:
          CheckSynchronize;
        ...
      end;
    ...
  end;
end;

CheckSynchronize is also defined in the Classes unit. Because it is rather involved, it will not be explained just yet; for now it is enough to know that it is the routine that actually carries out the Synchronize functionality, and we continue with the Synchronize code.

After the WakeMainThread event has been fired, the critical section is left and WaitForSingleObject is called to wait on the Event that was created before entering the critical section. The purpose of this Event is to wait until the synchronized method has finished executing; this point will become clear after CheckSynchronize has been analysed.

Note that after WaitForSingleObject the critical section is re-entered, only to be left again without doing anything. It looks pointless, but it is necessary, because the Enter and Leave of a critical section must be strictly paired. Could the code perhaps be rewritten like this?

        if Assigned(WakeMainThread) then
          WakeMainThread(SyncProc.SyncRec.FThread);
        WaitForSingleObject(SyncProc.Signal, INFINITE);
      finally
        LeaveCriticalSection(ThreadLock);
      end;

The biggest difference between this and the original code is that WaitForSingleObject is now also executed inside the critical section. It looks as if nothing changes, and the code even becomes much simpler, but can it really be done? No, it cannot! As we know, after one thread enters a critical section, every other thread that tries to enter it is suspended. The WaitFor call here suspends the current thread, and it cannot wake up until another thread sets the event. If the thread that is supposed to call SetEvent also needs to enter this critical section, a deadlock occurs (for deadlock theory, please consult a text on operating-system principles). Deadlock is one of the most important topics in thread synchronization!

Finally, the Event created at the beginning is released, and if the synchronized method produced an exception, that exception is re-raised here.
(Five, Finale)

Back to CheckSynchronize, mentioned earlier; its code is:

function CheckSynchronize(Timeout: Integer = 0): Boolean;
var
  SyncProc: PSyncProc;
  LocalSyncList: TList;
begin
  if GetCurrentThreadID <> MainThreadID then
    raise EThread.CreateResFmt(@SCheckSynchronizeError, [GetCurrentThreadID]);
  if Timeout > 0 then
    WaitForSyncEvent(Timeout)
  else
    ResetSyncEvent;
  LocalSyncList := nil;
  EnterCriticalSection(ThreadLock);
  try
    Integer(LocalSyncList) := InterlockedExchange(Integer(SyncList), Integer(LocalSyncList));
    try
      Result := (LocalSyncList <> nil) and (LocalSyncList.Count > 0);
      if Result then
      begin
        while LocalSyncList.Count > 0 do
        begin
          SyncProc := LocalSyncList[0];
          LocalSyncList.Delete(0);
          LeaveCriticalSection(ThreadLock);
          try
            try
              SyncProc.SyncRec.FMethod;
            except
              SyncProc.SyncRec.FSynchronizeException := AcquireExceptionObject;
            end;
          finally
            EnterCriticalSection(ThreadLock);
          end;
          SetEvent(SyncProc.Signal);
        end;
      end;
    finally
      LocalSyncList.Free;
    end;
  finally
    LeaveCriticalSection(ThreadLock);
  end;
end;

First of all, this method must be called in the main thread (as described above, the call reaches the main thread via a message); otherwise an exception is raised. Next comes ResetSyncEvent (the counterpart of the SignalSyncEvent seen earlier); the WaitForSyncEvent branch need not concern us, because the CheckSynchronize with a timeout parameter is used only in the Linux version, while the Windows version always calls CheckSynchronize with the default parameter 0. Now we can see what SyncList is for: it records all the synchronization-method calls that have not yet been processed. There is only one main thread but possibly many sub-threads; when several sub-threads call their synchronization methods, the main thread may not be able to process them all at once, so a list is needed to record them.
Here a local variable, LocalSyncList, is swapped with SyncList, again using an atomic operation: InterlockedExchange. And, as before, the critical section protects access to SyncList. As long as LocalSyncList is not empty, all the accumulated synchronization-method calls are processed in a loop; afterwards the processed LocalSyncList is freed and the critical section is left.

The handling of a single synchronization method goes like this: first the data of the first synchronization call is taken from the list (it is retrieved and then deleted from the list). Then the critical section is left (to prevent deadlock, of course). Then comes the real call of the synchronization method. If the synchronization method raises an exception, it is captured and stored in the synchronization data record. After the critical section is re-entered, SetEvent notifies the calling thread that the synchronization method has finished executing (see the WaitForSingleObject call in Synchronize above). At this point the execution of Synchronize is complete.
Finally, WaitFor; its purpose is to wait until the thread has finished executing. Its code is as follows:

function TThread.WaitFor: LongWord;
var
  H: array[0..1] of THandle;
  WaitResult: Cardinal;
  Msg: TMsg;
begin
  H[0] := FHandle;
  if GetCurrentThreadID = MainThreadID then
  begin
    WaitResult := 0;
    H[1] := SyncEvent;
    repeat
      { This prevents a potential deadlock if the background thread
        does a SendMessage to the foreground thread }
      if WaitResult = WAIT_OBJECT_0 + 2 then
        PeekMessage(Msg, 0, 0, 0, PM_NOREMOVE);
      WaitResult := MsgWaitForMultipleObjects(2, H, False, 1000, QS_SENDMESSAGE);
      CheckThreadError(WaitResult <> WAIT_FAILED);
      if WaitResult = WAIT_OBJECT_0 + 1 then
        CheckSynchronize;
    until WaitResult = WAIT_OBJECT_0;
  end else WaitForSingleObject(H[0], INFINITE);
  CheckThreadError(GetExitCodeThread(H[0], Result));
end;

If WaitFor is not executed in the main thread, things are very simple: a WaitForSingleObject is enough to wait for this thread to become signalled. If WaitFor is executed in the main thread, it is more involved. First, SyncEvent must be added to the handle array, and then waiting continues until the thread ends (i.e. until MsgWaitForMultipleObjects returns WAIT_OBJECT_0; see the description of this API in the MSDN). Inside the waiting loop: if a message has arrived, it is inspected with PeekMessage (without being removed from the message queue), and then MsgWaitForMultipleObjects is called to wait for either the thread handle or SyncEvent to become signalled while also listening for sent messages (the QS_SENDMESSAGE flag; see the MSDN description of this API for details; it can be regarded as a WaitForSingleObject that can wait on several objects at once). If it is SyncEvent that has been set (the return value is WAIT_OBJECT_0 + 1), CheckSynchronize is called to process the pending synchronization methods. Why must WaitFor in the main thread use MsgWaitForMultipleObjects instead of simply waiting for the thread with WaitForSingleObject? To prevent deadlock.
Since the thread function Execute may call Synchronize to have a synchronization method executed in the main thread, waiting with WaitForSingleObject would leave the main thread hanging here, the synchronization method could never be executed, the thread in turn would be stuck waiting inside Synchronize, and we would have a deadlock. With MsgWaitForMultipleObjects this problem does not arise. First, its third parameter is False, which means that either of the two handles (the thread handle or SyncEvent) becoming signalled wakes the main thread; the QS_SENDMESSAGE flag is there because Synchronize passes its request to the main thread via a message, so messages must not be blocked either. Thus, whenever the thread calls Synchronize, the main thread wakes up, processes the synchronized call, and goes back to waiting after the call has completed, until the thread finally ends.

At this point the analysis of the thread class TThread can come to a close. To summarize the foregoing analysis:

1. A thread created with the thread class must end in the normal way, that is, by Execute running to completion; therefore Execute must check the Terminated flag in enough places and exit promptly. If a thread really must be terminated "immediately", the thread class cannot be used; the thread must be created with the API or the RTL functions instead.
2. Access to visual VCL objects is done with Synchronize, which hands the work to the main thread via a message and lets the main thread carry it out.
3. Access to data shared between threads should be protected with a critical section (or, of course, with Synchronize).
4. Communication between threads can be carried out with an Event (or, of course, with Suspend/Resume).
5. When several synchronization objects are used in a multi-threaded application, take great care to prevent deadlocks.
6. Wait for a thread to finish with the WaitFor method (a short sketch of this and of the OnTerminate alternative follows).
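To close, here is a small sketch of the two usual ways to learn that a TThread has finished; TWorkerThread stands for any TThread descendant, and TMainForm and WorkerDone are hypothetical names of my own:

// 1) Blocking: create the thread, let it run, then wait for it with WaitFor.
procedure RunAndWait;
var
  Worker: TWorkerThread;
begin
  Worker := TWorkerThread.Create(False);
  try
    Worker.WaitFor;                   // returns once Execute has finished
  finally
    Worker.Free;
  end;
end;

// 2) Non-blocking: let the thread free itself and report back through
//    OnTerminate, which is fired via Synchronize and so runs in the main thread.
procedure TMainForm.StartWorker;
var
  Worker: TWorkerThread;
begin
  Worker := TWorkerThread.Create(True);   // created suspended
  Worker.FreeOnTerminate := True;         // the object frees itself afterwards
  Worker.OnTerminate := WorkerDone;
  Worker.Resume;                          // now let it run
end;

procedure TMainForm.WorkerDone(Sender: TObject);
begin
  // runs in the main thread after Execute has finished
end;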