Solaris 2.4 Multithreaded Programming Guide

Reposted by xiaoxiao, 2021-03-06

From: http://bbs.chinaUnix.net/forum/viewtopic.php?t=172898

1. Thread Basics

Multithreading (MT) means having multiple threads of control. A traditional UNIX process contains a single thread of control; a multithreaded process is divided into many threads, each of which runs independently. Read this chapter to learn about:

- Defining multithreading terms
- Benefiting from multithreading
- Looking at the multithreading structure
- Meeting multithreading standards

Because threads run independently, multithreading can be used to:
1) improve application responsiveness;
2) use multiprocessor systems more efficiently;
3) improve program structure;
4) use fewer system resources;
5) improve performance.

1.1 Defining Multithreading Terms

- Thread: a sequence of instructions executed within a process.
- Single-threaded: restricting a process to a single thread of control.
- Multithreaded: allowing more than one thread per process.
- User-level thread: a thread managed by the thread library routines in user space.
- Lightweight process (LWP): a kernel-supported thread of control that executes kernel code and system calls.
- Bound thread: a thread permanently bound to an LWP.
- Unbound thread: a thread that attaches to and detaches from LWPs dynamically.
- Counting semaphore: a memory-based synchronization mechanism.

1.1.1 Defining Concurrency and Parallelism

Concurrency exists when at least two threads are in progress at the same time; parallelism exists when at least two threads are executing at the same time. In a multithreaded process on a single processor, the processor can switch execution between threads, yielding concurrent execution. When the same multithreaded process runs on a shared-memory multiprocessor, each thread can run on a different processor, yielding parallel execution.
When the number of threads in a process does not exceed the number of processors, the thread support system and the operating system ensure that each thread runs on a different processor. For example, in a matrix multiplication run with the same number of threads and processors, each thread computes one column of the result on its own processor.

1.2 Benefiting from Multithreading

1.2.1 Improve Application Responsiveness

Any program containing many mutually independent activities can be redesigned so that each activity becomes a thread. For example, in a GUI (graphical user interface), one operation can execute while another is being started, so responsiveness improves through multithreading.

1.2.2 Use Multiprocessors Efficiently

Applications with concurrency requirements do not need to take the number of processors into account to use multiprocessors efficiently; their performance improves transparently to the user as processors are added. Numerical applications with a high degree of parallelism, such as matrix multiplication, can be sped up on multiprocessor platforms through multithreading.

1.2.3 Improve Program Structure

Many applications can be restructured from a single, huge thread into several independent or semi-independent units of execution, resulting in more efficient operation. Multithreaded programs also adapt better to changes in user demands than single-threaded programs do.

1.2.4 Use Fewer System Resources

Applications can gain more than one thread of control by using two or more processes that share memory. However, each process must then maintain a complete address space and a full set of operating-system state tables. The cost of creating and maintaining all that state makes each process much more expensive than a thread.

Moreover, the inherent separateness of processes means the programmer must spend considerable effort implementing communication and synchronization between them.

1.2.5 Combine Threads and RPC

By combining multithreading with RPC (Remote Procedure Call), you can exploit a multiprocessor that does not share memory (a group of workstations). This structure treats the group of workstations as one large multiprocessor system, making application distribution easier. For example, one thread can create child threads, each of which issues an RPC, calling a procedure on another machine. Although the original thread merely created some parallel threads, this parallelism can involve the operation of several machines.

1.2.6 Improve Performance

The performance figures in this section were gathered on a SPARCstation 2 (Sun 4/75). Measurement accuracy is in microseconds.

1. Thread creation. Table 1-1 shows the time taken to create a thread using the default stack cached by the thread package. The measured time includes only the actual creation time, not the time to switch to the thread. The ratio column gives the ratio of each row's creation time to that of the previous row. The data show that threads are much cheaper: creating a new process takes about 30 times as long as creating an unbound thread, and about 5 times as long as creating a bound thread (a thread together with its LWP).

Table 1-1 Thread Creation Times
Operation               Microseconds  Ratio
Create unbound thread   52            -
Create bound thread     350           6.7
fork()                  1700          32.7

2. Thread synchronization. Table 1-2 lists the time for two threads to synchronize using semaphore (PV) operations.

Table 1-2 Thread Synchronization Times
Operation          Microseconds  Ratio
Unbound thread     66            -
Bound thread       390           5.9
Between processes  200           3

1.3 Looking at the Multithreading Structure

Traditional UNIX already supports the concept of threads: each process contains a single thread, so programming with multiple processes is programming with multiple threads. But a process is also an address space, and creating a new process means creating a new address space.
Creating a process is therefore expensive, while creating a thread within an existing process is cheap. The time to create a thread is much less than the time to create a process, partly because switching between threads does not involve switching address spaces. Communication between threads is simple because the threads share everything, in particular the address space, so data produced by one thread is immediately available to all the other threads. The multithreading interface is implemented by a library, libthread. By keeping kernel-level resources and user-level resources separate, multithreading provides extra flexibility.

1.3.1 User-Level Threads

Threads are visible only inside their process, where they share all process resources such as the address space and open files. The following state is per-thread, that is, unique to each thread within the process:

- Thread ID
- Register state (including program counter and stack pointer)
- Stack
- Signal mask
- Priority
- Thread-private storage

Because threads share the process's executable code and most of its data, a change to shared data by one thread can be seen by the other threads in the process. When a thread needs to communicate with other threads in the same process, it can do so without involving the operating system. Threads are the primary interface for multithreaded programming. User-level threads can be handled entirely in user space, avoiding switches into the kernel.

An application can have thousands of threads without consuming many kernel resources; how many kernel resources it uses is determined largely by the application itself. By default, threads are very lightweight. But to gain more control over a thread (for instance, more control over its scheduling policy), the application can bind the thread; the thread then becomes a kernel resource (see "Bound Threads" on page 9). To summarize, Solaris user-level threads are:

- Inexpensive to create, because they need only virtual memory in the user address space.
- Fast to synchronize, because synchronization is done at user level, without crossing into the kernel.
- Easily managed by the thread library, libthread.

Figure 1-1 Multithreaded System Structure (omitted)

1.3.2 Lightweight Processes (LWPs)

The thread library uses underlying threads of control supported by the kernel, called lightweight processes (LWPs). You can think of an LWP as a virtual CPU that executes code and system calls. Most programmers use threads without ever being aware of LWPs; the discussion below is only to help you understand the difference between bound and unbound threads.

------------------------------------
NOTE: The LWPs in Solaris 2.x are not the same as the LWPs in the SunOS 4.0 LWP library, which is no longer supported in Solaris 2.x.
------------------------------------

Much as the stdio routines fopen and fread use the open and read interfaces, the thread interface uses the LWP interface, and for much the same reasons. LWPs bridge the user level and the kernel level. Each process contains one or more LWPs, and each LWP runs one or more user threads. Creating a thread usually just creates a user context, not an LWP. Through the cooperative design of the user-level thread library and the operating system, the library ensures that enough LWPs are available to run the currently active user-level threads.
However, there is no one-to-one correspondence between user threads and LWPs, and a user-level thread can switch freely between LWPs. The programmer tells the thread library how many threads should be able to "run" at the same time. For example, if the programmer specifies that up to three threads can run at once, at least three LWPs are made available. If there are three available processors, the threads run in parallel; if there is only one processor, the operating system multiplexes the three LWPs on that one processor. If all the LWPs block, the thread library adds another LWP to the pool.

When a user thread blocks due to synchronization, its LWP is handed over to the next runnable thread. This handover is done through a coroutine linkage, not a system call. The operating system decides which LWP runs on which processor, without considering the type or number of threads in the process. The kernel schedules the LWPs' CPU resources according to each LWP's scheduling class and priority, while the thread library allocates LWPs to threads. Each LWP is scheduled independently, performs independent system calls, incurs independent page faults, and runs in parallel on a multiprocessor. Some special types of LWP may not be handed over to threads at all.

1.3.3 Unbound Threads

Threads that are scheduled on the pool of LWPs are called unbound threads. Threads are usually unbound, so that they can switch freely between LWPs. The thread library activates LWPs as needed and assigns them to runnable threads; the LWP takes on the state of the thread and executes the thread's instructions.

If the thread blocks on a synchronization mechanism, or another thread needs to run, the thread's state is saved in process memory and the LWP is handed over to another thread.

1.3.4 Bound Threads

If necessary, you can bind a thread to an LWP. By binding a thread you can, for example:

1. Have the thread scheduled globally (such as real time).
2. Give the thread an alternate signal stack.
3. Give the thread a unique alarm and timer.

Binding matters when the number of threads exceeds the number of LWPs. For example, a parallel matrix computation runs each row of its matrix in its own thread. If there is one LWP per processor but each LWP has to juggle several threads, each processor spends considerable time switching threads. In this case it is better to have one thread per LWP, reducing the number of threads and hence the amount of thread switching.

A mixture of bound and unbound threads is appropriate for some applications. For example, in a real-time application you might want some threads to have global priorities and real-time scheduling while other threads perform background computation. Another example is a window system, in which most operations are unbound, but mouse handling needs a high-priority, bound, real-time thread.

1.4 Meeting Multithreading Standards

The history of multithreaded programming goes back to the 1960s; its development on the UNIX operating system began in the mid-1980s. Perhaps surprisingly, there is good agreement that multithreading should be supported, yet to this day we still see different multithreading development packages with different interfaces. However, for several years a group called POSIX 1003.4a has been working on a standard for multithreaded programming. When the standard is complete, most systems that support multithreading will support the POSIX interface, greatly improving the portability of multithreaded programs.
Solaris multithreading support and POSIX 1003.4a have no fundamental differences. Although the interfaces differ, each system can easily implement anything that can be implemented in the other. There are no compatibility problems between them; Solaris will support both interfaces, and you can even mix them in the same application. Another reason to use Solaris threads is the toolkit that supports them, such as the multithreaded debugging tools and truss (which traces a program's system calls and signals), which report the state of threads.

2. Multithreaded Programming

2.1 The Thread Library

User-level multithreading is implemented through the thread library, libthread (see section 3T of the reference manual, Library Routines). The thread library supports signals, schedules runnable entities, and handles multiple tasks. This chapter discusses some of the general routines in libthread, starting with the basic operations and then moving on to more complex material.

Create a thread - basic features: thr_create(3T)
Get the thread identifier: thr_self(3T)
Yield thread execution: thr_yield(3T) (the routines below are likewise 3T)
Suspend or continue a thread: thr_suspend, thr_continue
Send a signal to a thread: thr_kill
Access the thread's signal mask: thr_sigsetmask
Terminate a thread: thr_exit
Wait for a thread to terminate: thr_join
Maintain thread-specific data: thr_keycreate, thr_setspecific, thr_getspecific
Create a thread - advanced features: thr_create
Get the minimum stack size: thr_min_stack
Get or set the concurrency level: thr_getconcurrency, thr_setconcurrency
Get or set thread priority: thr_getprio, thr_setprio

2.1.1 Create a Thread - the Basics

The thr_create routine is the most elaborate of all the routines in the thread library. This section covers only how to create a thread using the default parameters of thr_create. More elaborate uses, including how to use custom parameters, are described under Advanced Features.

thr_create(3T)

This function adds a thread to the current process. Note that the new thread does not inherit pending signals, but it does inherit the priority and the signal mask.

#include <thread.h>

int thr_create(void *stack_base, size_t stack_size,
               void *(*start_routine)(void *), void *arg,
               long flags, thread_t *new_thread);
size_t thr_min_stack(void);

stack_base - the address of the new thread's stack. If stack_base is NULL, thr_create() allocates a stack of at least stack_size bytes for the new thread.

stack_size - the number of bytes in the new thread's stack. If stack_size is zero, a default size is used; unless there is a particular reason to do otherwise, this argument is best left at zero. Not every thread needs its stack space specified. The thread library allocates one megabyte of virtual memory for each thread's stack without reserving swap space (the library uses the MAP_NORESERVE option of mmap(2) to make the allocation).

start_routine - the function in which the new thread begins execution.
If start_routine returns, the thread exits with the return value of the function as its exit status (see thr_exit(3T)).

flags - specifies attributes of the new thread; in most cases it is set to zero. The value of flags is built by the bitwise inclusive OR of the following (the last four flags are covered under Advanced Features):

1. THR_DETACHED - detaches the new thread so that its thread ID and other resources can be reused as soon as the thread terminates. Set this when you do not want to wait for the thread to terminate. If there is no explicit synchronization to prevent it, a detached, unsuspended thread can die and have its thread ID reassigned to another new thread before its creator even returns from thr_create().

2. THR_SUSPENDED - suspends the new thread until it is started with thr_continue().

3. THR_BOUND - permanently binds the new thread to an LWP (creates a bound thread).

4. THR_NEW_LWP - increases the concurrency level of unbound threads by one.

5. THR_DAEMON - marks the new thread as a daemon.

new_thread - points to a location where the thread ID of the new thread is stored. In most cases it is set to NULL.

Return values - thr_create() returns zero when it completes successfully. Any other value indicates an error. When any of the following conditions is detected, thr_create() fails and returns the corresponding value.

EAGAIN: a system limit is exceeded, for example when too many LWPs have been created.

ENOMEM: not enough memory is available to create the new thread.

EINVAL: stack_base is not NULL and stack_size is less than the minimum stack size returned by the thr_min_stack() function.

2.1.2 Get the Thread Identifier

thr_self(3T) gets the calling thread's own ID.

#include <thread.h>

thread_t thr_self(void);

Return value - the thread ID of the caller.

2.1.3 Yield Thread Execution

thr_yield() causes the current thread to yield its execution in favor of another thread with the same or greater priority.

#include <thread.h>

void thr_yield(void);

2.1.4 Suspend or Continue Thread Execution

thr_suspend(3T) suspends a thread.

#include <thread.h>

int thr_suspend(thread_t target_thread);

thr_suspend() immediately suspends the thread specified by target_thread. On successful return from thr_suspend(), the suspended thread is no longer executing. A subsequent thr_suspend() has no effect.

Return values - returns zero on success. Any other value indicates an error. When the following condition occurs, thr_suspend() fails and returns the corresponding value.

ESRCH: target_thread cannot be found in the current process.

thr_continue(3T) resumes a suspended thread.

#include <thread.h>

int thr_continue(thread_t target_thread);

Once a suspended thread has been continued, subsequent calls to thr_continue() have no effect. A suspended thread is not awakened by a signal; the signal remains pending until the thread is resumed by thr_continue().

Return values - returns zero on success. Any other value indicates an error. When the following condition occurs, the function fails and returns the corresponding value.

ESRCH: target_thread cannot be found in the current process.

2.1.5 Send a Signal to a Thread

thr_kill(3T) sends a signal to a thread.

#include <thread.h>
#include <signal.h>

int thr_kill(thread_t target_thread, int sig);

thr_kill() sends the signal sig to the thread whose ID is target_thread. target_thread must be a thread in the same process as the calling thread. The parameter sig must be one of the signals defined in signal(5).
When sig is zero, error checking is performed but no signal is actually sent; this can be used to check whether the target_thread argument is valid.

Return values - returns zero on success; any other value indicates an error. When either of the following conditions occurs, the function fails and returns the corresponding value.

EINVAL: sig is not a valid signal number.
ESRCH: target_thread cannot be found in the current process.

2.1.6 Access the Signal Mask of the Calling Thread

thr_sigsetmask(3T) gets or changes the signal mask of the calling thread.

#include <thread.h>
#include <signal.h>

int thr_sigsetmask(int how, const sigset_t *set, sigset_t *oset);

The how argument determines how the mask is changed and can be one of the following values:

SIG_BLOCK - add set to the current signal mask; set is the group of signals to block.
SIG_UNBLOCK - remove set from the current signal mask; set is the group of signals to unblock.
SIG_SETMASK - replace the current mask with a new mask; set is the new signal mask.

When the value of set is NULL, the value of how is not significant and the signal mask is unchanged. Therefore, to query the current signal mask, pass NULL for the set argument. When the oset argument is not NULL, it points to the location where the previous signal mask is stored.

Return values - returns zero on success. Any other value indicates an error. When either of the following conditions occurs, the function fails and returns the corresponding value.

EINVAL: set is not NULL and how is not one of the defined values.
EFAULT: set or oset is not a valid address.

2.1.7 Terminate a Thread

thr_exit(3T) terminates a thread.

#include <thread.h>

void thr_exit(void *status);

The thr_exit() function terminates the calling thread. All thread-specific data is released. If the calling thread is not detached, its thread ID and exit status are retained until another thread waits for it; otherwise the exit status is ignored and the thread ID can be reused immediately.

Return value - when the calling thread is the last non-daemon thread in the process, the process exits with status zero. When the initial thread returns from the main() function, the process exits using the return value of main as its exit status.

A thread can terminate in two ways. The first is simply to return from its first (outermost) procedure; the second is to call thr_exit(), supplying an exit status. What happens next depends on the flags set when the thread was created.

The default behavior of a terminating thread A (when the relevant bits of flags are zero) is to linger in that state until some other thread (call it B) acknowledges, by "joining" it, that thread A has died. The result of the join is that thread B picks up the exit status of thread A, and A then quietly vanishes. You can OR THR_DETACHED into the flags argument so that the thread vanishes immediately after it calls thr_exit() or returns from its first procedure; in that case its exit status is not available to any thread.

There is one important special case: when the main thread, the initial thread present at main(), returns from the main function, the whole process terminates via exit(). So be careful about returning from main in the main thread. If the main thread merely calls thr_exit(), only the main thread itself dies; the process does not terminate, and the other threads in the process continue to run (of course, the process does terminate when all its threads have ended).
If a thread is not detached, some other thread must join it after it terminates; otherwise that thread's resources are not reclaimed and cannot be reused for new threads. So if you do not want a thread to be joined, it is best to create it as a detached thread.

Another bit of the flags argument is THR_DAEMON. Threads created with this flag are daemon threads, which terminate automatically when all the other threads terminate. Daemon threads are particularly useful inside libraries: a library function can create a daemon thread that is invisible to the rest of the program. These threads terminate automatically when all the other threads in the program terminate. If they were not daemon threads, they would linger after the other threads ended, and the process would never terminate on its own.

2.1.8 Wait for Thread Termination

Use the thr_join function to wait for a thread to terminate.

#include <thread.h>

int thr_join(thread_t wait_for, thread_t *departed, void **status);

The thr_join() function blocks the calling thread until the thread specified by wait_for terminates. The specified thread must be in the same process as the calling thread and must not be detached. When the wait_for argument is zero, thr_join() waits for any undetached thread to terminate; in other words, when no thread ID is specified, the exit of any undetached thread causes thr_join() to return.

When the departed argument is not NULL, it points to a location where the ID of the terminated thread is stored when thr_join() returns. When the status argument is not NULL, it points to a location where the exit status of the terminated thread is stored when thr_join() returns. If the thread was created with a caller-supplied stack, that stack can be reclaimed when thr_join() returns, and the thread ID returned by thr_join() can then be reassigned.

Two threads cannot wait for the same thread at the same time. If that happens, one of them returns normally and the other returns an ESRCH error.

Return values - thr_join() returns zero on success; any other value indicates an error. When either of the following conditions occurs, the function fails and returns the corresponding value.

ESRCH: wait_for is not valid, or the awaited thread is detached.
EDEADLK: the calling thread is waiting for itself.

The three parameters of thr_join() provide a certain amount of flexibility. When you need a thread to wait until another specific thread terminates, supply that thread's ID as the first argument. If you want to wait for any other thread to terminate, supply zero as the first argument. If the caller wants to know which thread terminated, the second argument should be the address of a variable that receives the ID of the terminated thread; if you are not interested, pass zero. Finally, if you want the exit status of the terminated thread, supply the address where it should be stored; otherwise pass zero.

A thread can wait for all undetached threads to terminate with the following code:

    while (thr_join(0, 0, 0) == 0)
        ;

The declaration of the third parameter, (void **), may look strange. The corresponding thr_exit() parameter is void *. The intent is that the exit status is a word-sized value, and since C cannot define a word-sized type that carries no interpretation (void means no value at all), void * is used. Because the third parameter of thr_join() must be a pointer to the thr_exit() value, its type must be void **.

Note that thr_join() works only on undetached threads. When there is no particular need to synchronize with the termination of a thread, that thread should usually be detached. Think of a detached thread as the normal case and an undetached thread as the exception.
2.1.9 A Simple Threads Program

In Code Example 2-1, one thread executes the top-level procedure and creates a helper thread to execute the fetch procedure, which involves a complicated database lookup and takes some time. The main thread has other work to do while it waits for the result, so it waits for the helper thread with thr_join(). The result of the lookup is passed back through an address on the main thread's stack; this is safe only because the main thread waits for the spun-off thread to terminate. In general it is better to use malloc() to allocate storage for the result than to use an address on a thread's own stack.

Code Example 2-1 A Simple Threads Program

void *fetch(void *);

void mainline(void)
{
    int result;
    thread_t helper;
    void *status;

    thr_create(0, 0, fetch, &result, 0, &helper);
    /* do something else for a while */
    thr_join(helper, 0, &status);
    /* it's now safe to use result */
}

void *fetch(void *arg)
{
    int *result = arg;
    /* fetch value from a database */
    *result = value;
    thr_exit(0);
}

2.1.10 Maintaining Thread-Specific Data

A single-threaded C program has two basic classes of data: local data and global data. A multithreaded C program adds a third class: thread-specific data (TSD). Thread-specific data is much like global data, except that it is private to a thread. TSD is bound to the thread, and it is the only way to define data that is private to a thread. Each item of thread-specific data is identified by a key that is unique within the process; using the key, a thread can access its own private copy of the data.

Thread-specific data is maintained with the following three functions:

- thr_keycreate() - creates a key
- thr_setspecific() - binds a thread-specific value to a key
- thr_getspecific() - stores the bound value at a specified address

2.1.10.1 thr_keycreate(3T)

thr_keycreate() allocates a key that identifies thread-specific data within the process. The key is unique within the process, and at the time it is created, its value in every thread is NULL. Once a key is created, each thread can bind a value to the key; the value is unique to the binding thread and is maintained independently by each thread.

#include <thread.h>

int thr_keycreate(thread_key_t *keyp,
                  void (*destructor)(void *value));

When thr_keycreate() returns successfully, the allocated key is stored at the location pointed to by keyp. The caller must ensure that access to this storage and to the key is correctly synchronized. An optional destructor function can be associated with each key. If a key has a destructor and a thread has bound a non-NULL value to that key, the destructor is called with the currently bound value when the thread exits. The order in which the destructors of different keys are called is unspecified.

Return values - thr_keycreate() returns zero on success; any other value indicates an error. When either of the following conditions occurs, the function fails and returns the corresponding value.

EAGAIN: the key namespace is exhausted.
ENOMEM: not enough memory is available.

2.1.10.2 thr_setspecific(3T)

#include <thread.h>

int thr_setspecific(thread_key_t key, void *value);

thr_setspecific() binds value to the thread-specific data key, key, for the calling thread.

Return values - thr_setspecific() returns zero on success; any other value indicates an error. When either of the following conditions occurs, the function fails and returns the corresponding value.

ENOMEM: not enough memory is available.
EINVAL: the key is invalid.

2.1.10.3 thr_getspecific(3T)

#include <thread.h>

int thr_getspecific(thread_key_t key, void **valuep);

thr_getspecific() stores the value bound to key for the calling thread at the location pointed to by valuep.
Return values - thr_getspecific() returns zero on success; any other value indicates an error. When the following condition occurs, the function fails and returns the corresponding value.

EINVAL: the key is invalid.

2.1.10.4 Global and Private Thread-Specific Data

Code Example 2-2 is excerpted from a multithreaded program. This code can be executed by any number of threads, but it references two global variables, errno and mywindow, whose values must differ from thread to thread; that is, they must be thread-private.

Code Example 2-2 Thread-Specific Data - Global but Private

body()
{
    ...
    while (write(fd, buffer, size) == -1) {
        if (errno != EINTR) {
            fprintf(mywindow, "%s\n", strerror(errno));
            exit(1);
        }
    }
    ...
}

The system error code errno must be the one set by a system call made by this thread, not by some other thread; the error code obtained by one thread is therefore different from that of other threads. The variable mywindow points to a thread-private output stream, so one thread's mywindow differs from another thread's, and their output ultimately appears in different windows. The only real difference between the two is that the thread library takes care of errno, whereas the programmer must design mywindow carefully.

The next example illustrates the design of mywindow. The preprocessor converts references to mywindow into calls to the _mywindow procedure.

The _mywindow procedure then calls thr_getspecific(), passing it the global variable mywindow_key and an output parameter, win, that receives the identity of this thread's window.

Code Example 2-3 Turning Global References Into Private References

#define mywindow _mywindow()

thread_key_t mywindow_key;

FILE *_mywindow(void)
{
    FILE *win;

    thr_getspecific(mywindow_key, (void **)&win);
    return (win);
}

void thread_start(...)
{
    ...
    make_mywindow();
    ...
}

The variable mywindow refers to a variable of which each thread has its own private copy; that is, these variables are thread-specific data. Each thread calls make_mywindow() to initialize its window and to arrange for its instance of mywindow to refer to it. Once this routine is called, the thread can safely refer to mywindow, and after _mywindow runs, the thread gets a reference to its private window. References to mywindow therefore behave just like references to data private to the thread.

Code Example 2-4 shows how the key is set up.

Code Example 2-4 Initializing Thread-Specific Data

void make_mywindow(void)
{
    FILE **win;
    static int once = 0;
    static mutex_t lock;

    mutex_lock(&lock);
    if (!once) {
        once = 1;
        thr_keycreate(&mywindow_key, free_key);
    }
    mutex_unlock(&lock);

    win = malloc(sizeof(*win));
    create_window(win, ...);

    thr_setspecific(mywindow_key, win);
}

void free_key(void *win)
{
    free(win);
}

First, the key mywindow_key is given a unique value. This key is used to identify the thread-specific data. The first thread to call make_mywindow() thus calls thr_keycreate(), which assigns a unique value to its first argument. The second argument is a destructor that reclaims the space occupied by the thread's instance of the TSD once the thread terminates.

The next step is to allocate storage for the caller's instance of the thread-specific data. After the storage is allocated, the create_window routine is called to set up a window for the thread, with win referring to it. Finally, thr_setspecific() is called, binding win (that is, the location of the window's storage) to the key.
After completing these steps, whenever the thread calls thr_getspecific(), passing the global key, it gets the value that was bound to the key by its own call to thr_setspecific(). When a thread terminates, the destructors established by thr_keycreate() are called, and each destructor runs only if the terminating thread has a value assigned to that key.

2.1.11 Creating threads - advanced features

2.1.11.1 thr_create(3T)

#include <thread.h>

int thr_create(void *stack_base, size_t stack_size, void *(*start_routine)(void *), void *arg, long flags, thread_t *new_thread);
size_t thr_min_stack(void);

stack_base - the stack address used by the new thread. If this parameter is NULL, thr_create() allocates a stack of at least stack_size bytes for the new thread.

stack_size - the number of bytes in the new thread's stack. If this parameter is zero, a default value is used.

If non-zero, it must be larger than the value returned by thr_min_stack(). A minimal stack may not accommodate the stack frame needed by start_routine, so if stack_size is specified, be sure it covers the minimum requirement plus the stack space needed by the functions start_routine calls. Typically, thread stacks allocated by thr_create() begin on a page boundary, and the specified size is rounded up to the nearest page boundary. A page with no access permission is appended to the top of the stack so that most stack overflows result in a SIGSEGV signal being sent to the offending thread. A thread stack supplied by the caller must be allocated by the caller. If the caller passes a pre-allocated stack, that stack is not released until thr_join() has been called for the thread, even if the thread has already terminated. When start_routine returns, the thread exits with the routine's return value as its exit code. Normally you do not need to allocate stack space for threads. The thread library allocates one megabyte of virtual memory for each thread's stack, without reserving swap space (the library uses the MAP_NORESERVE option of mmap(2) for the allocation). Each thread stack created by the thread library has a "red zone": the library places a page with no access permission at the top of the stack to detect overflow, and touching that page causes a page fault. The red zone is appended to the top of the stack automatically, whether the stack size is the default or one you specify. Specify a stack only when you are absolutely certain the parameters you give are correct. There are few cases in which you need to specify the stack or its size. Even experts find it hard to know whether a specified stack and size are correct, because a program conforming to the ABI cannot determine its stack size statically - the size depends on the runtime environment.
2.1.11.2 Building your own stack

When you specify the size of a thread stack, you must account for the allocations needed by every function it calls, and by every function those functions call: the components of the calling sequence, local variables, and information structures. Occasionally you need a stack somewhat different from the default. A typical case is when the thread needs more than one megabyte of stack space. A less typical case is when the default stack is too large for you: you might be creating thousands of threads, and with default stacks you would need gigabytes of space. The upper limit on stack size is obvious, but what about the lower limit? There must be enough stack space to hold the stack frames and local variables. You can use thr_min_stack() to get the absolute minimum stack size; it returns the amount of stack space required to run a thread whose start routine does nothing. A thread that does useful work needs more than that, so be very careful when reducing thread stack size.

You can specify a stack in two ways. The first is to pass NULL as the stack address, letting the runtime library allocate the stack space, while you supply the desired size in the stack_size parameter. The other way, which requires a full understanding of stack management, is to supply a pointer to your own stack to thr_create(). This means you are responsible not only for allocating the stack space but also for releasing it after the thread terminates. When you allocate space for your own stack, be sure to call mprotect(2) to append a red zone to it.

start_routine - specifies the procedure the new thread executes. When start_routine returns, the thread exits with the routine's return value as its exit code (see thr_exit(3T)). Note that you can specify only one argument; if you need several, combine them into one (for example, put them in a structure). The argument can be anything that can be expressed as a void pointer, typically a 4-byte value.
Anything larger must be passed indirectly through a pointer.

flags - specifies attributes of the created thread. In most cases you supply zero. The value of flags is formed by the bitwise OR of the following:

THR_SUSPENDED - the new thread starts suspended and does not execute start_routine until it is started by thr_continue(). Use this to operate on the thread (for example, to change its priority) before you run it.

The termination of a detached thread is ignored.

THR_DETACHED - detaches the new thread so that its resources can be reclaimed as soon as the thread terminates. Set this flag when you do not need to wait for the thread. In the absence of an explicit synchronization requirement, an unsuspended, detached thread can terminate, and its thread ID and other resources be reassigned to another new thread, even before its creator returns from thr_create().

THR_BOUND - permanently binds the new thread to an LWP (the new thread is a bound thread).

THR_NEW_LWP - adds one to the number of LWPs available to run unbound threads. The effect is similar to raising the concurrency level by one with thr_setconcurrency(3T), except that the level maintained by thr_setconcurrency() is not affected. Typically, THR_NEW_LWP adds one LWP to the pool of LWPs running unbound threads. If you specify both THR_BOUND and THR_NEW_LWP, two LWPs are created: one bound to the thread, and another to run unbound threads.

THR_DAEMON - marks the new thread as a daemon. The process exits when all non-daemon threads have exited. Daemon threads do not affect the process exit status and are ignored when counting exited threads. A process can terminate either by calling exit(2) or by having every thread terminate with thr_exit(3T). An application, or a library it calls, can create one or more threads that should be ignored when deciding whether to exit. Threads created with the THR_DAEMON flag are not counted in the process exit criterion.

new_thread - when thr_create() returns successfully, the ID of the new thread is stored at this address. The caller is responsible for supplying the space to hold this value. If you are not interested in the ID, pass NULL.

Return value - thr_create() returns 0 after normal execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EAGAIN - a system limit is exceeded, for example, too many LWPs were created.
ENOMEM - there was not enough memory to create the new thread.

EINVAL - stack_base is non-NULL but stack_size is smaller than the value returned by thr_min_stack().

2.1.11.3 thr_create(3T) routine

Code Example 2-5 shows how to create a thread with a signal mask (new_mask) different from the creator's (orig_mask). In the example, new_mask is set to block all signals except SIGINT. Then the creator's signal mask is changed so that the new thread inherits the different mask. After thr_create() returns, the creator's mask is restored to its original value. The example assumes that SIGINT was not masked by the creator; if it was initially masked, unmask it with the appropriate operation. Another approach is to have the new thread's start routine set its own signal mask.

Code Example 2-5 thr_create() creates a thread with a new signal mask

thread_t tid;
sigset_t new_mask, orig_mask;
int error;

(void) sigfillset(&new_mask);
(void) sigdelset(&new_mask, SIGINT);
(void) thr_sigsetmask(SIG_SETMASK, &new_mask, &orig_mask);
error = thr_create(NULL, 0, dofunc, NULL, 0, &tid);
(void) thr_sigsetmask(SIG_SETMASK, NULL, &orig_mask);

2.1.12 Getting the minimum stack size

Use thr_min_stack(3T) to get the minimum stack size for a thread.

#include <thread.h>

size_t thr_min_stack(void);

thr_min_stack() returns the stack size required to execute a null thread (a null thread is a thread created to run a null procedure). A thread that does more than execute a null procedure should be allocated more stack space than thr_min_stack() returns. When a stack is supplied by the user, the user must reserve enough space for the thread. In a dynamically linked environment it is very difficult to know what a thread's minimum stack requirement is. In most cases, users should not specify stacks themselves. User-specified stacks exist only to support applications that must control their execution environments. In general, users should let the thread library manage stack allocation; the default stacks supplied by the thread library are big enough to run any thread.

2.1.13 Setting the concurrency level of threads

2.1.13.1 thr_getconcurrency(3T)

Use thr_getconcurrency() to get the current value of the desired concurrency level. The actual number of simultaneously active threads may be larger or smaller.

#include <thread.h>

int thr_getconcurrency(void);

Return value - thr_getconcurrency() returns the current value of the desired concurrency level.

2.1.13.2 thr_setconcurrency(3T)

Use thr_setconcurrency() to set the desired concurrency level.

#include <thread.h>

int thr_setconcurrency(int new_level);

The unbound threads in a process may require simultaneous activity.
To conserve system resources, by default the threads system guarantees only that enough active threads exist for the process to make progress, preventing the process from deadlocking through lack of concurrency. Because this may not produce the most effective level of concurrency, thr_setconcurrency() lets the application give the system hints about the level it needs. The actual number of active threads may be larger or smaller than new_level. Note that if you do not adjust execution resources with thr_setconcurrency(), multiple compute-bound threads might not be distributed across all of the available processors. You can also affect the desired concurrency level by setting the THR_NEW_LWP flag when calling thr_create().

Return value - thr_setconcurrency() returns 0 after normal execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EAGAIN - the specified concurrency level exceeds a system resource limit.
EINVAL - the value of new_level is negative.

2.1.14 Getting or setting thread priority

When an unbound thread is scheduled, the system considers only the simple priorities of the threads within the process; the kernel is not involved. A thread keeps the priority it is given at creation unless it is explicitly changed.

2.1.14.1 thr_getprio(3T)

Use thr_getprio() to get the current priority of a thread.

#include <thread.h>

int thr_getprio(thread_t target_thread, int *pri);

Each thread inherits its priority from its creator. thr_getprio() stores the current priority of target_thread at the address pointed to by pri.

Return value - thr_getprio() returns 0 after normal execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

ESRCH - target_thread does not exist in the current process.

2.1.14.2 thr_setprio(3T)

Use thr_setprio() to change the priority of a thread.

#include <thread.h>

int thr_setprio(thread_t target_thread, int pri);

thr_setprio() changes the priority of the thread specified by target_thread to pri. By default, threads are scheduled on a fixed-priority basis - from 0 up to the largest integer; even though scheduling is not determined solely by priority, priority plays a very important part. target_thread will preempt lower-priority threads and yield to higher-priority threads.

Return value - thr_setprio() returns 0 after normal execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

ESRCH - target_thread cannot be found in the current process.
EINVAL - the value of pri makes no sense for the scheduling class associated with target_thread.

2.1.15 Scheduling and the threads functions

2.1.15.1 thr_setprio() and thr_getprio()

These two functions change and retrieve the priority of target_thread. This priority is the one referenced by the user-level thread library when it schedules threads; it is independent of the priorities the operating system uses to schedule LWPs. This priority affects the assignment of threads to LWPs - when there are more runnable threads than LWPs, the higher-priority threads get the LWPs.
Thread scheduling is preemptive; that is, if a higher-priority thread becomes runnable while there is no idle LWP and a lower-priority thread holds an LWP, the lower-priority thread is forced to give its LWP to the higher-priority thread.

2.1.15.2 thr_suspend() and thr_continue()

These two functions control whether a thread is allowed to run. Calling thr_suspend() puts a thread into the suspended state: the thread is set aside and does not run even when an LWP is available. The thread leaves the suspended state after some other thread calls thr_continue() with it as the argument. Use these two functions with care - the results can be dangerous. For example, the suspended thread may be holding a lock, and suspending it can cause deadlock. A thread can be created suspended by using the THR_SUSPENDED flag.

2.1.15.3 thr_yield()

thr_yield() makes the calling thread give up its LWP to a runnable thread of the same priority. (A higher-priority runnable thread cannot be waiting, because it would already have obtained the LWP by preemption.) This function is significant because there is no time-slicing among the threads on an LWP (although the operating system does time-slice the execution of LWPs). Finally, note that priocntl(2) also affects thread scheduling. For more detail, see "LWPs and Scheduling Classes".

Solaris 2.4 Multithreaded Programming Guide 3 - Programming with Synchronization Objects

[McCartney (coolcat) March 8, 2003]

This chapter describes the synchronization objects available with threads:

Mutual exclusion locks
Condition variables
Multiple-readers, single-writer (read-write) locks
Semaphores
Synchronization across process boundaries
Comparing primitives

A synchronization object is a variable in memory that you access just like ordinary data. Threads in different processes can synchronize with each other through synchronization variables placed in shared memory, even though the threads of different processes are generally invisible to one another. Synchronization variables can also be placed in files, and can have lifetimes longer than the processes that create them. The types of synchronization objects are:

· Mutual exclusion locks (mutexes)
· Condition variables
· Read-write locks
· Semaphores (lampposts)

Synchronization is important in the following situations:

· When threads in two or more processes can use a single synchronization variable. Note that the synchronization variable should be initialized by only one of the cooperating processes, and initialization leaves it in the unlocked state.
· When synchronization is the only way to ensure the consistency of persistent data. A process can map a file and have one of its threads lock a record in it; when the modification is complete, the thread releases the lock and the file is flushed. While the record is locked, any thread in any program that tries to acquire the lock blocks until it is released.
· When synchronization is needed to ensure the safety of mutable shared data.
· Synchronization matters even for simple variables such as integers. Reading or writing an integer may take more than one memory cycle when the integer is not aligned to the bus data width or is larger than it. Although this cannot happen on SPARC systems, portable programs cannot rely on it.

3.1 Mutual exclusion locks

Use mutual exclusion locks (mutexes) to serialize thread execution. A mutex usually synchronizes threads by ensuring that only one thread at a time executes a critical section of code. Mutexes can also protect single-threaded code.
Table 3-1 Mutex lock routines
mutex_init(3T) - initialize a mutex
mutex_lock(3T) - lock a mutex
mutex_trylock(3T) - try to lock a mutex; does not block on failure
mutex_unlock(3T) - unlock a mutex
mutex_destroy(3T) - destroy mutex state

If the mutex is placed in memory that is shared and writable by two processes and is initialized accordingly (see mmap(2)), the mutex can synchronize threads between the processes. Mutexes must be initialized before use. When multiple threads are waiting for a mutex, the order in which they acquire it is undefined.

3.1.1 Initializing a mutex

mutex_init(3T)

#include <synch.h> (or #include <thread.h>)

int mutex_init(mutex_t *mp, int type, void *arg);

Use mutex_init() to initialize the mutex pointed to by mp. type can be one of the following values (arg is currently ignored):

USYNC_PROCESS - the mutex can synchronize threads in different processes.
USYNC_THREAD - the mutex can synchronize threads only within this process.

A mutex can also be initialized by allocating zeroed memory, in which case USYNC_THREAD is assumed. Multiple threads must not initialize the same mutex simultaneously, and a mutex must not be reinitialized while in use.

Return value - mutex_init() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EINVAL - invalid argument.
EFAULT - mp or arg points to an illegal address.

3.1.2 Locking a mutex

mutex_lock(3T)

#include <synch.h> (or #include <thread.h>)

int mutex_lock(mutex_t *mp);

Use mutex_lock() to lock the mutex pointed to by mp. If the mutex is already locked, the calling thread blocks until the mutex is released by another thread (blocked threads wait on a prioritized queue). When mutex_lock() returns, the mutex has been successfully locked by the calling thread.

Return value - mutex_lock() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EINVAL - invalid argument.
EFAULT - mp points to an illegal address.

3.1.3 Locking with a nonblocking mutex

mutex_trylock(3T)

#include <synch.h> (or #include <thread.h>)

int mutex_trylock(mutex_t *mp);

Use mutex_trylock() to attempt to lock the mutex pointed to by mp. This function is the nonblocking version of mutex_lock(). If the mutex is already locked, the call returns an error; otherwise the mutex is locked by the caller.

Return value - mutex_trylock() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EINVAL - invalid argument.
EFAULT - mp points to an illegal address.
EBUSY - the mutex pointed to by mp is already locked.

3.1.4 Unlocking a mutex

mutex_unlock(3T)

#include <synch.h> (or #include <thread.h>)

int mutex_unlock(mutex_t *mp);

Use mutex_unlock() to unlock the mutex pointed to by mp. The mutex must be locked, and the calling thread must be the one that last locked it. If other threads are waiting for the mutex, the thread at the head of the waiting queue is unblocked.

Return value - mutex_unlock() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EINVAL - invalid argument.
EFAULT - mp points to an illegal address.

3.1.5 Destroying a mutex

mutex_destroy(3T)

#include <synch.h> (or #include <thread.h>)

int mutex_destroy(mutex_t *mp);

Use mutex_destroy() to release any state associated with the mutex pointed to by mp. The memory holding the mutex itself is not freed.

Return value - mutex_destroy() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:
EINVAL - invalid argument.
EFAULT - mp points to an illegal address.

3.1.6 Mutex code examples

Code Example 3-1 Mutex lock example

mutex_t count_mutex;
int count;

increment_count() {
    mutex_lock(&count_mutex);
    count = count + 1;
    mutex_unlock(&count_mutex);
}

int get_count() {
    int c;
    mutex_lock(&count_mutex);
    c = count;
    mutex_unlock(&count_mutex);
    return (c);
}

The two functions in Code Example 3-1 use the mutex for different purposes: increment_count() uses it to make the update of the shared variable atomic (i.e., the operation cannot be interrupted), while get_count() uses it to guarantee the value does not change while it is being read.

Sometimes you need to access two resources at once. Perhaps you are using one of the resources and then discover that the other is needed as well. As Code Example 3-2 shows, there can be a problem if two threads each try to claim both resources but lock the associated mutexes in different orders. In this example, if the two threads lock mutexes 1 and 2 respectively, a deadlock occurs when each then tries to lock the other mutex.
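Code Example 3-1 translates almost line for line into portable pthreads, which is handy if you want to try it today; only the type and function names change (pthread_mutex_t, pthread_mutex_lock, pthread_mutex_unlock), and a static initializer replaces mutex_init(). The helper bump_many below is added purely to exercise the counter from two threads; it is not part of the guide's example.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
static int count = 0;

void increment_count(void) {
    pthread_mutex_lock(&count_mutex);    /* make the update atomic */
    count = count + 1;
    pthread_mutex_unlock(&count_mutex);
}

int get_count(void) {
    int c;
    pthread_mutex_lock(&count_mutex);    /* value cannot change mid-read */
    c = count;
    pthread_mutex_unlock(&count_mutex);
    return c;
}

/* Illustration only: hammer the counter from a thread. */
static void *bump_many(void *arg) {
    for (int i = 0; i < 100000; i++)
        increment_count();
    return NULL;
}
```

Without the lock in increment_count(), two threads running bump_many concurrently would lose updates; with it, the final count is exact.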

Code Example 3-2 Deadlock

Thread 1:
    mutex_lock(&m1);    /* use resource 1 */
    mutex_lock(&m2);    /* use resources 1 and 2 */
    mutex_unlock(&m2);
    mutex_unlock(&m1);

Thread 2:
    mutex_lock(&m2);    /* use resource 2 */
    mutex_lock(&m1);    /* use resources 1 and 2 */
    mutex_unlock(&m1);
    mutex_unlock(&m2);

The best way to avoid this problem is for threads that lock multiple mutexes to always lock them in the same order. One such technique is a lock hierarchy: assign each lock a numeric rank, and never let a thread acquire a mutex whose rank is lower than or equal to that of a mutex it already holds.

---------------------------------------
Note - lock_lint can detect this kind of deadlock. The best way to avoid deadlock is to use lock ordering: when mutexes are always taken in a prescribed order, deadlock cannot occur.
---------------------------------------

However, this technique cannot always be used - sometimes you must take the mutexes in an order other than the prescribed one. To prevent deadlock in such a case, a thread must back out: when it discovers that deadlock would otherwise be unavoidable, it must release the mutexes it already holds. Code Example 3-3 shows this approach.

Code Example 3-3 Conditional locking

Thread 1:
    mutex_lock(&m1);
    mutex_lock(&m2);
    /* no processing */
    mutex_unlock(&m2);
    mutex_unlock(&m1);

Thread 2:
    for (; ;) {
        mutex_lock(&m2);
        if (mutex_trylock(&m1) == 0)
            /* got it! */
            break;
        /* didn't get it */
        mutex_unlock(&m2);
    }
    /* got both locks; no processing */
    mutex_unlock(&m1);
    mutex_unlock(&m2);

In this example, thread 1 locks the mutexes in the prescribed order, but thread 2 takes them out of order. To avoid deadlock, thread 2 must handle mutex 1 with care: if it blocked waiting for that mutex to be released, deadlock would likely result. To make sure this does not happen, thread 2 calls mutex_trylock(), which takes the mutex if it is available and returns failure if it is not.
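Thread 2's try-and-back-off loop from Code Example 3-3 looks like this in portable pthreads, with pthread_mutex_trylock in the role of mutex_trylock(). The helper names lock_both_backoff and unlock_both are invented for illustration; this is a sketch of the technique, not code from the guide.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

/* Out-of-order acquirer: holds m2, probes m1, and backs off on failure,
   exactly as thread 2 does in Code Example 3-3. */
void lock_both_backoff(void) {
    for (;;) {
        pthread_mutex_lock(&m2);
        if (pthread_mutex_trylock(&m1) == 0)
            break;                     /* got both locks */
        pthread_mutex_unlock(&m2);     /* back off and retry */
    }
}

void unlock_both(void) {
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
}
```

Releasing m2 on each failed probe is the crucial step: it lets a thread that takes the locks in the prescribed m1-then-m2 order make progress instead of deadlocking.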
If the attempt fails, thread 2 must release mutex 2 as well, so that thread 1 can proceed to take mutex 1 and then mutex 2.

3.1.7 Nested locking with a singly linked list

Code Example 3-4 takes three locks at once; deadlock is avoided by taking the locks in a prescribed order (a lock hierarchy).

Code Example 3-4 Singly linked list structure

typedef struct node1 {
    int value;
    struct node1 *link;
    mutex_t lock;
} node1_t;

node1_t ListHead;

This example uses a singly linked list structure in which each node contains a mutex. To remove a node from the list, first search the list, starting at ListHead (which itself is never removed), until the desired node is found. To protect the search from the effects of concurrent deletions, each node must be locked before its contents are accessed. Because all searching starts at ListHead and proceeds in list order, no deadlock can occur. When the desired node is found, lock both the node and its predecessor, since both must be changed. Because the predecessor is always locked first, again there is no deadlock. The following C code removes an item from a singly linked list.

Code Example 3-5 Singly linked list with nested locking

node1_t *delete(int value) {
    node1_t *prev, *current;

    prev = &ListHead;
    mutex_lock(&prev->lock);
    while ((current = prev->link) != NULL) {
        mutex_lock(&current->lock);
        if (current->value == value) {
            prev->link = current->link;
            mutex_unlock(&current->lock);
            mutex_unlock(&prev->lock);
            current->link = NULL;
            return (current);
        }
        mutex_unlock(&prev->lock);
        prev = current;
    }
    mutex_unlock(&prev->lock);
    return (NULL);
}

3.1.8 Nested locking with a circular linked list

Code Example 3-6 modifies the previous list structure into a circular list. There is no longer an obvious head; a thread may be associated with one particular node and operate on that node and its neighbor. Lock hierarchies are not easy to apply here because the list is circular.

Code Example 3-6 Circular linked list structure

typedef struct node2 {
    int value;
    struct node2 *link;
    mutex_t lock;
} node2_t;

The following C code locks two nodes and does some processing on them.

Code Example 3-7 Circular linked list with nested locking

void Hit_Neighbor(node2_t *me) {
    while (1) {
        mutex_lock(&me->lock);
        if (mutex_trylock(&me->link->lock) != 0) {
            /* failed to get lock */
            mutex_unlock(&me->lock);
            continue;
        }
        break;
    }
    me->link->value += me->value;
    me->value /= 2;
    mutex_unlock(&me->link->lock);
    mutex_unlock(&me->lock);
}

3.2 Condition variables

Use condition variables to block a thread until a particular condition is true. Condition variables are always used together with mutex locks.

Table 3-2 Condition variable routines
cond_init(3T) - initialize a condition variable
cond_wait(3T) - block on a condition variable
cond_signal(3T) - unblock one thread
cond_timedwait(3T) - block until the condition or a timeout occurs
cond_broadcast(3T) - unblock all threads
cond_destroy(3T) - destroy condition variable state

With a condition variable, a thread can atomically block until a particular condition occurs. The condition is tested under the protection of a mutex lock.
When the condition is false, a thread blocks on the condition variable and atomically releases the mutex that guards the condition while it waits for the condition to change. When another thread changes the condition, it signals the associated condition variable, waking one or more of the waiting threads, which reacquire the mutex and re-evaluate the condition. If the condition variable is placed in memory that is shared and writable by two processes, it can be used to synchronize threads between those processes. Condition variables must be initialized before use. When multiple threads are waiting on a condition variable, there is no defined order in which they are unblocked.

3.2.1 Initializing a condition variable

cond_init(3T)

#include <synch.h> (or #include <thread.h>)

int cond_init(cond_t *cvp, int type, int arg);

Use cond_init() to initialize the condition variable pointed to by cvp. type can be one of the following values (arg is currently ignored):

USYNC_PROCESS - the condition variable can synchronize threads in different processes.
USYNC_THREAD - the condition variable can synchronize threads only within this process.

A condition variable can also be initialized by allocating zeroed memory, in which case USYNC_THREAD is assumed. Multiple threads must not initialize the same condition variable simultaneously, and a condition variable must not be reinitialized while in use.

Return value - cond_init() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EINVAL - invalid argument.
EFAULT - cvp points to an illegal address.

3.2.2 Blocking on a condition variable

cond_wait(3T)

#include <synch.h> (or #include <thread.h>)

int cond_wait(cond_t *cvp, mutex_t *mp);

Use cond_wait() to atomically release the mutex pointed to by mp and cause the calling thread to block on the condition variable pointed to by cvp. The blocked thread can be awakened by cond_signal(), cond_broadcast(), or when interrupted by the delivery of a signal or by fork(). Note that a return from cond_wait() says nothing about the value of the condition associated with the condition variable; the condition must be re-evaluated. Even when it returns an error, cond_wait() always returns with the mutex locked and owned by the calling thread. The function blocks until the condition variable is signaled. It atomically releases the mutex before blocking and atomically reacquires it before returning. In typical use, a condition expression is evaluated under the protection of a mutex lock. When the expression is false, the thread blocks on the condition variable.
When some other thread changes the condition's value, it signals the condition variable. This causes one or more of the threads waiting on the condition variable to unblock and attempt to reacquire the mutex. Because the condition can change again before the awakened thread returns from cond_wait() and reacquires the mutex, the condition that caused the wait must be retested before proceeding. The recommended approach is to write the condition check as a while loop:

mutex_lock();
while (condition_is_false)
    cond_wait();
mutex_unlock();

When multiple threads are blocked on the condition variable, the order in which they are unblocked is undefined.

Return value - cond_wait() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EFAULT - cvp points to an illegal address.
EINTR - the wait was interrupted by a signal or by fork().

3.2.3 Unblocking a specific thread

cond_signal(3T)

#include <synch.h> (or #include <thread.h>)

int cond_signal(cond_t *cvp);

Use cond_signal() to unblock one thread that is blocked on the condition variable pointed to by cvp. Call cond_signal() under the protection of the same mutex used with the condition variable being signaled. Otherwise, the condition variable could be signaled between a thread's test of the condition and its blocking in cond_wait(), which would cause an indefinite wait. If no thread is blocked on the condition variable, cond_signal() has no effect.

Return value - cond_signal() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EFAULT - cvp points to an illegal address.

Code Example 3-8 Using cond_wait(3T) and cond_signal(3T)

mutex_t count_lock;
cond_t count_nonzero;
unsigned int count;

decrement_count() {
    mutex_lock(&count_lock);
    while (count == 0)
        cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    mutex_unlock(&count_lock);
}

increment_count() {
    mutex_lock(&count_lock);
    if (count == 0)
        cond_signal(&count_nonzero);
    count = count + 1;
    mutex_unlock(&count_lock);
}

3.2.4 Blocking until a specified event

cond_timedwait(3T)

#include <synch.h> (or #include <thread.h>)

int cond_timedwait(cond_t *cvp, mutex_t *mp, timestruc_t *abstime);

cond_timedwait() is used like cond_wait(), except that it no longer blocks once the time of day specified by abstime has passed. Even when it returns an error, cond_timedwait() always returns with the mutex locked and owned by the calling thread. The function blocks until the condition variable is signaled or until the time of day specified by abstime has passed. The timeout is specified as an absolute time of day so that the condition can be retested efficiently without recomputing the timeout value, as shown in Code Example 3-9.

Return value - cond_timedwait() returns zero after successful execution; any other value indicates an error. The function fails and returns the corresponding value when the following occurs:

EINVAL - abstime is invalid (for example, its number of nanoseconds is greater than or equal to 1,000,000,000).
EFAULT - cvp points to an illegal address.
EINTR - the wait was interrupted by a signal or by fork().
ETIME - the time specified by abstime has passed.
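Code Example 3-8 maps directly onto portable pthreads: pthread_cond_wait and pthread_cond_signal replace cond_wait() and cond_signal(), and static initializers replace explicit init calls. The consumer helper at the end is added only so the pair can be exercised; it is not part of the guide's example.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t count_nonzero = PTHREAD_COND_INITIALIZER;
static unsigned int count = 0;

void decrement_count(void) {
    pthread_mutex_lock(&count_lock);
    while (count == 0)                        /* retest after every wakeup */
        pthread_cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    pthread_mutex_unlock(&count_lock);
}

void increment_count(void) {
    pthread_mutex_lock(&count_lock);
    if (count == 0)
        pthread_cond_signal(&count_nonzero);  /* wake one waiting decrementer */
    count = count + 1;
    pthread_mutex_unlock(&count_lock);
}

/* Illustration only: a thread that blocks until count becomes nonzero. */
static void *consumer(void *arg) {
    decrement_count();
    return NULL;
}
```

The while loop around the wait is the point the surrounding text stresses: the woken thread must retest the condition, because it can change again before the mutex is reacquired.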
Code Example 3-9 Timed condition wait

timestruc_t to;
mutex_t m;
cond_t c;
...
mutex_lock(&m);
to.tv_sec = time(NULL) + TIMEOUT;
to.tv_nsec = 0;
while (cond == FALSE) {
    err = cond_timedwait(&c, &m, &to);
    if (err == ETIME) {
        /* timeout, do something */
        break;
    }
}
mutex_unlock(&m);

3.2.5 Unblocking all threads

cond_broadcast(3T)

#include <synch.h> (or #include <thread.h>)

int cond_broadcast(cond_t *cvp);

Use cond_broadcast() to unblock all threads that are blocked on the condition variable pointed to by cvp. If no threads are blocked on it, cond_broadcast() has no effect. This function wakes all the threads blocked in cond_wait(). Because all the threads blocked on the condition variable then contend for the mutex at the same time, use this function with care. For example, use cond_broadcast() to let threads contend for variable amounts of a resource, as shown in Code Example 3-10.

Code Example 3-10: condition variable broadcast

    mutex_t rsrc_lock;
    cond_t rsrc_add;
    unsigned int resources;

    get_resources(int amount)
    {
        mutex_lock(&rsrc_lock);
        while (resources < amount) {
            cond_wait(&rsrc_add, &rsrc_lock);
        }
        resources -= amount;
        mutex_unlock(&rsrc_lock);
    }

    add_resources(int amount)
    {
        mutex_lock(&rsrc_lock);
        resources += amount;
        cond_broadcast(&rsrc_add);
        mutex_unlock(&rsrc_lock);
    }

Code Example 3-11: the producer/consumer problem and condition variables

    typedef struct {
        char buf[BSIZE];
        int occupied;
        int nextin;
        int nextout;
        mutex_t mutex;
        cond_t more;
        cond_t less;
    } buffer_t;

    buffer_t buffer;

As shown in Example 3-12, the producer acquires the mutex protecting the buffer data structure and then checks whether there is room to store the item. If there is not, it calls cond_wait(), which joins the queue of threads blocked on the condition variable less — signifying that the buffer is full — and waits to be awakened by a signal on that variable. At the same time, as part of cond_wait(), the thread releases the mutex. The waiting producer relies on a consumer thread to wake it. When the condition variable is signalled, the first thread waiting on less wakes up. Before the thread can return from cond_wait(), however, it must reacquire the mutex; this again guarantees it exclusive access to the buffer. The thread must then check again that the buffer really has room and, if so, store the item in the next available slot. Meanwhile, consumer threads may be waiting for items to appear in the buffer; those threads wait on the condition variable more. A producer thread, after storing an item in the buffer, calls cond_signal() to wake the next waiting consumer. (If no consumer is waiting, the call has no effect.) Finally, the producer releases the mutex, allowing other threads to operate on the buffer.
Code Example 3-12: the producer/consumer problem — producer

    void producer(buffer_t *b, char item)
    {
        mutex_lock(&b->mutex);
        while (b->occupied >= BSIZE)
            cond_wait(&b->less, &b->mutex);
        assert(b->occupied < BSIZE);
        b->buf[b->nextin++] = item;
        b->nextin %= BSIZE;
        b->occupied++;
        /* now: either b->occupied < BSIZE and b->nextin is the index
           of the next empty slot in the buffer, or
           b->occupied == BSIZE and b->nextin is the index of the
           next (occupied) slot that will be emptied by a consumer
           (such as b->nextin == b->nextout) */
        cond_signal(&b->more);
        mutex_unlock(&b->mutex);
    }

Note the use of assert(): unless the code is compiled with NDEBUG defined, assert() does nothing when its argument is true (nonzero) and aborts the program when its argument is false (zero). Such assertions are especially useful in multithreaded programs — they immediately point out runtime failures, and they serve as useful documentation as well. The comment immediately following the assertion might also be expressed as an assertion, but it is too complicated to state as a Boolean expression, so it is written in prose. Both the assertion and the comment are examples of invariants.
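This producer (together with the symmetric consumer of Example 3-13) can be compiled and exercised on any POSIX system. The following transcription is not from the guide: mutex_t and cond_t are replaced by their pthread equivalents, and BSIZE is arbitrarily set to 4; the logic is otherwise unchanged.

```c
#include <assert.h>
#include <pthread.h>

#define BSIZE 4

typedef struct {
    char buf[BSIZE];
    int occupied;
    int nextin;
    int nextout;
    pthread_mutex_t mutex;
    pthread_cond_t more;   /* signalled when the buffer gains an item */
    pthread_cond_t less;   /* signalled when the buffer gains a free slot */
} buffer_t;

buffer_t buffer = {
    .mutex = PTHREAD_MUTEX_INITIALIZER,
    .more = PTHREAD_COND_INITIALIZER,
    .less = PTHREAD_COND_INITIALIZER,
};

void producer(buffer_t *b, char item)
{
    pthread_mutex_lock(&b->mutex);
    while (b->occupied >= BSIZE)
        pthread_cond_wait(&b->less, &b->mutex);
    assert(b->occupied < BSIZE);
    b->buf[b->nextin++] = item;
    b->nextin %= BSIZE;
    b->occupied++;
    pthread_cond_signal(&b->more);
    pthread_mutex_unlock(&b->mutex);
}

char consumer(buffer_t *b)
{
    char item;
    pthread_mutex_lock(&b->mutex);
    while (b->occupied <= 0)
        pthread_cond_wait(&b->more, &b->mutex);
    assert(b->occupied > 0);
    item = b->buf[b->nextout++];
    b->nextout %= BSIZE;
    b->occupied--;
    pthread_cond_signal(&b->less);
    pthread_mutex_unlock(&b->mutex);
    return item;
}
```

Calling producer() twice and then consumer() twice on an otherwise idle buffer returns the items in FIFO order without blocking.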

Assertions and invariant comments both state logical propositions: properties that should hold whenever execution reaches them, except possibly while a thread is in the middle of changing the state that the invariant describes. Invariants are an extremely useful technique. Even when they are not written into the program, think in terms of invariants when you analyze a program. The invariant comment in the producer code holds whenever execution reaches that point. If the comment were moved to follow the mutex_unlock(), it would no longer necessarily hold; if it were moved to just after the assert(), it would still hold. The point is that the invariant states a property of the buffer that is always true, except while some producer or consumer is changing the buffer's state. While a thread operates on the buffer (under protection of the mutex), it may temporarily falsify the invariant; once it finishes, however, the invariant is restored. Example 3-13 shows the consumer code; its flow is symmetrical to the producer's.

Code Example 3-13: the producer/consumer problem — consumer

    char consumer(buffer_t *b)
    {
        char item;

        mutex_lock(&b->mutex);
        while (b->occupied <= 0)
            cond_wait(&b->more, &b->mutex);
        assert(b->occupied > 0);
        item = b->buf[b->nextout++];
        b->nextout %= BSIZE;
        b->occupied--;
        /* now: either b->occupied > 0 and b->nextout is the index
           of the next occupied slot in the buffer, or
           b->occupied == 0 and b->nextout is the index of the next
           (empty) slot that will be filled by a producer
           (such as b->nextout == b->nextin) */
        cond_signal(&b->less);
        mutex_unlock(&b->mutex);
        return (item);
    }

3.3 Read-write locks

Read-write locks allow many threads to read simultaneously, but only one thread at a time to write.
Table 3-3: read-write lock functions

    rwlock_init(3T)     initialize a read-write lock
    rw_rdlock(3T)       acquire a read lock
    rw_tryrdlock(3T)    try to acquire a read lock
    rw_wrlock(3T)       acquire a write lock
    rw_trywrlock(3T)    try to acquire a write lock
    rw_unlock(3T)       unlock a read-write lock
    rwlock_destroy(3T)  destroy read-write lock state

If any thread holds the lock for reading, other threads can also acquire it for reading but must wait to acquire it for writing. If a thread holds the lock for writing, or is waiting to acquire it for writing, other threads must wait to acquire it for either reading or writing. Read-write locks are slower than mutexes, but they can improve performance when the protected data is read frequently and written only rarely. If two processes share writable memory, read-write locks allocated in that memory and initialized for inter-process use at setup time can synchronize threads in both processes. Be sure to initialize a read-write lock before using it.

3.3.1 Initialize a read-write lock: rwlock_init(3T)

#include <synch.h> (or #include <thread.h>)
int rwlock_init(rwlock_t *rwlp, int type, void *arg);

Use rwlock_init() to initialize the read-write lock pointed to by rwlp and set its state to unlocked. type can be one of the following values (arg is currently unused):

USYNC_PROCESS — the read-write lock can synchronize threads in different processes.
USYNC_THREAD — the read-write lock can synchronize threads only within this process.

Multiple threads must not initialize the same read-write lock simultaneously. A read-write lock can also be initialized by allocating zeroed memory, in which case the type USYNC_THREAD is assumed. A read-write lock must not be reinitialized by another thread while it is in use.

Return values — rwlock_init() returns zero on success. Any other value indicates an error. When any of the following conditions occurs, the function fails and returns the corresponding value:
EINVAL — illegal argument.
EFAULT — rwlp or arg points to an illegal address.

3.3.2 Acquire a read lock: rw_rdlock(3T)

#include <synch.h> (or #include <thread.h>)
int rw_rdlock(rwlock_t *rwlp);

Use rw_rdlock() to acquire a read lock on the read-write lock pointed to by rwlp. If the lock is already held for writing, the calling thread blocks until the write lock is released; otherwise the read lock is acquired.

Return values — rw_rdlock() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — rwlp points to an illegal address.

3.3.3 Try to acquire a read lock: rw_tryrdlock(3T)

#include <synch.h> (or #include <thread.h>)
int rw_tryrdlock(rwlock_t *rwlp);

Use rw_tryrdlock() to attempt to acquire a read lock; if the read-write lock is already held for writing, it returns an error instead of blocking. Otherwise the read lock is acquired.

Return values — rw_tryrdlock() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — rwlp points to an illegal address.
EBUSY — the read-write lock pointed to by rwlp is held for writing.

3.3.4 Acquire a write lock: rw_wrlock(3T)

#include <synch.h> (or #include <thread.h>)
int rw_wrlock(rwlock_t *rwlp);

Use rw_wrlock() to acquire a write lock on the read-write lock pointed to by rwlp.
If the read-write lock is already held for reading or writing, the calling thread blocks until all locks are released. Only one thread at a time can hold the write lock.

Return values — rw_wrlock() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — rwlp points to an illegal address.

3.3.5 Try to acquire a write lock: rw_trywrlock(3T)

#include <synch.h> (or #include <thread.h>)
int rw_trywrlock(rwlock_t *rwlp);

Use rw_trywrlock() to attempt to acquire a write lock; if the read-write lock is already held for reading or writing, it returns an error instead of blocking.

Return values — rw_trywrlock() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — rwlp points to an illegal address.
EBUSY — the read-write lock pointed to by rwlp is already held.

3.3.6 Unlock a read-write lock: rw_unlock(3T)

#include <synch.h> (or #include <thread.h>)
int rw_unlock(rwlock_t *rwlp);

Use rw_unlock() to unlock the read-write lock pointed to by rwlp. The calling thread must hold the lock for reading or writing. If any other threads are waiting for the lock, one of them is unblocked.

Return values — rw_unlock() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — rwlp points to an illegal address.

3.3.7 Destroy read-write lock state: rwlock_destroy(3T)

#include <synch.h> (or #include <thread.h>)
int rwlock_destroy(rwlock_t *rwlp);

Use rwlock_destroy() to destroy any state associated with the read-write lock pointed to by rwlp. The space used to store the read-write lock is not freed.

Return values — rwlock_destroy() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — rwlp points to an illegal address.

Example 3-14 uses a bank account to demonstrate read-write locks. The program lets multiple threads read the balance at the same time, but allows only one writer at a time. Note that get_balance() holds the lock across both reads so that the checking and savings balances are retrieved atomically.

Code Example 3-14: read/write bank account

    rwlock_t account_lock;
    float checking_balance = 100.0;
    float savings_balance = 100.0;
    ...
    rwlock_init(&account_lock, 0, NULL);
    ...

    float get_balance()
    {
        float bal;

        rw_rdlock(&account_lock);
        bal = checking_balance + savings_balance;
        rw_unlock(&account_lock);
        return (bal);
    }

    void transfer_checking_to_savings(float amount)
    {
        rw_wrlock(&account_lock);
        checking_balance = checking_balance - amount;
        savings_balance = savings_balance + amount;
        rw_unlock(&account_lock);
    }

3.4 Semaphores

The semaphore is a programming construct designed by E. W. Dijkstra in the late 1960s. Dijkstra's model was the operation of a railroad: a stretch of single track on which only one train is allowed at a time. A semaphore guards the track. A train must wait until the semaphore shows that it may enter. While a train occupies the track, the semaphore changes state to prevent other trains from entering, and a train leaving the section changes the semaphore's state again so that another train may enter. In the computer version, a semaphore is typically a simple integer.
A thread performs a P operation on the semaphore before entering the protected section. Literally, the thread must wait until the semaphore's value is positive, then decrement the value by one just before proceeding. When it has finished (left the track, in the railway analogy), the thread performs a V operation, that is, it increments the semaphore by one. The two operations must be atomic — indivisible — meaning they cannot be split into sub-operations between which operations of other threads could be interleaved and change the semaphore's value. In the P operation, the semaphore's value must be positive just before it is decremented, so the result can never be negative. P and V operations never interfere with each other: if two V operations execute simultaneously, the new value is two greater than the old one. The mnemonic significance of P and V is unimportant — about as important as remembering that Dijkstra was Dutch. But for the scholarly: P stands for prolagen, a made-up word derived from proberen te verlagen, which means "try to decrease".

V stands for verhogen, which means "increase". These terms are discussed in Dijkstra's technical note EWD 74. sema_wait(3T) and sema_post(3T) correspond to Dijkstra's P and V operations, and sema_trywait(3T) is a conditional form of the P operation: when the operation cannot proceed, the calling thread does not block but immediately returns a nonzero value. There are two basic kinds of semaphore: binary semaphores, whose value can only be 0 or 1, and counting semaphores, whose value can be any non-negative number. A binary semaphore is logically equivalent to a mutex. However, although it is not enforced, a mutex should be unlocked only by the thread that locked it, whereas there is no notion of "the thread that holds the semaphore": any thread can perform the V operation (sema_post(3T)). Counting semaphores are roughly as powerful as condition variables used together with mutexes. In many cases, semaphore code is simpler than the equivalent condition-variable code (as in the examples below). On the other hand, when a mutex is used with condition variables there is an implied bracketing, so it is obvious which part of the program is being protected. That is not necessarily true with semaphores, which are sometimes called the "goto" of concurrent programming: powerful, but too easy to use in an unstructured, undisciplined way.

3.4.1 Counting semaphores

Conceptually, a semaphore is a non-negative integer count. Semaphores are typically used to coordinate access to resources, with the semaphore count initialized to the number of available resources. Threads atomically increment the count when resources are added and atomically decrement it when resources are taken. When the semaphore's value reaches zero, no resources are left, and a thread that tries to decrement the semaphore must block until the value becomes positive again.
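The atomicity requirement on P and V can be made concrete by building a counting semaphore from a mutex and a condition variable. This is an illustrative sketch, not the Solaris implementation: the csem_* names are invented, and the code uses POSIX threads.

```c
#include <pthread.h>

typedef struct {
    unsigned int count;
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
} csem_t;

void csem_init(csem_t *s, unsigned int value)
{
    s->count = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* P: wait until the count is positive, then decrement -- the mutex
 * makes the test and the decrement one indivisible step. */
void csem_p(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

/* V: increment atomically and wake one waiter. */
void csem_v(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}
```

Because the test-and-decrement in csem_p() is indivisible, the count can never go negative, exactly as the P operation requires.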
Table 3-4: semaphore functions

    sema_init(3T)     initialize a semaphore
    sema_post(3T)     increment a semaphore
    sema_wait(3T)     block on a semaphore count
    sema_trywait(3T)  decrement a semaphore count without blocking
    sema_destroy(3T)  destroy semaphore state

Because semaphores need not be released by the thread that acquired them, they can be used for asynchronous event notification (for example, from signal handlers). And because a semaphore carries state, it can be used asynchronously, without first acquiring a mutex as a condition variable requires. By default, when multiple threads are waiting on a semaphore, the order in which they are unblocked is undefined. Semaphores must be initialized before use.

3.4.2 Initialize a semaphore: sema_init(3T)

#include <synch.h> (or #include <thread.h>)
int sema_init(sema_t *sp, unsigned int count, int type, void *arg);

Use sema_init() to initialize the semaphore variable pointed to by sp to count. type can be one of the following values (arg is currently unused):

USYNC_PROCESS — the semaphore can synchronize threads in different processes. Only one process needs to initialize the semaphore. arg is ignored.
USYNC_THREAD — the semaphore can synchronize threads only within this process.

Multiple threads must not initialize the same semaphore simultaneously, and a semaphore must not be reinitialized by another thread while it is in use.

Return values — sema_init() returns zero on success. Any other value indicates an error. When any of the following conditions occurs, the function fails and returns the corresponding value:
EINVAL — illegal argument.
EFAULT — sp or arg points to an illegal address.

3.4.3 Increment a semaphore: sema_post(3T)

#include <synch.h> (or #include <thread.h>)
int sema_post(sema_t *sp);

Use sema_post() to atomically increment the semaphore pointed to by sp. If any threads are blocked on the semaphore, one of them is unblocked.

Return values — sema_post() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — sp points to an illegal address.

3.4.4 Block on a semaphore count: sema_wait(3T)

#include <synch.h> (or #include <thread.h>)
int sema_wait(sema_t *sp);

Use sema_wait() to block the calling thread until the semaphore pointed to by sp becomes greater than zero, then atomically decrement it.

Return values — sema_wait() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — sp points to an illegal address.
EINTR — the wait was interrupted by a signal or by fork().

3.4.5 Decrement a semaphore count: sema_trywait(3T)

#include <synch.h> (or #include <thread.h>)
int sema_trywait(sema_t *sp);

Use sema_trywait() to atomically decrement the semaphore pointed to by sp when its value is greater than zero. It is a non-blocking version of sema_wait().

Return values — sema_trywait() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — sp points to an illegal address.
EBUSY — the semaphore pointed to by sp has the value zero.

3.4.6 Destroy semaphore state: sema_destroy(3T)

#include <synch.h> (or #include <thread.h>)
int sema_destroy(sema_t *sp);

Use sema_destroy() to destroy any state associated with the semaphore pointed to by sp. The space used to store the semaphore is not freed.

Return values — sema_destroy() returns zero on success. Any other value indicates an error:
EINVAL — illegal argument.
EFAULT — sp points to an illegal address.
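For comparison, POSIX semaphores expose the same non-blocking operation as sem_trywait(), which reports an unavailable semaphore through errno (EAGAIN) rather than through an EBUSY return value as Solaris's sema_trywait() does. A hedged sketch of the pattern — try_take is an invented helper name:

```c
#include <errno.h>
#include <semaphore.h>

/* Try to take one unit without blocking.
 * Returns 1 on success, 0 if the semaphore count was already zero. */
int try_take(sem_t *sp)
{
    if (sem_trywait(sp) == -1 && errno == EAGAIN)
        return 0;
    return 1;
}
```

A semaphore initialized to 1 yields one successful take, then fails until someone posts.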
3.4.7 The producer/consumer problem, using semaphores

The data structure in Example 3-15 is similar to the one used for the condition-variable solution; two semaphores represent the number of full and empty buffer slots. The producer blocks when there are no empty slots, and the consumer blocks when there are no full slots.

Code Example 3-15: the producer/consumer problem with semaphores

    typedef struct {
        char buf[BSIZE];
        sema_t occupied;
        sema_t empty;
        int nextin;
        int nextout;
        sema_t pmut;
        sema_t cmut;
    } buffer_t;

    buffer_t buffer;

    sema_init(&buffer.occupied, 0, USYNC_THREAD, 0);
    sema_init(&buffer.empty, BSIZE, USYNC_THREAD, 0);
    sema_init(&buffer.pmut, 1, USYNC_THREAD, 0);
    sema_init(&buffer.cmut, 1, USYNC_THREAD, 0);
    buffer.nextin = buffer.nextout = 0;

The additional pair of binary semaphores serves the same purpose as mutexes: they control access to the buffer when there are multiple producers and more than one empty slot, or multiple consumers and more than one full slot. Mutexes would work just as well here, but the point of the example is to demonstrate semaphores.

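Examples 3-16 and 3-17 below use the Solaris sema_* calls; the same structure can be written with POSIX semaphores and tested anywhere. This transcription is not from the guide (the sbuffer_* names are invented to avoid clashing with the guide's code); the slot accounting is identical.

```c
#include <semaphore.h>

#define BSIZE 4

typedef struct {
    char buf[BSIZE];
    sem_t occupied;  /* counts filled slots */
    sem_t empty;     /* counts free slots   */
    int nextin;
    int nextout;
    sem_t pmut;      /* binary: serializes producers */
    sem_t cmut;      /* binary: serializes consumers */
} sbuffer_t;

void sbuffer_init(sbuffer_t *b)
{
    sem_init(&b->occupied, 0, 0);
    sem_init(&b->empty, 0, BSIZE);
    sem_init(&b->pmut, 0, 1);
    sem_init(&b->cmut, 0, 1);
    b->nextin = b->nextout = 0;
}

void sproducer(sbuffer_t *b, char item)
{
    sem_wait(&b->empty);     /* wait for a free slot   */
    sem_wait(&b->pmut);      /* exclude other producers */
    b->buf[b->nextin++] = item;
    b->nextin %= BSIZE;
    sem_post(&b->pmut);
    sem_post(&b->occupied);  /* announce a filled slot */
}

char sconsumer(sbuffer_t *b)
{
    char item;
    sem_wait(&b->occupied);  /* wait for a filled slot  */
    sem_wait(&b->cmut);      /* exclude other consumers */
    item = b->buf[b->nextout++];
    b->nextout %= BSIZE;
    sem_post(&b->cmut);
    sem_post(&b->empty);     /* announce a free slot */
    return item;
}
```

A single-threaded sequence of produces and consumes shows the FIFO ordering without any blocking.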
Code Example 3-16: the producer/consumer problem — producer

    void producer(buffer_t *b, char item)
    {
        sema_wait(&b->empty);
        sema_wait(&b->pmut);
        b->buf[b->nextin] = item;
        b->nextin++;
        b->nextin %= BSIZE;
        sema_post(&b->pmut);
        sema_post(&b->occupied);
    }

Code Example 3-17: the producer/consumer problem — consumer

    char consumer(buffer_t *b)
    {
        char item;

        sema_wait(&b->occupied);
        sema_wait(&b->cmut);
        item = b->buf[b->nextout];
        b->nextout++;
        b->nextout %= BSIZE;
        sema_post(&b->cmut);
        sema_post(&b->empty);
        return (item);
    }

3.5 Synchronization across process boundaries

Each of the synchronization primitives can be used across process boundaries. Just be sure the synchronization variable is initialized in a shared memory segment and with the USYNC_PROCESS type. After that, the variable is used exactly as a USYNC_THREAD variable is:

    mutex_init(&m, USYNC_PROCESS, 0);
    rwlock_init(&rw, USYNC_PROCESS, 0);
    cond_init(&cv, USYNC_PROCESS, 0);
    sema_init(&s, count, USYNC_PROCESS, 0);

Example 3-18 shows the producer/consumer problem with the producer and consumer in separate processes. The main routine maps zero-filled memory, which it shares with its child process, into its address space. Note that mutex_init() and cond_init() are called with type USYNC_PROCESS. The child process runs the consumer; the parent process runs the producer. The example also shows driver routines: the producer driver producer_driver() simply reads characters from stdin and calls producer(), and the consumer driver consumer_driver() gets characters by calling consumer() and writes them to stdout.

Code Example 3-18: the producer/consumer problem, using USYNC_PROCESS

    main()
    {
        int zfd;
        buffer_t *buffer;

        zfd = open("/dev/zero", O_RDWR);
        buffer = (buffer_t *)mmap(NULL, sizeof(buffer_t),
            PROT_READ | PROT_WRITE, MAP_SHARED, zfd, 0);
        buffer->occupied = buffer->nextin = buffer->nextout = 0;
        mutex_init(&buffer->lock, USYNC_PROCESS, 0);
        cond_init(&buffer->less, USYNC_PROCESS, 0);
        cond_init(&buffer->more, USYNC_PROCESS, 0);
        if (fork() == 0)
            consumer_driver(buffer);
        else
            producer_driver(buffer);
    }

    void producer_driver(buffer_t *b)
    {
        int item;

        while (1) {
            item = getchar();
            if (item == EOF) {
                producer(b, '\0');
                break;
            } else
                producer(b, (char)item);
        }
    }

    void consumer_driver(buffer_t *b)
    {
        char item;

        while (1) {
            if ((item = consumer(b)) == '\0')
                break;
            putchar(item);
        }
    }

A child process is created to run the consumer; the parent process runs the producer.

3.6 Comparing the synchronization primitives

The mutex is the most basic synchronization primitive in Solaris, and it is the most efficient in both memory use and execution time. Its basic use is to serialize access to a resource. Next in efficiency is the condition variable, whose basic use is to block until a change of state: a thread must acquire the associated mutex before blocking on the condition variable, and must release the mutex after returning from cond_wait() and changing the state of the variable. The semaphore uses more memory than the condition variable. Because a semaphore carries state rather than controlling access directly, it is easier to use in some situations. Unlike a lock, a semaphore has no owner: any thread can post to (increment) a semaphore on which other threads are blocked. The read-write lock is the most complex of Solaris's synchronization mechanisms, which means it is not as well tuned as the other primitives. Read-write locks are best used where read operations greatly outnumber writes.
Solaris 2.4 Multithreaded Programming Guide, 4 — Operating System Programming [McCartney (Coolcat), March 8, 2003]

4. Operating System Programming

This chapter discusses how multithreaded programs interact with the operating system and what changes the operating system has made to support multithreading, including changes to processes, and alarms and interval timers.

4.1 Processes — changes for multithreading

4.1.1 Duplicating parent threads: fork(2)

With the fork(2) and fork1(2) functions you can choose between duplicating all of the parent's threads into the child, or duplicating only the calling thread into the child. fork() duplicates the address space and all of the threads (and LWPs) into the child process.

Reprinted; please credit the original source: https://www.9cbs.com/read-120962.html
