Original author: David A. Rusling
Translation: Banyan & FIFA (2001-04-27 13:56:30)
Chapter 5 Interprocess Communication Mechanisms

Processes communicate with each other, and with the kernel, to coordinate their activities. Linux supports a number of interprocess communication (IPC) mechanisms. Besides signals and pipes, Linux also supports the System V IPC mechanisms, named after the UNIX release in which they first appeared.

5.1 Signals

Signals are one of the oldest interprocess communication methods used by UNIX systems. They are used to signal asynchronous events to one or more processes. A signal may be generated by a keyboard interrupt, or by an error condition such as the process attempting to access a non-existent location in its virtual memory. Signals are also used by shells to send job-control commands to their child processes.

There is a set of signals defined in the system that the kernel, or other processes with the appropriate privileges, can generate. You can list all of the signals defined on your system with the kill command (kill -l); on my Intel-based system it produces:

 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGIOT       7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     17) SIGCHLD
18) SIGCONT     19) SIGSTOP     20) SIGTSTP     21) SIGTTIN
22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO
30) SIGPWR

When I run this command on an Alpha AXP system the numbers are different. A process can ignore most of these signals, with two exceptions: SIGSTOP, which causes a process to halt its execution, and SIGKILL, which causes it to exit, cannot be ignored. For the other signals a process can choose how it wants them handled. A process can block a signal and, if it does not block it, it can either handle the signal itself or let the kernel handle it. If the kernel handles the signal, it performs the default action for that signal. For example, the default action when a process receives SIGFPE (floating point exception) is to core dump and then exit.

Signals have no inherent relative priority. If two signals are generated for a process at the same time, they may be presented to the process, and handled, in any order. There is also no mechanism for handling multiple occurrences of the same signal: a process cannot tell whether it received one SIGCONT signal or 42 of them.

Linux implements signals using information stored in each process's task_struct. The number of supported signals is limited by the word size of the processor: processes on a 32-bit processor can have 32 signals, while 64-bit processors such as the Alpha AXP can have up to 64. The currently pending signals are kept in the signal field, with the mask of blocked signals held in blocked. With the exception of SIGSTOP and SIGKILL, all signals can be blocked; if a blocked signal is generated, it remains pending until it is unblocked. Linux also keeps information about how each process handles every possible signal; this is held in an array of sigaction structures in each process's task_struct. Each entry either contains the address of a routine the process wants to handle the signal with, or flags indicating that the process wishes to ignore the signal or let the kernel handle it. A process modifies its default signal handling through system calls that change the sigaction entry for a signal as well as the blocked mask.

Not every process in the system can send signals to every other process; only the kernel and superusers can do that.
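As a user-space illustration of the per-process handling just described, the short sketch below (my own example, not part of the original text) installs a handler for one signal, ignores another and temporarily blocks a third with sigprocmask(); the particular signals chosen are arbitrary.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_usr1 = 0;

static void on_usr1(int sig)
{
    (void)sig;
    got_usr1 = 1;                        /* only async-signal-safe work in a handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_usr1;             /* our own handler instead of the default   */
    sigemptyset(&sa.sa_mask);            /* extra signals blocked while it runs      */
    sigaction(SIGUSR1, &sa, NULL);

    signal(SIGWINCH, SIG_IGN);           /* explicitly ignore a signal               */

    sigset_t block;                      /* temporarily block SIGTERM                */
    sigemptyset(&block);
    sigaddset(&block, SIGTERM);
    sigprocmask(SIG_BLOCK, &block, NULL);

    printf("send SIGUSR1 to pid %d\n", (int)getpid());
    while (!got_usr1)
        pause();                         /* sleep until a signal is delivered        */

    sigprocmask(SIG_UNBLOCK, &block, NULL);  /* a pending SIGTERM would arrive now   */
    return 0;
}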
Normal processes can only send signals to processes with the same uid and gid, or to processes in the same process group. A signal is generated by setting the appropriate bit in the task_struct's signal field. If the process has not blocked the signal and it is waiting in an interruptible state, it is woken up by changing its state to Running and making sure that it is in the run queue. That way the scheduler will consider it a candidate for running the next time the system schedules.
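To make the bit-setting description concrete, here is a tiny illustrative C program (a toy model of my own, not kernel source) that represents pending and blocked signals as bits in a word and tests whether anything is deliverable:

#include <stdio.h>

typedef unsigned long sigword_t;          /* one bit per signal, as in task_struct */

#define SIGBIT(n)  (1UL << ((n) - 1))     /* signals are numbered from 1 */

struct toy_task {
    sigword_t signal;                     /* pending signals */
    sigword_t blocked;                    /* blocked signals */
};

static void generate_signal(struct toy_task *t, int sig)
{
    t->signal |= SIGBIT(sig);             /* "sending" a signal just sets a bit */
}

static int deliverable(const struct toy_task *t)
{
    return (t->signal & ~t->blocked) != 0; /* any pending, unblocked signal? */
}

int main(void)
{
    struct toy_task t = { 0, SIGBIT(2) }; /* signal 2 (SIGINT) is blocked */

    generate_signal(&t, 2);               /* SIGINT: pending but blocked      */
    printf("deliverable after SIGINT: %d\n", deliverable(&t));  /* prints 0 */

    generate_signal(&t, 1);               /* SIGHUP: pending and not blocked  */
    printf("deliverable after SIGHUP: %d\n", deliverable(&t));  /* prints 1 */
    return 0;
}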
If the process is using the default handling for a signal, Linux can optimize its treatment. For example, if SIGWINCH (the X window changed focus) is generated and the default handler is being used, there is nothing to do.

Signals are not presented to the process as soon as they are generated; they must wait until the process runs again. Every time a process exits from a system call, its signal and blocked fields are checked and, if there are any unblocked pending signals, they can now be delivered. This might seem an unreliable method, but every process in the system is making system calls all the time, for example to write a character to the terminal. A process can also choose to wait for signals; it is then suspended in an interruptible state until a signal is presented.

The signal processing code looks at the sigaction structure for each of the current unblocked signals. If a signal's handler is set to the default action, the kernel handles it. The default handler for SIGSTOP changes the current process's state to Stopped and then runs the scheduler to select a new process to run. The default action for SIGFPE core dumps the process and then causes it to exit. Alternatively, the process may have specified its own signal handler: a routine that is called whenever the signal is generated and whose address is held in the sigaction structure. The kernel must call the process's signal handling routine; exactly how this happens is processor specific, but all CPUs must cope with the same problem: the current process is running in kernel mode and is about to return, in user mode, to the code that made the kernel or system call. The problem is solved by manipulating the process's stack and registers: the program counter is set to the address of the signal handling routine and the routine's parameters are added to the call frame or passed in registers. When the process resumes execution, it appears as if the signal handling routine had been called like a normal function.

Linux is POSIX compatible, so a process can specify which signals are blocked while a particular signal handling routine is being called. This means the blocked mask can be changed during the call of the handler, and it must be restored to its original value when the handler finishes. Linux therefore adds a call to a tidy-up routine, which restores the original blocked mask, onto the call stack of the signalled process. Where several signal handling routines need to be called, Linux stacks them so that, as each one exits, the next is called, until the tidy-up routine finally runs.

5.2 Pipes

Common Linux shells all allow redirection. For example,

$ ls | pr | lpr

pipes the output of the ls command, listing the current directory, into the standard input of the pr program, whose output in turn becomes the standard input of lpr. A pipe is a unidirectional byte stream that connects the standard output of one process to the standard input of another. Neither process is aware of the redirection, and each behaves just as it normally would; it is the shell that sets up these temporary pipes between the processes.

Figure 5.1: Pipes

In Linux, a pipe is implemented using two file data structures that both point to the same temporary VFS inode, which itself points to a physical page of memory.
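The following user-space sketch (an illustration of my own, not from the original text) shows roughly what a shell does to build a two-command version of the pipeline above, using pipe(), fork() and dup2(); error handling is omitted for brevity:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                          /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                 /* first child: the writer ("ls")      */
        dup2(fd[1], STDOUT_FILENO);    /* stdout now goes into the pipe       */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }

    if (fork() == 0) {                 /* second child: the reader ("wc -l")  */
        dup2(fd[0], STDIN_FILENO);     /* stdin now comes from the pipe       */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }

    close(fd[0]);                      /* the parent keeps no pipe ends open, */
    close(fd[1]);                      /* so the reader will see end-of-file  */
    while (wait(NULL) > 0)
        ;
    return 0;
}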
Figure 5.1 shows that each file data structure contains pointers to a different set of file operation routines: one for writing to the pipe, the other for reading from it. This hides the difference from the generic system calls that read and write ordinary files. As the writing process writes to the pipe, bytes are copied into the shared data page and, as the reading process reads, bytes are copied out of it. Linux must synchronize access to the pipe: it has to keep the readers and writers of the pipe in step, and it uses locks, wait queues and signals to do so.

When the writer wants to write to the pipe, it uses the standard write library functions. These pass a file descriptor that indexes into the process's set of file data structures, each one representing an open file or, in this case, the open pipe. The Linux write system call uses the write routine pointed at by the file data structure describing this pipe, and that write routine uses information held in the VFS inode representing the pipe to manage the write request.
If there is enough room to write all of the bytes into the pipe, and so long as the pipe is not locked by its reader, Linux locks it for the writer and copies the bytes to be written from the process's address space into the shared data page. If the pipe is locked by the reader, or if there is not enough room for the data, the current process is made to sleep on the pipe inode's wait queue and the scheduler is run so that another process can execute. The sleeping writer is interruptible and will be woken by the reader when there is enough room for the data or when the pipe is unlocked. When the data has been written, the pipe's VFS inode is unlocked and any reading processes sleeping on the inode's wait queue are woken up.

Reading data from the pipe is a very similar process. Processes are allowed to do non-blocking reads (depending on the mode in which they opened the file or pipe); in that case, if there is no data to be read or if the pipe is locked, an error is returned and the process can continue running. The blocking mode makes the reading process sleep on the pipe inode's wait queue until the writing process has finished. When both processes have finished with the pipe, the pipe inode and the shared data page are discarded.

Linux also supports named pipes, also known as FIFOs because pipes work on a first-in, first-out principle: the first data written into the pipe is the first data read out of it. Unlike ordinary pipes, FIFOs are not temporary objects; they are entities in the file system and can be created with the mkfifo command. Processes are free to use a FIFO so long as they have the appropriate access rights. Opening a FIFO is slightly different from opening an ordinary pipe. An ordinary pipe (its two file data structures, its VFS inode and the shared data page) is created in one go, whereas a FIFO already exists and is simply opened and closed by its users. Linux must handle readers opening the FIFO before writers have opened it, as well as readers reading before any writer has written to it. That aside, FIFOs are handled almost exactly like ordinary pipes, and they use the same data structures and operations.
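A small user-space sketch of using a named pipe follows (my own example; the path /tmp/demo_fifo is arbitrary). Run it once as a reader and once with the argument "writer":

#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = "/tmp/demo_fifo";
    const char msg[] = "hello over the FIFO\n";

    mkfifo(path, 0666);                       /* fails harmlessly if it already exists */

    if (argc > 1 && strcmp(argv[1], "writer") == 0) {
        int fd = open(path, O_WRONLY);        /* blocks until a reader has opened it   */
        write(fd, msg, sizeof msg - 1);
        close(fd);
    } else {
        char buf[64];
        int fd = open(path, O_RDONLY);        /* blocks until a writer has opened it   */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd);
    }
    return 0;
}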
5.3 Sockets

5.3.1 System V IPC Mechanisms

Linux supports three types of interprocess communication mechanisms that first appeared in UNIX System V (1983): message queues, semaphores and shared memory. These System V IPC mechanisms all share common authentication methods: processes may access these resources only by passing a unique reference identifier to the kernel via system calls, and access to the objects is checked using access permissions, much as file accesses are checked. The reference identifier of a System V IPC object is used as an index into a table of resources, although it is not a straightforward index and requires some manipulation to generate. Every Linux data structure representing a System V IPC object contains an ipc_perm structure, which holds the owning and creating process's user and group identifiers, the access mode for the object (owner, group and other) and the IPC object's key. The key is used as a way of locating the object's reference identifier. Two sets of keys are supported: public and private. If the key is public, any process in the system, subject to rights checking, can find the reference identifier of the System V IPC object. System V IPC objects can never be referenced with a key, only with their reference identifier.

5.3.2 Message Queues

Message queues allow one or more processes to write messages that will be read by one or more reading processes. Linux maintains a list of message queues, the msgque vector; each element of it points to a msqid_ds structure that fully describes a message queue. When a new message queue is created, a msqid_ds structure is allocated from system memory and inserted into the vector.

Figure 5.2: System V IPC Message Queues

Each msqid_ds structure contains an ipc_perm structure and pointers to the messages that have been entered onto this queue. In addition, Linux keeps queue modification times, such as the last time the queue was written to. The msqid_ds also contains two wait queues: one used by writers to the queue, the other by the queue's readers.
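From user space these queues are reached through the System V message queue calls; the sketch below (illustrative only, with an arbitrary key) creates a queue, writes one typed message and reads it back:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct demo_msg {
    long mtype;                                  /* must be > 0; used to select messages */
    char mtext[64];
};

int main(void)
{
    key_t key = 0x1234;                          /* an arbitrary public key              */
    int qid = msgget(key, IPC_CREAT | 0666);     /* create, or look up, the queue        */

    struct demo_msg out;
    out.mtype = 1;
    strcpy(out.mtext, "hello via a System V message queue");
    msgsnd(qid, &out, sizeof out.mtext, 0);      /* copy into the queue; may block       */

    struct demo_msg in;
    msgrcv(qid, &in, sizeof in.mtext, 1, 0);     /* read the first message of type 1     */
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);                 /* remove the queue again               */
    return 0;
}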
Each time a process attempts to write a message to the queue, its effective user and group identifiers are compared against the mode held in the queue's ipc_perm structure. If the write is allowed, the message is copied from the process's address space into a msg data structure and put at the end of the message queue. Each message carries an application-specific type agreed between the cooperating processes. Because Linux restricts the number and length of messages that may be written, there may be no room for the message in the queue. In that case the writing process is added to the message queue's write wait queue and the scheduler is called to select a new process to run. The writer is woken when one or more messages have been read from the queue.

Reading from the queue is a similar process. Again, the process's access rights to the queue are checked. A reading process may choose either to take the first message in the queue, regardless of its type, or to select messages of a particular type. If no message satisfies the request, the reading process is added to the message queue's read wait queue and the scheduler is run. When a new message is written to the queue, the process is woken and continues.

5.3.3 Semaphores

In its simplest form, a semaphore is a location in memory whose value can be tested and set (test-and-set) by more than one process. As far as each process is concerned the test-and-set operation is uninterruptible, or atomic: once started, nothing can stop it. The result of the test-and-set operation is the addition of the operation value to the current value of the semaphore, which may be positive or negative. Depending on the outcome, one process may have to sleep until the semaphore's value is changed by another process. Semaphores can be used to implement critical regions: areas of code that only one process at a time may execute.

Say you had several cooperating processes reading from and writing to a single data file. You might want access to the file to be strictly ordered. You could use a semaphore with an initial value of 1 and, around the file-operating code, place two semaphore operations: the first tests and decrements the semaphore's value, the second tests and increments it. The first process to access the file tries to decrement the semaphore's value and succeeds, the value becoming 0. That process can now use the data file; but if another process now tries to decrement the semaphore's value, the result would be -1 and the attempt fails. That process is suspended until the first process has finished with the data file and increments the semaphore back to 1. The waiting process is then woken, and this time its semaphore operation succeeds.

Figure 5.3: System V IPC Semaphores

Each System V IPC semaphore object describes an array of semaphores, which Linux represents with a semid_ds structure. All the semid_ds structures in the system are pointed at by the semary vector of pointers. Each semaphore array contains sem_nsems semaphores, each described by a sem structure pointed at by sem_base. Processes that are allowed to manipulate the semaphore array of a System V IPC semaphore object may do so through system calls. A system call can specify many operations, each of which has three inputs: the semaphore index, the operation value and a set of flags. The semaphore index is an index into the semaphore array, and the operation value is a numeric value to be added to the semaphore's current value. First, Linux checks whether all of the operations would succeed.
An operation will succeed if the operation value added to the semaphore's current value would be greater than zero, or if both the operation value and the semaphore's current value are zero. If any of the semaphore operations would fail, Linux may suspend the process, but only if the operation flags have not requested a non-blocking system call. If the process is to be suspended, Linux must save the state of the semaphore operations to be performed and put the current process onto a wait queue. It does this by building a sem_queue structure on the stack and filling in its fields. The new sem_queue structure is placed at the end of this semaphore object's wait queue (using the sem_pending and sem_pending_last pointers), the current process is put on the wait queue held in the sem_queue structure, and the scheduler is called to choose another process to run.

If all of the semaphore operations would succeed and the current process does not need to be suspended, Linux goes ahead and applies the operations to the appropriate members of the semaphore array. Linux must then check whether any waiting, suspended processes can now apply their own semaphore operations.
Linux looks at each member of the pending queue (sem_pending) in turn, testing whether its semaphore operations can now succeed. If they can, it removes that sem_queue structure from the pending list and applies the semaphore operations to the semaphore array. It also wakes up the sleeping process, making it a candidate to run the next time the scheduler runs. Linux repeats this scan of the pending list from the start until a pass is made in which no more semaphore operations can be applied and no more processes can be woken.

There is a problem with semaphores: deadlocks. These occur when a process has altered a semaphore's value on entering a critical region but then fails to leave it, because it crashed or was killed. Linux protects against this by maintaining lists of adjustments to the semaphore arrays. The idea is that, when these adjustments are applied, the semaphores are returned to the state they were in before the process's operations were applied. These adjustments are held in sem_undo structures queued both on the semid_ds structure and on the task_struct of the processes using the semaphore array. Each individual semaphore operation may request that such an adjustment be maintained.

Linux maintains at most one sem_undo structure per process for each semaphore array. If a process requesting a semaphore operation does not yet have one, Linux creates it when it is needed. The new sem_undo structure is queued both onto the process's task_struct and onto the semaphore array's semid_ds structure. As operations are applied to the semaphores in the array, the negation of each operation value is added to that semaphore's entry in the adjustment array of the process's sem_undo structure. So, if the operation value is 2, then -2 is added to the adjustment entry for that semaphore.

When a process is deleted, for example when it exits, Linux works through its set of sem_undo structures and applies the adjustments to the semaphore arrays. If a semaphore set is deleted while sem_undo structures for it are still queued on a process's task_struct, those sem_undo structures are marked invalid, and the semaphore clean-up code simply discards them.
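The user-space view of the mutual-exclusion pattern described in this section, as a hedged sketch of my own (key and values are arbitrary); the SEM_UNDO flag asks the kernel to keep exactly the per-process adjustment discussed above:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; }; /* caller-defined on Linux */

static void sem_change(int semid, int delta)
{
    struct sembuf op;
    op.sem_num = 0;                              /* index into the semaphore array      */
    op.sem_op  = delta;                          /* value added to the semaphore        */
    op.sem_flg = SEM_UNDO;                       /* keep a sem_undo adjustment for us   */
    semop(semid, &op, 1);                        /* may block until it can succeed      */
}

int main(void)
{
    int semid = semget(0x5678, 1, IPC_CREAT | 0666);  /* a set with one semaphore       */

    union semun arg;
    arg.val = 1;                                 /* initial value 1: resource is free   */
    semctl(semid, 0, SETVAL, arg);

    sem_change(semid, -1);                       /* test and decrement: enter region    */
    printf("inside the critical region\n");
    sem_change(semid, +1);                       /* test and increment: leave it again  */

    semctl(semid, 0, IPC_RMID);                  /* tidy the semaphore set away         */
    return 0;
}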
5.3.4 Shared Memory

Shared memory allows one or more processes to communicate through memory that appears in all of their virtual address spaces. The pages of this virtual memory appear in the page tables of each sharing process, but they need not be at the same virtual address in every process. As with the other System V IPC objects, access to shared memory areas is controlled by keys and access rights checking. Once the memory is being shared, however, there are no checks on how the processes use it; they must rely on other mechanisms, for example System V semaphores, to synchronize access to the memory.

Figure 5.4: System V IPC Shared Memory

Each newly created shared memory area is represented by a shmid_ds data structure; these are kept in the shm_segs vector. The shmid_ds structure describes how large the area of shared memory is, how many processes are using it, and how the shared memory is mapped into their address spaces. The creator of the shared memory controls its access permissions and whether its key is public or private. If it has sufficient rights it may also lock the shared memory into physical memory.

Each process that wishes to share the memory must attach itself to the virtual memory via a system call, which creates a new vm_area_struct describing the shared memory for that process. The process can choose where in its virtual address space the shared memory goes, or it can let Linux select a free area large enough. The new vm_area_struct is put into the list of vm_area_structs pointed at by the shmid_ds; the vm_next_shared and vm_prev_shared pointers link them together. Attaching does not actually create the physical pages; they are created when a process first accesses them.

The first time a process accesses one of the pages of shared virtual memory, a page fault occurs. When Linux fixes up that page fault, it finds the vm_area_struct describing the memory, which contains pointers to the handler routines for this type of shared virtual memory. The shared memory page fault handling code looks in the list of page table entries held for this shmid_ds to see whether an entry already exists for this page of shared virtual memory. If none exists, it allocates a physical page and creates a page table entry for it; that entry is saved in the shmid_ds structure as well as being entered into the current process's page tables.
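Finally, a hedged user-space sketch of attaching to and using a System V shared memory segment (key and size are arbitrary examples of mine); real code would synchronize access, for example with a System V semaphore as noted above:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* One page of shared memory under an arbitrary key. */
    int shmid = shmget(0x9abc, 4096, IPC_CREAT | 0666);

    char *mem = shmat(shmid, NULL, 0);           /* let Linux pick the virtual address  */
    if (mem == (char *)-1)
        return 1;

    strcpy(mem, "visible to every process that attaches this segment");
    printf("%s\n", mem);

    shmdt(mem);                                  /* detach from our address space       */
    shmctl(shmid, IPC_RMID, NULL);               /* mark the segment for deletion       */
    return 0;
}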