Linux Kernel Core Chinese Manual (5): Interprocess Communication Mechanisms

xiaoxiao, 2021-03-06

Interprocess communication mechanisms allow processes to communicate with each other and with the kernel, and to coordinate their activities. Linux supports a number of interprocess communication (IPC) mechanisms. Signals and pipes are two of them; Linux also supports the System V IPC mechanisms, named after the Unix release in which they first appeared.

5.1 Signals

Signals are one of the oldest interprocess communication methods used by Unix systems. They are used to signal asynchronous events to one or more processes. A signal can be generated by a keyboard interrupt or by an error condition, such as a process attempting to access a nonexistent location in its virtual memory. Shells also use signals to send job control commands to their child processes.

Some signals are generated by the kernel; others can be generated by suitably privileged processes in the system. You can list your system's set of signals using the kill command (kill -l); on my Intel Linux system the output is:

 1) SIGHUP     2) SIGINT     3) SIGQUIT    4) SIGILL
 5) SIGTRAP    6) SIGIOT     7) SIGBUS     8) SIGFPE
 9) SIGKILL   10) SIGUSR1   11) SIGSEGV   12) SIGUSR2
13) SIGPIPE   14) SIGALRM   15) SIGTERM   17) SIGCHLD
18) SIGCONT   19) SIGSTOP   20) SIGTSTP   21) SIGTTIN
22) SIGTTOU   23) SIGURG    24) SIGXCPU   25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF   28) SIGWINCH  29) SIGIO
30) SIGPWR

The numbers are different on an Alpha AXP Linux system. A process can choose to ignore most of the signals that are generated, with two exceptions: SIGSTOP (which causes a process to halt its execution) and SIGKILL cannot be ignored, although a process can choose how it handles the other signals. A process can block a signal; if it does not block it, it can either handle the signal itself or let the kernel handle it. If the kernel handles it, the signal's default behaviour is performed. For example, the default action when a process receives SIGFPE (floating point exception) is to dump core and exit. Signals have no inherent relative priority: if two signals are generated for a process at the same time, they may be presented to the process, and handled, in any order.
In addition, there is no mechanism for handling multiple signals of the same type: a process cannot tell whether it received 1 or 42 SIGCONT signals. Linux implements signals using information stored in each process's task_struct. The number of supported signals is limited to the word size of the processor: 32-bit processors can have 32 signals, while 64-bit processors such as the Alpha AXP can have up to 64. Pending signals are kept in the signal field, and the blocked field holds a mask of the signals to be blocked. With the exception of SIGSTOP and SIGKILL, all signals can be blocked. If a blocked signal is generated, it remains pending until the block is removed. Linux also keeps information about how each process handles every possible signal. This is held in an array of sigaction data structures, pointed at from each process's task_struct. Each entry either contains the address of the routine that handles the signal, or contains a flag telling Linux that the process wishes to ignore this signal or to let the kernel handle it. The process modifies the default signal handling by making system calls, which change the sigaction entry for the appropriate signal as well as the blocked mask.

Not every process in the system can send signals to every other process; only the kernel and superusers can. Normal processes can only send signals to processes with the same uid and gid, or to processes in the same process group. A signal is generated by setting the appropriate bit in the task_struct's signal field. If the process has not blocked the signal and is waiting but interruptible (in state Interruptible), then it is woken up: its state is changed to Running and it is confirmed to be on the run queue. That way the scheduler will consider it a candidate for running when the system next schedules. If default handling is required, Linux can optimize the handling of the signal. For example, if the signal SIGWINCH (the X window changed focus) occurs and the default handler is in use, there is nothing to be done.

Signals are not presented to the process immediately after they are generated; they must wait until the process next runs. Every time a process exits from a system call, its signal and blocked fields are checked and, if there are any unblocked signals, they can now be delivered. This might seem a very unreliable method, but every process in the system is making system calls all the time, for example to write a character to the terminal. Processes can elect to wait for signals if they wish: they are suspended in state Interruptible until a signal is presented. The Linux signal processing code looks at the sigaction structure for each currently unblocked signal. If a signal's handler is set to the default action, then the kernel will handle it. The default handling of the SIGSTOP signal is to change the current process's state to Stopped and then run the scheduler to select a new process to run. The default action of the SIGFPE signal is to make the current process produce a core dump and exit. Alternatively, the process may have specified its own signal handler: a routine that is called whenever the signal is generated, whose address is held in the sigaction structure.
It is Linux's job to call the process's signal handling routine, and exactly how this happens is processor specific. What all CPUs must cope with, however, is that the process is currently running in kernel mode and is about to return to user mode, to the routine that made the kernel or system call. The problem is solved by manipulating the process's stack and registers: the process's program counter is set to the address of its signal handler, and the routine's parameters are added to the call frame or passed in registers. When the process resumes, it appears as if the signal handler had been called normally.

Linux is POSIX compatible, so a process can specify which signals are blocked when a particular signal handler is called. This means changing the blocked mask during the call of the process's signal handler; the blocked mask must be restored to its original value when the signal handler finishes. Therefore Linux adds a call to a tidy-up routine onto the stack of the signalled process, which restores the blocked mask to its original value. Linux also optimizes the case where several signal handling routines need to be called: they are stacked so that each time one handler exits, the next is called, until the tidy-up routine is finally invoked.

5.2 Pipes

The common Linux shells all allow redirection. For example:

$ ls | pr | lpr

pipes the output of the ls command, listing the directory's files, into the standard input of the pr command, which paginates it. Finally the standard output of the pr command is piped into the standard input of the lpr command, which prints the result on the default printer. Pipes, then, are unidirectional byte streams that connect the standard output of one process to the standard input of another process. Neither process is aware of this redirection; each behaves just as it normally would. It is the shell that sets up these temporary pipes between the processes.

In Linux, a pipe is implemented using two file data structures that both point at the same temporary VFS inode, which itself points at a physical page in memory. Figure 5.1 shows that each file data structure contains a pointer to a different vector table of file operation routines: one for writing to the pipe, the other for reading from it. This hides the underlying differences from the generic system calls that read and write ordinary files. As the writing process writes to the pipe, bytes are copied into the shared data page, and as the reading process reads, bytes are copied out of the shared data page. Linux must synchronize access to the pipe: it must make sure that the reader and the writer of the pipe are in step, and to do this it uses locks, wait queues and signals. See include/linux/inode_fs.h.

When the writer wants to write to the pipe, it uses the standard write library functions. These all pass file descriptors that are indices into the process's set of file data structures, each one representing an open file or, as in this case, an open pipe. The Linux write system call uses the write routine pointed at by the file data structure describing this pipe. That write routine uses information held in the pipe's VFS inode to manage the write request. If there is enough room to write all of the bytes into the pipe, and so long as the pipe is not locked by its reader, Linux locks it for the writer and copies the bytes from the process's address space into the shared data page. If the pipe is locked by the reader, or if there is not enough room for the data, the current process is made to sleep on the pipe inode's wait queue, and the scheduler is called so that another process can run. The sleeping process is interruptible, so it can receive signals, and it will be woken by the reader when there is enough room for the write data or when the lock is released. When the data has been written, the pipe's VFS inode is unlocked, and any reading processes sleeping on the inode's wait queue are woken up.
See fs/pipe.c, pipe_write(). Reading data from the pipe is a very similar process. Processes are allowed to do non-blocking reads (depending on the mode in which they opened the file or pipe), in which case, if there is no data to be read or if the pipe is locked, an error is returned. This means that the process can continue to run. The alternative is to wait on the pipe inode's wait queue until the writing process has finished. When both processes have finished with the pipe, the pipe inode and the corresponding shared data page are discarded. See fs/pipe.c, pipe_read().

Linux also supports named pipes, also known as FIFOs because pipes operate on a first in, first out principle: the first data written into the pipe is the first data read from it. Unlike ordinary pipes, FIFOs are not temporary objects; they are entities in the file system and can be created with the mkfifo command. Processes are free to use a FIFO so long as they have appropriate access rights to it. The way FIFOs are opened is slightly different from pipes. A pipe (its two file data structures, its VFS inode and its shared data page) is created in one go, whereas a FIFO already exists and is opened and closed by its users. Linux must handle readers opening the FIFO before any writer opens it, as well as readers reading before any writer has written data. That aside, FIFOs are handled almost exactly the same way as pipes, and they use the same data structures and operations.

Sockets

Note: to be added together with the chapter on networking.

System V IPC Mechanisms

Linux supports three types of interprocess communication mechanism that first appeared in Unix System V (1983): message queues, semaphores and shared memory. These System V IPC mechanisms all share common authentication methods. Processes may access these resources only by passing a unique reference identifier to the kernel via system calls. Access to System V IPC objects is checked using access permissions, much like accesses to files are checked. The access rights to a System V IPC object are set by the creator of the object. Each mechanism uses the object's reference identifier as an index into a table of resources. It is not a straight index; some manipulation is required to generate it.

All of the Linux data structures representing System V IPC objects include an ipc_perm data structure, which contains the user and group identifiers of the creating process, the access mode for this object (owner, group and other), and the IPC object's key. The key is used as a way of locating the System V IPC object's reference identifier. Two kinds of keys are supported: public and private. If the key is public, then any process in the system, subject to rights checking, can find the reference identifier of the corresponding System V IPC object. System V IPC objects can never be referenced with a key directly; they must be referenced by their reference identifiers. See include/linux/ipc.h.

Message Queues

Message queues allow one or more processes to write messages, which will be read by one or more reading processes. Linux maintains the msgque vector, a list of message queues. Each element points to a msqid_ds data structure that fully describes the message queue. When a message queue is created, a new msqid_ds data structure is allocated from system memory and inserted into the vector. Each msqid_ds data structure contains an ipc_perm data structure and pointers to the messages on this queue.
In addition, Linux keeps queue modification times, such as the last time this queue was written to. The msqid_ds also contains two wait queues: one for writers to the message queue and one for readers. See include/linux/msg.h.

Each time a process attempts to write a message to the queue, its effective user and group identifiers are compared with the mode in this queue's ipc_perm data structure. If the process may write to this queue, the message is copied from the process's address space into a msg data structure and placed at the end of the message queue. Each message carries an application-specific type, agreed between the cooperating processes. However, because Linux restricts the number and length of messages that can be written, there may be no room for the message. In this case the process is placed on the message queue's write wait queue, and the scheduler is called to select a new process to run. It will be woken when one or more messages have been read from this message queue.

Reading from the queue is a similar process. Again the process's access rights are checked. A reading process may choose either to get the first message in the queue regardless of its type, or to select only messages of a particular type. If no message matches, the reading process is added to the message queue's read wait queue and the scheduler is run. When a new message is written to the queue, this process is woken up and continues running.

Semaphores

In its simplest form, a semaphore is a location in memory whose value can be tested and set by more than one process. The test and set operation is, so far as each process is concerned, uninterruptible, or atomic: once started, nothing can stop it.

The result of the test and set operation is the sum of the semaphore's current value and the set value, which can be positive or negative. Depending on the result of the test and set operation, one process may have to sleep until the semaphore's value is changed by another process. Semaphores can be used to implement critical regions: areas of critical code that only one process at a time should be executing.

Say you had many cooperating processes reading from and writing to a single data file. You would want access to the file to be strictly coordinated. You could use a semaphore with an initial value of 1 and, around the file-operating code, put two semaphore operations: the first to test and decrement the semaphore's value, the second to test and increment it. The first process to access the file tries to decrement the semaphore's value and succeeds, the value becoming 0. This process can now go ahead and use the data file. But if another process now tries to decrement the semaphore's value, it will fail, because the result would be -1. That process is suspended until the first process has finished with the data file. When the first process finishes with the data file, it increments the semaphore's value back to 1. Now the waiting process can be woken, and this time its attempt to decrement the semaphore will succeed.

Each System V IPC semaphore object describes an array of semaphores, which Linux represents with the semid_ds data structure. All of the semid_ds data structures in the system are pointed at by the semary vector of pointers. Each semaphore array contains sem_nsems semaphores, each described by a sem data structure pointed at by sem_base. All of the processes that are allowed to manipulate a System V IPC semaphore object's array may do so via system calls.
The system call can specify many operations, each described by three inputs: the semaphore index, the operation value and a set of flags. The semaphore index is an index into the semaphore array, and the operation value is a numerical value to be added to the semaphore's current value. First Linux tests whether all of the operations would succeed. An operation succeeds if the operation value added to the semaphore's current value would be greater than zero, or if both the operation value and the semaphore's current value are zero. If any of the semaphore operations would fail, Linux will suspend the process, so long as the operation flags have not requested that the system call be non-blocking. If the process is to be suspended, Linux must save the state of the semaphore operations to be performed and put the current process onto a wait queue. It does this by building a sem_queue data structure on the stack and filling it out. The new sem_queue data structure is placed at the end of this semaphore object's wait queue (using the sem_pending and sem_pending_last pointers). The current process is put on the wait queue in that sem_queue data structure (the sleeper field), and the scheduler is called to run another process. See include/linux/sem.h.

If all of the semaphore operations would succeed, the current process need not be suspended, and Linux goes ahead and applies the operations to the appropriate members of the semaphore array. Now Linux must check whether any sleeping, suspended processes can apply their operations. It looks at each member of the operations pending queue (sem_pending) in turn, testing whether its semaphore operations would succeed this time. If they would, it removes the sem_queue data structure from the pending list and applies the operations to the semaphore array. It wakes up the sleeping process, making it available to run the next time the scheduler runs. Linux keeps working through the pending list from the start until no more semaphore operations can be applied and no more processes can be woken.
There is a problem with semaphore operations: deadlocks.

These occur when one process alters the semaphore's value as it enters a critical region, but then fails to leave the critical region because it crashed or was killed. Linux protects against this by maintaining lists of adjustments to the semaphore arrays: the idea is that when these adjustments are applied, the semaphores are put back to the state they were in before the process's semaphore operations were applied. The adjustments are kept in sem_undo data structures, queued both on the semid_ds data structure and on the task_struct data structure of the processes using these semaphore arrays.

Each individual semaphore operation may request that an adjustment be maintained. Linux maintains at most one sem_undo data structure per process for each semaphore array; if the requesting process does not have one, it is created when needed. The new sem_undo data structure is queued both onto the process's task_struct data structure and onto the semaphore array's semid_ds data structure. As operations are applied to the semaphores in the array, the negation of the operation value is added to this semaphore's entry in the adjustment array of the process's sem_undo data structure. So, if the operation value is 2, then -2 is added to the adjustment entry for this semaphore.

When a process is deleted, for example when it exits, Linux works through its set of sem_undo data structures and applies the adjustments to the semaphore arrays. If a semaphore set is deleted, its sem_undo data structures remain queued on the process's task_struct, but the corresponding semaphore array identifier is made invalid. In that case the semaphore clean-up code simply discards the sem_undo data structure.

Shared Memory

Shared memory allows one or more processes to communicate via memory that appears in all of their virtual address spaces simultaneously. The pages of that virtual memory are referenced by page table entries in each of the sharing processes' page tables.
It does not, however, have to be at the same address in every process's virtual memory. As with all System V IPC objects, access to shared memory areas is controlled via keys and access rights checking. Once the memory is being shared, there are no checks on how the processes use it; they must rely on other mechanisms, for example System V semaphores, to synchronize access to the memory.

Each newly created shared memory area is represented by a shmid_ds data structure. These are kept in the shm_segs vector. The shmid_ds data structure describes how big the area of shared memory is, how many processes are using it, and how it is mapped into their address spaces. It is the creator of the shared memory that controls the access permissions to that memory and whether its key is public or private. If it has sufficient rights, it may also lock the shared memory into physical memory. See include/linux/sem.h.

Each process that wishes to share the memory must attach itself to that virtual memory via a system call. This creates a new vm_area_struct data structure describing the shared memory for this process. The process can choose where in its virtual address space the shared memory goes, or it can let Linux select a free area large enough. The new vm_area_struct structure is put into the list of vm_area_structs pointed at by the shmid_ds; they are linked together via the vm_next_shared and vm_prev_shared pointers. The virtual memory is not actually created at attach time; that happens when the first process attempts to access it.

The first time a process accesses one of the pages of shared virtual memory, a page fault occurs. When Linux fixes up that page fault, it finds the vm_area_struct data structure describing the memory, which contains pointers to the handler routines for this type of shared virtual memory. The shared memory page fault handling code looks in this shmid_ds's list of page table entries to see if one exists for this page of the shared virtual memory.

Reprinted from: https://www.9cbs.com/read-51946.html
