The Linux Kernel -- 5. Linux Processes


Original author: David A. Rusling. Translation: Banyan & Fifa (2001-04-27 13:55:46)

Chapter 4: Process Management

This chapter describes how the Linux kernel creates, manages, and deletes the processes in the system. Processes carry out tasks within the operating system. A program is a static entity stored on disk, made up of executable machine instructions and data; a process is that program in action, a dynamic entity that changes constantly as it executes. Like the program, a process contains the program counter and the values of all the CPU registers, together with stacks holding temporary data such as routine parameters, return addresses, and saved variables. The currently executing program, or process, therefore includes all of the current activity in the processor. Linux is a multiprocessing operating system. Processes are separate tasks, each with its own rights and responsibilities: if one process crashes, it should not bring down the rest of the system. Each process runs in its own virtual address space and can interact with other processes only through secure, kernel-managed communication mechanisms.

During its lifetime a process uses many system resources. It uses the system's CPUs to execute instructions and holds its instructions and data in physical memory. It opens and uses files within the file systems and may use the system's physical devices directly or indirectly. Linux must keep track of every process and every resource in the system so that it can allocate resources fairly between processes; it would be unfair if one process monopolized most of the physical memory or the CPU time. The most precious resource in the system is the CPU, and usually there is only one. Linux is a multiprocessing operating system whose goal is to have a process running on every CPU in the system at all times, maximizing CPU utilization. If there are more processes than CPUs, some processes must wait until a CPU becomes free before they can run.
The idea behind multiprocessing is simple: when a process needs a system resource, it stops executing and waits until that resource becomes available. In a single-process system such as DOS, the CPU would simply sit idle during the wait and that time would be wasted. In a multiprocessing system many processes exist at once, so when one process begins to wait, the operating system takes the CPU away from it and hands it to another process that deserves to run. The scheduler is responsible for choosing the most appropriate process to run next, and Linux uses a number of scheduling strategies to ensure that the CPU is allocated fairly. Linux supports several different executable file formats, such as ELF and Java; since these processes must share the system's libraries, their management must be transparent to the formats involved.

4.1 Linux Processes

So that Linux can manage the processes in the system, each process is represented by a task_struct data structure (the terms task and process are used interchangeably in Linux). The task array contains pointers to every task_struct in the system. This means that the maximum number of processes in the system is limited by the size of the task array; the default is 512. As new processes are created, a new task_struct is allocated from system memory and added into the task array. The task_struct of the currently running process is pointed at by the current pointer. Linux also supports real-time processes. These processes must react very quickly to external events (hence the term "real-time"), and the scheduler treats them differently from the other processes in the system. Although the task_struct data structure is large and complex, its fields can be divided into a number of functional areas:

State. As a process executes it changes state according to its circumstances. Linux processes have the following states: Running -- the process is either running (it is the current process of the system) or it is ready to run (it is waiting to be assigned to one of the system's CPUs).
Waiting -- the process is waiting for an event or a resource. Linux distinguishes two kinds of waiting process: interruptible and uninterruptible. An interruptible waiting process can be woken by a signal; an uninterruptible waiting process is waiting directly on a hardware condition and cannot be interrupted under any circumstances. Stopped -- the process has been stopped, usually by receiving a signal. A process that is being debugged can be in a stopped state.

Zombie -- the process has terminated but, for some reason, its task_struct is still held in the task array. It is, as the name suggests, a dead process.

Scheduling Information. The scheduler needs this information in order to decide fairly which process in the system most deserves to run.

Identifiers. Every process in the system has a process identifier. The process identifier is not an index into the task array; it is simply a number. Each process also has user and group identifiers, which are used to control the process's access to the files and devices in the system.

Inter-Process Communication. Linux supports the classic Unix IPC mechanisms of signals, pipes, and semaphores, as well as the System V IPC mechanisms of shared memory, semaphores, and message queues. The IPC mechanisms supported by Linux are described in the IPC chapter.

Links. In a Linux system no process is independent of any other. Every process in the system, except the initial process, has a parent process. New processes are not created; they are copied, or rather cloned, from previous processes. The task_struct of each process contains pointers to its parent process, to its sibling processes (processes with the same parent), and to its own child processes. You can see the family relationships between the running processes in a Linux system using the pstree command:

    init(1)-+-crond(98)
            |-emacs(387)
            |-gpm(146)
            |-inetd(110)
            |-kerneld(18)
            |-kflushd(2)
            |-klogd(87)
            |-kswapd(3)
            |-login(160)---bash(192)---emacs(225)
            |-lpd(121)
            |-mingetty(161)
            |-mingetty(163)
            |-mingetty(164)
            |-login(403)---bash(404)---pstree(594)
            |-sendmail(134)
            |-syslogd(78)
            `-update(166)

Additionally, all of the processes in the system are held in a doubly linked list whose root is the init process's task_struct. This list allows the Linux kernel to look at every process in the system; it needs to do this to provide support for commands such as ps and kill.
Times and Timers. The kernel keeps track of a process's creation time as well as the CPU time that it consumes during its lifetime. On each clock tick the kernel updates the amount of time, in jiffies, that the current process has spent in system and in user mode. Linux also supports process-specific interval timers: a process can use system calls to set up timers that send signals to it when they expire. These timers can be single-shot or periodic.

File System. Processes can open and close files as they wish, and the process's task_struct contains pointers to descriptors for each open file as well as pointers to two VFS inodes. Each VFS inode uniquely describes a file or directory within a file system and also provides a uniform interface to the underlying file systems; how file systems are supported under Linux is described in the FileSystem chapter. The first of these two inodes is the process's root directory, and the second is its current or pwd directory; the name pwd comes from the Unix command pwd, which prints the current working directory. These two VFS inodes contain a count field that is incremented when several processes reference them. This is why you cannot delete the directory that a process has as its current directory, nor one of its subdirectories that is in use.

Virtual Memory. Most processes have some virtual memory (kernel threads and daemons do not), and the Linux kernel must track how that virtual memory is mapped onto the system's physical memory.

Processor Specific Context. A process can be thought of as the sum total of the system's current state. Whenever a process is running it is using the processor's registers, stacks, and so on. When a process is suspended, all of this CPU-specific context must be saved in its task_struct; when the scheduler restarts the process, its context is restored from there.

4.2 Identifiers

Linux, like all Unix systems, uses user and group identifiers to check access rights to the files and executable images in the system. All of the files in a Linux system have an owner and associated permissions, which describe what the system's users may do with the file or directory. The basic permissions are read, write, and execute, and they are assigned to three classes of user: the owner of the file, processes belonging to the same group as the file, and all other processes in the system. Each class of user can have different permissions; for example, a file could allow its owner to read and write it, the file's group to read it, and all other processes in the system no access at all. Linux uses groups to grant access to files and directories to a group of users rather than to a single user or to all processes in the system. You might, for example, create a group for all of the users in a software project and arrange that only they can read and write the project's source code. A process can belong to several groups (32 is the maximum), and these are held in the groups array in the process's task_struct. So long as one of the groups that a process belongs to has access to a file, the process has the corresponding group access rights to that file.

A process's task_struct holds four pairs of process and group identifiers: uid and gid -- the user identifier and group identifier of the user that the process is running on behalf of.
Effective uid and gid. Some programs change the uid and gid of the executing process to those of the program itself (held in the VFS inode attributes describing the executable image). These programs are known as setuid programs, and they are useful because they restrict access to certain services, particularly those run on behalf of someone else, such as network daemons. The effective uid and gid are those of the setuid program while it is executing, and the kernel checks them whenever the process tries to access privileged data or code.

File system uid and gid. These are similar to the effective uid and gid but are used when checking file system access rights. They are needed for NFS servers that run in user mode: the NFS file system uses these identifiers when accessing files. In this case only the file system uid and gid are changed (not the effective uid and gid), which prevents malicious users from being able to send kill signals to the NFS server.

Saved uid and gid. These are mandated by the POSIX standard and are used by programs that change the process's uid and gid via system calls. They hold the real uid and gid from before the change was made.

4.3 Scheduling

All processes run partially in user mode and partially in system mode. How these modes are supported differs between the underlying hardware, but in every case there is a secure mechanism for switching between user mode and system mode. User mode has far fewer privileges than system mode. A process enters system mode by making a system call, and from that point on the kernel executes on its behalf. In Linux, processes do not preempt the currently running process; they cannot stop it from running so that they can run instead. Each process decides to relinquish the CPU when it has to wait for some system event; for example, a process may need to wait for a character to be read from a file.
This waiting usually happens during a system call, while the process is in system mode; the waiting process is suspended and another, more deserving, process is chosen by the scheduler to run. Processes are frequently making system calls and so may often need to wait.

Even so, a process that ran until it had to wait might use an unfair share of CPU time, so Linux uses preemptive scheduling. In this scheme each process is allowed to run for a short amount of time, 200 ms; when this time, called the process's time slice, has elapsed, another process is selected to run and the original process must wait a little while before it can run again. The scheduler must select the most deserving of all the runnable processes -- a runnable process being one that is waiting only for a CPU. Linux uses a reasonably simple priority-based scheduling algorithm to choose between them. When a new process has been selected, the system must save the state of the current process -- the processor's registers and other context -- into its task_struct, then restore the state of the new process and give it control of the system. For the scheduler to allocate CPU time fairly between the runnable processes in the system, it keeps scheduling information in each process's task_struct:

policy -- the scheduling policy applied to the process. There are two types of Linux process: normal and real-time. Real-time processes have a higher priority than all other processes; if a real-time process is ready to run, it will always run first. Real-time processes may have two kinds of policy: round robin and first in, first out. Under round-robin scheduling, each runnable real-time process runs in turn for a time slice; under first-in-first-out scheduling, runnable real-time processes run in the order in which they appear in the run queue, and that order is never changed.

priority -- the priority that the scheduler gives the process. It is also the amount of time, in jiffies, that the process is allowed to run for. A process's priority can be changed via the renice system call.

rt_priority -- Linux supports real-time processes, and these have a higher priority than all of the non-real-time processes.
The scheduler uses this field to give each real-time process a relative priority; a real-time process's priority can be altered using system calls. counter -- the amount of time, in jiffies, that the process is allowed to run for. It is set to the process's priority when the process is first run and is decremented, tick by tick, as the process runs.

The kernel runs the scheduler from several places: after putting the current process onto a wait queue, and at the end of a system call, just before the process returns to user mode from system mode. The scheduler is also driven when the system clock sets the current process's counter to zero. Each time the scheduler runs, it does the following:

Kernel work. The scheduler runs the bottom half handlers and processes the scheduler's task queue. These lightweight kernel threads are described in the Kernel chapter.

Current process. The current process must be dealt with before another process is selected to run. If the current process's scheduling policy is round robin, it is put back onto the run queue. If the task is interruptible and it has received a signal since it was last scheduled, its state becomes Running. If the current process has timed out, its state becomes Running. If the current process's state is Running, it remains in that state. Processes that are neither Running nor Interruptible are removed from the run queue; this means that such processes will not be considered when the scheduler looks for the most deserving process to run.

Process selection. The scheduler looks through the processes on the run queue for the most deserving one to run. If there are any real-time processes (those with real-time scheduling policies), they are weighted more heavily than ordinary processes. The weight of a normal process is its counter value; the weight of a real-time process is its counter plus 1000.
This means that if there is a runnable real-time process in the system, it will always run before any normal process. The currently running process is at some disadvantage because it has used up part of its time slice (its counter has become smaller), so a process of the same priority whose counter is larger will be chosen instead; the process at the front of the run queue begins to execute, and the current process is put back onto the run queue. In a balanced system with many processes of the same priority, each process runs in turn; this is known as the round robin policy.

However, because processes frequently need to wait for resources, their order of execution tends to keep changing.

Swap processes. If the process selected to run is not the current process, the current process must be suspended and the new one started. When a process runs it uses the CPU's registers and physical memory; each time it calls a routine it passes arguments in registers and stacks saved values such as the return address. So when the scheduler runs, it runs in the context of the current process; it may be in a privileged, kernel mode, but it is still the currently running process. When that process comes to be suspended, all of its machine state, including the program counter (PC) and all of the processor's registers, must be saved in the process's task_struct, and all of the machine state of the new process must be loaded. This operation is system dependent: different CPUs do it in different ways, and it usually needs some hardware assistance. The swapping of process context takes place at the end of the scheduler's run. The context saved for the previous process is a snapshot of the hardware state at that moment, including the process's program counter and register contents, and loading the new process's context restores the equivalent snapshot. If the previous process or the new current process uses virtual memory, the system's page table entries may need to be updated; exactly how depends on the architecture. If the processor uses a translation look-aside buffer or caches page table entries (as the Alpha AXP does), the cached entries belonging to the previously running process must be flushed.

4.3.1 Scheduling in Multiprocessor Systems

Systems with multiple CPUs are reasonably rare in the Linux world, but a lot of work has gone into making Linux run on SMP (symmetric multiprocessing) machines. Linux can balance the scheduling load reasonably evenly between the CPUs in the system, and that load balancing is nowhere more apparent than in the scheduler.
In a multiprocessor system, the hope is that every processor is always busy running a process. Each processor runs its own scheduler whenever its current process exhausts its time slice or has to wait for a system resource. One interesting point about SMP systems is that there is more than one idle process. In a single-processor system the idle process is the first task in the task array; in an SMP system there is one idle process per CPU, and likewise every CPU has its own current process, so SMP systems must keep track of the idle process and the current process for each processor. In an SMP system, each process's task_struct contains the number of the processor it is currently running on and the number of the processor it last ran on. There is no reason why a process should not run on a different CPU each time it is scheduled, but Linux can use the processor_mask field to restrict a process to one or more processors: if bit N is set, the process can run on processor N. When the scheduler is choosing a new process to run, it will not consider one that does not have the current processor's bit set in its processor_mask. The scheduler also gives a slight advantage to a process that last ran on the current processor, because moving a process to a different processor usually costs some performance.

4.4 Files

Figure 4.1: The files used by a process

Figure 4.1 shows the two data structures that describe file system specific information for each process in the system. The first, fs_struct, contains pointers to this process's VFS inodes and its umask. The umask is the default mode in which new files will be created, and it can be changed via a system call. The second data structure, files_struct, contains information about all of the files that this process is currently using. Programs read from standard input and write to standard output; any error messages should go to standard error.
Some of these files may be real files, others may be input/output terminals or physical devices, but programs treat them all as files. Every file has its own descriptor, and files_struct can hold pointers to up to 256 file data structures, each one describing a file being used by this process. The f_mode field describes the mode in which the file was opened: read-only, read-write, or write-only. f_pos holds the position in the file where the next read or write operation will occur. f_inode points to the VFS inode describing the file, and f_ops points to a vector of routines for operating on this file.

This abstract interface is very powerful and allows Linux to support a wide variety of file types; pipes, for example, are implemented using this mechanism, as we shall see later. Whenever a file is opened, one of the free file pointers in files_struct is used to point to the new file data structure. Linux processes expect three file descriptors to be open when they start: standard input, standard output, and standard error, and they usually inherit them from the parent process. These descriptors index the process's fd array, so standard input, standard output, and standard error correspond to file descriptors 0, 1, and 2 respectively. Every access to a file goes through the file operation routines in its file data structure.

4.5 Virtual Memory

A process's virtual memory contains executable code and data from several sources. First there is the program image that is loaded, for example the ls command; ls, like all executable images, is composed of both executable code and data, and the image file contains all of the information needed to load that code and its associated data into the virtual memory of the process. Secondly, a process can allocate virtual memory during its execution, perhaps to hold the contents of a file that it is reading; this newly allocated virtual memory must be linked into the process's existing virtual memory. Thirdly, Linux processes call routines from libraries of commonly useful code, such as file handling routines. It makes no sense for each process to have its own copy of a library, so Linux uses shared libraries that several running processes can use at the same time; the code and data of a shared library must be linked into the process's virtual address space as well as into the address spaces of the other processes sharing the library. In any given period, a process will not use all of the code and data contained within its virtual memory.
It might contain code that is only used in certain situations, such as during initialization or when handling a particular event, and it may use only some of the routines from its shared libraries. Loading all of this rarely used code and data into physical memory would be very wasteful, and if the system's resources were wasted like this, system efficiency would suffer greatly. Instead, Linux uses a technique called demand paging, in which the virtual memory of a process is brought into physical memory only when the process attempts to use it. Rather than loading the code and data into physical memory directly, the kernel marks the entries in the process's page tables as present in virtual memory but not actually in physical memory. When the process attempts to access that code or data, the system hardware generates a page fault and hands control to the Linux kernel to fix things up. To handle the fault, the kernel must know about every area of virtual memory in the process's address space: where each area starts and ends, and how to bring its contents into memory.

Figure 4.2: A process's virtual memory

The Linux kernel needs to manage all of these areas of virtual memory, and the contents of each process's virtual memory are described by vm_area_struct data structures reachable from its task_struct. The process's mm_struct data structure also contains information about the loaded executable image and a pointer to the process's page tables. It holds a pointer to a list of vm_area_struct structures, each representing an area of virtual memory within the process; the list is sorted in ascending virtual memory order. Figure 4.2 shows the virtual memory of a simple process together with the kernel data structures that manage it. Because those areas of virtual memory have different origins, Linux abstracts the interface by having each vm_area_struct point to a set of virtual memory handling routines.
With this abstraction, all of a process's virtual memory can be handled in a uniform way, no matter how the underlying memory-management services differ; for example, when the process attempts to access memory that is not present, the system need only call the area's page fault handling routine. The Linux kernel searches the process's collection of vm_area_struct data structures repeatedly, both when creating a new area of virtual memory for the process and when fixing up a reference to a virtual address that is not in physical memory, so the time taken to find the correct vm_area_struct directly affects system performance. To speed up this search, Linux also arranges the vm_area_struct data structures into an AVL (Adelson-Velskii and Landis) tree. In this arrangement, each vm_area_struct has a left pointer and a right pointer to neighbouring vm_area_struct structures.

The left pointer points to a node with a lower starting virtual address, and the right pointer to a node with a higher starting virtual address. To find the correct node, Linux starts at the root of the tree and follows the pointers until it reaches the right vm_area_struct. Of course, nothing is free: inserting or removing a vm_area_struct from this tree does consume additional processing time.

When a process requests the allocation of virtual memory, Linux does not actually reserve physical memory for it. Instead, it describes the virtual memory by creating a new vm_area_struct, which is linked into the process's list of virtual memory areas. When the process attempts to write to the newly allocated virtual memory, the system generates a page fault: the processor tries to resolve the virtual address but, finding no page table entry for it, gives up and raises a page fault exception, leaving the Linux kernel to handle it. The kernel checks whether the faulting virtual address is within the current process's virtual address space; if it is, Linux creates the appropriate page table entries and allocates a physical page of memory for the process. The code or data for that page may need to be read in from the file system or from swap space. The process can then restart at the instruction that caused the page fault; since the physical memory now exists, it continues without generating any further page fault exceptions for that page.

4.6 Creating a Process

When the system starts up, it is running in kernel mode and there is, in a sense, only one process: the initial process. Like all processes, the initial process has machine state represented by stacks, registers, and so on. This information will be saved in the initial process's task_struct when other processes in the system are created and one of them is selected to run. At the end of system initialization, the initial process starts up a kernel thread (called init) and then sits in an idle loop doing nothing. Whenever there is nothing else to do, the scheduler runs this idle process.
The idle process is the only process whose task_struct is not dynamically allocated; it is statically defined when the kernel is built and, rather confusingly, is called init_task. The init kernel thread (or process) has a process identifier of 1, as it is the system's first real process. It performs some initial setting up of the system (such as opening the system console and mounting the root file system) and then executes the system initialization program: /etc/init, /bin/init, or /sbin/init, depending on the system. The init program uses /etc/inittab as a script file to create new processes within the system, and these new processes may themselves go on to create new processes; for example, the getty process may create a login process when a user attempts to log in. All of the processes in the system are descended from the init kernel thread.

New processes are created by cloning old processes, or rather by cloning the current process. A new task is created by a system call (fork or clone), and the cloning happens within the kernel, in kernel mode. At the end of the system call there is a new process, waiting for the scheduler to select it to run. A new task_struct data structure is allocated from the system's physical memory, together with one or more physical pages for the cloned process's stacks (user and kernel). A new process identifier is then created, unique within the set of process identifiers in the system. (It is, however, also reasonable for the cloned process to keep its parent's process identifier.) The new task_struct is entered into the task array, and the contents of the old (current) process's task_struct are copied into it.

When cloning processes, Linux allows the two processes to share resources rather than have two separate copies. This applies to the process's files, signal handlers, and virtual memory. When resources are to be shared, their respective count fields are incremented, so that Linux will not deallocate these resources until both processes have finished using them.
For example, if the cloned process is to share virtual memory, its task_struct will contain a pointer to the mm_struct of the original process, and that mm_struct has its count field incremented to show the number of current processes sharing it. The technique used for cloning a process's virtual memory is rather clever. A new set of vm_area_struct data structures must be generated, together with their owning mm_struct and the cloned process's page tables; none of the process's virtual memory is copied at this point.

Copying it would be a difficult and lengthy task, because some of that virtual memory may be in physical memory while some may be in the swap file. Instead, Linux uses a technique called "copy on write": the virtual memory is copied only when one of the two processes tries to write to it. Any virtual memory that is not written to, whether or not it can be written to, may be shared between the two processes without any harm; read-only memory, such as executable code, is always shared. For the "copy on write" scheme to work, the page table entries of the writable areas are marked as read-only, and the vm_area_struct data structures describing them are marked as "copy on write". When one of the processes attempts to write to this virtual memory, a page fault occurs; at that point Linux makes a copy of the memory and fixes up the page tables and virtual memory data structures of the two processes.

4.7 Times and Timers

The kernel keeps track of a process's creation time as well as the CPU time that it consumes during its lifetime. On each clock tick the kernel updates the amount of time, in jiffies, that the current process has spent in system and in user mode. In addition to these accounting timers, Linux supports several process-specific interval timers. A process can use these timers to send itself various signals each time they expire. Three sorts of interval timer are supported:

Real -- this timer ticks in real time, and when it expires the process is sent a SIGALRM signal.

Virtual -- this timer ticks only when the process is running, and when it expires it sends a SIGVTALRM signal.

Profile -- this timer ticks both when the process is running and when the kernel is executing on the process's behalf. SIGPROF is signalled when it expires.

One, some, or all of these interval timers may be running, and Linux keeps all of the necessary information in the process's task_struct data structure. System calls can be used to set up these interval timers, to start and stop them, and to read their current values.
The virtual and profile timers are handled the same way: on every clock tick the current process's interval timers are decremented and, when they have expired, the appropriate signal is sent. Real-time interval timers are a little different, and they use mechanisms that are discussed in detail in the Kernel chapter. Each process has its own timer_list data structure which, when the real interval timer is running, is queued on the system timer list. When the timer expires, the bottom half handler removes it from the queue and calls the interval timer handler. This handler sends the SIGALRM signal to the process and, if the timer is periodic, restarts it by putting it back onto the system timer queue.

4.8 Executing Programs

In Linux, as in Unix, programs and commands are normally executed by a command interpreter. The command interpreter is a user process like any other, and it is called a shell. There are many shells in Linux; some of the most popular are sh, bash, and tcsh. With the exception of a few built-in commands, such as cd and pwd, a command is an executable binary file. For each command entered, the shell searches the directories listed in the process's PATH environment variable for an executable image with a matching name. If one is found, it is loaded and executed. The shell clones itself using the fork mechanism described above, and the child process then replaces its contents, the shell binary, with the contents of the executable image just found. Normally the shell waits for the command to complete, that is, for the child process to exit. You can push the child process into the background by typing control-Z, which stops it, and then restart it in the background with the shell command bg; the shell sends it a SIGCONT signal, and it will run until it ends or needs to perform terminal input or output. An executable file can have many formats; it can even be a script file. Script files must be recognized and the appropriate interpreter run to handle them; /bin/sh, for example, interprets shell scripts.
An executable object file contains executable code and data, together with enough information for the operating system to load it into memory and execute it.

The most commonly used object file format in Linux is ELF but, in theory, Linux is flexible enough to handle almost any object file format.

Figure 4.3: Registered Binary Formats

The binary formats supported by Linux can be built into the kernel or loaded as modules. The kernel keeps a linked list of the supported formats (see Figure 4.3) and, when an attempt is made to execute a file, each binary format is tried in turn. The most widely supported formats on Linux are a.out and ELF. Executable files do not have to be read into memory in full; a demand-loading technique is used instead. Each part of the executable image is brought into memory as the process uses it, and unused parts may be discarded from memory.

4.8.1 ELF

ELF (Executable and Linkable Format), an object file format designed by the Unix System Laboratories, has now become the most frequently used format in Linux. Although it has a slightly greater overhead than some other object file formats, such as ECOFF and a.out, ELF is more flexible. An ELF executable contains executable code, that is, a text segment, and a data segment. Tables within the executable image describe how the program should be placed into the process's virtual address space. A statically linked image is built by the linker ld; all of the code and data needed to run the image are contained within a single image. The image also defines the memory layout of the image and the address of the first code to be executed.

Figure 4.4: ELF Executable File Format

Figure 4.4 shows the layout of a statically linked ELF executable image. It is a simple C program that prints "Hello World" and then exits. The ELF header describes it as an ELF image with two physical headers (e_phnum = 2), located 52 bytes (e_phoff) from the start of the image file. The first physical header describes the executable code in the image: it goes at virtual address 0x8048000 and is 65532 bytes long.
It is this large because it is a statically linked image containing all of the library code for the printf() call that outputs "Hello World". The entry point of the image, that is, the address of the first instruction of the program, is not at the start of the image but at virtual address 0x8048090 (e_entry), just after the code described by the second physical header. That physical header describes the data used by the program: the first 2200 bytes contain pre-initialized data and the next 2048 bytes contain data that will be initialized by the executing code. When Linux loads an ELF executable image into the process's virtual address space, it does not actually load the image. First it sets up the virtual memory data structures: the process's vm_area_struct tree and its page tables. Then, as the program is executed, page faults cause the program's code and data to be fetched into physical memory; unused portions of the program are never loaded. Once the ELF binary format loader is satisfied that the image is a valid ELF executable image, it flushes the process's current executable image from its virtual memory. As this process is a forked image (all processes are), the old image is that of the program the parent process was executing, for example a command interpreter such as bash. At the same time, any signal handlers that the process had set up are cleared and open files that should be closed are closed. At the end of the flush the process is ready for the new executable image. Whatever the format of the executable image, the same information is set up in the process's mm_struct data structure: pointers to the image's code and data. These values are determined as the ELF executable image's headers are read from the file and the relevant sections of the program are mapped into the process's virtual address space. At the same time the vm_area_struct data structures are set up and the process's page tables are modified.
The mm_struct data structure also contains pointers to the parameters to be passed to the program and to the process's environment variables.

ELF Shared Libraries

A dynamically linked image, on the other hand, does not contain all of the code and data required to run it. Some of it is linked in from shared libraries at run time.

