Memory Management

The memory management subsystem is one of the most important parts of the operating system. Since the early days of computing, there has been a need for more memory than physically exists in a system. Many strategies have been developed to overcome this limitation, and the most successful of them is virtual memory. Virtual memory makes the system appear to have more memory than it actually has by sharing it between competing processes as they need it.

Virtual memory does more than just make the computer's memory go further; the memory management subsystem also provides:

Large Address Spaces
The operating system makes the system appear to have a much larger amount of memory than it actually has. The virtual memory can be many times larger than the physical memory in the system.

The processor must translate a virtual page frame number into a physical one and then access the correct offset within that physical page. To do this, the processor uses page tables. For example, the page table entry for virtual page frame number 2 of a process gives the physical page frame in which that page is held.

All memory accesses are made via page tables, and each process has its own page table. For two processes to share a physical page of memory, that physical page frame number must appear in the page tables of both processes. Figure 3.1 shows two processes sharing physical page frame number 4: for process X it is virtual page frame number 4, whereas for process Y it is virtual page frame number 6. This illustrates an interesting point about shared pages: a shared physical page does not have to be at the same place in the virtual address space of every process sharing it.

3.1.4 Physical and Virtual Addressing Modes

It does not make much sense for the operating system itself to run in virtual memory; it would be a nightmare if the operating system had to maintain its own page tables. Most general-purpose processors support both a physical addressing mode and a virtual addressing mode. Physical addressing mode requires no page tables, and the processor does not attempt any address translation in this mode.

_PAGE_ACCESSED    Used by Linux to mark that a page has been accessed

3.2 Caches

If you implemented a system using the theoretical model described above, it would work, but not particularly efficiently. Both operating system and processor designers try hard to extract more performance from the system. Apart from making the processors, memory and so on faster, the best approach is to maintain caches of useful information and data that make some operations faster.

This lookup is repeated three times until the page frame number of the physical page containing the virtual address is found. The final field of the virtual address, the byte offset, is then used to find the data within that page.

Each platform that Linux runs on must provide translation macros that allow the kernel to traverse the page tables of a particular process. In this way, the kernel does not need to know the format of the page table entries or how they are arranged.

The allocation code searches the queues of free page blocks held in the list elements of the free_area data structure. If no block of pages of the requested size is free, blocks of the next size, which is twice the size requested, are looked for. This process continues until either all of free_area has been searched or a free block of pages has been found. If the block found is larger than the one requested, it is broken down until a block of the right size remains. Because every block is a power of two pages in size, this splitting is simple: the block is just halved. The free halves are queued on the appropriate queues, and the allocated block of pages is returned to the caller.
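To make the search-and-split behaviour just described concrete, the following is a minimal, self-contained sketch in C. It is not the kernel's own code: the names MAX_ORDER, free_queue, queue_free_block and alloc_block are invented for this example, and each free_area element is modelled as a small stack of starting page frame numbers rather than the kernel's linked lists and bitmaps. The main() function reproduces the two-page request of Figure 3.4 that the next paragraph walks through.

/*
 * Sketch of the buddy-style allocation search described above.
 * free_area[i] holds free blocks of 2^i pages.
 */
#include <stdio.h>

#define MAX_ORDER   6
#define QUEUE_DEPTH 32

struct free_queue {
    unsigned long pfn[QUEUE_DEPTH];   /* starting page frame numbers          */
    int count;                        /* number of free blocks of this size   */
};

static struct free_queue free_area[MAX_ORDER];

/* Queue a free block of 2^order pages starting at page frame number pfn. */
static void queue_free_block(int order, unsigned long pfn)
{
    free_area[order].pfn[free_area[order].count++] = pfn;
}

/*
 * Allocate a block of 2^order pages.  If no block of the requested size is
 * free, look for one of the next size up (twice as big), and so on.  A block
 * that is too large is split in half repeatedly; the unused halves are queued
 * as free blocks, and the caller receives the starting page frame number.
 */
static long alloc_block(int order)
{
    for (int cur = order; cur < MAX_ORDER; cur++) {
        if (free_area[cur].count == 0)
            continue;                                  /* nothing free at this size */

        unsigned long pfn = free_area[cur].pfn[--free_area[cur].count];

        while (cur > order) {                          /* halve until right-sized */
            cur--;
            queue_free_block(cur, pfn + (1UL << cur)); /* upper half stays free   */
        }
        return (long)pfn;
    }
    return -1;                                         /* request cannot be satisfied */
}

int main(void)
{
    queue_free_block(2, 4);        /* as in Figure 3.4: a free 4-page block at frame 4 */

    long got = alloc_block(1);     /* request a block of 2 pages */
    printf("allocated 2 pages at page frame %ld\n", got);                 /* frame 4 */
    printf("remaining free 2-page block starts at page frame %lu\n",
           free_area[1].pfn[0]);                                          /* frame 6 */
    return 0;
}

Keeping every block a power of two pages in size is what reduces the splitting step to a single halving, exactly as noted above.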
For example, in Figure 3.4, if a block of 2 pages is requested, the first 4-page block (starting at page frame number 4) is broken into two 2-page blocks. The first, starting at page frame number 4, is returned to the caller as the allocated pages; the second, starting at page frame number 6, is queued as a free block of 2 pages onto element 1 of the free_area array.

3.4.2 Page Deallocation

Allocating blocks of pages tends to fragment memory, as larger blocks of free pages are broken down into smaller ones. The page deallocation code recombines pages into larger blocks of free pages whenever it can. The page block size, always a power of two pages, matters here because it makes combining blocks into larger blocks easy. Whenever a block of pages is freed, its adjacent, or buddy, block of the same size is checked to see whether it is free. If it is, it is combined with the newly freed block to form a new free block of the next size up. Each time two blocks of pages are combined into a larger block, the page deallocation code tries to merge that block into a still larger one. In this way, the blocks of free pages are kept as large as possible. For example, in Figure 3.4, if page frame number 1 were freed, it would be combined with the already free page frame number 0 and queued onto element 1 of free_area as a free block of 2 pages.

3.5 Memory Mapping

When an image is executed, the contents of the executable image must be brought into the virtual address space of the process. The same is true of any shared libraries that the executable image has been linked against. The executable file is not actually brought into physical memory; instead it is merely linked into the process's virtual memory. Then, as parts of the image are referenced by the running program, they are loaded into memory from the executable file. This linking of an image into a process's virtual address space is known as memory mapping.

Each process's virtual memory is represented by an mm_struct data structure. This contains information about the image currently being executed (for example, bash) and pointers to a set of vm_area_struct data structures, each of which describes an area of the process's virtual memory.

3.6 Demand Paging

Once an image has been mapped into a process's virtual memory, it can start running. Because only the very start of the image is brought into physical memory, the process will soon access an area of virtual memory that is not yet in physical memory. When a process accesses a virtual address that does not have a valid page table entry, the processor reports a page fault to the operating system.

See include/linux/pagemap.h

Whenever data is read from a memory-mapped file, for example when a page needs to be brought into memory during demand paging, the page is read through the page cache.

3.8.1 Reducing the Size of the Page and Buffer Caches

It records two indices; the first is the index into the

Each of a process's virtual memory areas may have its own swap operation (pointed to by a pointer in its vm_area_struct).
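As a small illustration of the demand-paging discussion in section 3.6, the sketch below shows the first decision a page-fault handler has to make: whether the faulting address lies inside any of the process's mapped areas. The names here (vma_sketch, find_area, classify_fault, fault_kind) are invented for this example and only mirror the role of the kernel's vm_area_struct list hanging off a process's mm_struct; they are assumptions, not the kernel's actual interfaces.

/*
 * Sketch of the first step of handling a page fault during demand paging:
 * finding the virtual memory area, if any, that covers the faulting address.
 */
#include <stddef.h>
#include <stdio.h>

struct vma_sketch {
    unsigned long vm_start;        /* first address covered by this area        */
    unsigned long vm_end;          /* first address past the end of the area    */
    struct vma_sketch *vm_next;    /* next area belonging to this process       */
};

enum fault_kind {
    FAULT_INVALID,                 /* address lies in no area: illegal access   */
    FAULT_DEMAND_LOAD              /* valid address whose page is not yet present */
};

/* Walk the process's list of areas looking for one that contains addr. */
static struct vma_sketch *find_area(struct vma_sketch *areas, unsigned long addr)
{
    for (struct vma_sketch *v = areas; v != NULL; v = v->vm_next)
        if (addr >= v->vm_start && addr < v->vm_end)
            return v;
    return NULL;
}

/*
 * Classify a fault: if no area covers the address the access is invalid;
 * otherwise the page simply has not been brought into physical memory yet
 * and must be loaded.
 */
static enum fault_kind classify_fault(struct vma_sketch *areas, unsigned long addr)
{
    return find_area(areas, addr) ? FAULT_DEMAND_LOAD : FAULT_INVALID;
}

int main(void)
{
    struct vma_sketch data = { 0x600000UL, 0x610000UL, NULL };
    struct vma_sketch text = { 0x400000UL, 0x420000UL, &data };

    printf("%d\n", classify_fault(&text, 0x401000UL));  /* 1: demand-load the page */
    printf("%d\n", classify_fault(&text, 0x700000UL));  /* 0: invalid access       */
    return 0;
}

If no area covers the address, the access is simply invalid; if an area does cover it, the page merely has not been brought in yet and can be loaded from the executable file or, for a page that was written out earlier, from swap, as the sections above describe.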