This post is a short one; it briefly introduces advanced I/O in Unix.
1. Non-blocking I/O
Non-blocking I/O lets us issue I/O operations such as open, read, and write without ever blocking. If the operation cannot be completed, an error is returned immediately, indicating that the operation would otherwise have blocked.
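A minimal sketch, assuming a POSIX system (not from the original post): turn on the O_NONBLOCK file status flag with fcntl, so that a read that cannot complete returns EAGAIN right away instead of blocking.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Turn on O_NONBLOCK for standard input. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    if (flags < 0 || fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK) < 0) {
        perror("fcntl");
        return 1;
    }

    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("read would block; the error comes back immediately\n");
    else if (n >= 0)
        printf("read %zd bytes\n", n);
    return 0;
}
```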
2. Record locking
Record locking lets a process prevent other processes from modifying a region of a file while it is reading or modifying that region.
The region to be locked is described by a struct flock.
Multiple processes can hold a shared read lock on a given byte, but only one process can hold a write lock on it. Furthermore, if one or more read locks already exist on a byte, no write lock can be placed on that byte; and if an exclusive write lock already exists on a byte, no read lock can be added to it.
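A small sketch of record locking with fcntl and struct flock (the file name is hypothetical): place an exclusive write lock on the first 100 bytes, do the work, then release it.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDWR);    /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl;
    fl.l_type   = F_WRLCK;                /* exclusive write lock */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;                      /* starting at byte 0 */
    fl.l_len    = 100;                    /* lock the first 100 bytes */

    if (fcntl(fd, F_SETLK, &fl) < 0)      /* non-blocking attempt to lock */
        perror("fcntl(F_SETLK)");

    /* ... read or modify the locked region here ... */

    fl.l_type = F_UNLCK;                  /* release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}
```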
3. STREAMS
STREAMS provide a full-duplex path between a user process and a device driver. The figure below shows a stream with processing modules. All input and output on a stream is based on messages, and a message consists of the following parts: a message type, optional control information, and optional data. Messages can flow downstream or upstream between the stream head, each processing module, and the device driver. Depending on the message, various operations can be performed on a stream with ioctl. Note in particular that a STREAMS stream is not the same thing as a standard I/O FILE stream.
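A sketch of sending a message down a stream, assuming a system that still supports STREAMS (e.g. Solaris; Linux generally does not ship <stropts.h>). The device and module names here are hypothetical.

```c
#include <stropts.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/strdev", O_RDWR);   /* hypothetical STREAMS device */
    if (fd < 0) { perror("open"); return 1; }

    /* Push a processing module onto the stream (module name is hypothetical). */
    if (ioctl(fd, I_PUSH, "mymod") < 0)
        perror("ioctl(I_PUSH)");

    /* Send a message with a data part only (no control part). */
    char data[] = "hello";
    struct strbuf dat = { .len = sizeof(data), .buf = data };
    if (putmsg(fd, NULL, &dat, 0) < 0)
        perror("putmsg");

    close(fd);
    return 0;
}
```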
4. I/O multiplexing
The basic idea is to build a list of the descriptors we are interested in, then call a function that does not return until at least one of those descriptors is ready for I/O. On return, it tells the process which descriptors are ready. I/O multiplexing keeps the process from blocking on one descriptor while it is handling several descriptors at the same time. It is usually implemented with the select and poll functions.
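A minimal sketch of I/O multiplexing with select: wait until standard input is readable, with a 5-second timeout, then read from it.

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);       /* watch stdin for readability */

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

    int n = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
    if (n < 0) {
        perror("select");
    } else if (n == 0) {
        printf("timed out, nothing to read\n");
    } else if (FD_ISSET(STDIN_FILENO, &readfds)) {
        char buf[128];
        ssize_t r = read(STDIN_FILENO, buf, sizeof(buf));
        printf("read %zd bytes\n", r);
    }
    return 0;
}
```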
5. Asynchronous I/O
With select and poll, the system does not actively tell us anything about the state of a descriptor; we have to query it ourselves (by calling select or poll). With asynchronous I/O, by contrast, when an event we care about occurs on a descriptor, a signal is generated and the process is notified by that signal.
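A sketch of signal-driven I/O on a descriptor (my own illustration, not the original post's code): ask the kernel to send SIGIO when stdin becomes readable, so the process is notified instead of polling with select or poll. F_SETOWN and O_ASYNC are BSD/Linux-style facilities; the details vary by system.

```c
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t ready = 0;

static void on_sigio(int signo)
{
    (void)signo;
    ready = 1;                     /* a descriptor we own is ready for I/O */
}

int main(void)
{
    signal(SIGIO, on_sigio);

    /* Block SIGIO while setting things up, so the signal cannot be lost. */
    sigset_t mask, oldmask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGIO);
    sigprocmask(SIG_BLOCK, &mask, &oldmask);

    /* Deliver SIGIO for stdin to this process and enable async notification. */
    fcntl(STDIN_FILENO, F_SETOWN, getpid());
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_ASYNC);

    while (!ready)
        sigsuspend(&oldmask);      /* atomically unblock SIGIO and wait */
    sigprocmask(SIG_SETMASK, &oldmask, NULL);

    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
    printf("read %zd bytes after SIGIO\n", n);
    return 0;
}
```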
6. Memory-mapped I/O
Memory-mapped I/O maps a disk file onto a buffer in the process's address space. Fetching data from that buffer is then equivalent to reading the corresponding bytes of the file. Similarly, storing data into the buffer automatically writes the corresponding bytes to the file (the kernel writes the modified pages back on its own). In this way, I/O can be performed without using read and write. In Unix, the mapping is usually created with the mmap function, which maps the file into the address space of the process.
An example:
When mapping a region, attributes such as read/write permission and the sharing mode (private or shared) can be set. In addition, the starting address addr of the mapping and the offset into the file are usually required to be multiples of the system's virtual memory page size.
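A minimal sketch (my own, assuming a hypothetical file name): map a file read/write and shared, then modify it through memory rather than with write.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDWR);           /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);                              /* map the whole file */

    /* PROT_READ | PROT_WRITE: read/write permission on the region;
     * MAP_SHARED: changes are carried back to the underlying file. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    close(fd);                                   /* closing fd does not unmap */

    if (st.st_size >= 5)
        memcpy(p, "HELLO", 5);                   /* "writes" the first 5 bytes */

    munmap(p, st.st_size);                       /* remove the mapping */
    return 0;
}
```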
After fork, the child process inherits the memory mapping (because the child copies the parent's address space, and the mapping is part of that address space); for the same reason, a new program started with exec does not inherit the mapping. The mapped region is removed automatically when the process terminates or when munmap is called. Closing the file descriptor filedes does not unmap the region.
The benefit of memory-mapped I/O is that the kernel performs I/O directly on the mapped region, which is faster. With read/write, the kernel copies data between the user buffer and its own buffer cache and then does I/O against its cache; with memory-mapped I/O we operate on memory instead of reading and writing a file, which also often simplifies the algorithm.
The biggest gain for me was understanding what mmap really means.