Socket architecture of Windows NT and Windows 2000
For developers writing Winsock applications that must scale to a large number of connections, a basic understanding of the socket architecture of Windows NT and Windows 2000 is very helpful.
Unlike on many other operating systems, the transport protocols of Windows NT and Windows 2000 do not expose a socket-style interface that applications can talk to directly. Instead they expose a lower-level API called the Transport Driver Interface (TDI). Winsock's kernel-mode driver, implemented in AFD.sys, handles connection and buffer management: it provides socket emulation to applications while itself talking to the underlying transport drivers through TDI.
Who is responsible for managing buffers?
As mentioned above, the application talks to the transport protocol driver through Winsock, and AFD.sys performs buffer management on its behalf. When the application calls send() or WSASend() to send data, AFD.sys copies the data into its own internal buffer (whose size depends on the SO_SNDBUF setting), and send() or WSASend() returns immediately; AFD.sys then sends the data in the background. If, however, the amount of data the application submits exceeds the buffer size set by SO_SNDBUF, the WSASend() call blocks until all of the data has been sent.
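To make this concrete, here is a minimal sketch in C, assuming s is an already connected blocking SOCKET; the helper name send_with_sndbuf is hypothetical, and the comments simply restate the buffering behavior described above. Link with ws2_32.lib.

#include <winsock2.h>
#include <stdio.h>

void send_with_sndbuf(SOCKET s, const char *data, int len)
{
    int sndbuf = 0;
    int optlen = sizeof(sndbuf);

    /* Read the current size of the per-socket send buffer managed by AFD.sys. */
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&sndbuf, &optlen);
    printf("SO_SNDBUF is %d bytes\n", sndbuf);

    /* If len fits within SO_SNDBUF, the data is copied into the internal
       buffer and send() returns at once; transmission continues in the
       background. If len is larger, the call blocks until all of the data
       has been handed over. */
    if (send(s, data, len, 0) == SOCKET_ERROR)
        printf("send failed: %d\n", WSAGetLastError());
}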
Receiving data from a remote client works much the same way. As long as the application does not need to receive large amounts of data at once, AFD.sys first copies incoming data into its own internal buffer. When the application calls recv() or WSARecv(), the data is copied from that internal buffer into the buffer the application supplies.
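The receive path can be sketched the same way; the helper name and buffer size below are illustrative assumptions, and s is again an already connected blocking SOCKET.

#include <winsock2.h>
#include <stdio.h>

void receive_some(SOCKET s)
{
    char buf[4096];

    /* Data that arrived earlier sits in AFD.sys's internal buffer (sized by
       SO_RCVBUF); this call copies it into buf. If nothing has arrived yet,
       a blocking socket waits here. */
    int got = recv(s, buf, (int)sizeof(buf), 0);
    if (got > 0)
        printf("received %d bytes\n", got);
    else if (got == 0)
        printf("connection closed by peer\n");
    else
        printf("recv failed: %d\n", WSAGetLastError());
}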
In most cases this architecture works well, especially for applications written in the traditional socket style using non-overlapped send() and recv() calls. However, programmers should be careful: although the SO_SNDBUF and SO_RCVBUF option values can be set to 0 through the setsockopt() API to turn off the internal buffering, the programmer must understand exactly what turning off AFD.sys's internal buffers implies, so as to avoid the problems that buffer handling during sending and receiving can then cause.
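Turning the buffering off is itself only a pair of setsockopt() calls, as in the following sketch; the helper name disable_internal_buffers is hypothetical, and the consequences of doing this are discussed below.

#include <winsock2.h>
#include <stdio.h>

int disable_internal_buffers(SOCKET s)
{
    int zero = 0;

    /* Setting both options to 0 turns off AFD.sys's internal send and
       receive buffering for this socket. */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   (const char *)&zero, sizeof(zero)) == SOCKET_ERROR ||
        setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&zero, sizeof(zero)) == SOCKET_ERROR) {
        printf("setsockopt failed: %d\n", WSAGetLastError());
        return -1;
    }
    return 0;
}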
For example, suppose an application turns off the send buffer by setting SO_SNDBUF to 0 and then issues a blocking send() call. In that case the kernel locks the application's buffer, and the send() call does not return until the receiver has acknowledged the entire buffer. This may look like a simple way to determine whether the other side has received all of your data, but it is not. Even when the remote TCP acknowledges the data, that in no way means the data has been successfully delivered to the client application; for example, the other side may be short of resources, preventing its AFD.sys from copying the data up to the application. An even more serious problem is that each thread can have only one send in progress at a time, which is extremely inefficient.
Setting SO_RCVBUF to 0 and turning off AFD.sys's receive buffer does not improve performance either. It merely causes the received data to be buffered at a lower level than Winsock, and when you do post a receive call the data still has to be copied into your buffer, so the buffer copy you hoped to avoid happens anyway.
By now it should be clear that turning off the buffers is not a good idea for most applications. As long as you take care to keep a few overlapped WSARecv calls outstanding on each connection, there is usually no need to turn off the receive buffer: if AFD.sys always has application-supplied buffers available, it has no need to use its internal buffers.
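One way to keep AFD.sys supplied with application buffers is to post several overlapped receives up front, along the lines of the sketch below. The PER_IO_CONTEXT structure, the count of four pending receives, and the buffer size are assumptions for illustration only; completions would normally be collected through an I/O completion port or another overlapped-completion mechanism.

#include <winsock2.h>
#include <string.h>
#include <stdio.h>

#define PENDING_RECVS 4
#define RECV_BUF_SIZE 4096

typedef struct {
    WSAOVERLAPPED ov;                 /* one overlapped struct per pending I/O */
    WSABUF        wsabuf;
    char          buf[RECV_BUF_SIZE];
} PER_IO_CONTEXT;

int post_receives(SOCKET s, PER_IO_CONTEXT ctx[PENDING_RECVS])
{
    for (int i = 0; i < PENDING_RECVS; i++) {
        DWORD flags = 0;
        memset(&ctx[i].ov, 0, sizeof(ctx[i].ov));
        ctx[i].wsabuf.buf = ctx[i].buf;
        ctx[i].wsabuf.len = RECV_BUF_SIZE;

        /* Each call either completes immediately or stays pending; either
           way the completion is reported later, and AFD.sys can place
           incoming data straight into these application buffers. */
        if (WSARecv(s, &ctx[i].wsabuf, 1, NULL, &flags, &ctx[i].ov, NULL)
                == SOCKET_ERROR
            && WSAGetLastError() != WSA_IO_PENDING) {
            printf("WSARecv failed: %d\n", WSAGetLastError());
            return -1;
        }
    }
    return 0;
}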
A high-performance server application can turn off the send buffer without losing performance. However, such an application must be very careful to always keep multiple overlapped sends outstanding, rather than waiting for one overlapped send to complete before issuing the next. If it issues sends strictly one after another, it wastes the idle time between one send completing and the next being submitted; in short, it must ensure that when the transport driver finishes sending one buffer, it can immediately turn to another. (Translator: Liu Xi, SICKID10001@21cn.com)
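A sketch of the sending side under these rules follows, assuming SO_SNDBUF has been set to 0 on socket s. The SEND_CONTEXT structure, the post_send helper, and the choice of keeping two sends pending are assumptions for illustration; the caller is expected to post the next send before the previous one completes.

#include <winsock2.h>
#include <string.h>
#include <stdio.h>

#define PENDING_SENDS 2
#define SEND_BUF_SIZE 4096

typedef struct {
    WSAOVERLAPPED ov;
    WSABUF        wsabuf;
    char          buf[SEND_BUF_SIZE];
} SEND_CONTEXT;

int post_send(SOCKET s, SEND_CONTEXT *ctx, const char *data, int len)
{
    if (len > SEND_BUF_SIZE)
        return -1;                    /* sketch: caller splits larger messages */

    memset(&ctx->ov, 0, sizeof(ctx->ov));
    memcpy(ctx->buf, data, len);
    ctx->wsabuf.buf = ctx->buf;
    ctx->wsabuf.len = len;

    /* With SO_SNDBUF at 0 the buffer stays locked until this overlapped send
       completes; because another send is already pending, the transport
       driver can move straight from one buffer to the next with no idle gap. */
    if (WSASend(s, &ctx->wsabuf, 1, NULL, 0, &ctx->ov, NULL) == SOCKET_ERROR
        && WSAGetLastError() != WSA_IO_PENDING) {
        printf("WSASend failed: %d\n", WSAGetLastError());
        return -1;
    }
    return 0;
}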