Tags: linux, windows, sockets, unix, portability

What really is the "linger time" that can be set with SO_LINGER on sockets?


The man page explains little to nothing about this option, and while there is plenty of information available on the web and in answers on Stack Overflow, I discovered that much of that information contradicts itself. So what is that setting really good for, and why would I need to set or alter it?


Solution

  • When a TCP socket is disconnected, there are three things the system has to consider:

    1. There might still be unsent data in the send-buffer of that socket which would get lost if the socket is closed immediately.

    2. There might still be data in flight, that is, data that has already been sent out but that the other side has not yet acknowledged as correctly received; that data may have to be resent, otherwise it is lost.

    3. Closing a TCP socket is a three-way handshake with no confirmation of the third packet. As the sender doesn't know whether the third packet has ever arrived, it has to wait some time and see if the second one gets resent. If it does, the third one has been lost and must be sent again.

    When you close a socket using the close() call, the system will usually not destroy the socket immediately but will first try to resolve all three issues above to prevent data loss and ensure a clean disconnect. All of that happens in the background (usually within the operating system kernel), so despite the close() call returning immediately, the socket may still be alive for a while and even send out remaining data. There is a system-specific upper bound on how long the system will try to achieve a clean disconnect before it eventually gives up and destroys the socket anyway, even if that means data is lost. Note that this time limit can be in the range of minutes!

    There is a socket option named SO_LINGER that controls how the system closes a socket. Using that option you can turn lingering on or off and, if it is turned on, set a timeout (you can also set a timeout while lingering is off, but that timeout has no effect).
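    In C code, the option is set via setsockopt() using a struct linger. A minimal sketch (the helper name set_linger is mine, not part of any API):

    ```c
    #include <sys/socket.h>

    /* Enable (onoff != 0) or disable lingering on a socket.
     * seconds is the linger timeout; the system ignores it
     * while lingering is turned off. Returns 0 on success. */
    static int set_linger(int fd, int onoff, int seconds)
    {
        struct linger lin;
        lin.l_onoff  = onoff;
        lin.l_linger = seconds;
        return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
    }
    ```

    For example, set_linger(fd, 1, 5) enables lingering with a five-second timeout.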

    The default is that lingering is turned off, which means close() returns immediately and the details of the socket closing process are left up to the system which will usually deal with it as described above.

    If you turn lingering on and set a timeout other than zero, close() will not return immediately. It will only return once issues (1) and (2) have been resolved (all data has been sent, no data is in flight anymore) or once the timeout has been hit. Which of the two was the case can be seen from the result of the close call: if it succeeds, all remaining data was sent and acknowledged; if it fails and errno is set to EWOULDBLOCK, the timeout was hit and some data may have been lost.
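    For a blocking socket with lingering and a non-zero timeout enabled, that check could look like this (a sketch; the helper name close_lingering is mine):

    ```c
    #include <errno.h>
    #include <unistd.h>

    /* Close a blocking, lingering socket and report the outcome.
     * Returns 0 on a clean close (all data sent and acknowledged),
     * 1 if the linger timeout was hit (data may have been lost),
     * and -1 on any other error. */
    static int close_lingering(int fd)
    {
        if (close(fd) == 0)
            return 0;
        if (errno == EWOULDBLOCK)
            return 1;
        return -1;
    }
    ```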

    In case of a non-blocking socket, close() will not block, not even with a linger time other than zero. In that case there is no way to get the result of the close operation as you cannot ever call close() twice on the same socket. Even if the socket is lingering, once close returned, the socket file descriptor should have been invalidated and calling close again with that descriptor should result in a failure with errno set to EBADF ("bad file descriptor").

    However, even if you set the linger time to something really short, like one second, so that the socket won't linger for longer than that, it will still stay around for a while after lingering to deal with issue (3) above. To ensure a clean disconnect, the implementation must ensure that the other side has also disconnected the connection, otherwise remaining data may still arrive for that already dead connection. So the socket will go into a state most systems call TIME_WAIT and stay in that state for a system-specific amount of time, regardless of whether lingering is on and regardless of the linger time that has been set.

    Except for one special case: If you enable lingering but set the linger time to zero, this changes pretty much everything. In that case a call to close() will really close the socket immediately. That means no matter if the socket is blocking or non-blocking, close() returns at once. Any data still in the send buffer is just discarded. Any data in flight is ignored and may or may not have arrived correctly at the other side. And the socket is also not closed using a normal TCP close handshake (FIN-ACK), it is killed instantly using a reset (RST). As a result, if the other side tries to send something over the socket after the reset, this operation will fail with ECONNRESET ("A connection was forcibly closed by the peer."), whereas a normal close would result in EPIPE ("The socket is no longer connected."). While most programs will treat EPIPE as a harmless event, they tend to treat ECONNRESET as a hard error if they didn't expect that to happen.
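    That abortive close can be sketched as follows (assuming fd is a connected TCP socket; the helper name abort_close is mine):

    ```c
    #include <sys/socket.h>
    #include <unistd.h>

    /* Abortive close: discard any data still in the send buffer and
     * kill the connection with an RST instead of a FIN handshake. */
    static void abort_close(int fd)
    {
        struct linger lin = { .l_onoff = 1, .l_linger = 0 };
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
        close(fd);  /* returns at once; the peer will see ECONNRESET */
    }
    ```

    Use this only when you deliberately want to drop the connection, e.g. to get rid of a misbehaving peer without waiting for any handshake.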

    Please note that this describes the socket behavior as found in the original BSD socket implementation (original means that this may not even match the behavior of modern BSD implementations such as FreeBSD, OpenBSD, NetBSD, etc.). While the BSD socket API has been copied by pretty much all other major operating systems today (Linux, Android, Windows, macOS, iOS, etc.), the behavior on these systems sometimes varies, as is also true with many other aspects of that API.

    E.g., if a non-blocking socket with data in the send buffer is closed on BSD, linger is on, and the linger time is not zero, the close call will return at once, but it will indicate a failure and the error will be EWOULDBLOCK (just like in the case of a blocking socket after the linger timeout has been hit). The same holds true for Windows. On macOS this is not the case: close() will always return at once and indicate success, regardless of whether there is data in the send buffer or not. And in the case of Linux, the close() call will actually block in that situation for up to the linger timeout, despite the socket being non-blocking.

    To learn more about how different systems actually deal with different linger settings, have a look at the following link:

    https://www.nybek.com/blog/2015/04/29/so_linger-on-non-blocking-sockets/

    There was also a page with results for blocking sockets but unfortunately the Internet Archive did not capture it and the original blog is gone for good. The test code is still available but I don't have access to all platforms to re-create the test results:

    https://github.com/nybek/linger-tools

    As you can see, the behavior may also change depending on whether shutdown() has been called prior to close(), as well as on other system-specific aspects, including oddities such as a linger timeout having an effect despite lingering being turned off completely.
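    The shutdown()-before-close() variant mentioned above usually refers to the following pattern (a common graceful-close sketch, not mandated by any standard; the helper name graceful_close is mine):

    ```c
    #include <sys/socket.h>
    #include <unistd.h>

    /* Graceful close: send our FIN first, then drain whatever the
     * peer still sends until it closes its side too, then close. */
    static void graceful_close(int fd)
    {
        char buf[4096];
        shutdown(fd, SHUT_WR);                    /* we are done sending */
        while (recv(fd, buf, sizeof(buf), 0) > 0)
            ;                                     /* discard remaining data */
        close(fd);
    }
    ```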

    Another system-specific behavior is what happens if your process dies without closing a socket first. In that case the system will close the socket on your behalf, and some systems tend to ignore any linger setting when they have to do so and just fall back to the system's default behavior. They cannot "block" on socket close in that case anyway, but some systems will even ignore a timeout of zero and perform a FIN-ACK handshake in that case.

    So it's not true that setting a linger timeout of zero will prevent sockets from ever entering the TIME_WAIT state. It depends on how the socket has been closed (shutdown(), close()), by whom it has been closed (your own code or the system), whether it was blocking or non-blocking, and ultimately, on the system your code is running on. The only true statement that can be made is:

    If you manually close a socket that is blocking (at least at the moment you close it; it might have been non-blocking before) and this socket has lingering enabled with a timeout of zero, that is your best chance to avoid this socket going into the TIME_WAIT state. There is no guarantee it won't, but if that doesn't prevent it from happening, there is nothing else you can do, unless you have a way to ensure that the peer on the other side initiates the close for you, as only the side initiating the close operation may end up in a TIME_WAIT state.

    So my personal pro tip is: if you design a server-client protocol, design it in such a way that normally the client closes the connection first, because it is very undesirable for server sockets to typically end up in the TIME_WAIT state, but it's even more undesirable for connections to be closed by RST, as that can lead to loss of data previously sent to the client.