While working with Windows Sockets in overlapped mode, using completion routines (so no IOCP) for notification, I ran into the following curious case:
The server side uses listen and AcceptEx; the client side uses ConnectEx.
We now have (at least) 3 sockets: a listening socket, a client connected socket and a server connected socket.
After transferring some data we shut down both the server and client connected sockets with shutdown. After this step both sockets are closed with closesocket.
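In code, that teardown step is roughly the following for each connected socket (winsock2.h is assumed to be included):

    shutdown(s, SD_BOTH);   /* stop further sends and receives on the socket */
    closesocket(s);         /* then release the socket handle                */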
Currently, just to be sure there are no pending completion routines, I issue the following:

    while (SleepEx(0, TRUE) == WAIT_IO_COMPLETION)
        ;   /* keep draining until no more queued completion routines (APCs) run */
I thought it would now be safe to free the memory of the OVERLAPPED structures used by WSARecv and WSASend.
After this, the next time the thread enters an alertable state, another completion routine callback is delivered for the server connected socket with error 10053 (WSAECONNABORTED), using the OVERLAPPED structure we just freed. This is a use-after-free.
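For reference, such a late callback arrives through the ordinary WSARecv/WSASend completion-routine signature; only dwError and lpOverlapped matter here, and the routine name is just a placeholder:

    static void CALLBACK OnIoComplete(DWORD dwError, DWORD cbTransferred,
                                      LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
    {
        /* After the premature free this fires one last time with           */
        /* dwError == WSAECONNABORTED (10053), while lpOverlapped points at */
        /* memory the application no longer owns: the use-after-free.       */
    }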
Question:
When can you be sure that no more completion callbacks will be issued for a socket that uses overlapped I/O with completion routines?
You need to wait for the I/O completion (closing the socket will cancel outstanding requests and you will get a completion callback).
The OS has ownership of the OVERLAPPED structure and associated buffer until you synchronize on event completion (by waiting for the hEvent
or receiving an APC). You cannot do anything with the buffer until you receive this callback, and you definitely must not free it. Wait for the OS to tell you it is no longer needed.
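One common way to honour that ownership rule is to keep each OVERLAPPED inside a heap-allocated per-operation context and release it only from the completion routine. A minimal sketch, assuming a completion-routine based WSARecv; the PER_IO_CONTEXT and OnIoComplete names are illustrative, not from the question:

    #include <winsock2.h>
    #include <stdlib.h>

    typedef struct PER_IO_CONTEXT {
        WSAOVERLAPPED ov;      /* must stay valid until the completion routine runs */
        WSABUF        wsabuf;
        char          buffer[4096];
    } PER_IO_CONTEXT;

    /* Ownership of the context returns to the application only here. */
    static void CALLBACK OnIoComplete(DWORD dwError, DWORD cbTransferred,
                                      LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
    {
        PER_IO_CONTEXT *ctx = (PER_IO_CONTEXT *)lpOverlapped;  /* ov is the first member */
        /* ... inspect dwError / cbTransferred, possibly repost ... */
        free(ctx);             /* only now may the OVERLAPPED and buffer be freed */
    }

    /* Post a receive: the context is handed to the OS and must not be */
    /* touched or freed until OnIoComplete has been called for it.     */
    static int PostRecv(SOCKET s)
    {
        PER_IO_CONTEXT *ctx = (PER_IO_CONTEXT *)calloc(1, sizeof *ctx);
        DWORD flags = 0;
        if (ctx == NULL)
            return SOCKET_ERROR;
        ctx->wsabuf.buf = ctx->buffer;
        ctx->wsabuf.len = (ULONG)sizeof ctx->buffer;
        if (WSARecv(s, &ctx->wsabuf, 1, NULL, &flags, &ctx->ov, OnIoComplete) == SOCKET_ERROR &&
            WSAGetLastError() != WSA_IO_PENDING) {
            free(ctx);         /* the call failed outright; no callback will come */
            return SOCKET_ERROR;
        }
        return 0;
    }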
Note that cancellations don't necessarily cause completion immediately, because the driver may be synchronizing with hardware requests and will only mark the IRP complete when the hardware state changes. (This would be necessary if DMA is in use, but might be done for other operations just for consistency.) So the SleepEx loop you showed is not guaranteed to collect all cancellations.
Keep track, for each socket, of the pending operations, and use WaitForSingleObjectEx instead of SleepEx to wait explicitly for each one.
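A sketch of that bookkeeping; the CONNECTION type, pendingOps counter and allDone event are illustrative additions, not something the question already has:

    #include <winsock2.h>

    typedef struct CONNECTION {
        SOCKET        s;
        volatile LONG pendingOps;  /* incremented for every WSASend/WSARecv posted */
        HANDLE        allDone;     /* manual-reset event, created with CreateEvent */
    } CONNECTION;

    /* At the end of each completion routine for this connection:
         if (InterlockedDecrement(&conn->pendingOps) == 0)
             SetEvent(conn->allDone);                          */

    /* Must run on the thread that issued the overlapped operations, */
    /* because completion-routine APCs are delivered only to it.     */
    static void CloseAndDrain(CONNECTION *conn)
    {
        closesocket(conn->s);      /* cancels whatever is still outstanding */

        /* Wait alertably so the cancelled operations can still deliver  */
        /* their completion routines; stop once every pending one has.   */
        while (conn->pendingOps != 0)
            WaitForSingleObjectEx(conn->allDone, INFINITE, TRUE);

        /* Only now is it safe to free the OVERLAPPED structures and buffers. */
    }

The event is what WaitForSingleObjectEx waits on, but any completion routine that runs also wakes the alertable wait with WAIT_IO_COMPLETION, so the loop re-checks the counter each time round.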