I'm using select() on a Linux/ARM platform to see if a UDP socket has received a packet. If select() returns before the timeout because a packet arrived, I'd like to know how much of the timeout was remaining.
Something along the lines of:
#include <sys/select.h>

/* Wait up to msec milliseconds for fd to become readable.
 * Returns the milliseconds remaining on the timeout if data
 * arrived (at least 1), or 0 if the timeout expired. */
int wait_fd(int fd, int msec)
{
    struct timeval tv;
    fd_set rws;

    tv.tv_sec = msec / 1000;
    tv.tv_usec = (msec % 1000) * 1000;
    FD_ZERO(&rws);
    FD_SET(fd, &rws);
    (void)select(fd + 1, &rws, NULL, NULL, &tv);
    if (FD_ISSET(fd, &rws)) {
        /* There is data; rely on select() having updated tv */
        msec = (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
        return msec ? msec : 1;
    } else {
        /* There is no data */
        return 0;
    }
}
The safest thing is to ignore the ambiguous definition of select() and time it yourself. Linux does update the timeval to reflect the time not slept, but POSIX leaves the contents of the timeval unspecified after select() returns, so portable code cannot rely on it. Just record the time before and after the select() call and subtract the elapsed time from the interval you wanted.
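For example, here is a minimal sketch of that approach using clock_gettime() with CLOCK_MONOTONIC (so wall-clock adjustments can't skew the measurement); the function name wait_fd_timed and the millisecond bookkeeping are mine, not from your original code:

#include <time.h>
#include <sys/select.h>

/* Like wait_fd() above, but measures the elapsed time itself
 * instead of trusting select() to update the timeval. */
int wait_fd_timed(int fd, int msec)
{
    struct timespec before, after;
    struct timeval tv;
    fd_set rws;
    long elapsed_ms, remaining;

    tv.tv_sec = msec / 1000;
    tv.tv_usec = (msec % 1000) * 1000;
    FD_ZERO(&rws);
    FD_SET(fd, &rws);

    clock_gettime(CLOCK_MONOTONIC, &before);  /* immune to clock jumps */
    (void)select(fd + 1, &rws, NULL, NULL, &tv);
    clock_gettime(CLOCK_MONOTONIC, &after);

    if (!FD_ISSET(fd, &rws))
        return 0;                             /* timed out, no data */

    elapsed_ms = (after.tv_sec - before.tv_sec) * 1000
               + (after.tv_nsec - before.tv_nsec) / 1000000;
    remaining = msec - elapsed_ms;
    return remaining > 0 ? (int)remaining : 1; /* at least 1, as before */
}

Note that on older glibc you may need to link with -lrt for clock_gettime().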