Tags: sockets, network-programming, ttl, setsockopt

setsockopt not setting the correct IP_TTL when called in a loop


I am working on a traceroute program on macOS that sends UDP probes in batches with increasing TTLs, using the following (stripped-down) code:

int sd = socket(AF_INET, SOCK_DGRAM, 0);
if (sd < 0) { /* error handling */ }

sockaddr_in saddr;
memset(&saddr, 0, sizeof (saddr));
saddr.sin_family = AF_INET;
saddr.sin_port = htons(port_num);
if (bind(sd, (const sockaddr*)&saddr, sizeof(saddr)) < 0) { /* error handling */ }

for (int i = 1; i <= 5; ++i) {
    int ttl = i;
    if(setsockopt(sd, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) < 0) {
        // error handling
    }

    char probe[64] = {0};
    ssize_t cc = sendto(sd, probe, sizeof(probe), 0,
                        (const sockaddr*)&dest_addr, sizeof(dest_addr));
    if (cc < 0 || (size_t)cc != sizeof(probe)) {
       // error handling
    }
}

I would expect the code to send five packets with TTLs from 1 to 5. However, when I inspect the traffic in Wireshark, all five packets go out with the TTL set to 5.

I have not set the socket to non-blocking, and none of the API calls reports an error.

Is it incorrect to call setsockopt between sendto calls like this? Oddly, if I add a 10 ms sleep between iterations, it works as expected.


Solution

  • You need to either stick with the delay or close and re-open the socket for each probe. sendto() only queues the datagram; the socket's current TTL option is applied asynchronously when the packet is actually transmitted, so a tight setsockopt()/sendto() loop can transmit every queued packet with the last TTL that was set.
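A minimal sketch of the close-and-reopen variant (the destination address, port, and the `send_probes` helper name are placeholders, not from the original post): each probe gets a fresh socket, so the packet can never be transmitted with a TTL set by a later iteration.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

// Returns the number of probes successfully handed to the kernel.
int send_probes(const char* dest_ip, uint16_t dest_port, int max_ttl) {
    int sent = 0;
    for (int ttl = 1; ttl <= max_ttl; ++ttl) {
        // Fresh socket per TTL: its send queue holds at most this one probe.
        int sd = socket(AF_INET, SOCK_DGRAM, 0);
        if (sd < 0) return sent;

        if (setsockopt(sd, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) < 0) {
            close(sd);
            return sent;
        }

        sockaddr_in dest;
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(dest_port);
        inet_pton(AF_INET, dest_ip, &dest.sin_addr);

        char probe[64] = {0};
        ssize_t cc = sendto(sd, probe, sizeof(probe), 0,
                            (const sockaddr*)&dest, sizeof(dest));
        close(sd);  // socket is gone before the next TTL is configured
        if (cc != (ssize_t)sizeof(probe)) return sent;
        ++sent;
    }
    return sent;
}
```

The trade-off versus the sleep is extra socket setup per probe, but it removes the timing dependency entirely; note that without bind() the kernel picks a different ephemeral source port per probe, which matters if you match replies by port.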