c, network-programming, ntp

How to correct the time error in an NTP client application caused by round-trip delay


I am developing a client application which sends UDP packets to a configured network time server requesting the time. The server responds with the current UTC time, and this time from the response is set as the current time on the device. If there is any delay during the packet transfer, the time received from the server will be off by that delay when the client applies it. How should I correct this error caused by the transmission delay? Please help.


Solution

  • Wikipedia only gives the principle;
    in practice it is trickier, depending on the precision you are after.
    The official spec (helps with understanding the data format and concepts):
    https://www.rfc-editor.org/rfc/rfc5905#section-7.3

    Simplest:
    Assuming
    1- you use the NTP protocol with 64-bit timestamps
    2- you use an ntp_packet structure like the one below
    (inspired by https://github.com/lettier/ntpclient, which gives good hints)

    typedef struct{
        uint8_t li_vn_mode;         // Eight bits. li, vn, and mode.
                                    // li.   Two bits.   Leap indicator.
                                    // vn.   Three bits. Version number of the protocol.
                                    // mode. Three bits. Client will pick mode 3 for client.
        uint8_t stratum;            // Eight bits. Stratum level of the local clock.
        uint8_t poll;               // Eight bits. Maximum interval between successive messages.
        uint8_t precision;          // Eight bits. Precision of the local clock.
    
        union {                     // 32 bits. Total round trip delay time.
            uint32_t rootDelay;     // (NTP short format)
            struct  {
                uint16_t rootDelay_s;
                uint16_t rootDelay_f;
            } uRDE;
        };                          // 32 bits. Total round trip delay time.    (NTP short format)
    
        union {                     // 32 bits. Total dispersion to the reference clock.
            uint32_t rootDispersion;// (NTP short format)
            struct  {
                uint16_t rootDispersion_s;
                uint16_t rootDispersion_f;
            } uRDI;
        };                          // 32 bits. Total dispersion.               (NTP short format)
    
        uint32_t refId;             // 32 bits. Reference clock identifier.
        uint32_t refTm_s;           // 32 bits. Reference time-stamp seconds.
        uint32_t refTm_f;           // 32 bits. Reference time-stamp fraction of a second.
        uint32_t origTm_s;          // 32 bits. Originate time-stamp seconds.
        uint32_t origTm_f;          // 32 bits. Originate time-stamp fraction of a second.
        uint32_t rxTm_s;            // 32 bits. Received time-stamp seconds.
        uint32_t rxTm_f;            // 32 bits. Received time-stamp fraction of a second.
    
        // “Time at the server when the response left for the client.” 
        uint32_t txTm_s;            // 32 bits and the most important field the client cares about. Transmit time-stamp seconds.
        uint32_t txTm_f;            // 32 bits. Transmit time-stamp fraction of a second.
      } ntp_packet;                 // Total: 384 bits or 48 bytes.
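
    A quick sanity check is worth adding here, since the struct layout must match the 48-byte wire format exactly. All fields are naturally aligned, so mainstream compilers should not insert padding, but the assumption is cheap to verify:

        static_assert( sizeof( ntp_packet ) == 48, "ntp_packet must be exactly 48 bytes" );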
    

    Then, in your code, save your local time BEFORE sending the request, something like below,
    then call the NTP server (if you don't know how to set up the packet, it's very basic: zero the packet, then just init the first byte).

        ntp_packet packet;
        memset( &packet, 0, sizeof( packet ) );
        *( ( char* ) &packet ) = 0x1b; // Represents 27 in base 10 or 00011011 in base 2: LI = 0, VN = 3, mode = 3 (client).
        // Save the local time just before the request.
        const double dMyTime = std::chrono::duration_cast<std::chrono::duration<double>>( std::chrono::system_clock::now().time_since_epoch() ).count();

        int n = ::send( sockfd, ( char* ) &packet, sizeof( ntp_packet ), 0 );
        if ( n == SOCKET_ERROR ){
            error( "ERROR writing to socket" );
        }
        // Wait and receive the packet back from the server.
        n = ::recv( sockfd, ( char* ) &packet, sizeof( ntp_packet ), 0 );
        if ( n == SOCKET_ERROR ){
            error( "ERROR reading from socket" );
        }
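
    The snippet above assumes sockfd is already a connected UDP socket. A minimal sketch of that setup (POSIX shown; under Winsock you would add WSAStartup and use closesocket), with "pool.ntp.org" as a placeholder host name and the standard NTP port 123:

        #include <sys/socket.h>
        #include <netdb.h>
        #include <unistd.h>
        #include <cstring>

        // Returns a connected UDP socket to the given NTP host, or -1 on failure.
        int connect_ntp( const char* host )   // e.g. "pool.ntp.org" (placeholder)
        {
            addrinfo hints;
            memset( &hints, 0, sizeof( hints ) );
            hints.ai_family   = AF_INET;
            hints.ai_socktype = SOCK_DGRAM;   // NTP runs over UDP

            addrinfo* res = nullptr;
            if ( getaddrinfo( host, "123", &hints, &res ) != 0 )  // NTP port is 123
                return -1;

            int sockfd = ::socket( res->ai_family, res->ai_socktype, res->ai_protocol );
            if ( sockfd >= 0 && ::connect( sockfd, res->ai_addr, res->ai_addrlen ) != 0 ){
                ::close( sockfd );
                sockfd = -1;   // connect() on UDP just fixes the peer address for send()/recv()
            }
            freeaddrinfo( res );
            return sockfd;     // also consider a receive timeout (SO_RCVTIMEO) before recv()
        }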
    

    Now you can extract what you need:

    // The fields arrive in network byte order, so convert with ntohs()/ntohl() first.
    double dDelay       = (double)ntohs(packet.uRDE.rootDelay_s) + (double)ntohs(packet.uRDE.rootDelay_f)/65536.0;
    double dServerTime  = (double)ntohl(packet.txTm_s) - (double)2208988800 + (double)ntohl(packet.txTm_f)/4294967296.0; // 2208988800 s shifts the 1900 NTP epoch to the 1970 Unix epoch
    double dOffset      = dServerTime - (dMyTime + dDelay/2);   // Add delay/2 as the average uplink time + processing on the server before it stamped dServerTime
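
    To actually correct the clock you then apply dOffset, either by stepping the device clock through whatever settime API it exposes, or by correcting each reading. A sketch:

        // Corrected wall-clock time in seconds since the Unix epoch,
        // assuming dOffset (computed above) is kept in scope and refreshed periodically.
        double now_corrected( double dOffset )
        {
            const double local = std::chrono::duration_cast<std::chrono::duration<double>>(
                std::chrono::system_clock::now().time_since_epoch() ).count();
            return local + dOffset;
        }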
    

    Caveat:
    it assumes that uplink + NTP server processing is roughly equal to downlink.
    Look at dDelay: round trips are generally in the ms range, and that represents your maximum error, so it may be accurate enough.
    Under Windows I successfully got the clock drift with sub-millisecond accuracy, which is generally more than adequate on a non-real-time OS.
    Also, I didn't clearly figure out how the round trip is computed when you send 0 as your originate time. It could be the IP packet timestamp.

    Else:
    A more complex method (I don't have a snippet) requires a succession of exchanges with the NTP server
    (largely overkill on an OS like Windows or vanilla Unix):
    you preset your own local time in the packet you send, and the NTP server sends back the time at which it was received.
    The purpose of all that cooking is to figure out your downlink delay (and only it).
    Customary usage is to do this 4 to 16 times, use statistical methods to discard outliers and keep only the inliers (a standard-deviation filter is enough to get a view of that), and finally average the inliers. This lets you compute a decent approximation of your uplink and downlink times, and hence compute your clock drift more accurately based on the NTP transmit time; see the sketch below.
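
    For reference, the standard on-wire computation from RFC 5905 uses four timestamps per exchange: T1 = client transmit, T2 = server receive, T3 = server transmit, T4 = client receive. A sketch of that computation plus a simple standard-deviation inlier filter (the helper names are mine, not from any library):

        #include <cmath>
        #include <vector>

        struct Sample { double offset; double delay; };

        // One exchange, all timestamps in seconds.
        Sample compute_sample( double t1, double t2, double t3, double t4 )
        {
            Sample s;
            s.offset = ( ( t2 - t1 ) + ( t3 - t4 ) ) / 2.0;  // clock offset (RFC 5905)
            s.delay  = ( t4 - t1 ) - ( t3 - t2 );            // pure network round trip
            return s;
        }

        // Average the offsets of samples whose delay lies within one
        // standard deviation of the mean delay (discards outliers).
        double average_inliers( const std::vector<Sample>& samples )
        {
            if ( samples.empty() ) return 0.0;
            double mean = 0.0, var = 0.0;
            for ( const Sample& s : samples ) mean += s.delay;
            mean /= samples.size();
            for ( const Sample& s : samples ) var += ( s.delay - mean ) * ( s.delay - mean );
            const double sd = std::sqrt( var / samples.size() );

            double sum = 0.0; int n = 0;
            for ( const Sample& s : samples )
                if ( std::fabs( s.delay - mean ) <= sd ) { sum += s.offset; ++n; }
            return n ? sum / n : 0.0;
        }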

    Hope it helps