Tags: floating-point, cuda, double, double-double-arithmetic

Emulating FP64 with 2 FP32 on a GPU


If one were to emulate double-precision floating point with two single-precision floats, what would the performance be like, and can it be done well?

Currently NVIDIA charges quite a premium for double-precision-enabled Tesla cards, which deliver double-precision throughput at one third of the single-precision rate (notable exceptions: Titan/Titan Black).

If one were to use a GeForce GPU with gimped double precision and instead emulate double precision using two single-precision floats, what would the performance be like?


Solution

  • You can get a rough estimate of the performance by counting the number of float operations required to implement each double-float operation; for an accurate count you would want to inspect the generated machine code with cuobjdump --dump-sass. Below I show a double-float multiplication that takes full advantage of FMA (fused multiply-add) support on the GPU. For double-float addition code, I would point you to a paper by Andrew Thall, as I do not have the time to code this up carefully right now (a rough, unvalidated sketch of the two-sum approach is included after the multiplication code below). From previous analysis I believe the addition code given in the paper is correct, and that it avoids common pitfalls of faster but less accurate implementations (which lose accuracy when the magnitudes of the operands are within a factor of two of each other).

    If you are a registered CUDA developer, you can download double-double code from NVIDIA's developer website (log in at https://developer.nvidia.com), which is released under a BSD license, and rework it relatively quickly into double-float code. NVIDIA's double-double code supports the operations addition, subtraction, division, square root, and reciprocal square root.

    As you can see, the multiplication below requires 8 float instructions; the unary negation is absorbed into an FMA. The addition requires around 20 float instructions. However, the instruction sequences for double-float operations also require temporary variables, which increases register pressure and can decrease occupancy. A reasonably conservative estimate may therefore be that double-float arithmetic performs at 1/20 the throughput of native float arithmetic. You can easily measure this yourself in the context relevant to you, i.e. for your own use case(s).

    typedef float2 dblfloat;  // .y = head, .x = tail
    
    __host__ __device__ __forceinline__ 
    dblfloat mul_dblfloat (dblfloat x, dblfloat y)
    {
        dblfloat t, z;
        float sum;
        t.y = x.y * y.y;              // head product
        t.x = fmaf (x.y, y.y, -t.y);  // rounding error of head product, captured exactly via FMA
        t.x = fmaf (x.x, y.x, t.x);   // + tail * tail
        t.x = fmaf (x.y, y.x, t.x);   // + head * tail
        t.x = fmaf (x.x, y.y, t.x);   // + tail * head
        /* normalize result (fast two-sum) */
        sum = t.y + t.x;
        z.x = (t.y - sum) + t.x;
        z.y = sum;
        return z;
    }
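
    For orientation, here is a sketch of what an accurate, two-sum based double-float addition along the lines of Thall's paper might look like. This is my own transcription of the approach, using the same .y = head / .x = tail convention as above; it has not been validated nearly as thoroughly as the multiplication, so treat it as a starting point rather than as reference code.

    /* Sketch only: accurate double-float addition built from Knuth's two-sum,
       in the spirit of Thall's paper; not validated to the same degree as the
       multiplication above. Convention: .y = head, .x = tail. */
    __host__ __device__ __forceinline__
    dblfloat add_dblfloat (dblfloat x, dblfloat y)
    {
        float hs, he, ts, te, w;
        dblfloat z;
        /* error-free sum of the heads (Knuth's two-sum) */
        hs = x.y + y.y;
        w  = hs - x.y;
        he = (x.y - (hs - w)) + (y.y - w);
        /* error-free sum of the tails */
        ts = x.x + y.x;
        w  = ts - x.x;
        te = (x.x - (ts - w)) + (y.x - w);
        /* combine, then renormalize twice (fast two-sum) */
        he = he + ts;
        ts = hs + he;           /* reuse ts as the new head */
        he = he - (ts - hs);
        he = he + te;
        z.y = ts + he;
        z.x = he - (z.y - ts);
        return z;
    }

    As written, this comes to roughly 20 float operations, consistent with the estimate above.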
    

    Note that in various applications, full double-float arithmetic may not be necessary. Instead one can use float computation, augmented by error compensating techniques, one of the oldest of which is the Kahan summation. I gave a brief overview of easily available literature on such methods in a recent posting in the NVIDIA developer forums. In the comments above, Robert Crovella also pointed to a GTC 2015 talk by Scott LeGrand, which I haven't had time to check out yet.
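
    To illustrate the compensated-summation idea, here is a minimal sketch of Kahan summation in float; this is my own generic example, not code taken from any of the sources mentioned above.

    /* Minimal sketch of Kahan (compensated) summation in float. The running
       compensation c recovers most of the rounding error lost in each addition.
       Note that aggressive floating-point reassociation by a compiler can
       defeat this transformation. */
    __host__ __device__ float kahan_sum (const float *a, int n)
    {
        float s = 0.0f;  // running sum
        float c = 0.0f;  // running compensation (accumulated rounding error)
        for (int i = 0; i < n; i++) {
            float y = a[i] - c;  // apply correction to the next term
            float t = s + y;     // low-order bits of y may be lost here
            c = (t - s) - y;     // recover the lost part (algebraically zero)
            s = t;
        }
        return s;
    }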

    As for accuracy, double-float has a representational precision of 49 (24+24+1) bits, compared with IEEE-754 double, which provides 53 bits. However, double-float cannot maintain this precision for operands small in magnitude, as the tail portion can become a denormal or zero. When denormal support is turned on, the 49 bits of precision are guaranteed for 2^-101 <= |x| < 2^128. Denormal support for float is turned on by default in the CUDA tool chain for architectures >= sm_20, which means all architectures supported by the currently shipping version, CUDA 7.0.
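
    For reference, a double can be split into this head/tail representation, and reassembled, as sketched below; the helper names are my own (they are not taken from NVIDIA's double-double package). The head captures the 24 most significant bits of the significand, the tail the remainder.

    /* Sketch: conversion between double and double-float (hypothetical helpers). */
    __host__ __device__ __forceinline__
    dblfloat double_to_dblfloat (double a)
    {
        dblfloat z;
        z.y = (float)a;                  // head: a rounded to nearest float
        z.x = (float)(a - (double)z.y);  // tail: remainder, rounded to float
        return z;
    }
    
    __host__ __device__ __forceinline__
    double dblfloat_to_double (dblfloat a)
    {
        return (double)a.y + (double)a.x;
    }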

    As opposed to operations on IEEE-754 double data, double-float operations are not correctly rounded. For the double-float multiplication above, using 2 billion random test cases (with all source operands and results within the bounds stated above), I observed an upper bound of 1.42e-14 for the relative error. I do not have data for the double-float addition, but its error bound should be similar.
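
    A simplified sketch of how one might check the relative error on the host is shown below. This is not the harness that produced the figures above: it assumes mul_dblfloat and the conversion helpers sketched earlier are pasted into the same .cu file, and a serious test would use far more cases and a better random number generator.

    /* Illustrative host-side accuracy check for mul_dblfloat (standalone test
       program; compile with nvcc together with the definitions above). */
    #include <cstdio>
    #include <cstdlib>
    #include <cmath>
    
    int main (void)
    {
        double max_rel_err = 0.0;
        srand (12345);
        for (int i = 0; i < 10000000; i++) {
            /* random operands well inside the usable range */
            double a = ((double)rand () / RAND_MAX - 0.5) * 1.0e6;
            double b = ((double)rand () / RAND_MAX - 0.5) * 1.0e6;
            dblfloat x = double_to_dblfloat (a);
            dblfloat y = double_to_dblfloat (b);
            /* use the values actually represented, so operand conversion
               error is excluded from the measurement */
            double xa = dblfloat_to_double (x);
            double ya = dblfloat_to_double (y);
            double res = dblfloat_to_double (mul_dblfloat (x, y));
            double ref = xa * ya;  /* reference product, rounded to double */
            if (ref != 0.0) {
                double rel_err = fabs ((res - ref) / ref);
                if (rel_err > max_rel_err) max_rel_err = rel_err;
            }
        }
        printf ("max. rel. err = %.6e\n", max_rel_err);
        return 0;
    }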