floating-point, interpolation, precision, floating-accuracy, numerical-analysis

Chebychev points and floating point precision


I've seen in my lecture notes that the Chebychev points, defined (in MATLAB notation) as

n = 20;
i = (1:n+1)';
% Chebychev
xi = 5*cos((2*i-1)*pi/(2*(n+1)))

are not symmetric about zero in floating-point arithmetic. But if one applies the trigonometric identity cos(x) = sin(pi/2 - x) to the points above, one finds that the Chebychev points

xxi = 5*sin(pi*(n+2-2*i)/(2*(n+1)))

are now symmetric about zero.

Now, the big question: why does this substitution give the symmetry? I'm pretty sure it has something to do with pi, but pi appears in both formulations, so what is really going on? I'd like to see a formal argument or a mathematical explanation of the situation.
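For instance, continuing the snippets above, one can check the (a)symmetry numerically (a quick check using flip; flipud works the same way for column vectors):

% Both should be zero for a point set symmetric about zero
max(abs(xi  + flip(xi)))    % cosine points: expected small but nonzero
max(abs(xxi + flip(xxi)))   % sine points: exactly zero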


Solution

  • The sine formula uses points distributed symmetrically around zero, and the cosine formula does not. The features of the floating-point format, notably the granularity with which it represents numbers, are symmetric around zero, and therefore calculations that are symmetric around zero produce results that are symmetric around zero.

    The cosine formula uses cos at the points (2i−1)/(2n+2) π for 1 ≤ i ≤ n+1. For n=20, these points are 1/42 π, 3/42 π, 5/42 π,… 41/42 π.

    1/42 π is about 0.075. The greatest power of two not greater than 0.075 is 2^−4. When 1/42 π is calculated in the IEEE-754 binary64 format, which has 53 bits in its significand, the floating-point scaling is such that the highest bit position in the significand represents 2^−4 and the lowest bit position represents 2^−56, so the result must be rounded to the nearest multiple of 2^−56. In contrast, 41/42 π is about 3.067, and the leading bit position of its significand represents 2^1, while the lowest bit position represents 2^−51. So the result must be rounded to the nearest multiple of 2^−51, a quantum 32 times coarser than that for 1/42 π. So the rounding errors in floating-point calculations are generally different for 1/42 π and 41/42 π, for 3/42 π and 39/42 π, and so on.
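    One can inspect these spacings directly with MATLAB's eps function, which returns the distance from its argument to the next larger representable number (a quick illustration, not part of the formal argument):

    eps(1*pi/42)     % 2^-56, about 1.39e-17
    eps(41*pi/42)    % 2^-51, about 4.44e-16, i.e. 32 times coarser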

    The sine formula uses sin at the points (n+2−2i)/(2n+2) π for 1 ≤ i ≤ n+1. For n=20, these points are 20/42 π, 18/42 π, 16/42 π, … −16/42 π, −18/42 π, −20/42 π. Here, when 20/42 π and −20/42 π are calculated in binary64, they both use the same scaling for the significand, so their rounding errors are identical except in sign, and the calculated results are identical except for the sign bit. Similarly, 18/42 π and −18/42 π use the same scaling, and every term is paired with a symmetric partner, except for 0/42 π, but that is zero and has a calculation error (zero) that is symmetric with itself.
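    This pairing is easy to verify; in the sketch below (assuming flip, as before), the computed sine arguments come out as exact negations of one another, because the integer numerators ±(n+2−2i) are exact and rounding to nearest is symmetric in sign:

    n = 20;
    i = (1:n+1)';
    a = pi*(n+2-2*i)/(2*(n+1));   % arguments passed to sin
    all(a + flip(a) == 0)         % true: each argument is the exact negation of its partner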

    Further, typical implementations of the sin routine are symmetric around zero, so that sin(-x) and -sin(x) produce identical results. Commonly, they operate by reducing the argument modulo 2π (at least in effect) and evaluating a polynomial that approximates sine, and that polynomial is typically odd (it contains only odd powers of its variable x), so it is symmetric around zero. Thus evaluating sin(x) and sin(-x) preserves the symmetry, and so does the final multiplication by 5. (cos implementations may have a similar symmetry, but since the arguments in this case are already asymmetric, cos cannot restore symmetry.)
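    A simple spot check of this odd symmetry (it depends on the platform's math library, but it holds on typical IEEE-754 systems):

    x = rand(1e5,1)*2*pi;            % random test arguments
    all(sin(-x) == -sin(x))          % typically true: sin is implemented as an odd function
    all(5*sin(-x) == -(5*sin(x)))    % multiplying by 5 preserves the exact symmetry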