Tags: arrays, fortran, double-precision, gfortran

How to set double precision for elements of an array in Fortran 90


I am not very familiar with Fortran and want to compare some Fortran code to C code. I have the following definition for an array whose elements I want in double precision (as I want to compare numbers between the Fortran and the C code at double precision).

INTEGER, PARAMETER :: n = 3
DOUBLE PRECISION, DIMENSION(n, 1) :: grid

Later I have

grid(1,1) = -2.85697001387280    
grid(2,1) = -1.35562617997427   
grid(3,1) =  0.00000000000000       

Running the code through the debugger, I was surprised to see that it reports

grid = [-2.8569700717926025, -1.3556262254714966, 0]

which is only correct to single precision. I have also tried REAL*8 and REAL(8) instead of DOUBLE PRECISION. I find this all the more surprising because the following code

INTEGER, PARAMETER :: n = 3
INTEGER :: i
REAL(8) :: mina = 0.5, maxa = 5.0
REAL(8) :: lmina, lmaxa, step
REAL(8), DIMENSION(n,1) :: a

lmina = log(mina)
lmaxa = log(maxa)
step = (lmaxa - lmina) / (n - 1)
DO i=1,n
   a(i,1)=lmina+(i-1.0)*step
END DO

does produce scalars and an array a that are accurate to double precision and exactly equal to the numbers from the C code. Why is this different in the case of the hard-coded elements, and how can I fix it?

FYI, I compile the debug version as follows:

gfortran -g -o myfile myfile.f90

Solution

  • Fortran is like C in that numeric literals have a default data type, and no attempt is made to infer the type from the left-hand side of an assignment. Fortran is not like C in that the default for floating point literals is single precision (REAL in F77 and earlier, REAL with default KIND in F90 and later). This must be overridden to get a double precision constant (DOUBLE PRECISION or in F90 and later REAL with specific non-default KIND). Compare with C where you must override the default to get a single precision (float) constant.
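
    For example (a minimal sketch, not taken from the question), the default-kind literal 0.1 is rounded to single precision before it is widened, while 0.1D0 is a true double precision constant:

    PROGRAM literal_demo
      DOUBLE PRECISION :: x, y
      x = 0.1     ! default (single precision) literal: rounded to single first, then widened
      y = 0.1D0   ! double precision literal: converted directly, no intermediate rounding
      PRINT *, x  ! prints roughly 0.10000000149011612
      PRINT *, y  ! prints roughly 0.10000000000000001
    END PROGRAM literal_demo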

    In the code that you subsequently cited:

    REAL(8) :: mina = 0.5, maxa = 5.0 
    REAL(8) :: lmina, lmaxa, step
    REAL(8), DIMENSION(n,1) :: a
    
    lmina = log(mina)
    lmaxa = log(maxa)
    step = (lmaxa - lmina) / (n - 1)
    DO i=1,n
       a(i,1)=lmina+(i-1.0)*step
    END DO
    

    Here it is unsurprising that the values end up being double precision without rounding.

    1. mina and maxa are double precision (using the implementation-specific KIND value of 8; see the other answer for the risks of working this way). They are initialized from single precision literals, but both values are exactly representable in binary floating point, so there is no rounding (see the sketch after this list).
    2. lmina and lmaxa are double precision and calculated using a double precision LOG function.
    3. step is double precision and calculated from double precision and integer values.
    4. The value computed in the loop is from double precision variables (lmina and step) and a single precision literal 1.0 which is exactly representable and thus not rounded (and gets promoted to double precision in the computation).
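
    To see the role of exact representability (a sketch; the variable names are illustrative), compare an initializer that is a sum of powers of two with one that is not:

    PROGRAM exact_demo
      REAL(8) :: ok = 5.0   ! 5.0 = 4 + 1 is exact in binary, so widening loses nothing
      REAL(8) :: off = 5.1  ! 5.1 is not exact in binary; the single precision rounding persists
      PRINT *, ok           ! 5.0000000000000000
      PRINT *, off          ! roughly 5.0999999046325684
    END PROGRAM exact_demo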

    You can get a double precision constant by using D instead of E in scientific notation or by appending an underscore followed by the appropriate KIND parameter (which may be either a numeric literal or an integer PARAMETER declared elsewhere). A common idiom for doing this is to have a MODULE which is USEd to define a common numeric KIND type to use throughout; an example of this style is given in the other answer.