I was wondering about the overhead of querying the size of an array in Fortran. The old Fortran (pre-f95) way was to pass the size of the array as a subroutine argument:
subroutine asub(nelem,ar)
integer,intent(in)::nelem
real*8,intent(in)::ar(nelem)
! do stuff with nelem such as allocate other arrays
end subroutine asub
With the size intrinsic introduced in Fortran 90, it can be done this way instead:
subroutine asub(ar)
real*8,intent(in)::ar(:)
! do stuff with size(ar) such as allocate other arrays
end subroutine asub
Is method 2 bad performance-wise if asub is called a million times?
I am asking because I am working on a relatively big code where some array sizes are global variables (not even passed as subroutine arguments), which is really bad in my opinion. Method 1 would require a lot of work to propagate the array sizes throughout the whole code, while method 2 is clearly faster to achieve in my case.
Thanks!
nelem is a number that you need to read from memory, and size(ar) is also a number that you need to read from memory. You need to inquire the value just once, and then you probably do a lot of computation over those nelem elements. The overhead of inquiring the size will be completely negligible.
OK, size(ar) is nominally a function call, but the compiler can often just insert a read of the right value from the array descriptor. And even if it remains an actual function call, it will still be made just once.
Differences, if any, will be elsewhere, mainly as described in the Q/A linked by francescalus: Passing arrays to subroutines in Fortran: Assumed shape vs explicit shape. Depending on what the compiler can assume about the array being contiguous in memory, it will be able to optimize the code better or worse (e.g. SIMD vectorization).
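If you keep assumed-shape dummies, one option worth knowing (assuming your compilers support Fortran 2008) is the contiguous attribute, which lets the compiler assume unit-stride memory access:

subroutine asub(ar)
implicit none
real*8,intent(in),contiguous::ar(:)   ! Fortran 2008: promises contiguous storage
! ... work over size(ar) elements as before ...
end subroutine asub

Be aware that if a caller passes a non-contiguous array section, the compiler will typically create a temporary contiguous copy, so this is a trade-off rather than a free win.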
As always, where performance matters, you should test and measure. Remember to enable all relevant compiler optimizations.
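For example, a minimal timing sketch using system_clock (the body of asub here is a placeholder; substitute the variant you actually want to measure):

program time_asub
implicit none
integer :: i, t0, t1, rate
real*8 :: ar(1000), s
ar = 1.d0
s = 0.d0
call system_clock(count_rate=rate)
call system_clock(t0)
do i = 1, 1000000
   call asub(ar)
end do
call system_clock(t1)
print *, 'elapsed seconds:', real(t1-t0)/real(rate)
print *, 'checksum:', s           ! keeps the calls from being optimized away
contains
   subroutine asub(ar)
   real*8,intent(in)::ar(:)
   s = s + ar(size(ar))           ! placeholder work touching size(ar)
   end subroutine asub
end program time_asub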