I'm maintaining a legacy Fortran codebase that (to my eye) relies on undefined behavior. There's a module defined like this
module the_module
   private :: baz
contains
   integer function foo(bar)
      type(my_type) :: bar ! my_type is defined elsewhere, and is a *large* data structure
      if (.not. baz(bar)) then
         foo = 0
         return
      end if
      ! ...
   end function
   logical function baz(bar)
      type(my_type) :: bar
      type(my_type) :: previous_bar
      if (bar%field /= previous_bar%field) goto 10
      ! ...
10    previous_bar = bar
   end function
end module the_module
I understand that if a variable is initialized at declaration, it implies the save attribute, which retains the value between calls. The baz function does not do that, nor does previous_bar explicitly have the save attribute. Why, then, has the code been able to rely on previous_bar containing the value passed in on the previous call? The first time baz is called, previous_bar appears to be a zeroed-out structure (character variables are spaces, numeric values are all 0).
My suspicion is that since my_type is a large data structure, it's probably being stored statically in memory, and the author just got lucky that the compiler chose to do that. My other guess is that it has something to do with the fact that baz is private (almost no other functions in this codebase are private). I'm using the Intel oneAPI legacy IFORT compiler.
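For reference, here is a minimal, self-contained illustration of the initialization-implies-save rule mentioned above (the module and counter names are hypothetical, chosen just for this sketch):

```fortran
module counter_demo
contains
   integer function next_id()
      ! Initialization at declaration implies the SAVE attribute,
      ! so counter retains its value between calls.
      integer :: counter = 0
      counter = counter + 1
      next_id = counter
   end function
end module counter_demo

program demo
   use counter_demo
   print *, next_id()  ! 1
   print *, next_id()  ! 2
end program demo
```

previous_bar in baz has neither an initializer nor an explicit save, so it does not get this treatment from the standard's point of view.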
You're absolutely right: this code relies on previous_bar being statically allocated by the compiler, which is allowed but not required by the standard. Furthermore, even if this variable were saved, the standard does not require compilers to zero-initialize it. baz being private makes no difference here. So, depending on the compiler (and its options), this code may or may not work.
Note that with compilers conforming to the Fortran 2018 standard, this code is even less likely to keep working: that revision made all procedures recursive by default, which pushes compilers toward stack allocation of non-saved local variables rather than static storage.
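The portable fix is to give previous_bar an explicit save attribute and to track whether a previous value exists at all, since the standard guarantees nothing about the contents of previous_bar on the first call. A minimal sketch, assuming the elided logic stays as in the original; the have_previous flag is a name introduced here, and the placeholder result assignments stand in for whatever baz actually computes:

```fortran
logical function baz(bar)
   type(my_type) :: bar
   type(my_type), save :: previous_bar        ! explicit SAVE: persists between calls
   logical, save :: have_previous = .false.   ! initializer implies SAVE as well
   baz = .true.                               ! placeholder; real result logic elided
   if (have_previous) then
      if (bar%field /= previous_bar%field) then
         ! ... (the logic that originally followed the goto)
      end if
   end if
   previous_bar = bar
   have_previous = .true.
end function
```

Until the code is fixed, IFORT's -save option (which forces static allocation of local variables) can reproduce the legacy behavior, but that is a crutch, not a solution.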