Tags: fortran, homebrew, apple-m1, gfortran

Gfortran: Large Arrays do not work on M1 Mac


My Fortran code no longer works on my M1 Mac (Ventura 13.5). Here is a simple example.

program mwe
    implicit none
    integer, parameter :: nx=500
    integer, parameter :: ny=500
    integer, parameter :: nz=500
    integer, parameter :: ncol=4

    integer :: image(nx, ny, nz, ncol-1)     ! 500*500*500*3 4-byte integers, ~1.5 GB of static storage
    double precision :: alpha(nx, ny, nz)    ! 500*500*500 8-byte reals, ~1.0 GB of static storage

    image = 0
    alpha = 0.0

end program mwe
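
For reference, the crash reproduces with a plain build and run like the following (the default a.out name matches the error output below; no special flags are assumed here):

    gfortran mwe.f90
    ./a.out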

Commenting out either of the arrays, alpha or image (declaration & assignment), makes the code work fine. But with both present, I get this when I run it:

dyld[10519]: dyld cache '(null)' not loaded: syscall to map cache into shared region failed
dyld[10519]: Library not loaded: /usr/lib/libSystem.B.dylib
  Referenced from: <43A71502-FD1F-3929-A1F3-3102B17ACF2D> /Users/USERNAME/Desktop/fortran_mwe/a.out
  Reason: tried: '/usr/lib/libSystem.B.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/lib/libSystem.B.dylib' (no such file), '/usr/lib/libSystem.B.dylib' (no such file, no dyld cache), '/usr/local/lib/libSystem.B.dylib' (no such file)
Abort trap: 6

There are some discussions online suggesting that allocatable arrays work, but they are not an option in my case. Any help is appreciated!


Edit: Info requested in comments:

Output from otool -l your_binary | fgrep -B1 -A10 LC_SEGMENT_64:

Load command 0
      cmd LC_SEGMENT_64
  cmdsize 72
  segname __PAGEZERO
   vmaddr 0x0000000000000000
   vmsize 0x0000000100000000
  fileoff 0
 filesize 0
  maxprot 0x00000000
 initprot 0x00000000
   nsects 0
    flags 0x0
Load command 1
      cmd LC_SEGMENT_64
  cmdsize 392
  segname __TEXT
   vmaddr 0x0000000100000000
   vmsize 0x0000000000004000
  fileoff 0
 filesize 16384
  maxprot 0x00000005
 initprot 0x00000005
   nsects 4
    flags 0x0
--
Load command 2
      cmd LC_SEGMENT_64
  cmdsize 152
  segname __DATA_CONST
   vmaddr 0x0000000100004000
   vmsize 0x0000000000004000
  fileoff 16384
 filesize 16384
  maxprot 0x00000003
 initprot 0x00000003
   nsects 1
    flags 0x10
--
Load command 3
      cmd LC_SEGMENT_64
  cmdsize 152
  segname __DATA
   vmaddr 0x0000000100008000
   vmsize 0x0000000095030000
  fileoff 0
 filesize 0
  maxprot 0x00000003
 initprot 0x00000003
   nsects 1
    flags 0x0
--
Load command 4
      cmd LC_SEGMENT_64
  cmdsize 72
  segname __LINKEDIT
   vmaddr 0x0000000195038000
   vmsize 0x0000000000004000
  fileoff 32768
 filesize 898
  maxprot 0x00000001
 initprot 0x00000001
   nsects 0
    flags 0x0

Output from otool -l your_binary | fgrep -B1 -A3 LC_MAIN:

Load command 13
       cmd LC_MAIN
   cmdsize 24
  entryoff 16140
 stacksize 0

Solution

  • Your binary has a __DATA segment with a virtual memory size of about 2.5 GB. With the binary at the default load address of 0x100000000, that segment extends into the region reserved for the dyld shared cache, which holds all the system libraries starting at 0x180000000. Both regions are subject to ASLR, so the runtime addresses will diverge a bit, but 2.5 GB is comfortably too much.

    The quick and dirty fix is to lower the default load address:

    -Wl,-pagezero_size,0x10000
    

    This will give you about 3.9GB more space that your binary can occupy.
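
    With gfortran, the -Wl, prefix just forwards what follows to the linker, so the full compile step could look roughly like this (mwe.f90 is a placeholder file name):

    gfortran mwe.f90 -o mwe -Wl,-pagezero_size,0x10000

    This shrinks the __PAGEZERO segment to 64 KB, so the executable's segments can start just above 0x10000 instead of at 0x100000000, which is where the extra room comes from.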

    A fix that's less hacky would be to move these massive arrays into a shared library, which can be loaded anywhere by dyld and doesn't have to fit before the 0x180000000 region.
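
    A minimal sketch of that approach (the names bigarrays.f90 / libbigarrays.dylib are made up for illustration): put the arrays into a module,

    ! bigarrays.f90 -- holds the large static arrays
    module bigarrays
        implicit none
        integer, parameter :: nx=500, ny=500, nz=500, ncol=4
        integer :: image(nx, ny, nz, ncol-1)
        double precision :: alpha(nx, ny, nz)
    end module bigarrays

    build it as a dylib and link the main program against it,

    gfortran -dynamiclib bigarrays.f90 -o libbigarrays.dylib
    gfortran mwe.f90 -L. -lbigarrays -o mwe

    and have the main program say use bigarrays instead of declaring the arrays itself.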

    Note, however, that you're really pushing the limit here. If the virtual memory size of your binary ever grows past 4 GB, various things start breaking and dyld will almost certainly fail to load your binary. So of course the proper fix here would be to allocate such large amounts of memory dynamically at runtime, as sketched below.
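
    For reference, the example from the question rewritten with allocatable arrays, which is what that proper fix amounts to, could look like this:

    program mwe
        implicit none
        integer, parameter :: nx=500, ny=500, nz=500, ncol=4

        ! heap-allocated at runtime instead of stored in the binary's __DATA segment
        integer, allocatable :: image(:, :, :, :)
        double precision, allocatable :: alpha(:, :, :)

        allocate(image(nx, ny, nz, ncol-1))
        allocate(alpha(nx, ny, nz))

        image = 0
        alpha = 0.0d0

        deallocate(image, alpha)
    end program mwe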