Tags: abi, cortex-m, bare-metal, compiler-options, arm-none-eabi-gcc

How do the `aapcs` and `aapcs-linux` ABI options differ when compiling for bare-metal ARM with gcc?


I am trying to port an application to ARM's arm-none-eabi-gcc toolchain. The application is intended to run on a bare-metal target.

The only two suitable values for the `-mabi` option in this case appear to be `aapcs` and `aapcs-linux`. From the Debian documentation and Embedded Linux from Source, I know that `aapcs-linux` uses a fixed 4-byte enum size, whereas `aapcs` makes enums "variable length" (each enum is packed into the smallest integer type that can hold all of its values). However, I can't find any information on what other differences (if any) there might be.

Does anyone know the full list of differences between these two ABI options?


Solution

  • I have downloaded the source for arm-none-eabi-gcc version 10.3-2021.10.

    Besides the enum size, there is only one other difference that I can find (in gcc/config/arm/arm.c):

    #undef TARGET_INIT_LIBFUNCS
    #define TARGET_INIT_LIBFUNCS arm_init_libfuncs
    
    ...
    
    /* Set up library functions unique to ARM.  */
    static void
    arm_init_libfuncs (void)
    {
      machine_mode mode_iter;
    
      /* For Linux, we have access to kernel support for atomic operations.  */
      if (arm_abi == ARM_ABI_AAPCS_LINUX)
        init_sync_libfuncs (MAX_SYNC_LIBFUNC_SIZE);

      /* ... (rest of function omitted) ...  */
    }

    So, from the above, it appears that additional atomic-operation library functions are registered for aapcs-linux. Here is the function it calls:

    void
    init_sync_libfuncs (int max)
    {
      if (!flag_sync_libcalls)
        return;
    
      init_sync_libfuncs_1 (sync_compare_and_swap_optab,
                "__sync_val_compare_and_swap", max);
      init_sync_libfuncs_1 (sync_lock_test_and_set_optab,
                "__sync_lock_test_and_set", max);
    
      init_sync_libfuncs_1 (sync_old_add_optab, "__sync_fetch_and_add", max);
      init_sync_libfuncs_1 (sync_old_sub_optab, "__sync_fetch_and_sub", max);
      init_sync_libfuncs_1 (sync_old_ior_optab, "__sync_fetch_and_or", max);
      init_sync_libfuncs_1 (sync_old_and_optab, "__sync_fetch_and_and", max);
      init_sync_libfuncs_1 (sync_old_xor_optab, "__sync_fetch_and_xor", max);
      init_sync_libfuncs_1 (sync_old_nand_optab, "__sync_fetch_and_nand", max);
    
      init_sync_libfuncs_1 (sync_new_add_optab, "__sync_add_and_fetch", max);
      init_sync_libfuncs_1 (sync_new_sub_optab, "__sync_sub_and_fetch", max);
      init_sync_libfuncs_1 (sync_new_ior_optab, "__sync_or_and_fetch", max);
      init_sync_libfuncs_1 (sync_new_and_optab, "__sync_and_and_fetch", max);
      init_sync_libfuncs_1 (sync_new_xor_optab, "__sync_xor_and_fetch", max);
      init_sync_libfuncs_1 (sync_new_nand_optab, "__sync_nand_and_fetch", max);
    }
    

    I haven't looked into this deeply enough to know under what circumstances flag_sync_libcalls would be true or false, though it appears to correspond to the -f[no-]sync-libcalls option, which is enabled by default.

    That said, my searching has not turned up any direct calls to arm_init_libfuncs, nor any other uses of the TARGET_INIT_LIBFUNCS macro. Note, though, that target hooks like this are normally invoked indirectly through the targetm structure (as targetm.init_libfuncs ()), so a plain text search for the macro name would miss the call site. Either way, I don't know whether this difference ever amounts to anything in practice.
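    For context, the `__sync_*` names registered above are GCC's legacy atomic builtins. A minimal sketch of their use follows; whether a given call expands to inline atomic instructions or to an out-of-line libcall (the case init_sync_libfuncs prepares for) depends on the target, the architecture variant, and flags like -mabi, so the code itself is portable:

    ```c
    #include <assert.h>

    int main (void)
    {
      int counter = 0;

      /* Atomically add 5 and return the *old* value.  On targets without
         native atomics of this width, GCC can emit this as a call to
         __sync_fetch_and_add_4 instead of expanding it inline.  */
      int old = __sync_fetch_and_add (&counter, 5);
      assert (old == 0 && counter == 5);

      /* Compare-and-swap: if counter == 5, store 9; returns the value
         that was previously in memory.  */
      int seen = __sync_val_compare_and_swap (&counter, 5, 9);
      assert (seen == 5 && counter == 9);

      return 0;
    }
    ```

    Inspecting the generated assembly (or the undefined symbols of the object file) for a trivial program like this, compiled once with -mabi=aapcs and once with -mabi=aapcs-linux, would be one way to observe whether the libcall registration actually changes anything.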