I do not understand the following behaviour on my Nvidia Jetson Nano dev board.
C code sample:
//main.c
#include <stdio.h>

int main()
{
    int fred = 123;
    int i;

    for (i = -10; i <= 10; i++)
        printf("%d / %d == %d\n", fred, i, fred / i);

    return 0;
}
Compiled with:
gcc main.c -ggdb
Running the resulting a.out executable yields the following output...
123 / -10 == -12
123 / -9 == -13
123 / -8 == -15
123 / -7 == -17
123 / -6 == -20
123 / -5 == -24
123 / -4 == -30
123 / -3 == -41
123 / -2 == -61
123 / -1 == -123
123 / 0 == 0 //unexpected!
123 / 1 == 123
123 / 2 == 61
123 / 3 == 41
123 / 4 == 30
123 / 5 == 24
123 / 6 == 20
123 / 7 == 17
123 / 8 == 15
123 / 9 == 13
123 / 10 == 12
The exact same code compiled on an ancient Pentium 4 machine using gcc 3.7 causes, as expected, a runtime exception when i reaches 0 and the division by zero is attempted.
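For reference, a small standalone test like the one below makes the difference visible (the file name and the on_fpe handler are mine, written for this question, not part of the original program). On x86 Linux the kernel delivers SIGFPE when the integer division by zero executes, so the handler fires; on the Jetson Nano the division should simply produce 0 and the handler is never entered.

//sigfpe_test.c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_fpe(int sig)
{
    (void)sig;
    /* Only async-signal-safe calls here; returning would re-execute the
       faulting instruction, so report and exit immediately. */
    static const char msg[] = "caught SIGFPE: the CPU trapped the division by zero\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    volatile int zero = 0; /* volatile stops gcc folding 123/0 at compile time */

    signal(SIGFPE, on_fpe);
    printf("123 / 0 == %d\n", 123 / zero);
    return 0;
}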
The Nvidia board is running Ubuntu 18.04 LTS with gcc version 7.4.0 (latest), and in every other respect it runs beautifully. I have also compiled the equivalent Ada version of this code, and a runtime exception is raised as one would expect (because Ada adds safety checks on my behalf).
I realise that in C, "division by zero yields undefined behaviour" is likely the explanation for this, but it is puzzling to me that two versions of the same compiler suite give such different results for the same operation.
What circumstances could cause an Nvidia Tegra ARM (64-bit) CPU to allow a division by zero to pass unnoticed by the OS?
EDIT: Details about the CPU from /proc/cpuinfo...
$ cat /proc/cpuinfo
processor : 0
model name : ARMv8 Processor rev 1 (v8l)
BogoMIPS : 38.40
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x1
CPU part : 0xd07
CPU revision : 1
.... truncated ....
The Nvidia Jetson Nano dev board uses the ARM Cortex-A57, which is based on the ARMv8 architecture. According to the ARMv8 instruction set documentation, integer division by zero returns zero and is not trapped.
2.3 Divide instructions
ARMv8-A supports signed and unsigned division of 32-bit and 64-bit sized values.
Instruction    Description
SDIV           Signed divide
UDIV           Unsigned divide
...
Overflow and divide-by-zero are not trapped:
• Any integer division by zero returns zero
So the compiler generates sdiv in this case, and the CPU returns 0 without raising any exception. When you compile the same code on different platforms, each CPU may react differently to division by zero. As you mentioned in your question, in the case of division by 0 the behaviour is undefined by the C standard.
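If you want a guaranteed failure instead of whatever the hardware happens to return, the check has to be written in C yourself. A minimal sketch, using a hypothetical checked_div helper (the name is mine, not anything gcc provides), could look like this:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Make the two undefined cases of signed int division explicit instead of
   relying on what sdiv happens to return. */
static int checked_div(int num, int den)
{
    if (den == 0) {
        fprintf(stderr, "checked_div: division by zero (%d / 0)\n", num);
        abort();
    }
    if (num == INT_MIN && den == -1) { /* the other undefined case: overflow */
        fprintf(stderr, "checked_div: overflow (INT_MIN / -1)\n");
        abort();
    }
    return num / den;
}

int main(void)
{
    int fred = 123;
    int i;

    for (i = -10; i <= 10; i++)
        printf("%d / %d == %d\n", fred, i, checked_div(fred, i));

    return 0;
}

With this wrapper the loop aborts at i == 0 on the Jetson Nano as well, instead of silently printing 0.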