It's fairly common knowledge that if you can access an element of an array as arr[i] in C, you can also access it as i[arr], because both just boil down to *(arr + i) and addition is commutative. My question is why this works for data types larger than char: since sizeof(char) is 1, I would have expected the addition to advance the pointer by just one char.
Perhaps this example makes it clearer:
#include <string.h>
#include <stdio.h>
struct large { char data[1024]; };
int main( int argc, char * argv[] )
{
    struct large arr[4];
    memset( arr, 0, sizeof( arr ) );
    printf( "%zu\n", sizeof( arr ) ); // prints 4096
    strcpy( arr[0].data, "should not be accessed (and is not)" );
    strcpy( arr[1].data, "Hello world!" );
    printf( "%s, %s, %s\n", arr[1].data, 1[arr].data, (*(arr+1)).data );
    // prints Hello world!, Hello world!, Hello world!
    // I expected `hould not be accessed (and is not)' for #3 at least
    return 0;
}
So why does adding one to the array pointer advance it by sizeof( struct large )?
In C, pointer arithmetic is defined so that writing ptr + k does not advance the pointer by k bytes, but by k objects. Thus if you have a pointer into an integer array and write *(myIntArrayPointer + 3), you are dereferencing the element at index 3 in the array, not the integer that starts three bytes past the start of the array.
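To make that scaling visible, here is a minimal sketch (reusing the struct large from your question) that casts to char * to expose the raw byte distance covered by a +1 step; the 1024 is sizeof(struct large), and the second result is whatever sizeof(int) is on your platform:

#include <stdio.h>

struct large { char data[1024]; };

int main( void )
{
    struct large arr[4];
    int ints[4];

    /* +1 on a struct large * moves by sizeof(struct large) bytes... */
    printf( "%td\n", (char *)( arr + 1 ) - (char *)arr );   /* prints 1024 */

    /* ...and +1 on an int * moves by sizeof(int) bytes. */
    printf( "%td\n", (char *)( ints + 1 ) - (char *)ints ); /* typically prints 4 */

    return 0;
}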
Similarly, if you subtract two pointers, you get the logical number of elements between them, not the total number of bytes. Thus writing (myIntArrayPointer + 3) - myIntArrayPointer yields the value 3, even though there are 3 * sizeof(int) bytes between them.
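As a quick sketch of that (myIntArrayPointer is just the hypothetical name from above, here pointed at a local array): subtracting the pointers directly counts elements, while casting both to char * first counts bytes:

#include <stdio.h>

int main( void )
{
    int values[8];
    int *myIntArrayPointer = values;

    /* Pointer subtraction counts elements, not bytes. */
    printf( "%td\n", ( myIntArrayPointer + 3 ) - myIntArrayPointer );  /* prints 3 */

    /* Casting to char * first gives the distance in bytes instead. */
    printf( "%td\n", (char *)( myIntArrayPointer + 3 ) - (char *)myIntArrayPointer ); /* prints 3 * sizeof(int), typically 12 */

    return 0;
}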
Hope this helps!