I have the following test assembly program:
.section .rodata
a: .byte 17
.section .text
.globl _start
_start:
mov $1, %eax
mov a(%rip), %ebx
int $0x80
And I've assembled and linked it into an executable called file. When I use objdump to disassemble it, I get the following expected output:
$ objdump --disassemble --section=.text file
file: file format elf64-x86-64
Disassembly of section .text:
0000000000400078 <_start>:
400078: b8 01 00 00 00 mov $0x1,%eax
40007d: 8b 1d 02 00 00 00 mov 0x2(%rip),%ebx # 400085 <a>
400083: cd 80 int $0x80
However, when I just print the binary with $ xxd file, the addresses don't even go up to 400078:
00000000: 7f45 4c46 0201 0100 0000 0000 0000 0000 .ELF............
00000010: 0200 3e00 0100 0000 b000 4000 0000 0000 ..>.......@.....
00000020: 4000 0000 0000 0000 e001 0000 0000 0000 @...............
...
00000340: 2700 0000 0000 0000 0000 0000 0000 0000 '...............
00000350: 0100 0000 0000 0000 0000 0000 0000 0000 ................
What accounts for the difference? It seems like xxd just offsets everything from 0, but then what "offset", if you can call it that, does objdump use? How can I reconcile where 400078 would be in the xxd output? Or do I need another program for that?
Why the differences in memory address or offset between xxd and objdump?
Because they show you largely unrelated views of the data.
xxd shows you the raw bytes of an arbitrary file, with no interpretation of their meaning. objdump (with the flags you used) shows you what the contents of memory would look like when your executable is loaded into memory.
objdump arrives at that view by examining and understanding the meaning of the ELF file header, program headers, and section headers.
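To make that mapping concrete, here is a minimal Python sketch (assuming a little-endian ELF64 file with no error handling; the file name file and the address 0x400078 are just the ones from your question) that walks the program headers and translates a virtual address into a file offset, the same way a loader or objdump relates the two views:

#!/usr/bin/env python3
# Minimal sketch: translate a virtual address to a file offset by reading the
# ELF64 program headers (little-endian only; no error handling).
import struct, sys

def vaddr_to_offset(path, vaddr):
    with open(path, "rb") as f:
        elf = f.read()
    e_phoff = struct.unpack_from("<Q", elf, 0x20)[0]      # program header table offset
    e_phentsize = struct.unpack_from("<H", elf, 0x36)[0]  # size of one program header
    e_phnum = struct.unpack_from("<H", elf, 0x38)[0]      # number of program headers
    for i in range(e_phnum):
        # Elf64_Phdr starts with p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz
        p_type, _p_flags, p_offset, p_vaddr, _p_paddr, p_filesz = struct.unpack_from(
            "<IIQQQQ", elf, e_phoff + i * e_phentsize)
        if p_type == 1 and p_vaddr <= vaddr < p_vaddr + p_filesz:  # PT_LOAD segment
            return vaddr - p_vaddr + p_offset
    return None

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "file"
    vaddr = int(sys.argv[2], 16) if len(sys.argv) > 2 else 0x400078
    off = vaddr_to_offset(path, vaddr)
    print("not in any PT_LOAD segment" if off is None else hex(off))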
You can use readelf --segments and readelf --sections to examine these headers.
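For a small non-PIE executable like this one, the first LOAD segment is typically mapped at virtual address 0x400000 with file offset 0 (readelf --segments will show the exact Offset and VirtAddr for your binary). In that case the file offset is simply the virtual address minus 0x400000, so 0x400078 would sit at offset 0x78 in the file, and something like xxd -s 0x78 file should show the same b8 01 00 00 00 bytes that objdump disassembles at 400078.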