I'm currently fiddling around with DCPU-16 assembly (see http://0x10c.com/doc/dcpu-16.txt and http://jazzychad.net/dcpu.html).
There is one thing I don't understand about the way assembly instructions are translated to hex/binary.
As an example, take an instruction like
SET B, 0x0002
which is supposed to set the value of register B to decimal 2 (or hex 0x0002 or binary 0b000010)
Instruction format for DCPU-16 is
bbbbbbaaaaaaoooo
that is, 4 bits for the opcode (o) in the low bits, followed by 6 bits for the first value (a) and 6 bits for the second value (b).
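As a sanity check on my understanding, here is a minimal sketch in C of how those three fields would pack into one 16-bit word (the function name encode_instruction and its parameter names are my own, not from the spec):

#include <stdint.h>

/* Pack a DCPU-16 basic instruction: bbbbbbaaaaaaoooo.
   o = 4-bit opcode, a = 6-bit first value, b = 6-bit second value. */
static uint16_t encode_instruction(uint16_t o, uint16_t a, uint16_t b)
{
    return (uint16_t)(((b & 0x3F) << 10) | ((a & 0x3F) << 4) | (o & 0x0F));
}

With this, encode_instruction(0x1, 0x01, 0x02) returns 0x0811, matching the value I compute by hand below.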
When transforming the instruction by hand, this is how I would do it:
SET == 0x1 == 0b0001
B == 0x01 == 0b000001
0x0002 == 0b000010
ending up with the complete instruction being
0b0000100000010001 == 0x811
but the correct value for DCPU-16 is
0b1000100000010001 == 0x8811
that is, a leading 1 is added - why is that?
I'm totally new to assembly and any other kind of hardcore low-level machine-instruction stuff, so please bear with me if this is a very stupid question.
According to the specs:
Values: (6 bits)
0x00-0x07: register (A, B, C, X, Y, Z, I or J, in that order)
...
0x20-0x3f: literal value 0x00-0x1f (literal)
Thus, the literals 0x00-0x1f are specified by the instruction-values 0x20-0x3f - that is, the most significant bit (out of the 6) is set. So the literal 0x02 has the instruction-value 0x22.
The instruction-value 0x02 refers to the C register, so what you thought the assembled instruction should be, 0b0000100000010001 == 0x811, would actually be the instruction SET B, C.
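To make the rule concrete, here is a small sketch in C (the helper name literal_operand is hypothetical, not part of any DCPU-16 toolchain) that applies the 0x20 offset and re-assembles the instruction from the question:

#include <stdint.h>
#include <stdio.h>

/* Literals 0x00-0x1f are encoded as instruction-values 0x20-0x3f. */
static uint16_t literal_operand(uint16_t lit)
{
    return 0x20 + lit;                    /* e.g. 0x02 -> 0x22 */
}

int main(void)
{
    uint16_t o = 0x1;                     /* SET */
    uint16_t a = 0x01;                    /* register B */
    uint16_t b = literal_operand(0x0002); /* literal 2 -> 0x22 */
    printf("0x%04X\n", (unsigned)((b << 10) | (a << 4) | o)); /* prints 0x8811 */
    return 0;
}

The 0x20 offset is exactly where your leading 1 comes from: b == 0x22 == 0b100010 fills the top six bits of the word, turning 0x0811 into 0x8811.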