I have a binary string (shown in hex below) and am using CRC-16-CCITT. I want all of my CRCs to come out to a set value, 0x1D0F. I know it is possible to make the CRC match this value by appending 2 bytes to the end of the original message, but I am not sure how to calculate what the appended value needs to be.
Ex.
0x01 0000 0000 0000 0000 0000 0000 0000 0000 13D8
The appended value here is 0x13D8.
Appending 0x13D8 to the message makes the CRC come out to the desired 0x1D0F.
Any help on how to calculate this 0x13D8 value would be greatly appreciated.
If it can be done neatly in code, that'd be a bonus!
The theory is quite simple, but of course it takes some care to implement correctly.
You can choose any 16 bits anywhere in the message (two bytes at the end, or 16 individual bits scattered wherever you like) to be undefined. Call them x_i for i = 0..15. Then run a bit-by-bit CRC algorithm over the message, but instead of computing bit values, generate and update the coefficients of 16 linear equations in the x_i, one for each of the 16 bits of the CRC.
You then have a simple matrix equation Ax + b = c, where c is the desired CRC. The operations in Ax + b = c are not the usual multiplication and addition, but single-bit and and exclusive-or operations.
Now you use the usual methods to invert the matrix A, which is actually easier with and and xor (addition and subtraction are now the same operation, exclusive-or), compute b ^ c, and multiply that by the inverse. The result is the set of values to put in the x_i bits to get the desired CRC.
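As an illustration, here is a minimal sketch of the whole procedure in Python, assuming the common CRC-16/CCITT-FALSE parameters (polynomial 0x1021, initial value 0xFFFF, no reflection, no final xor); if your CRC variant differs, swap in your own routine. It takes the 16 unknown bits to be two bytes appended to the message, builds the columns of A by flipping one suffix bit at a time, and solves the system over GF(2) by Gaussian elimination:

```python
def crc16_ccitt(data, crc=0xFFFF):
    # Bit-by-bit CRC-16/CCITT-FALSE: poly 0x1021, init 0xFFFF, MSB-first.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def forge_suffix(message, target):
    """Return two bytes s such that crc16_ccitt(message + s) == target."""
    b = crc16_ccitt(message + b'\x00\x00')          # the constant term b
    # Column i of A: how flipping bit i of the 16-bit suffix changes the CRC.
    cols = [crc16_ccitt(message + (1 << i).to_bytes(2, 'big')) ^ b
            for i in range(16)]
    # Build 16 equations: row j records which x_i affect bit j of the CRC.
    rhs = b ^ target
    rows = []
    for j in range(16):
        row = sum(((cols[i] >> j) & 1) << i for i in range(16))
        rows.append([row, (rhs >> j) & 1])
    # Gaussian elimination over GF(2): addition and subtraction are both xor.
    for i in range(16):
        piv = next(r for r in range(i, 16) if (rows[r][0] >> i) & 1)
        rows[i], rows[piv] = rows[piv], rows[i]
        for r in range(16):
            if r != i and (rows[r][0] >> i) & 1:
                rows[r][0] ^= rows[i][0]
                rows[r][1] ^= rows[i][1]
    x = sum(rows[i][1] << i for i in range(16))
    return x.to_bytes(2, 'big')

msg = bytes([0x01]) + bytes(16)                     # the example message
suffix = forge_suffix(msg, 0x1D0F)
assert crc16_ccitt(msg + suffix) == 0x1D0F
```

The matrix A here is always invertible, because appending 16 bits to a fixed prefix is a bijective map onto the 16-bit CRC, so the pivot search never fails.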
An additional simplification is that you don't need the actual message, just its length and the locations of the x_i; do the above with all the other bits of the message set to zero. This works because if you have two messages P and Q of the same length, then crc(P) ^ crc(Q) = crc(P ^ Q). (This holds for the core CRC algorithm, ignoring pre- and post-processing of the CRC.)
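That superposition property is easy to check directly with the core algorithm (zero initial value, no post-processing), for example:

```python
def crc16_core(data, crc=0):
    # Core CRC-16-CCITT (poly 0x1021): zero init, no final xor, MSB-first.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

P = bytes([0x31, 0x32, 0x33, 0x34])
Q = bytes([0x00, 0xFF, 0x55, 0xAA])
PQ = bytes(p ^ q for p, q in zip(P, Q))             # P ^ Q, bytewise
assert crc16_core(P) ^ crc16_core(Q) == crc16_core(PQ)
```

Note that this only holds with a zero initial value; with a nonzero preset like 0xFFFF, the preset's contribution appears once in crc(P ^ Q) but cancels in crc(P) ^ crc(Q), which is why the text says it applies to the core algorithm only.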
Update:
You can download spoof.c, which solves the problem of modifying a message to produce a particular CRC.