I have made the following Intel HEX file snippet to test with a couple of 8051 processor simulators:
```
:1000F5007002501F500CED2562FDEE3561FEEF35A7
:1001050060FFE56233F562E56133F561E56033F57E
```
One of the simulators I used is located here: http://www.jroweb.de/8051/
From my research, my understanding is that the checksum of an Intel HEX record is calculated by summing all of the hex-digit pairs (bytes) on the line except the last one, ANDing the result with 255 to get an 8-bit value, then inverting that value, adding 1, and taking the result mod 256 (i.e. the two's complement of the low byte of the sum).
I basically followed the math described by a respondent on these forums.
When I performed the calculations myself, the checksum values for each line in the snippet above came out correct. However, the 8051 processor simulator I mentioned reports a checksum error on the last line: it thinks the value should be 7F rather than 7E.
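For reference, this is the calculation I perform, written out as a small Python script (just a sketch; the function name intel_hex_checksum is my own, and the two record strings are the lines from my snippet). It prints A7 and 7E for the two records:

```python
def intel_hex_checksum(record: str) -> int:
    """Checksum for an Intel HEX record given without the leading ':'."""
    data = bytes.fromhex(record)      # each pair of hex digits -> one byte
    total = sum(data[:-1]) & 0xFF     # sum every byte except the stored checksum
    return (~total + 1) & 0xFF        # invert, add 1, mod 256 (two's complement)

records = [
    "1000F5007002501F500CED2562FDEE3561FEEF35A7",
    "1001050060FFE56233F562E56133F561E56033F57E",
]

for rec in records:
    stored = int(rec[-2:], 16)
    calc = intel_hex_checksum(rec)
    status = "OK" if stored == calc else "MISMATCH"
    print(f"stored={stored:02X} calculated={calc:02X} {status}")
```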
Is it possible that the last line contains some kind of false positive that confuses certain software into believing 7F is the correct value? If so, how should I rearrange the last line of my hex file to fix it?
AFAICT your checksums are correct. Based on this and some other records I've fed to it, the simulator seems to have an off-by-one bug on some inputs.
You have several options, including:
- edit the simulator's t8051m.ini file and set IgnoreChecksum to 1
- humour the simulator by editing your hex records to carry the checksums it expects instead of the correct checksums (see the example after this list)
- disassemble the simulator's .exe file, find the bug, and fix it. (Before you go to those lengths, you could try reporting the problem to the author. It might be a known issue with a fix that the author just never got around to publishing on his website.)
- use some other simulator
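As an illustration of the second option (a sketch only: this record deliberately carries the value the simulator expects, 7F, rather than the mathematically correct 7E), your last line would become:

```
:1001050060FFE56233F562E56133F561E56033F57F
```

Keep in mind that any other tool that validates checksums will then flag this line as corrupt, so this is purely a workaround for the buggy simulator.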