I am designing a protocol with fixed-size header and payload fields, and I want to add a CRC check to the design.
When adding a CRC value at the end of a packet, there are two ways of checking packet integrity at the receiver side (both sketched below):

1. Generating the CRC value from the payload section (the same section the sender generates the CRC from) and comparing it with the received value.
2. Dividing the whole packet by a predefined polynomial and checking for a zero remainder.
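For concreteness, here is a minimal sketch of both checks. CRC-32 from zlib, a 4-byte little-endian CRC field, and the CRC covering everything before the CRC field are illustrative assumptions, not part of the actual design:

```python
import struct
import zlib

def build_packet(header: bytes, payload: bytes) -> bytes:
    """Sender: append the CRC of header+payload, little-endian
    (CRC-32 is a reflected CRC, so its bytes go out LSB first)."""
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack("<I", crc)

def check_option_1(packet: bytes) -> bool:
    """Option 1: recompute the CRC over the covered bytes and compare
    it with the received CRC field."""
    covered, received = packet[:-4], packet[-4:]
    return struct.pack("<I", zlib.crc32(covered)) == received

# The constant option 2 compares against: running the CRC over
# "covered bytes + CRC field" always lands on the same value, but for a
# CRC with a final XOR (like CRC-32) that value is not zero.  Computing
# it from the empty message avoids hard-coding a magic number.
RESIDUE = zlib.crc32(struct.pack("<I", zlib.crc32(b"")))

def check_option_2(packet: bytes) -> bool:
    """Option 2: divide the whole packet (including the CRC field) by
    the polynomial and check for the fixed remainder constant."""
    return zlib.crc32(packet) == RESIDUE

pkt = build_packet(b"\x01\x02", b"hello world")
print(check_option_1(pkt), check_option_2(pkt))    # True True
bad = pkt[:3] + bytes([pkt[3] ^ 0x01]) + pkt[4:]   # flip one payload bit
print(check_option_1(bad), check_option_2(bad))    # False False
```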
Both methods work well if we use fixed initial values for the CRC generator, so which is the correct way, and why?
P.S.: I mean the correct way, not the best one. Both solutions work, but I want to know whether each of them behaves as we expect in every situation. For example, with an all-zero payload the CRC remainder is zero, so the packet may be considered valid ... but it's not. So asking for the correct way doesn't call for an opinion-based answer.
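To make the all-zeros concern concrete: with a zero initial value and no final XOR, an all-zero payload really does divide out to zero, so a dead link that delivers nothing but zero bytes would still pass the check. A bare-bones bit-by-bit CRC-32 with those (deliberately bad) parameters, purely for illustration:

```python
def crc32_zero_init(data: bytes, poly: int = 0xEDB88320) -> int:
    """Bit-by-bit reflected CRC-32 with init=0 and no final XOR --
    chosen only to show the all-zeros problem."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (poly if crc & 1 else 0)
    return crc

print(hex(crc32_zero_init(b"\x00" * 16)))  # 0x0 -- an all-zero payload "checks out"
print(hex(crc32_zero_init(b"hello")))      # some non-zero value
```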
The correct way is the way that works. Either of your options can work, though take care about what to expect if you use the division method: the remainder is not necessarily zero.
I prefer your option #1, since a) calculating the CRC over fewer bytes takes less time, and b) you could replace the CRC with some other check without having to change the little bit of verification code you are asking about.
Update for clarified question:
Yes, both approaches will work identically in all cases. As mentioned, the result of calculating the CRC over the message + CRC is not always zero; it depends on the definition of the CRC. However, it is always the same constant for a given CRC when the message and CRC arrive without errors. That is assuming you feed the CRC bits in the correct order, e.g. the bytes of the CRC in little-endian order for a reflected CRC and big-endian order for a non-reflected CRC.
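A quick way to see that constant and the byte-order requirement is to append the CRC to a few different messages and run the CRC over the result. A sketch with zlib's CRC-32, which is reflected, so its bytes are appended little-endian:

```python
import struct
import zlib

for msg in (b"hello", b"123456789", bytes(100)):
    crc = zlib.crc32(msg)
    ok_le = zlib.crc32(msg + struct.pack("<I", crc))   # CRC appended LSB first
    bad_be = zlib.crc32(msg + struct.pack(">I", crc))  # CRC appended MSB first
    print(hex(ok_le), hex(bad_be))
# The first column is the same non-zero constant for every message;
# the second column varies, so appending the bytes big-endian would
# make a fixed-constant check fail on good packets.
```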