I was studying error detection in computer networks and came to know about the following methods - single-bit parity check, 2D parity check, checksum and CRC.
But after studying only a bit (pun intended), I came across cases where they fail.
The methods fail when -
Single-bit parity check - fails if an even number of bits is inverted.
2D parity check - fails if bits are inverted at the same positions in an even number of rows (e.g. four flipped bits forming a rectangle), because every row and column parity then stays the same. (Both cases are demonstrated in the first sketch after the quoted list below.)
Checksum - adding a word of all zeros does not change the result, and neither does reordering the words (e.g. in the data 10101010 11110000 11001100 10111001, inserting an all-zero word or swapping any of the four words leaves the checksum unchanged; see the checksum sketch below).
CRC - An n-bit CRC with generator g(x) = (x+1)*p(x) can detect:
All burst errors of length less than or equal to n.
All burst errors affecting an odd number of bits.
All burst errors of length equal to n + 1 with probability (2^(n-1) − 1)/2^(n-1).
All burst errors of length greater than n + 1 with probability (2^n − 1)/2^n.
[The CRC-32 polynomial will detect all burst errors of length greater than 33 with probability (2^32 − 1)/2^32; this is equivalent to a 99.99999998% accuracy rate.]
Copied from here - https://stackoverflow.com/a/65718709/16778741
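To make the first two failure modes concrete, here is a minimal sketch in plain Python (the bit patterns and helper names are made up for the demo, not any standard API) showing that an even number of flipped bits slips past a single parity bit, and that flips forming a rectangle slip past 2D parity:

    def parity(bits: str) -> int:
        """Even-parity check bit: the count of 1s modulo 2."""
        return bits.count("1") % 2

    def flip(bits: str, positions) -> str:
        """Invert the bits at the given indices."""
        out = list(bits)
        for p in positions:
            out[p] = "1" if out[p] == "0" else "0"
        return "".join(out)

    # Single-bit parity: two (an even number of) inverted bits go unnoticed.
    word = "10101010"
    damaged = flip(word, [0, 1])
    print(parity(word) == parity(damaged))        # True -> error undetected

    # 2D parity: four inverted bits forming a rectangle go unnoticed, because
    # every affected row and column still sees an even number of flips.
    block = ["1010",
             "0110",
             "1001"]
    row_parity = [parity(r) for r in block]
    col_parity = [parity("".join(r[c] for r in block)) for c in range(4)]

    damaged_block = [flip(block[0], [0, 1]),      # rows 0 and 1,
                     flip(block[1], [0, 1]),      # columns 0 and 1
                     block[2]]
    print([parity(r) for r in damaged_block] == row_parity)    # True -> undetected
    print([parity("".join(r[c] for r in damaged_block)) for c in range(4)] == col_parity)  # True -> undetected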
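The checksum blind spots can be shown the same way. Below is a sketch of a ones'-complement checksum - the same idea as the Internet checksum, shrunk to 8-bit words so it matches the example data; checksum8 is just a name for the demo:

    def checksum8(words):
        """Ones'-complement sum of 8-bit words, complemented at the end."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFF) + (total >> 8)   # wrap the carry back in
        return (~total) & 0xFF

    data = [0b10101010, 0b11110000, 0b11001100, 0b10111001]

    # An extra all-zero word and a reordering both leave the checksum unchanged.
    print(checksum8(data) == checksum8(data + [0b00000000]))    # True -> undetected
    print(checksum8(data) == checksum8(list(reversed(data))))   # True -> undetected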
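For comparison, a 32-bit CRC (Python's built-in zlib.crc32 here) catches all of the patterns above; its remaining weakness is the probabilistic one quoted for bursts longer than 32 bits:

    import zlib

    data = bytes([0b10101010, 0b11110000, 0b11001100, 0b10111001])

    two_bit_flip = bytes([data[0] ^ 0b11000000]) + data[1:]   # even number of flips
    zero_added   = data + bytes([0b00000000])                 # extra all-zero word
    reordered    = bytes(reversed(data))                      # words out of order

    print(zlib.crc32(data) != zlib.crc32(two_bit_flip))   # True -> detected
    print(zlib.crc32(data) != zlib.crc32(zero_added))     # True -> detected
    print(zlib.crc32(data) != zlib.crc32(reordered))      # True -> detected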
As we can see, these methods fail because of some very obvious shortcomings.
So my question is - why were these shortcomings still allowed and not rectified, and what do we use these days?
It's like the people who made them forgot to cross-check.
It is a tradeoff between effort and risk. The more redundant bits are added, the smaller the risk of undetected error.
Extra bits mean additional memory or network bandwidth consumption. How much additional effort is justified depends on the application. More complicated checksums add some computational overhead as well.
Modern checksum or hash functions can drive the remaining risk down to levels that are tolerable for the vast majority of applications.
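As a rough illustration of that tradeoff, using Python's standard zlib and hashlib on the example data from the question: a CRC-32 spends 32 check bits and very little CPU, while a cryptographic hash such as SHA-256 spends 256 check bits and more CPU for a far smaller residual risk:

    import hashlib
    import zlib

    payload = bytes([0b10101010, 0b11110000, 0b11001100, 0b10111001])

    crc = zlib.crc32(payload)                   # 32 redundant bits, very cheap
    digest = hashlib.sha256(payload).digest()   # 256 redundant bits, more CPU

    print(f"CRC-32 : {crc:#010x}  (32 check bits)")
    print(f"SHA-256: {digest.hex()}  ({len(digest) * 8} check bits)")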