CD-ROM data carries a third layer of protection: Reed-Solomon error correction and an EDC based on a 32-bit CRC polynomial.
The ECMA-130 standard defines the EDC CRC polynomial as follows (section 14.3, page 16):
P(X) = (x^16 + x^15 + x^2 + 1)(x^16 + x^2 + x + 1)
and
The least significant bit of a data byte is used first.
Usually, translating the polynomial into its integer form is pretty straightforward. Using modulo-2 math, the expanded polynomial should be
P(X) = x^32 + x^31 + x^18 + x^17 + x^16 + x^15 + x^4 + x^3 + x^2 + x + 1
thus the value would be 0x8007801F.
If I understand the last sentence correctly, it means the polynomial is used in reversed (reflected) bit order.
But I haven't managed to get the right value so far. The cdrtools source code uses 0x08001801 as the polynomial value. Can someone explain how they found that value?
Posting the answer:
First, I made a mistake in the modulo-2 algebra used to expand the polynomial. The non-modulo expanded form is:
P(X) = x^32 + x^31 + 2x^18 + 2x^17 + 3x^16 + x^15 + x^4 + x^3 + 2x^2 + x + 1
In modulo-2 algebra, every even coefficient becomes 0 and every odd one becomes 1, so the final expanded polynomial is:
P(X) = x^32 + x^31 + x^16 + x^15 + x^4 + x^3 + x + 1
So the actual value is 0x8001801B.
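A quick way to double-check the expansion is to carry out the multiplication in GF(2) directly, i.e. with a carry-less multiply. Here is a minimal C sketch (the helper gf2_mul is mine, not from cdrtools) that multiplies the two factors as bitmasks and prints the product:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Multiply two GF(2) polynomials given as bitmasks (carry-less multiply). */
    static uint64_t gf2_mul(uint64_t a, uint64_t b)
    {
        uint64_t r = 0;
        while (b) {
            if (b & 1)     /* if this term of b is present...          */
                r ^= a;    /* ...add (XOR) the shifted copy of a mod 2 */
            a <<= 1;
            b >>= 1;
        }
        return r;
    }

    int main(void)
    {
        /* x^16 + x^15 + x^2 + 1 -> 0x18005,  x^16 + x^2 + x + 1 -> 0x10007 */
        uint64_t p = gf2_mul(0x18005, 0x10007);

        printf("P(X)  = 0x%" PRIX64 "\n", p);                /* 0x18001801B */
        printf("value = 0x%08" PRIX64 "\n", p & 0xFFFFFFFF); /* 0x8001801B  */
        return 0;
    }

Masking off the implicit x^32 term of the 33-bit product leaves exactly 0x8001801B.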
Second, I misread the cdrtools source: their value is 0x8001801B too.
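For completeness: "the least significant bit of a data byte is used first" is what makes the polynomial reversed in practice. A CRC that consumes bits LSB-first shifts its register right and XORs with the bit-reversed constant, and reversing the 32 bits of 0x8001801B gives 0xD8018001. Here is a minimal, unoptimized sketch of such an EDC computation (assuming a zero initial register and no final XOR; the function name is mine):

    #include <stdint.h>
    #include <stddef.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Bitwise, LSB-first CRC: shift right, XOR with the reflected constant. */
    static uint32_t edc_compute(const uint8_t *buf, size_t len)
    {
        uint32_t edc = 0;    /* assumed zero initial register */

        while (len--) {
            edc ^= *buf++;   /* LSB of the byte lines up with bit 0 */
            for (int bit = 0; bit < 8; bit++)
                edc = (edc >> 1) ^ ((edc & 1) ? 0xD8018001u : 0);
        }
        return edc;          /* no final XOR */
    }

    int main(void)
    {
        const uint8_t msg[] = { 0x00, 0xFF, 0x18, 0x01 }; /* arbitrary test bytes */
        printf("EDC = 0x%08" PRIX32 "\n", edc_compute(msg, sizeof msg));
        return 0;
    }

Real implementations (cdrtools included) typically precompute a 256-entry lookup table from the same constant instead of looping over individual bits.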