I ran a test on my graphing calculator to check for floating-point error, and after forty-eight hours of complete and utter randomness, the calculator had not lost a single digit of precision.
How does TI pull this off?
The TI-89 and TI-92 avoid error by using symbolic computation to store values exactly.
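As a rough analogy (mine, written in Python, not anything from TI), exact mode works like rational arithmetic: 1/3 is kept as a fraction rather than rounded, so the cancellation really is exact.

```python
from fractions import Fraction

# Exact rational arithmetic: 1/3 is stored as the fraction 1/3, so nothing
# is ever rounded and the result is exactly 0.
print(Fraction(1, 3) * 3 - 1)   # prints 0
```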
Actual floating-point computations ("approx" mode on the 89/92) do have errors. They're just harder to notice because the TI calculators display fewer digits than they store. Also, they use decimal instead of binary.
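You can see the display-hiding effect with ordinary binary doubles on a PC; this is just an illustration I put together in Python, not how the TI works internally:

```python
# Binary double precision: the rounding error is stored, but a display that
# shows fewer digits than are stored rounds it away.
x = 0.1 + 0.2

print(repr(x))            # 0.30000000000000004  (everything the machine stores)
print(format(x, ".12g"))  # 0.3                  (12 significant digits, like a short display)
```

The TI hides its errors the same way, just in base 10.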
For example, if you enter the expression 1/3*3-1 on a TI-89 in "approx" mode, you get the answer ⁻1.ᴇ⁻14 instead of the 0 you get in exact mode. Internally, the calculation is done as follows:

1/3 gives 0.33333333333333, rounded to 14 significant digits.
0.33333333333333 * 3 gives 0.99999999999999. Because of rounding, this displays as 1.
0.99999999999999 - 1 gives -0.00000000000001, or -1e-14.
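You can reproduce those three steps on a PC with Python's decimal module set to 14 significant digits. This is only a sketch of the rounding described above (the 14-digit precision is taken from the steps, not from TI documentation of the firmware):

```python
from decimal import Decimal, getcontext

# Assume 14-significant-digit decimal arithmetic, as in the steps above.
getcontext().prec = 14

third = Decimal(1) / Decimal(3)   # 0.33333333333333
prod = third * 3                  # 0.99999999999999
result = prod - 1                 # -1E-14

print(third, prod, result)
```

Running it prints 0.33333333333333, 0.99999999999999 and -1E-14, matching the calculator.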