I'm studying C, and the idea of guard digits and rounding errors came up. Do practitioners of scripting languages (I'm thinking of Python and Perl here) need to worry about this stuff? What if they are doing scientific programming?
I would have to disagree with Lutz... While the rounding errors you mentioned do exist in Python/Perl/Ruby, they have absolutely nothing to do with the languages being implemented in C. The problem goes deeper than that.
Floating-point numbers, like all data, are represented in binary on modern computers. Just as there are numbers with periodic decimal representations (e.g., 1/3 = 0.333333...), there are also numbers with periodic binary representations (e.g., 1/10 = 0.0001100110011...). Since these numbers cannot be exactly represented in (a finite amount of) computer memory, any calculations involving them will introduce error.
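You can see this directly in Python (a quick interactive session; Perl and Ruby behave the same way, since all three typically store floats as the hardware's IEEE 754 doubles):

```python
>>> 0.1 + 0.2            # neither 0.1 nor 0.2 is exactly representable
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False
>>> from decimal import Decimal
>>> Decimal(0.1)         # the exact binary value actually stored for 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')
```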
This can be worked around by using high-precision math libraries, which represent numbers either as exact fractions (e.g., numerator = 1, denominator = 10) or as decimal digit strings, instead of using the native binary representation. However, because of the extra work involved in every calculation on numbers stored this way, these libraries necessarily slow down any math that has to go through them.
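As a sketch of both approaches, Python's standard library happens to ship one of each: `fractions` (exact numerator/denominator pairs) and `decimal` (base-10 digits). Nothing beyond the standard library is assumed here:

```python
from fractions import Fraction
from decimal import Decimal

# Exact rational arithmetic: 1/10 is stored as the integer pair (1, 10),
# so repeated addition accumulates no error at all.
total = sum([Fraction(1, 10)] * 10)
print(total)        # 1
print(total == 1)   # True

# Decimal arithmetic: the value is kept in base 10. Construct it from a
# string, or the binary error of the float literal comes along for the ride.
d = sum([Decimal("0.1")] * 10)
print(d)            # 1.0
print(d == 1)       # True

# Compare with native binary floats:
f = sum([0.1] * 10)
print(f)            # 0.9999999999999999
print(f == 1)       # False
```

The trade-off is the one described above: `Fraction` and `Decimal` arithmetic is typically many times slower than native floats, which is why it's opt-in rather than the default.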