Do you have any examples where writing out the expression for a number may lead to floating point errors, while using scientific notation does not?
For example, I always give some examples where scientific notation is useful in engineering, such as specifying the Young's modulus of steel and the diameter of a bar:
E = 210e9; % Pa
d = 10e-3; % m
Some students, however, do not fully understand the E notation, so many of them declare the variables as:
E = 210*10^9; % Pa
d = 10*10^-3; % m
Although I explain that the E notation is easier to read, faster to type, etc., some students still specify very large or very small values by multiplying by 10 raised to the nth power. I also try to explain that declaring the variables this way involves evaluating a power and a multiplication, which may introduce floating point arithmetic errors.
Do you know of any examples where declaring the variables this way, instead of using the E notation, leads to an erroneous value?
A small example might help your students. E.g., 0.3 is a simple case that showed the same discrepancy for me in MATLAB, Python, Java, and C. Java and C are not directly part of this discussion since you have to call a raise-to-power function instead of using a direct operator, but I wanted to see if their library code produced the same results, and it did. Regardless, here is the MATLAB demo:
>> x1 = 0.3;
>> x2 = 3e-1;
>> x3 = 3*10^(-1);
>> x1 == x2
ans =
logical
1
>> x2 == x3
ans =
logical
0
>> format hex
>> x1
x1 =
3fd3333333333333
>> x2
x2 =
3fd3333333333333
>> x3
x3 =
3fd3333333333334
>> double(sym('0.3')) % best possible result using symbolic engine and converting
ans =
3fd3333333333333
>> 0.3 == 300000000000000e-15 % a somewhat absurd case but still matches
ans =
logical
1
>> 0.3 == 0.0000000000000003e15 % another absurd case but still matches
ans =
logical
1
The syntax where the parser read the 3e-1 notation got the best result. A slightly less accurate result, off by 1 bit, was obtained with the 3*10^(-1) notation. I imagine most, if not all, modern languages will always give you the closest possible IEEE double precision bit pattern to the decimal string when you use the e notation, whether the parsing happens at compile time as in Java or C, or dynamically as in MATLAB and Python. Library functions like raise-to-power are not, in general, required by language specs to produce the closest possible floating point bit pattern, so this alone is probably reason enough to steer your students toward the e notation. Combined with the subsequent multiplication by another value, it can easily produce differences like the one shown here.
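If you want more examples to show your students, here is a minimal sketch (my own addition; the mantissas and exponent range are arbitrary choices) that hunts for cases where the parsed e-notation value and the explicit m*10^k evaluation disagree:
mantissas = [3 7 123 0.3];   % arbitrary example mantissas
exponents = -20:20;          % arbitrary exponent range
for m = mantissas
    for k = exponents
        viaE   = str2double(sprintf('%ge%d', m, k)); % let the parser handle the decimal string
        viaPow = m * 10^k;                           % explicit power and multiplication
        if viaE ~= viaPow
            fprintf('%g*10^%d differs from %ge%d by %g\n', m, k, m, k, viaPow - viaE);
        end
    end
end
I am assuming here that str2double rounds the decimal string the same way a literal in the source code would; if you prefer, you can replace that line with hard-coded e-notation literals for specific cases.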
I would hesitate to call the 10^(etc.) notation "erroneous" as you suggest, however. That is probably too strong a word to use, since floating point calculations in general typically can't be trusted in the trailing bits anyway. But since the e notation is very likely to give you the result closest to the intended value, why lose accuracy (even if it is only a bit or so) with the 10^(etc.) notation if you don't have to?
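To give students a feel for just how small that 1-bit discrepancy is, you can measure it directly (my addition, continuing from the variables in the demo above, after switching format back from hex):
format short
x3 - x2                 % absolute difference, 2^-54, about 5.55e-17
(x3 - x2) == eps(x2)    % true: x3 and x2 differ by exactly one unit in the last place
(x3 - x2) / x2          % relative error, on the order of eps, i.e. about 1e-16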
A possible way to handle this: Dock point(s) on assignments/tests when students use the 10^(etc.) syntax, but allow them to change the code and turn it in again to get the point(s) back. That forces them to think about it, correct their code, and learn, without being overly punitive.