I have numbers in a file (so, as strings) in scientific notation, like:
8.99284722486562e-02
but I want to convert them to:
0.08992847
Is there any built-in function or any other way to do it?
I'm writing this answer because the top-voted one contains misinformation, and this way I can explain my improvements.
TL;DR: Use ("%.17f" % n).rstrip('0').rstrip('.')
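Applied to the question's input (read from the file as a string, so we convert it to a float first), a quick sketch:

n = float("8.99284722486562e-02")
print(("%.17f" % n).rstrip('0').rstrip('.'))  # 0.0899284722486562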
By default, Python formats a float to scientific notation when there are 5 or more zeroes at the beginning (i.e. the exponent is -5 or lower).

0.00001 / 1e-05 formats to "1e-05".
0.0001 / 1e-04 formats to "0.0001".

So of course 8.99284722486562e-02 will already format to "0.0899284722486562". A better example would have been 8.99284722486562e-05 (0.00008992847224866).
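A quick way to verify the cutoff yourself, since str gives the default formatting:

print(str(0.00001))               # 1e-05  (5 leading zeroes: switches to scientific)
print(str(0.0001))                # 0.0001 (4 leading zeroes: stays positional)
print(str(8.99284722486562e-02))  # 0.0899284722486562
print(str(8.99284722486562e-05))  # 8.99284722486562e-05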
We can easily format to raw decimal places with "%f", which is the same as "%.6f" by default.

"%f" % 8.99284722486562e-05 produces '0.000090'.
"%f" % 0.01 produces '0.010000'.
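A small sketch showing that the default and an explicit .6 are interchangeable, and that the precision is yours to choose:

print("%f" % 8.99284722486562e-05)     # 0.000090
print("%.6f" % 8.99284722486562e-05)   # 0.000090 (identical to the default)
print("%.10f" % 8.99284722486562e-05)  # 0.0000899285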
By default, floats display up to 17 significant digits.

0.1234567898765432123 (19-digit input)
0.12345678987654321 (17-digit output)

So if we did "%.17f" % 8.99284722486562e-02, we'd get '0.08992847224865620' (note the extra 0). But if we did "%.17f" % 0.0001, we surely wouldn't want '0.00010000000000000'.
So to remove the trailing zeroes we can do: ("%.17f" % n).rstrip('0').rstrip('.')
(Notice we also strip the decimal point, in case the number has no fraction left.)
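Wrapped up as a small helper (float_to_str is just an illustrative name, not a built-in):

def float_to_str(n):
    # 17 decimal places, then strip trailing zeroes and a dangling point.
    return ("%.17f" % n).rstrip('0').rstrip('.')

print(float_to_str(8.99284722486562e-02))  # 0.0899284722486562
print(float_to_str(0.0001))                # 0.0001
print(float_to_str(5.0))                   # 5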
Also, there are counterparts to %f:

%f shows standard notation
%e shows scientific notation
%g shows the default (scientific if 5 or more zeroes)
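Side by side on the same value, at each specifier's default precision:

n = 8.99284722486562e-05
print("%f" % n)  # 0.000090
print("%e" % n)  # 8.992847e-05
print("%g" % n)  # 8.99285e-05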
Extra testing of Python's floating-point accuracy:

Here we can see our 19-digit float cannot be represented exactly; the trailing 23 becomes 03.
"%.19f" % 0.1234567898765432123 # '0.1234567898765432103'
If we increase our value, we'll see the two binary representations we're in between:
"%.19f" % 0.1234567898765432172 # '0.1234567898765432103' (172-69)
"%.19f" % 0.1234567898765432173 # '0.1234567898765432242' (173+69)
So when a value is stored in binary, it becomes the limited binary representation it's closest to.
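If you want to see that stored value exactly rather than a rounded rendering, the standard-library decimal module can expand it (Decimal of a float shows the binary value's exact decimal form):

from decimal import Decimal

# The exact decimal expansion of the double chosen for our 19-digit input;
# it begins with the 40 digits shown below and continues further.
print(Decimal(0.1234567898765432123))
# 0.1234567898765432103491690440932870842516...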
Let's test more digits for our input. We'll average the two representations.
"%.40f" % 0.1234567898765432172 # '0.1234567898765432103491690440932870842516'
"%.40f" % 0.1234567898765432173 # '0.1234567898765432242269568519077438395470'
a = 1234567898765432103491690440932870842516 # 103...
b = 1234567898765432242269568519077438395470 # 242...
(a+b)>>1 # 1234567898765432172880629480005154618993
# NOTE: doing (a+b)/2 would become a float and lose its accuracy. Use bit shift.
# Now let's test the average (as a float) and increment the last digit
"%.40f" % 0.1234567898765432172880629480005154618993 # '...103491690440932870842516'
"%.40f" % 0.1234567898765432172880629480005154618994 # '...242269568519077438395470'
So as we can see, the input can have as many digits as needed to act as the deciding factor, but it can still be reduced down to the necessary amount of digits, just like we strip away unnecessary zeroes. So those values could be reduced to 0.12345678987654321 and 0.12345678987654322.
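This reduction is exactly what repr does; since Python 3.1 it prints the shortest string that round-trips to the same float:

print(repr(0.1234567898765432103491690440932870842516))  # 0.12345678987654321
print(repr(0.1234567898765432242269568519077438395470))  # 0.12345678987654322
print(0.12345678987654321 == 0.1234567898765432123)      # True (same double)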
Also, now you know why one of these rounds down while the other rounds up:
"%.17f"%0.123456789876543217 # '0.12345678987654321'
"%.17f"%0.123456789876543218 # '0.12345678987654322'