First, I'm using this approximation of the natural log; see here (4.1.27) for a better rendering of the formula.
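For reference, here's the series as I understand it from that entry (formula 4.1.27), written out in LaTeX:

```latex
\ln z = 2\left[\frac{z-1}{z+1}
      + \frac{1}{3}\left(\frac{z-1}{z+1}\right)^{3}
      + \frac{1}{5}\left(\frac{z-1}{z+1}\right)^{5}
      + \cdots\right],
\qquad \operatorname{Re}\, z \ge 0,\; z \ne 0
```

So each odd power of (z-1)/(z+1) is divided by its own exponent: 1, 3, 5, 7, ...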
Here's my implementation:
#include <cstddef>
#include <limits>
#include <stdexcept>
#include <string>

using ull = unsigned long long;

constexpr double eps = 1e-12;

constexpr double my_exp(const double& power)
{
    double numerator = 1;
    ull denominator = 1;
    size_t count = 1;
    double term = numerator / denominator;
    double sum = 0;
    while (count < 20)
    {
        sum += term;
        numerator *= power;
#ifdef _DEBUG
        if (denominator > std::numeric_limits<ull>::max() / count)
            throw std::overflow_error("Denominator has overflown at count " + std::to_string(count));
#endif // _DEBUG
        denominator *= count++;
        term = numerator / denominator;
    }
    return sum;
}
constexpr double E = my_exp(1);
constexpr double my_log(const double& num)
{
    if (num < 1)
        return my_log(num * E) - 1;
    else if (num > E)
        return my_log(num / E) + 1;
    else
    {
        double s = 0;
        size_t tmp_odd = 1;
        double tmp = (num - 1) / (num + 1);
        double mul = tmp * tmp;
        while (tmp >= eps)
        {
            s += tmp;
            tmp_odd += 2;
            tmp *= mul / tmp_odd;
        }
        return 2 * s;
    }
}
You can probably see why I want to implement these functions: I want to build a pow function on top of them. But my approach gives rather imprecise answers; for example, my_log(10) = 2.30256, whereas according to Google ln 10 ≈ 2.30259.
my_exp() is very precise, since its Taylor expansion converges quickly: my_exp(1) = 2.718281828459, and e^1 = 2.71828182846 according to Google. Unfortunately, the same isn't true for the natural log. I also don't know how this series for the natural log is derived (I mean the one from the links above), and I couldn't find any source for it.
Where are the precision errors coming from?
The line tmp *= mul / tmp_odd;
means that each term is also being divided by the denominators of all previous terms, i.e. 1, 1*3, 1*3*5, 1*3*5*7, ...
rather than 1, 3, 5, 7, ...
as the formula states.
The numerator and denominator should therefore be computed independently:
double sum = 0;
double value = (num - 1) / (num + 1);
double mul = value * value;
size_t denom = 1;
double power = value;
double term = value;
while (term > eps)
{
    sum += term;
    power *= mul;
    denom += 2;
    term = power / denom;
}
return 2 * sum;
...
// Output for num = 1.5, eps = 1e-12
My func: 0.405465108108004513
Cmath log: 0.405465108108164385
------------
Much better!
Reducing the epsilon to 1e-18, we hit the accuracy limits of naïve summation:
// Output for num = 1.5, eps = 1e-18
My func: 0.40546510810816444
Cmath log: 0.405465108108164385
---------------
Kahan-Neumaier to the rescue:
double sum = 0;
double error = 0;
double value = (num - 1) / (num + 1);
double mul = value * value;
size_t denom = 1;
double power = value;
double term = value;
while (term > eps)
{
    double temp = sum + term;
    if (std::abs(sum) >= std::abs(term))
        error += (sum - temp) + term;
    else
        error += (term - temp) + sum;
    sum = temp;
    power *= mul;
    denom += 2;
    term = power / denom;
}
return 2 * (sum + error);
...
// Output for num = 1.5, eps = 1e-18
My func: 0.405465108108164385
Cmath log: 0.405465108108164385