I'm trying to approximate a function using the Fourier series. As I increase N, the approximation initially gets closer to the given function, as expected. However, after a certain point, instead of continuing to converge, the approximation starts behaving erratically and becomes more distorted.
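For reference, the partial sum my code is meant to compute is the standard real Fourier series on [-pi, pi]:

S_N(x) = \frac{A_0}{2} + \sum_{n=1}^{N} \left( A_n \cos(nx) + B_n \sin(nx) \right),
\qquad
A_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx,
\quad
B_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx.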
import numpy as np
from scipy.integrate import quad
def my_fourier_coef(f, n):
    # Compute the An coefficient
    integrand_a = lambda x: f(x) * np.cos(n * x)
    an_integral, _ = quad(integrand_a, -np.pi, np.pi)
    An = an_integral / np.pi
    # Compute the Bn coefficient
    integrand_b = lambda x: f(x) * np.sin(n * x)
    bn_integral, _ = quad(integrand_b, -np.pi, np.pi)
    Bn = bn_integral / np.pi
    return [An, Bn]
# Example test case (as provided in the problem statement)
def plot_results(f, N):
    import matplotlib.pyplot as plt
    x = np.linspace(-np.pi, np.pi, 10000)
    [A0, B0] = my_fourier_coef(f, 0)
    y = A0 * np.ones(len(x)) / 2
    for n in range(1, N):
        [An, Bn] = my_fourier_coef(f, n)
        y += An * np.cos(n * x) + Bn * np.sin(n * x)
    plt.figure(figsize=(10, 6))
    plt.plot(x, list(map(f, x)), label="analytic")
    plt.plot(x, y, label="approximate")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.grid()
    plt.legend()
    plt.title(f"{N}th Order Fourier Approximation")
    plt.show()
f = lambda x: round(x)
N = 10
plot_results(f, N)
When N=10 the approximation is fine, if not great: the curve follows the given function fairly closely. When N=1000 I expected the graph to overlap the given function, but instead it becomes very distorted and random-looking, and I cannot work out why. Can someone please explain what is happening here and how to make it work?
When N is high, the integrand oscillates too quickly to be resolved with the default maximum number of subintervals (limit=50) that quad() uses. You can increase the limit parameter in the call to quad:
an_integral, _ = quad(integrand_a, -np.pi, np.pi, limit=1000)
bn_integral, _ = quad(integrand_b, -np.pi, np.pi, limit=1000)
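If you need many coefficients, repeatedly running adaptive quadrature also gets slow. A rough alternative (just a sketch; the helper name is mine) is to sample f once on a dense grid and integrate with a fixed trapezoid rule, making sure the grid is much finer than the fastest oscillation, i.e. well above 2*N points over [-pi, pi]:

import numpy as np
from scipy.integrate import trapezoid

def fourier_coefs_sampled(f, N, num_samples=20001):
    # Sample f once; round() is not vectorized, so evaluate it point by point.
    x = np.linspace(-np.pi, np.pi, num_samples)
    fx = np.array([f(xi) for xi in x])
    coefs = []
    for n in range(N + 1):
        # Same formulas as in the question, but with a fixed-grid trapezoid rule.
        An = trapezoid(fx * np.cos(n * x), x) / np.pi
        Bn = trapezoid(fx * np.sin(n * x), x) / np.pi
        coefs.append([An, Bn])
    return coefs  # coefs[n] == [An, Bn], same layout as my_fourier_coef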
At the end of the day, it isn't a great idea to rely on these very high-order fits, though. And you will always get those little undershoots and overshoots near the discontinuities in your function - that is the "Gibbs phenomenon".
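If the ringing near the jumps bothers you, one common mitigation (separate from the quadrature fix above, and only a sketch; the function name is mine) is Lanczos sigma smoothing, which damps the highest-order terms. It reuses my_fourier_coef from the question:

def sigma_smoothed_partial_sum(f, N, x):
    # Same partial sum as in the question, but each term is scaled by the
    # Lanczos sigma factor sinc(n/N), which suppresses the Gibbs ringing.
    A0, _ = my_fourier_coef(f, 0)
    y = A0 * np.ones_like(x) / 2
    for n in range(1, N):
        An, Bn = my_fourier_coef(f, n)
        sigma = np.sinc(n / N)  # np.sinc(t) = sin(pi*t) / (pi*t)
        y += sigma * (An * np.cos(n * x) + Bn * np.sin(n * x))
    return y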