I'm trying to calculate the Signal-to-Quantization-Noise Ratio (SQNR) between a floating-point signal (fp) and its quantized fixed-point version (FP) using NumPy. The formula I'm using is:
SQNR = 10 * log10(fp^2 / (fp - FP)^2)
My signals are NumPy arrays with shape (100, 300, 125). I'm iterating through the first dimension and calculating the SQNR for each (300, 125) slice. I've tried three different approaches, but I'm unsure which one correctly implements the formula.
Here's a minimal reproducible example:
import numpy as np
import math

fp = np.random.rand(100, 300, 125)
FP = fp + np.random.normal(0, 0.01, (100, 300, 125))  # simulate quantization noise

SQNR = np.zeros(100)
for i in range(100):
    # Approach 1
    # SQNR[i] = 10 * math.log10((fp[i]**2).sum() / ((fp[i] - FP[i])**2).sum())
    # Approach 2
    # SQNR[i] = 10 * math.log10(np.mean(fp[i])**2 / np.mean(fp[i] - FP[i])**2)
    # Approach 3
    SQNR[i] = 10 * math.log10(np.mean(fp[i]**2) / np.mean((fp[i] - FP[i])**2))

print(np.mean(SQNR))
I'm looking for a clear explanation of which approach aligns with the formula, and whether there is a more efficient, vectorized NumPy approach that calculates the SQNR without a loop.
Which one is correct depends on how you read the formula. Approach 1 takes the ratio of total signal energy to total error energy; Approach 3 takes the ratio of average signal power to average noise power. In fact, they give identical SQNR values: both numerator and denominator are averaged over the same number of samples, so the 1/N factors cancel in the ratio. Approach 2 is not correct, because it squares the mean instead of taking the mean of the squares, and the square of the mean is not the same as the mean of the squared values.
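You can verify the equivalence numerically. Here is a minimal sketch, reusing the fp and FP arrays from your example (the slice index i = 0 is arbitrary):

i = 0  # any slice works

# Approach 1: ratio of total energies
a1 = 10 * np.log10((fp[i]**2).sum() / ((fp[i] - FP[i])**2).sum())
# Approach 3: ratio of average powers; the 1/N factors cancel
a3 = 10 * np.log10(np.mean(fp[i]**2) / np.mean((fp[i] - FP[i])**2))
# Approach 2: square of the mean, a different quantity entirely
a2 = 10 * np.log10(np.mean(fp[i])**2 / np.mean(fp[i] - FP[i])**2)

print(np.isclose(a1, a3))  # True: approaches 1 and 3 agree
print(a1, a2)              # approach 2 gives a very different number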
As for efficiency: NumPy's vectorized operations run on entire arrays at once in compiled code, which is far faster than iterating slice by slice in a Python loop and calling math.log10 on scalars. You can compute all 100 SQNR values in a single pass by reducing over the last two axes:
def calculate_sqnr(fp, FP):
    epsilon = 1e-10  # small constant guarding against zero noise power
    # Reduce over the (300, 125) axes, producing one value per slice
    noise_power = np.sum((fp - FP)**2, axis=(1, 2))
    signal_power = np.sum(fp**2, axis=(1, 2))
    # Switching *both* sums to np.mean gives the same SQNR, because
    # the factor of 300 * 125 cancels in the ratio
    return 10 * np.log10(signal_power / (noise_power + epsilon))

# Calculate SQNR for all 100 slices at once
SQNR = calculate_sqnr(fp, FP)
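To see the speed difference in practice, you can time the loop version against the vectorized one with timeit. This is just a measurement sketch, reusing fp, FP, and calculate_sqnr from above; exact numbers depend on your machine, but the vectorized version is typically an order of magnitude faster or more:

import math
import timeit

def loop_sqnr(fp, FP):
    out = np.zeros(fp.shape[0])
    for i in range(fp.shape[0]):
        out[i] = 10 * math.log10(np.mean(fp[i]**2) / np.mean((fp[i] - FP[i])**2))
    return out

print("loop:      ", timeit.timeit(lambda: loop_sqnr(fp, FP), number=10))
print("vectorized:", timeit.timeit(lambda: calculate_sqnr(fp, FP), number=10))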
Adding a small value like epsilon ensures that even if noise_power is exactly zero for some slice (i.e., the quantization is lossless there), you won't hit a division-by-zero warning or an infinite SQNR value.
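As a contrived sketch of why that matters: if a slice happens to quantize exactly, noise_power is zero for that slice, and without epsilon the division would emit a runtime warning and produce an infinite SQNR:

fp_exact = np.random.rand(2, 300, 125)
FP_exact = fp_exact.copy()   # first slice: zero quantization error
FP_exact[1] += 1e-3          # second slice: small uniform error

print(calculate_sqnr(fp_exact, FP_exact))
# The first entry is capped at 10*log10(signal_power / epsilon), large
# but finite; without epsilon it would be inf with a divide warning.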