I'm writing a simple program to determine the difference between two musical pitches in cents; one cent is equal to 1/100th of a semitone. Dealing in cents is preferable for comparing musical pitches because the frequency scale is logarithmic, not linear. In theory, this is an easy calculation: the formula for determining the number of cents between two frequencies is:
1200 * log2(pitch_a / pitch_b)
I've written a small piece of code to automate this process:
import numpy as np
import math
def cent_difference(pitch_a, pitch_b):
    # interval size in cents: 1200 * log2 of the frequency ratio
    cents = 1200 * np.abs(math.log2(pitch_a / pitch_b))
    return cents
This works perfectly when I give the program octaves:
In [28]: cent_difference(880, 440)
Out[28]: 1200.0
...but misses the mark by about two cents on a perfect fifth:
In [29]: cent_difference(660, 440)
Out[29]: 701.9550008653875
...and keeps getting worse as I go, missing by about 14 cents on a major third:
In [30]: cent_difference(550, 440)
Out[30]: 386.31371386483477
Is this all float precision nonsense? Why does the perfect 5th example overestimate the cents, but the major third example underestimate the cents? What's going on here?
Much obliged for any help!
The issue you're having isn't about the accuracy of Python's float type, but about the discrepancy between equal temperament and just intonation in music.
>>> cent_difference(660, 440)
701.9550008653874
Your test value of 660 Hz assumes that a perfect fifth is a frequency ratio of 3/2, which is the just-intonation value. In 12-tone equal temperament it isn't: an ET fifth has a ratio of 2^(7/12) ≈ 1.4983070768766815. With the proper ET frequency for the higher note, you do get the expected 700.
>>> cent_difference(659.2551138257398, 440)
700.0