How can you determine the best audio quality in a list of audio files of the same audio clip, without looking at the audio file's header? The tricky part is that all of the files came from different formats and bit rates, and they were all transcoded to the same format and bit rate. How can this be done efficiently?
Many of the answers outlined here refer to common audio measurements such as THD+N, SNR, etc. However, these do not always correlate well with human hearing of audio artifacts. Lossy audio compression techniques typically work by degrading those traditional measures (raising THD+N, lowering SNR), but aim to do so in ways that are difficult for the human ear to detect. A traditional audio measurement may find decreased SNR in a certain frequency band, but does that matter if there is so much energy in adjacent bands that no one would ever notice the difference?
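To make the contrast concrete, this is roughly what a plain, non-perceptual SNR measurement looks like. It is a minimal sketch assuming you have the reference and the transcoded clip decoded to time-aligned mono float arrays; the function name and array handling are illustrative, not from any particular library:

```python
import numpy as np

def snr_db(reference: np.ndarray, test: np.ndarray) -> float:
    """Plain SNR in dB, treating (test - reference) as the noise term."""
    n = min(len(reference), len(test))
    ref, tst = reference[:n], test[:n]
    noise_power = np.mean((tst - ref) ** 2) + 1e-12   # guard against div-by-zero
    signal_power = np.mean(ref ** 2)
    return 10.0 * np.log10(signal_power / noise_power)
```

A number like this tells you how much the waveform changed, but nothing about whether the change is audible, which is exactly the gap the paper below addresses.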
The research paper titled "A Perceptual Audio Quality Measure Based on a Psychoacoustic Sound Representation" outlines an algorithm for quantifying the ability of the human ear to detect audible differences, based on a model of how the ear hears. It takes into account factors that do correlate with audio quality as perceived by humans. The paper includes a study comparing the algorithm's results to subjective double-blind testing, to give you an idea of how well the model works.
I could not find a free copy of this paper, but a decent university library should have it on file.
Implementing the algorithm requires some knowledge of frequency-domain audio signal processing; an undergraduate with DSP experience should be able to manage it. Even if you don't have the reference waveform, the information in the paper can help you quantify how objectionable a given artifact is likely to be.
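As a very rough illustration of the kind of frequency-domain machinery such an implementation builds on (this is not the paper's psychoacoustic model, just a crude band-wise spectral comparison under the assumption of aligned mono float arrays; the function name and band layout are my own):

```python
import numpy as np

def band_log_energy_diff(reference: np.ndarray, test: np.ndarray,
                         frame_size: int = 1024, n_bands: int = 24) -> float:
    """Mean absolute per-band log-energy difference between two aligned clips.

    A real perceptual model would use Bark/ERB-spaced bands and masking
    thresholds; equal-width bands are used here purely for brevity.
    """
    window = np.hanning(frame_size)
    n_frames = min(len(reference), len(test)) // frame_size
    band_edges = np.linspace(0, frame_size // 2 + 1, n_bands,
                             endpoint=False).astype(int)
    total = 0.0
    for i in range(n_frames):
        sl = slice(i * frame_size, (i + 1) * frame_size)
        ref_spec = np.abs(np.fft.rfft(reference[sl] * window)) ** 2
        tst_spec = np.abs(np.fft.rfft(test[sl] * window)) ** 2
        ref_bands = np.add.reduceat(ref_spec, band_edges)
        tst_bands = np.add.reduceat(tst_spec, band_edges)
        total += np.mean(np.abs(np.log10(ref_bands + 1e-12)
                                - np.log10(tst_bands + 1e-12)))
    return total / max(n_frames, 1)
```

Ranking the candidate files by a score like this against a known-good reference (lower means closer) is the crude cousin of what the paper does with a proper hearing model and masking curves.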
The algorithm works on decoded PCM audio, preferably time-aligned with the reference, and certainly does not require knowledge of the file type or header.
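If the decoded clips are not already aligned, cross-correlating the PCM samples is one common way to estimate and remove the offset before comparing; a minimal sketch, again assuming mono float arrays at the same sample rate:

```python
import numpy as np

def align(reference: np.ndarray, test: np.ndarray):
    """Trim the leading samples of whichever clip lags so the two line up."""
    corr = np.correlate(test, reference, mode="full")
    # Positive lag means the shared content appears later in `test`.
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    if lag > 0:
        test = test[lag:]
    elif lag < 0:
        reference = reference[-lag:]
    n = min(len(reference), len(test))
    return reference[:n], test[:n]
```

For clips of any real length you would want an FFT-based correlation (e.g. `scipy.signal.correlate(..., method="fft")`) rather than the direct O(N²) version shown here.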