speech-recognition speech-to-text azure-cognitive-services audioformat

What audio formats are supported by Microsoft Cognitive Services STT? Why does 16-bit PCM x.wav succeed while 32-bit PCM y.wav doesn't?


I'm trying to use Microsoft Cognitive Services for a speech-to-text task through the Python API. I have two files, harvard.wav and Optagelse_0.wav, that I want transcribed, but I only succeed with harvard.wav.

The file harvard.wav has these properties:

{'filename': 'harvard.wav', 'nb_streams': '1', 'format_name': 'wav', 'format_long_name': 'WAV / WAVE (Waveform Audio)', 'start_time': 'N/A', 'duration': '18.356190', 'size': '3249924.000000', 'bit_rate': '1411200.000000', 'TAG': {'encoder': 'Adobe Audition CC 2018.0 (Windows)', 'date': '2018-03-03', 'creation_time': '18\\:52\\:53', 'time_reference': '0'}, 'index': '0', 'codec_name': 'pcm_s16le', 'codec_long_name': 'PCM signed 16-bit little-endian', 'codec_type': 'audio', 'codec_time_base': '1/44100', 'codec_tag_string': '[1][0][0][0]', 'codec_tag': '0x0001', 'sample_rate': '44100.000000', 'channels': '2', 'bits_per_sample': '16', 'avg_frame_rate': '0/0', 'time_base': '1/44100'}

while Optagelse_0.wav has:

{'filename': 'Optagelse_0.wav', 'nb_streams': '1', 'format_name': 'wav', 'format_long_name': 'WAV / WAVE (Waveform Audio)', 'start_time': 'N/A', 'duration': '29.056000', 'size': '5578796.000000', 'bit_rate': '1536000.000000', 'index': '0', 'codec_name': 'pcm_s32le', 'codec_long_name': 'PCM signed 32-bit little-endian', 'codec_type': 'audio', 'codec_time_base': '1/48000', 'codec_tag_string': '[1][0][0][0]', 'codec_tag': '0x0001', 'sample_rate': '48000.000000', 'channels': '1', 'bits_per_sample': '32', 'avg_frame_rate': '0/0', 'time_base': '1/48000'}
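These properties look like ffprobe output; roughly, they can be reproduced from Python like this (this assumes ffprobe is on the PATH, and probe_wav is just an illustrative helper, not part of the question):

import json
import subprocess

def probe_wav(path):
    # Ask ffprobe for container and stream metadata as JSON.
    cmd = [
        "ffprobe", "-v", "quiet",
        "-print_format", "json",
        "-show_format", "-show_streams",
        path,
    ]
    return json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)

stream = probe_wav("Optagelse_0.wav")["streams"][0]
print(stream["codec_name"], stream["sample_rate"], stream["bits_per_sample"])
# -> pcm_s32le 48000 32, i.e. 32-bit PCM, which is the file that fails below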

I have tried to change the sampling rate of harvard.wav according to What audio formats are supported by Azure Cognitive Services' Speech Service (SST)?, but without any improvement.

import azure.cognitiveservices.speech as speechsdk

# speech_key and service_region must hold your Speech resource's key and region.
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
audio_config = speechsdk.audio.AudioConfig(filename='sound.wav')
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = speech_recognizer.recognize_once()

# Check the result
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
    cancellation_details = result.cancellation_details
    print("Speech Recognition canceled: {}".format(cancellation_details.reason))
    if cancellation_details.reason == speechsdk.CancellationReason.Error:
        print("Error details: {}".format(cancellation_details.error_details))

I expected the transcription to be printed, but instead I get the error

Speech Recognition canceled: CancellationReason.Error
Error details: Invalid parameter or unsupported audio format in the request. Response text:{"Duration":0,"Offset":0,"RecognitionStatus":"BadRequest"}

Solution

  • According to the Azure documentation you need 16-bit PCM, while Optagelse_0.wav is 32-bit, as shown in your question: "'codec_long_name': 'PCM signed 32-bit little-endian'"

    The OP was able to convert the audio file from 32-bit to 16-bit PCM using ffmpeg:

     ffmpeg -i Optagelse.wav -acodec pcm_s16le Opt_pcm_16.wav
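
    If you'd rather stay in Python instead of shelling out to ffmpeg, roughly the same conversion can be done with the soundfile package (a suggestion, not something the OP used); a minimal sketch:

     import soundfile as sf

     # Read the 32-bit PCM file and rewrite it as 16-bit PCM at the same sample rate.
     data, sample_rate = sf.read("Optagelse.wav")
     sf.write("Opt_pcm_16.wav", data, sample_rate, subtype="PCM_16")

    Either way, pointing AudioConfig at the converted Opt_pcm_16.wav should get past the "unsupported audio format" / BadRequest error.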