python, speech-to-text, language-model, mozilla-deepspeech

Subprocess call error while calling generate_lm.py of DeepSpeech


I am trying to build a customised scorer (language model) for speech-to-text using DeepSpeech in Colab. While calling generate_lm.py I get this error:

```
    main()
  File "generate_lm.py", line 201, in main
    build_lm(args, data_lower, vocab_str)
  File "generate_lm.py", line 126, in build_lm
    binary_path,
  File "/usr/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/content/DeepSpeech/native_client/kenlm/build/bin/build_binary', '-a', '255', '-q', '8', '-v', 'trie', '/content/DeepSpeech/data/lm/lm_filtered.arpa', '/content/DeepSpeech/data/lm/lm.binary']' died with <Signals.SIGSEGV: 11>.
```

I am calling the script generate_lm.py like this:

```
! python3 generate_lm.py --input_txt hindi_tokens.txt --output_dir /content/DeepSpeech/data/lm --top_k 500000 --kenlm_bins /content/DeepSpeech/native_client/kenlm/build/bin/ --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" --binary_a_bits 255 --binary_q_bits 8 --binary_type trie
```

Solution

  • I was able to find a solution to the above question: the language model was created successfully after reducing the value of top_k to 15000. My phrases file has only about 42000 entries, so top_k has to be adjusted to the size of the phrase collection. The top_k parameter keeps only the top_k most frequent words; anything less frequent is dropped before processing. The adjusted call is sketched below.
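
A minimal sketch of the adjusted invocation, assuming the same paths and file names as in the question and that hindi_tokens.txt holds one phrase per line; the only change from the original command is --top_k reduced to 15000, as described in the solution:

```
# Check how many phrases the input file contains (assumed: one phrase per line)
! wc -l hindi_tokens.txt

# Re-run generate_lm.py with top_k set below the vocabulary size
! python3 generate_lm.py --input_txt hindi_tokens.txt --output_dir /content/DeepSpeech/data/lm --top_k 15000 --kenlm_bins /content/DeepSpeech/native_client/kenlm/build/bin/ --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" --binary_a_bits 255 --binary_q_bits 8 --binary_type trie
```

As a rule of thumb, pick a top_k value that is no larger than the number of distinct words in your input file, then rebuild the scorer from the resulting lm.binary and vocab files.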