mozilla-deepspeech, ctc

(0) Invalid argument: Not enough time for target transition sequence (required: 28, available: 24) during training in Mozilla DeepSpeech


I am using the command below to start training the DeepSpeech model:

%cd /content/DeepSpeech
!python3 DeepSpeech.py \
--drop_source_layers 2 --scorer /content/DeepSpeech/data/lm/kenlm-nigerian.scorer \
 --train_cudnn True --early_stop True --es_epochs 6 --n_hidden 2048 --epochs 5 \
  --export_dir /content/models/ --checkpoint_dir /content/model_checkpoints/ \
  --train_files /content/train.csv --dev_files /content/dev.csv --test_files /content/test.csv \
  --learning_rate 0.0001 --train_batch_size 64 --test_batch_size 32 --dev_batch_size 32 --export_file_name 'he_model_5' \
  --max_to_keep 3

I keep getting the following error repeatedly:

(0) Invalid argument: Not enough time for target transition sequence (required: 28, available: 24)0You can turn this error into a warning by using the flag ignore_longer_outputs_than_inputs
(1) Invalid argument: Not enough time for target transition sequence (required: 28, available: 24)0You can turn this error into a warning by using the flag ignore_longer_outputs_than_inputs

Solution

  • The following worked for me:

    Go to

    DeepSpeech/training/deepspeech_training/train.py
    

    Now look for the following line (normally around lines 240-250):

    total_loss = tfv1.nn.ctc_loss(labels=batch_y, inputs=logits, sequence_length=batch_seq_len)
    

    Change it to the following, adding the ignore_longer_outputs_than_inputs flag that the error message itself names (a standalone sketch of the flag's effect follows these steps):

    total_loss = tfv1.nn.ctc_loss(labels=batch_y, inputs=logits, sequence_length=batch_seq_len, ignore_longer_outputs_than_inputs=True)
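
    For context, the error means that at least one training sample's transcript needs more CTC output steps (28) than its audio clip provides feature frames (24), so the loss cannot be computed for that sample. The snippet below is a minimal, self-contained sketch (not DeepSpeech code) that reproduces the mismatch with made-up shapes and values and shows how ignore_longer_outputs_than_inputs=True turns the hard failure into a zero loss for the offending sample:

    import numpy as np
    import tensorflow.compat.v1 as tfv1

    tfv1.disable_eager_execution()

    # Toy setup: 24 time steps of logits but a 28-symbol target,
    # reproducing "required: 28, available: 24".
    num_time_steps, batch_size, num_classes = 24, 1, 30
    logits = tfv1.constant(
        np.random.randn(num_time_steps, batch_size, num_classes).astype(np.float32))
    seq_len = tfv1.constant([num_time_steps], dtype=tfv1.int32)

    # Sparse label tensor holding 28 symbols for the single batch item.
    indices = np.array([[0, i] for i in range(28)], dtype=np.int64)
    values = np.random.randint(1, num_classes - 1, size=28).astype(np.int32)
    labels = tfv1.SparseTensor(indices, values, dense_shape=[batch_size, 28])

    # Without the flag this raises InvalidArgumentError; with it, the
    # over-long sample is skipped and contributes zero loss (and zero gradient).
    loss = tfv1.nn.ctc_loss(labels=labels, inputs=logits,
                            sequence_length=seq_len,
                            ignore_longer_outputs_than_inputs=True)

    with tfv1.Session() as sess:
        print(sess.run(loss))  # expected to print [0.] because the sample is ignored

    Note that the flag only silences the error: the affected samples are dropped from the loss, so it may also be worth checking train.csv for clips whose transcripts are too long for their audio duration.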