python, google-colaboratory, speech-to-text, mozilla-deepspeech, custom-training

Error during training in deepspeech Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]


Getting the following error when trying to execute:

%cd /content/DeepSpeech
!python3 DeepSpeech.py --train_cudnn True --early_stop True --es_epochs 6 --n_hidden 2048 --epochs 20 \
  --export_dir /content/models/ --checkpoint_dir /content/model_checkpoints/ \
  --train_files /content/train.csv --dev_files /content/dev.csv --test_files /content/test.csv \
  --learning_rate 0.0001 --train_batch_size 64 --test_batch_size 32 --dev_batch_size 32 --export_file_name 'ft_model' \
   --augment reverb[p=0.2,delay=50.0~30.0,decay=10.0:2.0~1.0] \
   --augment volume[p=0.2,dbfs=-10:-40] \
   --augment pitch[p=0.2,pitch=1~0.2] \
   --augment tempo[p=0.2,factor=1~0.5] 

tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0,
    [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 2048, 2048, 1, 798, 64, 2048]
    [[{{node tower_0/cudnn_lstm/CudnnRNNV3}}]]
    [[tower_0/gradients/tower_0/BiasAdd_2_grad/BiasAddGrad/_87]]
(1) Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0,
    [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 2048, 2048, 1, 798, 64, 2048]
    [[{{node tower_0/cudnn_lstm/CudnnRNNV3}}]]
0 successful operations. 0 derived errors ignored.
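
From what I can tell, "Failed to call ThenRnnForward" errors from the cuDNN LSTM are usually a symptom of the GPU running out of memory mid-batch rather than a genuinely bad model config. A quick check (my addition, not part of the original report) is to watch the Colab GPU's memory in a separate cell while training starts:

# Colab diagnostic cell: report the GPU model and its total/used/free memory.
# nvidia-smi is available on Colab GPU runtimes; if free memory collapses to
# near zero right before the crash, the failure is almost certainly an OOM.
!nvidia-smi --query-gpu=name,memory.total,memory.used,memory.free --format=csv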


Solution

  • If I run it as below, with the augmentation flags commented out, it works fine.

    %cd /content/DeepSpeech
    !python3 DeepSpeech.py --train_cudnn True --early_stop True --es_epochs 6 --n_hidden 2048 --epochs 20 \
      --export_dir /content/models/ --checkpoint_dir /content/model_checkpoints/ \
      --train_files /content/train.csv --dev_files /content/dev.csv --test_files /content/test.csv \
      --learning_rate 0.0001 --train_batch_size 64 --test_batch_size 32 --dev_batch_size 32 --export_file_name 'ft_model' \
      # --augment reverb[p=0.2,delay=50.0~30.0,decay=10.0:2.0~1.0] \
      # --augment volume[p=0.2,dbfs=-10:-40] \
      # --augment pitch[p=0.2,pitch=1~0.2] \
      # --augment tempo[p=0.2,factor=1~0.5]
    

    Essentially, the --augment flags were what was breaking the training run partway through.
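
    If you still want some augmentation, a possible middle ground (my suggestion, assuming the underlying cause is GPU memory pressure rather than a bug in the augmentation code itself, which the original post did not verify) is to re-enable only part of the augmentation and cut the batch sizes to lower peak GPU memory use:

    # Hedged sketch, not the poster's verified command: same flags as above,
    # but with smaller batch sizes and only the volume/pitch augmentations.
    %cd /content/DeepSpeech
    !python3 DeepSpeech.py --train_cudnn True --early_stop True --es_epochs 6 --n_hidden 2048 --epochs 20 \
      --export_dir /content/models/ --checkpoint_dir /content/model_checkpoints/ \
      --train_files /content/train.csv --dev_files /content/dev.csv --test_files /content/test.csv \
      --learning_rate 0.0001 --train_batch_size 16 --dev_batch_size 16 --test_batch_size 16 --export_file_name 'ft_model' \
      --augment volume[p=0.2,dbfs=-10:-40] \
      --augment pitch[p=0.2,pitch=1~0.2]

    If that runs cleanly, the augmentations can be reintroduced one at a time to find the one that triggers the failure.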