I am trying to understand how CTC loss works for speech recognition and how it can be implemented in Keras.
Roughly speaking, CTC loss is added on top of a classical network in order to decode sequential information element by element (letter by letter for text or speech), rather than decoding a whole block of elements directly (a word, for example).
Let's say we're feeding utterances of some sentences as MFCCs.
The goal in using CTC loss is to learn how to make each letter match the MFCC at each time step. Thus, the Dense+softmax output layer has as many neurons as there are elements needed to compose the sentences: here, the softmax layer has 29 neurons (26 for the alphabet plus some special characters).
To implement it, I found that I can do something like this:
# CTC implementation from the Keras example at
# https://github.com/keras-team/keras/blob/master/examples/image_ocr.py
from keras.models import Model
from keras.layers import Input, Dense, TimeDistributed, Lambda, Bidirectional, LSTM
from keras import backend as K

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    # the 2 is critical here since the first couple of outputs of the RNN
    # tend to be garbage:
    y_pred = y_pred[:, 2:, :]
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
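One consequence of the [:, 2:, :] slice above: the input_length you later feed to the loss must count the post-slice steps, as the image_ocr example does. A small sketch, with placeholder numbers:

import numpy as np

batch_size = 32                                     # placeholder
# if the RNN emits 1000 steps and 2 are sliced off, report 998 to the loss
input_lengths = np.full((batch_size, 1), 1000 - 2)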
input_data = Input(shape=(1000, 20))
# let's say each MFCC sequence is (1000 timesteps x 20 features)

x = Bidirectional(LSTM(..., return_sequences=True))(input_data)
x = Bidirectional(LSTM(..., return_sequences=True))(x)
y_pred = TimeDistributed(Dense(units=ALPHABET_LENGTH, activation='softmax'))(x)

# extra inputs the CTC loss needs (shapes follow the image_ocr example;
# MAX_LABEL_LENGTH is the padded length of your label sequences)
y_true = Input(shape=(MAX_LABEL_LENGTH,), name='labels')
input_length = Input(shape=(1,), name='input_length', dtype='int64')
label_length = Input(shape=(1,), name='label_length', dtype='int64')

loss_out = Lambda(function=ctc_lambda_func, name='ctc', output_shape=(1,))(
    [y_pred, y_true, input_length, label_length])

model = Model(inputs=[input_data, y_true, input_length, label_length],
              outputs=loss_out)
With ALPHABET_LENGTH = 29 (alphabet length + special characters). Note that K.ctc_batch_cost treats the last class as the CTC "blank" token, so in practice the Dense layer usually needs ALPHABET_LENGTH + 1 units.
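Since the Lambda layer's output is already the loss value, the model is compiled with a dummy loss that simply passes that output through, as the linked image_ocr example does (X_train, labels, input_lengths, label_lengths, and the optimizer below are placeholders):

import numpy as np

# the Lambda layer's output *is* the loss, so the compile-time loss just
# passes it through (same trick as the linked image_ocr example)
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer='adam')

# the real labels enter as model inputs; the fit targets are dummies
model.fit([X_train, labels, input_lengths, label_lengths],
          np.zeros(len(X_train)), batch_size=32)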
Now, I'm facing some problems:
- y_true: your ground truth data, the data you are going to compare with the model's outputs in training (y_pred, on the other hand, is the model's calculated output).
- input_length: the length (in steps, or chars in this case) of each sample (sentence) in the y_pred tensor (as said here).
- label_length: the length (in steps, or chars in this case) of each sample (sentence) in the y_true (or labels) tensor.

It seems this loss expects that your model's outputs (y_pred) have different lengths, as well as your ground truth data (y_true). This is probably to avoid calculating the loss for garbage characters after the end of the sentences (since you need a fixed-size tensor for working with lots of sentences at once).
Since the function's documentation asks for labels of shape (samples, length), the format is exactly that: the char index for each char in each sentence, padded to a common length.
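As a toy illustration, assuming a hypothetical a=0 ... z=25 mapping, "hi" and "hello" padded to a common length of 5 would look like this (the padding value is ignored thanks to label_length):

import numpy as np

# h=7, i=8, e=4, l=11, o=14 under the assumed a=0..z=25 mapping
labels = np.array([[7,  8,  0,  0,  0],    # "hi" + padding
                   [7,  4, 11, 11, 14]])   # "hello"
label_length = np.array([[2], [5]])        # true lengths before padding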
There are some possibilities.
If all lengths are the same, you can easily use it as a regular loss:
def ctc_loss(y_true, y_pred):
    # input_length and label_length are constants you created previously;
    # the easiest way here is to have a fixed batch size in training,
    # and the lengths must share that batch size
    # (see the shapes in the docs for ctc_batch_cost)
    return K.ctc_batch_cost(y_true, y_pred, input_length, label_length)

model.compile(loss=ctc_loss, ...)

# this is how you pass the labels for training
model.fit(input_data_X_train, ground_truth_data_Y_train, ...)
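A minimal sketch of those constants, assuming a fixed batch size of 32, model outputs of 1000 steps, and all labels padded to the same 100 chars (all numbers are placeholders):

import numpy as np

batch_size = 32
input_length = np.full((batch_size, 1), 1000)  # steps produced by the model
label_length = np.full((batch_size, 1), 100)   # common length of the labels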
If the lengths vary, it's a little more complicated: you need your model to somehow tell you the length of each output sentence. There are again several creative ways of doing this, for instance:

- detect an "end of sentence" (EOS) char in the output and take its first occurrence as the end of the sentence;
- add a branch to your model that computes the length and concatenate that length to the output.
I like the first idea, and will exemplify it here.
def ctc_find_eos(y_true, y_pred):
    # assumes eos_index and max_length are defined at module level

    # convert y_pred from one-hot to label indices
    y_pred_ind = K.argmax(y_pred, axis=-1)

    # make sure y_pred contains at least one end_of_sentence char (to avoid errors)
    y_pred_end = K.concatenate([
        y_pred_ind[:, :-1],
        eos_index * K.ones_like(y_pred_ind[:, -1:])
    ], axis=1)

    # descending weights so the first occurrence of the char outweighs
    # subsequent ones (step=-1 is needed for a descending range)
    occurrence_weights = K.arange(start=max_length, stop=0, step=-1,
                                  dtype=K.floatx())

    # is eos?
    is_eos_true = K.cast(K.equal(y_true, eos_index), K.floatx())
    is_eos_pred = K.cast(K.equal(y_pred_end, eos_index), K.floatx())

    # lengths: index of the first eos occurrence, plus one
    true_lengths = 1 + K.argmax(occurrence_weights * is_eos_true, axis=1)
    pred_lengths = 1 + K.argmax(occurrence_weights * is_eos_pred, axis=1)

    # reshape to (batch, 1) as ctc_batch_cost expects
    true_lengths = K.reshape(true_lengths, (-1, 1))
    pred_lengths = K.reshape(pred_lengths, (-1, 1))

    return K.ctc_batch_cost(y_true, y_pred, pred_lengths, true_lengths)

model.compile(loss=ctc_find_eos, ...)
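The loss above closes over eos_index and max_length and expects every ground-truth sentence to contain the EOS char, padded out to the same length as y_pred's time dimension. A minimal sketch of preparing such labels (names and values are assumptions):

import numpy as np

eos_index = 28      # assumed index reserved for the end-of-sentence char
max_length = 1000   # must match the time dimension of y_pred

def pad_with_eos(sentences):
    # sentences: a list of lists of char indices, without the EOS char
    padded = np.full((len(sentences), max_length), eos_index, dtype='int32')
    for i, s in enumerate(sentences):
        padded[i, :len(s)] = s
    return padded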
If you use the other option (a model branch that calculates the lengths), concatenate this length to the first or last step of the output, and make sure you do the same with the true lengths in your ground truth data. Then, in the loss function, just slice off the length section:
def ctc_concatenated_length(y_true, y_pred):
    # assuming you concatenated the length as the first step
    true_lengths = K.cast(y_true[:, :1], "int32")
    y_true = y_true[:, 1:]

    # since y_pred is one-hot, the length branch had to be broadcast across
    # the full size of the last axis before concatenating, hence the 0 index
    pred_lengths = K.cast(y_pred[:, :1, 0], "int32")
    y_pred = y_pred[:, 1:]

    return K.ctc_batch_cost(y_true, y_pred, pred_lengths, true_lengths)
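On the ground-truth side, this means prepending each sentence's true length as an extra first step before training (a sketch; labels, lengths, X and the optimizer are assumed placeholders):

import numpy as np

labels = np.array([[7, 8, 0, 0, 0],
                   [7, 4, 11, 11, 14]])    # padded char indices
lengths = np.array([[2], [5]])             # true length of each sentence
y_true_with_length = np.concatenate([lengths, labels], axis=1)

model.compile(loss=ctc_concatenated_length, optimizer='adam')
model.fit(X, y_true_with_length, batch_size=32)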