Suppose you have a basic model similar to this:
input_layer = layers.Input(shape=(50,20))
layer = layers.Dense(123, activation='relu')(input_layer)
layer = layers.LSTM(128, return_sequences = True)(layer)
outputs = layers.Dense(20, activation='softmax')(layer)
model = Model(input_layer,outputs)
How would you implement CTC loss? I tried something from the Keras OCR code tutorial, like this:
class CTCLayer(layers.Layer):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, y_true, y_pred):
        # Compute the training-time loss value and add it
        # to the layer using `self.add_loss()`.
        batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
        input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
        label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")

        input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")

        loss = self.loss_fn(y_true, y_pred, input_length, label_length)
        self.add_loss(loss)

        # At test time, just return the computed predictions.
        return y_pred
labels = layers.Input(shape=(None,), dtype="float32")
input_layer = layers.Input(shape=(50,20))
layer = layers.Dense(123, activation='relu')(input_layer)
layer = layers.LSTM(128, return_sequences = True)(layer)
outputs = layers.Dense(20, activation='softmax')(layer)
output = CTCLayer()(labels,outputs)
model = Model(input_layer,outputs)
However, when it came to the model.fit part it started to fall apart, because I did not know how to feed the model the "label" input. I also think the approach in the tutorial is quite ambiguous, so what would be a better and more efficient way to implement the CTC loss?
The only thing you are doing wrong is the model creation model = Model(input_layer, outputs);
it should be model = Model([input_layer, labels], output).
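With the two-input model, the labels are simply fed as a second input to fit. A minimal, untested sketch, assuming hypothetical arrays x_train of shape (num_samples, 50, 20) and y_train holding the padded integer label sequences:

# The CTC layer attaches the loss via add_loss, so compile takes no loss argument.
model.compile(optimizer='adam')

# Feed both model inputs; no separate target is needed, the CTC layer provides the loss.
model.fit([x_train, y_train], epochs=10, batch_size=32)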
That said, you can also compile the model with tf.nn.ctc_loss
as the loss if you don't want to have two inputs:
def my_loss_fn(y_true, y_pred):
    # Derive the label and logit lengths from the padded tensor shapes,
    # the same way it is done in the CTC layer above.
    label_length = tf.fill([tf.shape(y_true)[0]], tf.shape(y_true)[1])
    logit_length = tf.fill([tf.shape(y_pred)[0]], tf.shape(y_pred)[1])
    loss_value = tf.nn.ctc_loss(tf.cast(y_true, tf.int32), y_pred, label_length, logit_length,
                                logits_time_major=False)
    return tf.reduce_mean(loss_value)
model.compile(optimizer='adam', loss=my_loss_fn)
Something like this. Note that the code above is not tested; the y_true and y_pred lengths are derived from the padded tensor shapes, the same way it is done in the CTC layer.
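One caveat: tf.nn.ctc_loss expects unnormalized logits, so with this route the final Dense layer would usually use a linear activation instead of softmax. The model then keeps a single input (model = Model(input_layer, outputs)) and the padded labels are passed as ordinary targets, for example (again with hypothetical x_train / y_train arrays):

# The padded integer labels arrive in my_loss_fn as y_true.
model.fit(x_train, y_train, epochs=10, batch_size=32)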