I work on importing TensorFlow models into MATLAB. This includes deserializing the SavedModel format, identifying the different tf.keras.layers instances present in the SavedModel, and creating an equivalent deep learning network in MATLAB.
In TensorFlow 2.3.0 and earlier, a TF symbol (such as tf.nn.relu()) used between tf.keras.layers instances was serialized as a 'TensorFlowOpLayer'. This behaved like a layer subclassing tf.keras.layers.Layer, i.e., the graph for its 'call' method (call_and_return_conditional_losses) was stored in the SavedModel. Specifically, this 'call_and_return_conditional_losses' function was stored as a child of the node corresponding to the TensorFlowOpLayer in the SavedModel's object_graph_def.
In TensorFlow 2.6.0 and later, a TF symbol used between tf.keras.layers instances is serialized as a 'TFOpLambda' layer instead. Saving a model containing these TFOpLambda layers into a SavedModel no longer serializes the graph for its 'call' method (call_and_return_conditional_losses): there is no child node of the TFOpLambda node in the SavedModel's object_graph_def corresponding to the 'call_and_return_conditional_losses' function.
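To make the difference concrete, here is a minimal sketch that checks for this child node, using only the protobuf bindings that ship with TensorFlow. The 'ModelWithTFSymbol' path refers to the example model saved further below; the rest is just walking the object_graph_def, not any official Keras API:

from tensorflow.core.protobuf import saved_model_pb2

with open('ModelWithTFSymbol/saved_model.pb', 'rb') as f:
    sm = saved_model_pb2.SavedModel()
    sm.ParseFromString(f.read())

object_graph = sm.meta_graphs[0].object_graph_def
for node in object_graph.nodes:
    if node.WhichOneof('kind') != 'user_object':
        continue
    child_names = [child.local_name for child in node.children]
    # In TF 2.3.0 the TensorFlowOpLayer node has this child;
    # in TF 2.6.0+ the TFOpLambda node does not.
    if 'call_and_return_conditional_losses' in child_names:
        print(node.user_object.identifier, child_names)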
This creates a problem for me, since I rely on decoding the 'call_and_return_conditional_losses' function in order to import these TensorFlowOpLayer / TFOpLambda layers into MATLAB. For instance, consider the following model:
import tensorflow as tf

x = tf.keras.layers.Input(shape=[None, 1])
z = tf.keras.layers.Conv1D(32, kernel_size=2, padding="causal")(x)
z = tf.nn.relu(z)  # TF symbol used between Keras layers
model = tf.keras.models.Model(inputs=[x], outputs=[z])
model.summary()
model.save('ModelWithTFSymbol')
Saving this model in TensorFlow 2.3.0 gives you a TensorFlowOpLayer that has its 'call' graph serialized, containing the tf.raw_ops.Relu node that I use to concretely identify this as a ReLU activation operation:
_________________________________________________________________
Layer (type)                          Output Shape       Param #
=================================================================
input_1 (InputLayer)                  [(None, None, 1)]  0
conv1d (Conv1D)                       (None, None, 32)   96
tf_op_layer_Relu (TensorFlowOpLayer)  (None, None, 32)   0
=================================================================
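As a sketch of how that identification works: the serialized call graphs live in the meta graph's FunctionDefLibrary, and their node ops can be listed directly. The name filter below is an assumption based on the generated function names I see (e.g. '...layer_call_and_return_conditional_losses...'), not a documented contract:

from tensorflow.core.protobuf import saved_model_pb2

with open('ModelWithTFSymbol/saved_model.pb', 'rb') as f:
    sm = saved_model_pb2.SavedModel()
    sm.ParseFromString(f.read())

for fdef in sm.meta_graphs[0].graph_def.library.function:
    if 'call_and_return_conditional_losses' in fdef.signature.name:
        # In TF 2.3.0, the TensorFlowOpLayer's function contains a 'Relu'
        # node, which is what identifies it as a ReLU activation.
        print(fdef.signature.name, [n.op for n in fdef.node_def])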
Saving the same model in TensorFlow 2.6.0+ gives you a TFOpLambda layer for the 'tf.nn.relu' symbol, which does not have its 'call' graph serialized in the object_graph_def:
_________________________________________________________________
Layer (type)                          Output Shape       Param #
=================================================================
input_13 (InputLayer)                 [(None, None, 1)]  0
conv1d_13 (Conv1D)                    (None, None, 32)   96
tf.nn.relu_2 (TFOpLambda)             (None, None, 32)   0
=================================================================
The most information available in this case is in the TFOpLambda node's metadata (meta_graphs > object_graph_def > TFOpLambda node > user_object > metadata), which looks as follows:
{
  "name": "tf.nn.relu_13",
  "trainable": true,
  "expects_training_arg": false,
  "dtype": "float32",
  "batch_input_shape": null,
  "stateful": false,
  "must_restore_from_config": true,
  "class_name": "TFOpLambda",
  "config": {
    "name": "tf.nn.relu_13",
    "trainable": true,
    "dtype": "float32",
    "function": "nn.relu"
  },
  "inbound_nodes": [
    [
      "conv1d_100",
      0,
      0,
      {
        "name": null
      }
    ]
  ],
  "shared_object_id": 4
}
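With the call graph gone, the only fallback I can see is to parse this metadata JSON and map the 'config.function' string back to an operation by hand. A minimal sketch, assuming 'metadata_str' holds the user_object.metadata string located as above; the mapping table is my own hypothetical example, not anything provided by TensorFlow:

import json

# Hypothetical mapping from serialized symbol names to importer actions.
SYMBOL_MAP = {
    'nn.relu': 'ReLU activation',
    'math.add': 'elementwise addition',
}

def op_from_tf_op_lambda(metadata_str):
    metadata = json.loads(metadata_str)
    if metadata.get('class_name') != 'TFOpLambda':
        return None
    # 'config.function' is e.g. 'nn.relu' in the metadata shown above.
    return SYMBOL_MAP.get(metadata['config']['function'])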
I have the following questions regarding TFOpLambda layers:
Quoting the answers I got from TensorFlow developers here: