I am trying to convert a .h5 Keras model to a .tflite model, but the conversion aborts with a core dump. Here's the script I am running:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
# Create a simple Keras model
model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Convert the Keras model to TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TensorFlow Lite model to a file
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
print("TensorFlow Lite model saved successfully!")
This is the error I get when I run the script:
2024-04-01 12:27:04.793910: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc(fused["ReadVariableOp:", callsite("sequential_1/dense_1/Add/ReadVariableOp@__inference_serving_default_98"
callsite("/home/spoon/Documents/GTSRB/lib/python3.9/site-packages/keras/src/ops/numpy.py":311:1 at callsite("/home/spoon/Documents/GTSRB/lib/python3.9/site-packages/keras/src/backend/tensorflow/sparse.py":491:1 at callsite("/home/spoon/Documents/GTSRB/lib/python3.9/site-packages/keras/src/backend/tensorflow/numpy.py":35:1 at "/home/spoon/Documents/GTSRB/lib/python3.9/site-packages/keras/src/backend/tensorflow/core.py":64:1)))))))))))))))))))))))))))]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).
Aborted (core dumped)
OS: Ubuntu 20.04.6 LTS
Python version: 3.9.18
pip freeze info:
keras==3.1.1
keras-core==0.1.7
keras-cv==0.8.2
tensorboard==2.16.2
tensorboard-data-server==0.7.2
tensorflow==2.16.1
tensorflow-datasets==4.9.3
tensorflow-io-gcs-filesystem==0.36.0
tensorflow-metadata==1.14.0
This is a bug caused by a Keras version mismatch.
TensorFlow 2.16 and later switched from Keras 2 to Keras 3. That change is good in itself, since Keras 3 brings many improvements and a significant performance uplift, but not all TensorFlow modules have been ported to work with it yet (and the breaking changes are poorly documented). In particular, the TFLite converter still expects a model created by the legacy Keras 2 module and crashes when it is given a Keras 3 model instead.
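You can confirm which Keras module is actually in use with a quick check (a minimal sketch; both the standalone keras package and tf.keras expose __version__, and on TensorFlow 2.16 without legacy mode both report 3.x):
import keras
import tensorflow as tf

print(keras.__version__)     # standalone Keras package (3.1.1 in your environment)
print(tf.keras.__version__)  # the Keras module TensorFlow itself is using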
Until the converter module is updated, you can work around this as follows:
1) Install the tf_keras package (so that the Keras 2 legacy mode is available):
pip install tf_keras
2) Enable legacy mode by adding this to your code:
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"
placing it before
import tensorflow as tf
so that TensorFlow is initialized with the Keras 2 module. A complete sketch follows below.
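Here is that sketch: your original script with the workaround applied. It assumes tf_keras is installed and builds the model through tf.keras, so the legacy Keras 2 module is the one actually constructing it:
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must be set before TensorFlow is imported

import tensorflow as tf

# Build the model through tf.keras so it comes from the legacy Keras 2 module,
# not from the standalone Keras 3 package
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Conversion proceeds exactly as before
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
print("TensorFlow Lite model saved successfully!")
Note that TF_USE_LEGACY_KERAS only changes what tf.keras resolves to; importing directly from the standalone keras package would still give you Keras 3 objects.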
Alternatively, you can downgrade TensorFlow to the latest Keras 2-based version:
pip install tensorflow==2.15.0