android, tensorflow-lite, firebase-machine-learning

Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'FULLY_CONNECTED' version '9' with firebase ml kit and tensorflow-lite


I created a google cloud function that creates a basic tflite model file with the following configuration:

inputs => fully connected layer => outputs.

I retrieve my model with Firebase Machine Learning through an Android plugin, but when I try to initialize the interpreter, my application crashes and I get the following error in the log:

2024/03/19 23:23:54.205 4629 4629 Error AndroidRuntime java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'FULLY_CONNECTED' version '9'
Registration failed.
    at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(Native Method)
    at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:72)
    at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:48)
    at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:207)
    at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:182)
    at com.fawfulized.machine_learning.ModelManager$1.onSuccess(ModelManager.java:62)
    at com.fawfulized.machine_learning.ModelManager$1.onSuccess(ModelManager.java:55)
    at com.google.android.gms.tasks.zzm.run(com.google.android.gms:play-services-tasks@@18.0.2:1)
    at android.os.Handler.handleCallback(Handler.java:971)
    at android.os.Handler.dispatchMessage(Handler.java:107)
    at android.os.Looper.loopOnce(Looper.java:206)
    at android.os.Looper.loop(Looper.java:296)
    at android.app.ActivityThread.main(ActivityThread.java:9170)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:591)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1018)

In my Google Cloud Function, I pinned the TensorFlow version as:

tensorflow==2.15.0

In the build.gradle.kts of my plugin, I specify the TensorFlow Lite version as:

implementation("org.tensorflow:tensorflow-lite:2.15.0")

In the [TensorFlow repository](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/versioning/runtime_version.cc), at line 134, it is specified that FULLY_CONNECTED for TensorFlow 2.15.0 should be at version 11, so I don't know why my plugin is looking for version 9.
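For context, runtime_version.cc is essentially a lookup table from (builtin op, op version) to the oldest TFLite runtime release that implements it. A minimal Python sketch of that idea (the version-11 entry reflects the line cited above; the version-9 entry is an assumption for illustration only):

```python
# Toy reconstruction of the idea behind runtime_version.cc: each
# (builtin op, op version) pair maps to the first TFLite runtime
# release whose kernels support it.
OP_MIN_RUNTIME = {
    ("FULLY_CONNECTED", 9): "2.3.0",    # assumed value, for illustration
    ("FULLY_CONNECTED", 11): "2.15.0",  # per runtime_version.cc line 134
}

def min_runtime_for(op, version):
    """Return the oldest runtime that ships this op version, or None."""
    return OP_MIN_RUNTIME.get((op, version))

print(min_runtime_for("FULLY_CONNECTED", 11))
```

So a runtime can run any op version that appears at or before its own release, which is why a 2.15.0 runtime refusing FULLY_CONNECTED version 9 is surprising.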

Additionally, here is how I initialize the interpreter in my Android plugin:

 private void initializeInterpreter() {

        CustomModelDownloadConditions conditions = new CustomModelDownloadConditions.Builder()
                .build();
        FirebaseModelDownloader.getInstance()
                .getModel("Price-Prediction", DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
                .addOnSuccessListener(new OnSuccessListener<CustomModel>() {
                    @Override
                    public void onSuccess(CustomModel model) {
                        pricePredictionModel = model;
                        Log.d("tensorflow model", "Model downloaded.");
                        File modelFile = model.getFile();
                        if (modelFile != null) {
                            interpreter = new Interpreter(modelFile);
                            Log.d("tensorflow model", "Interpreter initialized.");
                        }
                    }
                });
    }

Any help would be appreciated, thank you.

I expected the interpreter for my .tflite model to be initialized, with a FULLY_CONNECTED layer at version 11, but instead got an error saying the builtin operator FULLY_CONNECTED was being looked up at version 9.


Solution

  • I was able to figure out what was wrong; here are the steps I took:

    I opened my model in a tflite visualizer (Netron) and inspected the 'FULLY_CONNECTED' layer. It had an attribute called 'asymmetric_quantize_inputs'; after some research I stumbled upon the TFLite schema: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs

    At line 923, it is specified that asymmetric_quantize_inputs should be set for FullyConnected version 7 or above.

    Just below, at line 927, is the parameter 'quantized_bias_type', which is for version 11 or above.
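    Taken together, the two schema comments amount to a simple version-selection rule for FULLY_CONNECTED. The sketch below is a toy illustration of that rule, not the real op-versioning code; the thresholds come from the schema lines quoted above (the question's quantized model ended up at version 9, which also sits in the asymmetric-quantization range):

    ```python
    # Toy sketch of how the FULLY_CONNECTED op version is driven by its
    # options: asymmetric_quantize_inputs needs version >= 7, and
    # quantized_bias_type needs version >= 11.
    def fully_connected_min_version(asymmetric_quantize_inputs=False,
                                    quantized_bias_type=False):
        version = 1
        if asymmetric_quantize_inputs:
            version = max(version, 7)
        if quantized_bias_type:
            version = max(version, 11)
        return version
    ```

    The point is that the converter picks the lowest op version whose feature set covers the options it emitted, so which attributes get set decides which runtime can load the model.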

    To enable quantized_bias_type, I changed my conversion code to the following:

    
         # Convert from a SavedModel so that input signatures are preserved
         # (see the note below about from_keras_model).
         converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)

         converter.optimizations = [tf.lite.Optimize.DEFAULT]

         converter.target_spec.supported_ops = [
              tf.lite.OpsSet.TFLITE_BUILTINS,
              tf.lite.OpsSet.SELECT_TF_OPS
         ]

         converter.inference_input_type = tf.float32
         converter.inference_output_type = tf.float32

         converter.representative_dataset = lambda: generate_representative_data(X_data)

         tflite_model = converter.convert()
    

    The converter needs representative_dataset set to enable 'quantized_bias_type'. In my case, I defined generate_representative_data(X_data) as:

    # Generate representative data for post-training quantization.
    def generate_representative_data(X_data):
         num_samples = X_data.shape[0]

         for i in range(num_samples):
              sample = X_data[i]
              yield {'input_1': sample}


    Also make sure the key in your representative data matches the name of your input layer: tf.keras.layers.Input(shape=tensor_shape, dtype=tf.float32, name='input_1')
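    The generator above can be sanity-checked without running the converter. A quick sketch with dummy numpy data (the 10×4 shape is a placeholder for your real training data) that also verifies the key matches the input-layer name:

    ```python
    import numpy as np

    # Dummy stand-in for the training data: 10 samples of 4 features each.
    X_data = np.random.rand(10, 4).astype(np.float32)

    def generate_representative_data(X_data):
         num_samples = X_data.shape[0]
         for i in range(num_samples):
              sample = X_data[i]
              yield {'input_1': sample}

    # Consume the generator the way the converter would and check that every
    # sample is keyed on the model's input name with the expected dtype.
    samples = list(generate_representative_data(X_data))
    assert len(samples) == 10
    assert all(set(s) == {'input_1'} for s in samples)
    assert all(s['input_1'].dtype == np.float32 for s in samples)
    ```

    Catching a mismatched key or dtype here is much faster than debugging a failed convert() call.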

    If you get an error regarding input signatures, make sure to use tf.lite.TFLiteConverter.from_saved_model(saved_model_path) and not tf.lite.TFLiteConverter.from_keras_model, since the latter doesn't generate a signature.

    Finally, in my Android plugin, I encountered the following error: java.lang.IncompatibleClassChangeError: Found class org.tensorflow.lite.Tensor, but interface was expected

    So I downgraded my version of tensorflow-lite to one where org.tensorflow.lite.Tensor is an interface rather than a class.

    I hope this helps someone who encounters the same issue.