Tags: tensorflow, tensorflow2.0, tensorflow-lite, tflite, tflm

tflite::MicroInterpreter::input returns a TfLiteTensor with data pointer set to nullptr


While running a simple TfLiteMicro inference model on a Linux/x64 machine, I run into segfaults when trying to set the input tensor or read from the output tensor. To summarize:

  1. I can see the input and output tensors being allocated; the tensor pointers are valid and can be dereferenced (before AllocateTensors() is called they are nullptr).
  2. Dereferencing the data pointer of a valid input/output tensor (however it was obtained) results in a segfault.
  3. I've tried different models and different arena sizes, but nothing changes.
  4. I can call the interpreter's Invoke() method without issue (though I'd expect it to segfault, given that it presumably accesses the input/output tensors' data).

I've cloned the TfLiteMicro repository and run make -f tensorflow/lite/micro/tools/make/Makefile to build the static library. I then built the final executable with:

g++ -o test.out test.cpp ../tflite-micro/gen/linux_x86_64_default/lib/libtensorflow-microlite.a -I../tflite-micro/ -I/home/gstukelj/projects/plume/tflite-micro/tensorflow/lite/micro/tools/make/downloads/flatbuffers/include -I../tflite-micro/tensorflow/lite/micro/tools/make/downloads/gemmlowp/

Where test.cpp contains:

#include <stdio.h>

#include "hw-float.h"
#include "tensorflow/lite/core/c/common.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"

int main(void) {

  tflite::InitializeTarget();

  const tflite::Model* model =
      ::tflite::GetModel(g_hello_world);

  TFLITE_CHECK_EQ(model->version(), TFLITE_SCHEMA_VERSION);

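  // Resolver statically sized for the four ops registered below.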
  static ::tflite::MicroMutableOpResolver<4> op_resolver;

  TF_LITE_ENSURE_STATUS(op_resolver.AddFullyConnected());
  TF_LITE_ENSURE_STATUS(op_resolver.AddSoftmax());
  TF_LITE_ENSURE_STATUS(op_resolver.AddReadVariable());
  TF_LITE_ENSURE_STATUS(op_resolver.AddRelu());

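  // Arena from which the interpreter allocates tensor and runtime memory.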
  constexpr int kTensorArenaSize = 60 * 1024;
  uint8_t tensor_arena[kTensorArenaSize];

  tflite::MicroInterpreter interpreter(model, op_resolver, tensor_arena,
                                       kTensorArenaSize);

  if (interpreter.AllocateTensors() != kTfLiteOk) {
    printf("ERROR: AllocateTensors() failed\r\n");
  }
  printf("Tensors allocated\n");

  // If AllocateTensors() is skipped this will segfault
  TfLiteTensor* input = interpreter.input(0);

  if (input == nullptr) {
    printf("input == nullptr\n");
  }

  printf("bytes == %zu\n", input->bytes);

  if (input->dims == nullptr) {
    printf("input->dims == nullptr\n");
  }

  if (input->data.f == nullptr) {
    printf("input->data.f == nullptr\n");
  }

  if (input->data.data == nullptr) {
    printf("input->data.data == nullptr \n");
  }

  printf("input tensor type = %s\n", TfLiteTypeGetName(input->type));

  TF_LITE_ENSURE_STATUS(interpreter.Invoke());

  // // Either one of these two will segfault
  // float y_pred = interpreter.output(0)->data.f[0];
  // auto y_pred = interpreter.typed_output_tensor<float>(0)[0];

  return 0;
}

The output of running the executable is:

Tensors allocated
bytes == 4
input->dims == nullptr
input->data.f == nullptr
input->data.data == nullptr 
input tensor type = NOTYPE

I'm using the hello_world model from the TfLiteMicro repo. When I first tried a custom model, I ran into the same issue; the only difference was that input->bytes held a different value.


Solution

  • The issue was resolved by adding #define TF_LITE_STATIC_MEMORY at the top of the client source file. This was not a TFLM-specific issue: the TFLM Makefile builds the static library with TF_LITE_STATIC_MEMORY defined, which selects a reduced TfLiteTensor layout in common.h, so client code compiled without the macro reads the struct at different field offsets (hence the nullptr data pointers and the NOTYPE type). For more details see the discussion under Valid pointer from static library code treated as a nullptr in the application code.
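A minimal sketch of the fix (only the macro placement matters; the includes are abbreviated from the test.cpp above), with the define placed before any TensorFlow Lite header:

#define TF_LITE_STATIC_MEMORY  // must precede every TFLite include

#include "tensorflow/lite/core/c/common.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
// ... rest of test.cpp unchanged ...

Equivalently, passing -DTF_LITE_STATIC_MEMORY on the g++ command line applies the macro to every translation unit and avoids depending on include order.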