raspberry-pi tensorflow-lite tpu google-coral edge-tpu

Image classification using TensorFlow Lite without Google Coral USB


I am trying to evaluate the performance of a Raspberry Pi, with and without a Google Coral Edge TPU USB accelerator, on an image classification task over a video file. I have already managed to evaluate the performance using the Edge TPU USB device. However, when I try to run a TensorFlow Lite script for inference, I get an error telling me I need to plug in the device:

ValueError: Failed to load delegate from libedgetpu.so.1
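For context, pycoral's make_interpreter() attaches the Edge TPU delegate when it builds the interpreter. A rough sketch of the equivalent plain TensorFlow Lite code (the model path here is a placeholder, and the exact internals of make_interpreter may differ):

from tflite_runtime.interpreter import Interpreter, load_delegate

# Roughly what make_interpreter() does: attach the Edge TPU delegate.
# load_delegate() raises "ValueError: Failed to load delegate from
# libedgetpu.so.1" when no Coral USB device is reachable.
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder path
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)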

Specifically, I am running inference on a video with the Coral device and saving every frame of the video to disk in order to benchmark the hardware.

import argparse
import time
import cv2
import numpy as np
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
from utils import visualization as visual

WINDOW_NAME = "Edge TPU Image classification"


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", help="File path of Tflite model.", required=True)
    parser.add_argument("--label", help="File path of label file.", required=True)
    parser.add_argument("--top_k", help="keep top k candidates.", default=2, type=int)
    parser.add_argument("--threshold", help="Score threshold.", default=0.0, type=float)
    parser.add_argument("--width", help="Resolution width.", default=640, type=int)
    parser.add_argument("--height", help="Resolution height.", default=480, type=int)
    parser.add_argument("--videopath", help="File path of Videofile.", default="")
    args = parser.parse_args()

    # Initialize window.
    cv2.namedWindow(WINDOW_NAME)
    cv2.moveWindow(WINDOW_NAME, 100, 200)

    # Initialize engine and load labels.
    count = 0
    interpreter = make_interpreter(args.model)
    interpreter.allocate_tensors()
    labels = read_label_file(args.label) if args.label else None
    elapsed_list = []
    cap = cv2.VideoCapture('/home/pi/coral-usb/pycoral/test_data/video.mkv')
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:  # stop when the video ends or a frame cannot be read
            break
        im = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        cv2.imwrite("/home/pi/Desktop/frames/frame_%d.jpeg" % count, frame)
        print("saved frame_%d" % count)
        count += 1
        cv2.imshow('Frame', frame)
        cap_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        cap_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        # Run inference.
        start = time.perf_counter()

        _, scale = common.set_resized_input(
            interpreter, (cap_width, cap_height), lambda size: cv2.resize(im, size)
        )
        interpreter.invoke()

        # Check result.
        results = classify.get_classes(interpreter, args.top_k, args.threshold)
        elapsed_ms = (time.perf_counter() - start) * 1000
        elapsed_list.append(elapsed_ms)
        if results:
            for i, result in enumerate(results):
                label = "{0} ({1:.2f})".format(labels[result.id], result.score)
                pos = 60 + (i * 30)
                visual.draw_caption(frame, (10, pos), label)


        # display
        cv2.imshow(WINDOW_NAME, frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

This code is used to run inference with the Coral device. How can I do the same thing without it? I would like to measure the difference between running my model with and without the Edge TPU USB accelerator.

Lastly, I have tried the image classification example from this link using TensorFlow Lite. However, I am getting the following error:

RuntimeError: Encountered unresolved custom op: edgetpu-custom-op. Node number 0 (edgetpu-custom-op) failed to prepare.


Solution

  • I recently ran into this while supervising a thesis. We tested face detection on a Raspberry Pi 4 with the Coral USB accelerator and without it (inference on the Pi's CPU). Are you using the same model file for both? If so, that is the problem: you need the bare .tflite model for CPU inference and the TPU-compiled model for inference on the Edge TPU. You can take a look at this repo, where you can find the code I mentioned before (it's not well documented, but it works; look at the CPU inference and Coral inference files). See the sketch below for a CPU-only version.
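
Here is a minimal sketch of the CPU-only inference step, assuming a quantized model with uint8 input (typical for the models shipped for Coral) and placeholder file paths:

import time
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # no Edge TPU delegate

# Load the bare .tflite model (not the *_edgetpu.tflite one) on the CPU.
interpreter = Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

frame = cv2.imread("frame_0.jpeg")  # placeholder frame
im = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
input_data = np.expand_dims(cv2.resize(im, (width, height)), axis=0)

start = time.perf_counter()
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
elapsed_ms = (time.perf_counter() - start) * 1000

top_2 = scores.argsort()[-2:][::-1]  # ids of the two best classes
print("top classes: %s, inference: %.1f ms" % (top_2, elapsed_ms))

The rest of the video loop from the question (reading frames, drawing captions) can stay as it is; only the interpreter construction and the input/output tensor handling change. The pycoral adapter helpers (common.set_resized_input, classify.get_classes) should also work with this plain interpreter, since they only rely on the standard TFLite API.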