I am using the YOLOv5 model for custom object recognition, and when I export it to a TFLite model for use in a mobile app, inference takes 5201.2 ms per image. How can I reduce the inference time for faster recognition? My training dataset is 2200 images, and I trained the yolov5x model. Thanks for your help!
You have several options:

1. Train a smaller model: yolov5x is the largest variant, and yolov5s or yolov5n run far faster at some cost in accuracy.
2. Reduce the input image size at export time (e.g. `--img 320` instead of the default 640).
3. Quantize the exported model, to FP16 (`--half`) or INT8 (`--int8`), during TFLite export.
4. Run the TFLite model with a hardware delegate on the device (GPU or NNAPI).

None of these options excludes the others; all can be used at the same time for maximum inference speed. Options 1, 2 and 3 trade some model accuracy for inference speed.
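To give some intuition for why quantization speeds things up, here is a minimal NumPy sketch (not the actual TFLite converter, which handles this per-tensor with calibration) of what symmetric INT8 post-training quantization does to a weight tensor: each float32 value is mapped to an 8-bit integer plus one scale factor, quartering storage and enabling faster integer kernels on mobile CPUs:

```python
import numpy as np

def quantize_int8(weights):
    """Affine-quantize a float32 tensor to int8 plus a scale factor.

    A simplified, symmetric version of what TFLite's post-training
    quantization applies to each weight tensor.
    """
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values to inspect the error."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)

print("storage: %d bytes -> %d bytes" % (w.nbytes, q.nbytes))  # 4x smaller
print("max abs error: %.4f" % np.abs(w - w_approx).max())      # bounded by scale/2
```

The rounding error per weight is at most half the scale, which is why accuracy drops only slightly while the model shrinks 4x; INT8 kernels are also much faster than float ones on most phone CPUs.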