I have trained an object detection model with the TensorFlow Object Detection API, following an example based on this Google Colaboratory notebook by Roboflow: https://colab.research.google.com/drive/1wTMIrJhYsQdq_u7ROOkf0Lu_fsX5Mu8a
So far so good, and I have successfully exported my trained model as an inference graph, again following the same notebook:
import os
import re
import numpy as np

output_directory = './fine_tuned_model'

# model_dir is the training directory defined earlier in the notebook;
# pick the checkpoint with the highest step number.
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
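(As a side note, TF1's tf.train.latest_checkpoint should return the same prefix with less code, assuming the checkpoint state file that TensorFlow writes during training is present in model_dir:)
import tensorflow as tf

# Returns the prefix of the newest checkpoint in model_dir, or None if there isn't one
last_model_path = tf.train.latest_checkpoint(model_dir)
print(last_model_path)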
!python /content/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path={pipeline_fname} \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
That gives me a frozen_inference_graph.pb file that I can use to build my object detection program with the OpenCV DNN module. Also, following this example https://stackoverflow.com/a/57055266/9914815 I prepared a .pbtxt text graph from the model and its pipeline config, to pass as the second argument to the cv2.dnn.readNetFromTensorflow function.
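(For reference, the .pbtxt was generated roughly like this, using OpenCV's tf_text_graph_ssd.py script from samples/dnn; the file names below are placeholders for my own paths:)
python tf_text_graph_ssd.py \
    --input frozen_inference_graph.pb \
    --config pipeline.config \
    --output output.pbtxt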
Here is just enough code to reproduce the error I'm getting:
import cv2

model = cv2.dnn.readNetFromTensorflow('models/trained/frozen_inference_graph.pb',
                                      'models/trained/output.pbtxt')
This code works when I use the pretrained SSD MobileNet V2 COCO model with its ssd_mobilenet_v2_coco_2018_03_29.pbtxt, but with the .pbtxt generated for my own trained model it throws this error:
C:\Users\Satria\Desktop\ExploreOpencvDnn-master>python trainedmodel_video.py -i test1.mp4 -o test1result.mp4
Traceback (most recent call last):
  File "trainedmodel_video.py", line 48, in <module>
    'models/trained/output.pbtxt')
cv2.error: OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:544: error: (-2:Unspecified error) Input layer not found: FeatureExtractor/MobilenetV2/Conv/weights in function 'cv::dnn::dnn4_v20190621::`anonymous-namespace'::TFImporter::connect'
It says that an input layer is not found. Why does this happen? Also, notice that the error message points to a source path:
C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp
which is very strange, because that directory does not exist anywhere on my computer. I diff-checked my .pbtxt and config files against the sample SSD MobileNet ones and could not find that path used anywhere; neither file contains any directory paths at all.
Is this caused by training in Google Colab? Is there a correct way to use Colab-trained TensorFlow models in OpenCV DNN?
Thanks in advance!
Solved after adding the missing input nodes to my own generated .pbtxt file.
Someone suggested that OpenCV 4.1.1, which I was using, is outdated. I updated to 4.3.0; it still didn't work, but the newer version does let me use FusedBatchNormV3, which will be very important later on.
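(A quick sanity check that Python is actually importing the upgraded build:)
import cv2
print(cv2.__version__)  # should print 4.3.0 or newer after the upgrade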
Now, after taking a closer look at the diff between the sample and the generated .pbtxt: in the sample .pbtxt file ssd_mobilenet_v2_coco_2018_03_29.pbtxt, from line 30 onward:
node {
  name: "Preprocessor/mul"
  op: "Mul"
  input: "image_tensor"
  input: "Preprocessor/mul/x"
}
node {
  name: "Preprocessor/sub"
  op: "Sub"
  input: "Preprocessor/mul"
  input: "Preprocessor/sub/y"
}
node {
  name: "FeatureExtractor/MobilenetV2/Conv/Conv2D"
  op: "Conv2D"
  input: "Preprocessor/sub"
  input: "FeatureExtractor/MobilenetV2/Conv/weights"
It has additional input nodes that go through the Preprocessor, not just FeatureExtractor/MobilenetV2/Conv/Conv2D; meanwhile, the generated .pbtxt only has this:
node {
  name: "FeatureExtractor/MobilenetV2/Conv/Conv2D"
  op: "Conv2D"
  input: "FeatureExtractor/MobilenetV2/Conv/weights"
I copied those Preprocessor input nodes from the sample .pbtxt into my own generated .pbtxt, and it worked!
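In other words, the relevant part of the fixed generated .pbtxt should end up looking like the sample above (node bodies trimmed the same way), with the two Preprocessor nodes in place and the first Conv2D connected to Preprocessor/sub:
node {
  name: "Preprocessor/mul"
  op: "Mul"
  input: "image_tensor"
  input: "Preprocessor/mul/x"
}
node {
  name: "Preprocessor/sub"
  op: "Sub"
  input: "Preprocessor/mul"
  input: "Preprocessor/sub/y"
}
node {
  name: "FeatureExtractor/MobilenetV2/Conv/Conv2D"
  op: "Conv2D"
  input: "Preprocessor/sub"
  input: "FeatureExtractor/MobilenetV2/Conv/weights"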