I'm developing a Qt app that lets the user choose a DICOM file and then shows the inference result. I use dcmread
to read the DICOM file as a stack of image slices;
for example, a single DICOM file can be converted to 60 JPG images.
This is how I read a DICOM file and run inference on it:
from pydicom import dcmread
import openvino.runtime as ov

dcm_file = "1037973"
ds = dcmread(dcm_file, force=True)
ds.PixelRepresentation = 0
ds_arr = ds.pixel_array
core = ov.Core()
model = core.read_model(model="frozen_darknet_yolov4_model.xml")
model.reshape([ds_arr.shape[0], ds_arr.shape[1], ds_arr.shape[2], 3])
compiled_model = core.compile_model(model, "CPU")
infer_request = compiled_model.create_infer_request()
input_tensor = ov.Tensor(array=ds_arr, shared_memory=True)
infer_request.set_input_tensor(input_tensor)
infer_request.start_async()
infer_request.wait()
output = infer_request.get_output_tensor()
print(output)
I use model.reshape
to make my YOLOv4 model fit the batch, height, and width of my input file.
But the error below makes it seem like my batch can't be more than 1:
Traceback (most recent call last):
File "C:\Users\john0\Desktop\hf_inference_tool\controller.py", line 90, in show_inference_result
yolov4_inference_engine(gv.gInImgPath)
File "C:\Users\john0\Desktop\hf_inference_tool\inference.py", line 117, in yolov4_inference_engine
output = infer_request.get_output_tensor()
RuntimeError: get_output_tensor() must be called on a function with exactly one parameter.
How can I use dynamic input in API 2.0 correctly?
My environment is Windows 11 with openvino_2022.1.0.643.
The ov::InferRequest::get_output_tensor method can only be called without arguments when the model has exactly one output.
Since your YOLOv4 model has three outputs, call get_output_tensor with an index argument instead:
output_tensor1 = infer_request.get_output_tensor(0)
output_tensor2 = infer_request.get_output_tensor(1)
output_tensor3 = infer_request.get_output_tensor(2)
print(output_tensor1)
print(output_tensor2)
print(output_tensor3)
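The three per-index calls can also be wrapped in a small helper so the same code works for any output count. This is only a sketch; `collect_outputs` is a hypothetical name, and it assumes the `infer_request` and `compiled_model` objects created in the question:

```python
import numpy as np

def collect_outputs(infer_request, num_outputs):
    """Gather every output of a multi-output model as numpy arrays.

    get_output_tensor() without an index only works when the model has
    exactly one output; with several outputs the index is mandatory.
    """
    return [np.asarray(infer_request.get_output_tensor(i).data)
            for i in range(num_outputs)]

# Usage (with the objects from the question):
# yolo_outputs = collect_outputs(infer_request, len(compiled_model.outputs))
```

Separately, regarding the dynamic-input part of your question: instead of calling model.reshape with a fixed slice count for each file, you can leave the batch dimension dynamic, e.g. `model.reshape([-1, height, width, 3])`; OpenVINO 2022.1 supports dynamic shapes on the CPU plugin.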