python, pytorch, onnx, fast-ai, onnxruntime

Using a pre-trained exported PyTorch resnet18 model with ONNX


I'm fairly new to deep learning and I've managed to train a resnet18 model with FastAI for multilabel prediction.

learn = cnn_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.2))
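
(For context, the DataLoaders dls were built roughly along these lines; the CSV, column names, and paths below are placeholders for my actual data:)

from fastai.vision.all import *
import pandas as pd

# Placeholder multi-label setup: each row has an image file name and a
# space-delimited string of labels
df = pd.read_csv("labels.csv")

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),
    get_x=ColReader("fname", pref="images/"),
    get_y=ColReader("labels", label_delim=" "),
    item_tfms=Resize(256),
    batch_tfms=aug_transforms(size=224),
)
dls = dblock.dataloaders(df)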

Next, I saved the trained PyTorch model:

torch.save(learn.model, "resnet18_5_epochs.pth")

And then I converted it to ONNX:

import torch

model_path = "resnet18_5_epochs.pth"

# Load the saved model and switch it to eval mode before exporting
model = torch.load(model_path)
model.eval()

# Dummy input with the expected (batch, channels, height, width) shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18_5_epochs.onnx", export_params=True)
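
(As a sanity check, I understand the export can also be given explicit input/output names and verified with the onnx checker; a rough sketch, where "input" and "output" are simply names I picked:)

import onnx

# Re-export with explicit tensor names instead of the autogenerated "input.1"
torch.onnx.export(
    model, dummy_input, "resnet18_5_epochs.onnx",
    export_params=True,
    input_names=["input"], output_names=["output"],
)

# Verify that the exported graph is structurally valid
onnx_model = onnx.load("resnet18_5_epochs.onnx")
onnx.checker.check_model(onnx_model)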

Then I queried the ONNX model:

import onnxruntime as ort

ort_sess = ort.InferenceSession("resnet18_5_epochs.onnx", providers=['CUDAExecutionProvider'])

# transform image to tensor
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(
       mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

from PIL import Image

img = Image.open("12.jpg")
x = transform(img)
x = x.unsqueeze(0)  # add batch dimension

# run model
outputs = ort_sess.run(None, {'input.1': x.numpy()})
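
(I hardcoded 'input.1' after inspecting the graph; I gather the input name can also be looked up from the session itself, roughly like this:)

# Look up the graph's actual input name instead of hardcoding it
input_name = ort_sess.get_inputs()[0].name
outputs = ort_sess.run(None, {input_name: x.numpy()})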

I am stuck on interpreting the output of the model. I've tried using a softmax function, but I got the wrong classes. For example, the top class is wrong:

import numpy as np

top = np.argmax(outputs)
print(categories[top])

I have no clue what the cause of my problem is or why the ONNX model outputs the wrong predictions. The predictions are correct when I query the model with FastAI.

I've used the following code to export the output categories:

categories = dls.vocab
with open("categories.txt", "w") as f:
    for category in categories:
        f.write(category + "\n")
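
In the inference script I read them back in the same order, along these lines:

# Restore the vocab in the same order it was written
with open("categories.txt") as f:
    categories = [line.strip() for line in f]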

Thank you!


Solution

  • I'm not sure if you have to use ONNX, but my suggestion is to first get the correct results in PyTorch and port the model to ONNX after that. By following https://pytorch.org/hub/pytorch_vision_resnet, you can do something like

    # Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
    print(output[0])
    # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
    probabilities = torch.nn.functional.softmax(output[0], dim=0)
    print(probabilities)
    

    And then, when you port to ONNX, you can compare every intermediate result with PyTorch, which makes it easy to debug which step is wrong.
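
    For example, a quick end-to-end sanity check (just a sketch; adjust the file names and shapes to your export) is to feed the same dummy input to both the PyTorch model and the ONNX session and confirm the raw outputs agree:

    import numpy as np
    import torch
    import onnxruntime as ort

    # Load the saved PyTorch model on CPU and switch to eval mode
    model = torch.load("resnet18_5_epochs.pth", map_location="cpu")
    model.eval()

    dummy = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        torch_out = model(dummy).numpy()

    sess = ort.InferenceSession("resnet18_5_epochs.onnx")
    onnx_out = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})[0]

    # If the export is faithful, the raw (pre-softmax) scores should match closely
    np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)

    If those match, the remaining discrepancy is in the preprocessing or in how the scores are interpreted, not in the export itself.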