When I run

torch.onnx.export(
    model,
    (im, im),
    'wow.onnx',
    verbose=True,
    opset_version=opset,
    input_names=['t0', 't1'],
    output_names=['output'],
)

on my model, I get:
Unsupported: ONNX export of operator Unfold, input size not accessible. Please feel free to request support or submit a pull request on PyTorch GitHub.
Any idea what the Unfold operation
(https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html) could be replaced with so that the export succeeds?
I faced the same issue and struggled with it for a long time. You have to go into the model you are trying to convert to ONNX and replace the portion that uses the Unfold operator with a different implementation.
For example, in my case, here is the original code that I had:
# do patching
if self.padding_patch == 'end':
    z = self.padding_patch_layer(z)
z = z.unfold(dimension=-1, size=self.patch_len, step=self.stride)  # z: [bs x nvars x patch_num x patch_len]
z = z.permute(0, 1, 3, 2)  # z: [bs x nvars x patch_len x patch_num]
Here is what I replaced it with:
# do patching manually
if self.padding_patch == 'end':
    z = self.padding_patch_layer(z)
# Manually unfold the input tensor along the last (sequence) dimension
batch_size, n_vars, seq_len = z.size()
patches = []
for i in range(0, seq_len - self.patch_len + 1, self.stride):
    patches.append(z[:, :, i:i + self.patch_len])
z = torch.stack(patches, dim=-1)  # z: [bs x nvars x patch_len x patch_num]
Doing that made the ONNX export succeed. Note that the second implementation may be a little slower because of the Python for loop, but it definitely works.
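If the Python loop bothers you, a loop-free alternative is to precompute a [patch_len x patch_num] index grid and use advanced indexing, which PyTorch can export with standard ONNX indexing ops. This is a standalone sketch (concrete shapes, local `patch_len`/`stride` variables standing in for `self.patch_len`/`self.stride`) that checks it matches the original unfold-based code:

```python
import torch

bs, nvars, seq_len = 2, 3, 16
patch_len, stride = 4, 2
z = torch.randn(bs, nvars, seq_len)

# Reference: the original unfold-based patching
ref = z.unfold(dimension=-1, size=patch_len, step=stride).permute(0, 1, 3, 2)

# Loop-free alternative: idx[i, j] = i + j * stride addresses element i of patch j
patch_num = (seq_len - patch_len) // stride + 1
idx = torch.arange(patch_len).unsqueeze(1) + stride * torch.arange(patch_num).unsqueeze(0)
out = z[:, :, idx]  # z: [bs x nvars x patch_len x patch_num]

assert torch.equal(out, ref)
```

The index tensor depends only on `seq_len`, `patch_len`, and `stride`, so for a fixed input length it can be built once and reused.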