I cloned the transfer-learning-library repo and am working on maximum classifier discrepancy (MCD). I am trying to change the data augmentation, but I am getting the following error:
Traceback (most recent call last):
  File "mcd.py", line 378, in <module>
    main(args)
  File "mcd.py", line 145, in main
    results = validate(val_loader, G, F1, F2, args)
  File "mcd.py", line 290, in validate
    for i, (images, target) in enumerate(val_loader):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "../../../common/vision/datasets/imagelist.py", line 48, in __getitem__
    img = self.transform(img)
  File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 750, in forward
    return F.perspective(img, startpoints, endpoints, self.interpolation, fill)
  File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py", line 647, in perspective
    return F_pil.perspective(img, coeffs, interpolation=pil_interpolation, fill=fill)
  File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional_pil.py", line 289, in perspective
    return img.transform(img.size, Image.PERSPECTIVE, perspective_coeffs, interpolation, **opts)
  File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2371, in transform
    im = new(self.mode, size, fillcolor)
  File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2578, in new
    return im._new(core.fill(mode, size, color))
TypeError: integer argument expected, got float
The original code was:
# Data loading code
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
if args.center_crop:
    train_transform = T.Compose([
        ResizeImage(256),
        T.CenterCrop(224),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        normalize
    ])
else:
    train_transform = T.Compose([
        ResizeImage(256),
        T.RandomResizedCrop(224),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        normalize
    ])
val_transform = T.Compose([
    ResizeImage(256),
    T.CenterCrop(224),
    T.ToTensor(),
    normalize
])
I just added T.RandomPerspective(distortion_scale=0.8, p=0.5, fill=0.6) to val_transform, so it now looks roughly like this (I placed the new transform before ToTensor):
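val_transform = T.Compose([
    ResizeImage(256),
    T.CenterCrop(224),
    T.RandomPerspective(distortion_scale=0.8, p=0.5, fill=0.6),  # newly added transform
    T.ToTensor(),
    normalize
])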
Before this, I also added a few other transforms to train_transform, but I still got the same error. What could be the problem?
The fill argument needs to be an integer. This transform does not support the fill parameter for Tensor inputs, so if you want to use fill you have to apply the transform before ToTensor. At that point the image is still a PIL image whose pixel values are integers in the 0–255 range, which is why passing a float such as fill=0.6 raises "integer argument expected, got float".