python · python-3.x · neural-network · pytorch · robustness

Failed to generate adversarial examples using trained NSGA-Net PyTorch models


I have used NSGA-Net neural architecture search to generate and train several architectures. I am trying to generate PGD adversarial examples using my trained PyTorch models. I tried using both Adversarial Robustness Toolbox 1.3 (ART) and torchattacks 2.4 but I get the same error.

These few lines of code capture the main functionality of my script and what I am trying to achieve:

# net is my trained NSGA-Net PyTorch model

# Defining PGD attack

pgd_attack = PGD(net, eps=4 / 255, alpha=2 / 255, steps=3)

# Creating adversarial examples using validation data and the defined PGD attack

for images, labels in valid_data:
    images = pgd_attack(images, labels).cuda()
    outputs = net(images)


So here is what the error generally looks like:

Traceback (most recent call last):
  File "torch-attacks.py", line 296, in <module>
    main()
  File "torch-attacks.py", line 254, in main
    images = pgd_attack(images, labels).cuda()
  File "\Anaconda3\envs\GPU\lib\site-packages\torchattacks\attack.py", line 114, in __call__
    images = self.forward(*input, **kwargs)
  File "\Anaconda3\envs\GPU\lib\site-packages\torchattacks\attacks\pgd.py", line 57, in forward
    outputs = self.model(adv_images)
  File "\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "\codes\NSGA\nsga-net\models\macro_models.py", line 79, in forward
    x = self.gap(self.model(x))
  File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
    input = module(input)
  File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "\codes\NSGA\nsga-net\models\macro_decoder.py", line 978, in forward
    x = self.first_conv(x)
  File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #2 'weight' in call to _thnn_conv2d_forward

I have used the same code with a simple PyTorch model and it worked, but since the architectures were generated by NSGA-Net, I did not design the model myself. I also tried calling .float() on both the model and the inputs and still got the same error.

Keep in mind that I only have access to the following files:


Solution

  • Convert the images to the type the model expects (images.float() in your case) before passing them to the attack. Do not convert the labels to any floating type; they must remain int or long tensors.
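
A minimal sketch of the fix, using a dummy conv net and a random batch in place of the trained NSGA-Net model and validation loader from the question (net, valid_data, and the PGD attack object are stand-ins, not the actual NSGA-Net code):

```python
import torch
import torch.nn as nn

# Dummy stand-in for the trained NSGA-Net model; its weights are float32.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

# Images loaded as float64 (e.g. from a NumPy array) trigger the
# "Expected Float but got Double" error when fed to float32 weights.
images = torch.rand(4, 3, 32, 32, dtype=torch.float64)
labels = torch.randint(0, 10, (4,))  # labels stay integer (long), no .float()

# The fix: cast the images to float32 before the attack / forward pass.
images = images.float()
outputs = net(images)
print(images.dtype, labels.dtype, tuple(outputs.shape))
```

In the original loop this means calling `pgd_attack(images.float(), labels)`; if the model runs on the GPU, move the images there as well (`images.float().cuda()`) before the attack rather than after it.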