Tags: python, pytorch, getattr

Interesting bug caused by getattr


I'm trying to train 8 CNN models with the same structure simultaneously. After training one model on a batch, I need to synchronize the weights of the feature extraction layers in the other 7 models.

This is the model:

import torch.nn as nn
from torchvision import models

class GNet(nn.Module):
    def __init__(self, dim_output, dropout=0.5):
        super(GNet, self).__init__()
        self.out_dim = dim_output
        # Load the pretrained AlexNet model
        alexnet = models.alexnet(pretrained=True)

        self.pre_filtering = nn.Sequential(
            alexnet.features[:4]
        )

        # Set requires_grad to False for all parameters in the pre_filtering network
        for param in self.pre_filtering.parameters():
            param.requires_grad = False

        # construct the feature extractor
        # every intermediate feature will be fed to the feature extractor

        # res: 25 x 25
        self.feat_ex1 = nn.Conv2d(192, 128, kernel_size=3, stride=1)

        # res: 25 x 25
        self.feat_ex2 = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Dropout(p=dropout),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        )

        # res: 25 x 25
        self.feat_ex3 = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Dropout(p=dropout),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        )

        # res: 13 x 13
        self.feat_ex4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.Dropout(p=dropout),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        )

        # res: 13 x 13
        self.feat_ex5 = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Dropout(p=dropout),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        )

        # res: 13 x 13
        self.feat_ex6 = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Dropout(p=dropout),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        )

        # res: 13 x 13
        self.feat_ex7 = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Dropout(p=dropout),
            nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1)
        )

        # define the flexible pooling field of each layer
        # use a full convolution layer here to perform flexible pooling
        self.fpf13 = nn.Conv2d(in_channels=448, out_channels=448, kernel_size=13, groups=448)
        self.fpf25 = nn.Conv2d(in_channels=384, out_channels=384, kernel_size=25, groups=384)
        self.linears = {}
        for i in range(self.out_dim):
            self.linears[f'linear_{i+1}'] = nn.Linear(832, 1)

        self.LogTanh = LogTanh()
        self.flatten = nn.Flatten()

And this is the function to synchronize the weights:

def sync_weights(models, current_sub, sync_seqs):
    for sub in range(1, 9):
        if sub != current_sub:
            # Synchronize the specified layers
            with torch.no_grad():
                for seq_name in sync_seqs:
                    reference_layer = getattr(models[current_sub], seq_name)[2]
                    layer = getattr(models[sub], seq_name)[2]
                    layer.weight.data = reference_layer.weight.data.clone()
                    if layer.bias is not None:
                        layer.bias.data = reference_layer.bias.data.clone()

Then an error is raised:

'Conv2d' object is not iterable

which means getattr() returns a Conv2d object. But if I remove the [2]:

def sync_weights(models, current_sub, sync_seqs):
    for sub in range(1, 9):
        if sub != current_sub:
            # Synchronize the specified layers
            with torch.no_grad():
                for seq_name in sync_seqs:
                    reference_layer = getattr(models[current_sub], seq_name)
                    layer = getattr(models[sub], seq_name)
                    layer.weight.data = reference_layer.weight.data.clone()
                    if layer.bias is not None:
                        layer.bias.data = reference_layer.bias.data.clone()

I get another error:

'Sequential' object has no attribute 'weight'

which means getattr() returns a Sequential. But previously it returned a Conv2d object. Does anyone know what is going on here? For your information, the sync_seqs parameter passed to sync_weights is:

sync_seqs = [
    'feat_ex1',
    'feat_ex2',
    'feat_ex3',
    'feat_ex4',
    'feat_ex5',
    'feat_ex6',
    'feat_ex7'
]

Solution

  • In both instances, getattr is returning a Sequential, which in turn contains a bunch of objects. In the second case, you're directly assigning that Sequential to a variable, so reference_layer ends up containing a Sequential.

    In the first case, however, you're not doing that direct assignment. You're taking the Sequential object and then indexing it with [2]. That means reference_layer contains the third item in the Sequential, which is a Conv2d object.
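
    To see this with PyTorch modules directly, here is a minimal sketch (not your exact model, just a standalone block with the same layout as feat_ex2: BatchNorm2d, Dropout, Conv2d):

    import torch.nn as nn

    block = nn.Sequential(
        nn.BatchNorm2d(128),
        nn.Dropout(p=0.5),
        nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
    )

    print(type(block))     # <class 'torch.nn.modules.container.Sequential'>
    print(type(block[2]))  # <class 'torch.nn.modules.conv.Conv2d'>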

    Take a simpler example. Suppose I had a ListContainer class that did nothing except hold a list. I could then recreate your example as follows, with test1 corresponding to your first case and test2 to your second:

    class ListContainer:
        def __init__(self, list_items):
            self.list_items = list_items
    
    letters = ["a", "b", "c"]
    container = ListContainer(letters)
    
    test1 = getattr(container, "list_items")[0]
    test2 = getattr(container, "list_items")
    
    print(type(test1)) # <class 'str'>
    print(type(test2)) # <class 'list'>
    

    In both tests, getattr itself is returning a list - but in the first, we're doing something with that list after we get it, so test1 ends up being a string instead.
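
    If the goal is just to make sync_weights run over every entry in sync_seqs, one option (only a sketch, staying close to your original function and assuming the Conv2d is always the layer you want to copy) is to branch on what getattr returns: feat_ex1 is a bare Conv2d, while the other blocks are Sequentials. Note that feat_ex4 also has a leading MaxPool2d, so a fixed index of [2] would not land on its Conv2d; since the convolution is the last module in each of these Sequentials, indexing with [-1] is safer here:

    import torch
    import torch.nn as nn

    def sync_weights(models, current_sub, sync_seqs):
        for sub in range(1, 9):
            if sub == current_sub:
                continue
            with torch.no_grad():
                for seq_name in sync_seqs:
                    ref = getattr(models[current_sub], seq_name)
                    tgt = getattr(models[sub], seq_name)
                    # feat_ex1 is a bare Conv2d; the other blocks are
                    # Sequentials whose last module is the Conv2d
                    if isinstance(ref, nn.Sequential):
                        ref, tgt = ref[-1], tgt[-1]
                    tgt.weight.data = ref.weight.data.clone()
                    if tgt.bias is not None:
                        tgt.bias.data = ref.bias.data.clone()

    Copying with tgt.weight.copy_(ref.weight) inside the no_grad block would also work; the version above just mirrors your original .data assignment.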