theano, lasagne, nolearn

Aggregate predictions with data augmentation in lasagne


I am working on the MNIST dataset and using data augmentation to train a neural network. I have a BatchIterator which randomly extracts a 24×24 sub-image from each picture and uses it as input for the NN.

As far as training is concerned, everything goes fine. But for prediction, I want to extract 5 sub-images from a given image and average the predictions, and I cannot get it to work:

Here's my BatchIterator:

import numpy as np
from nolearn.lasagne import BatchIterator


class CropIterator(BatchIterator):

    def __init__(self, batch_size, crop=4, testing=False):
        super(CropIterator, self).__init__(batch_size)
        self.testing = testing
        self.crop = crop


    def transform(self, Xb, yb):
        crop = self.crop
        batch_size, channels, width, height = Xb.shape
        if not self.testing:
            y_new = yb      
            X_new = np.zeros([batch_size, channels, width - crop, height - crop]).astype(np.float32)
            for i in range(batch_size):
                x = np.random.randint(0, crop+1)
                y = np.random.randint(0, crop+1)
                X_new[i] = Xb[i, :, x:x+width-crop, y:y+height-crop]
        else:
            X_new = np.zeros([5 * batch_size, channels, width - crop, height - crop]).astype(np.float32)
            y_new = np.zeros(5 * batch_size).astype(np.int32)
            for i in range(batch_size):
                for idx, position in enumerate([(0,0), (0, crop), (crop, 0), (crop, crop), (crop//2, crop//2)]):
                    # all extreme croppings + the middle one
                    x_idx = position[0]
                    y_idx = position[1]
                    X_new[5*i+idx, :] = Xb[i, :, x_idx:x_idx+width-crop, y_idx:y_idx+height-crop]
                    y_new[5*i+idx] = yb[i]
        return X_new, y_new

Fitting my net to the training data works, but when I do net.predict(X_test), I get an error because CropIterator.transform() is, I believe, called with yb equal to None.

Here's the full call stack:

/usr/local/lib/python2.7/site-packages/nolearn/lasagne/base.pyc in predict(self, X)
    526             return self.predict_proba(X)
    527         else:
--> 528             y_pred = np.argmax(self.predict_proba(X), axis=1)
    529             if self.use_label_encoder:
    530                 y_pred = self.enc_.inverse_transform(y_pred)

/usr/local/lib/python2.7/site-packages/nolearn/lasagne/base.pyc in predict_proba(self, X)
    518     def predict_proba(self, X):
    519         probas = []
--> 520         for Xb, yb in self.batch_iterator_test(X):
    521             probas.append(self.apply_batch_func(self.predict_iter_, Xb))
    522         return np.vstack(probas)

/usr/local/lib/python2.7/site-packages/nolearn/lasagne/base.pyc in __iter__(self)
     78             else:
     79                 yb = None
---> 80             yield self.transform(Xb, yb)
     81 
     82     @property

<ipython-input-56-59463a9f9924> in transform(self, Xb, yb)
     33                     y_idx = position[1]
     34                     X_new[5*i+idx, :] = Xb[i, :, x_idx:x_idx+width-crop, y_idx:y_idx+height-crop]
---> 35                     y_new[5*i+idx] = yb[i]
     36         return X_new, y_new
     37 

TypeError: 'NoneType' object has no attribute '__getitem__'

Any idea how to fix the testing part of CropIterator.transform()?


Solution

  • Looking at the code for nolearn.lasagne.BatchIterator and how it is used by the nolearn.lasagne.NeuralNet class, it looks like BatchIterator subclasses need to work when y is not provided, i.e. in prediction mode. Note the call at line 520, where X is provided but no value is given for y, so it defaults to None.

    Your CropIterator currently assumes that yb is always non-None. I don't know whether it makes sense to do anything useful when yb is not provided, but you could simply transform Xb and return None for y_new whenever yb is None, as sketched below.
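
    Here's a minimal sketch of what that could look like. The cropping logic is copied from the question; only the None handling in the testing branch is new, and it assumes that returning None for y_new is acceptable to the prediction loop shown in the traceback:

        import numpy as np
        from nolearn.lasagne import BatchIterator


        class CropIterator(BatchIterator):

            def __init__(self, batch_size, crop=4, testing=False):
                super(CropIterator, self).__init__(batch_size)
                self.testing = testing
                self.crop = crop

            def transform(self, Xb, yb):
                crop = self.crop
                batch_size, channels, width, height = Xb.shape
                if not self.testing:
                    # training: one random crop per image, labels unchanged
                    y_new = yb
                    X_new = np.zeros([batch_size, channels, width - crop, height - crop],
                                     dtype=np.float32)
                    for i in range(batch_size):
                        x = np.random.randint(0, crop + 1)
                        y = np.random.randint(0, crop + 1)
                        X_new[i] = Xb[i, :, x:x + width - crop, y:y + height - crop]
                else:
                    # prediction: the four corner crops plus the middle one
                    X_new = np.zeros([5 * batch_size, channels, width - crop, height - crop],
                                     dtype=np.float32)
                    # yb is None when called from predict()/predict_proba(),
                    # so only build the label array when labels are available
                    y_new = None if yb is None else np.zeros(5 * batch_size, dtype=np.int32)
                    positions = [(0, 0), (0, crop), (crop, 0), (crop, crop),
                                 (crop // 2, crop // 2)]
                    for i in range(batch_size):
                        for idx, (x_idx, y_idx) in enumerate(positions):
                            X_new[5 * i + idx, :] = Xb[i, :, x_idx:x_idx + width - crop,
                                                       y_idx:y_idx + height - crop]
                            if y_new is not None:
                                y_new[5 * i + idx] = yb[i]
                return X_new, y_new

    Note that net.predict() would then still return five labels per original image (one per crop), so the averaging has to be done outside the net via predict_proba. One way to do it, assuming (as with the iterator above) that the five crops of each image end up consecutive in the output:

        probas = net.predict_proba(X_test)                            # shape (5 * n_test, n_classes)
        probas = probas.reshape(-1, 5, probas.shape[1]).mean(axis=1)  # average the 5 crops per image
        y_pred = probas.argmax(axis=1)                                # one label per test image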