Tags: python, tensorflow, computer-vision, output

Neural segmentation network gives different output based on test batch size


I have implemented and trained a neural segmentation model on (224, 224) images. However, during testing, the model returns slightly different results depending on the size of the test batch.

The following images are the results obtained when testing my pre-trained model.

The first image is the prediction I get when I predict a single example (let's call it img0), so the input is [img0] and has shape (1, 224, 224).

The second image is the prediction I get for the same image when it is part of a batch with 7 other images, so the input is [img0, img1, ..., img7] and has shape (8, 224, 224).

The first output is closer to what I expected than the second one.

However, I don't understand why the outputs are different to begin with... Is this supposed to be normal behaviour? Thanks in advance.
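
For reference, here is a minimal toy reproduction of the comparison I am describing. The tiny Sequential model is a hypothetical stand-in for my real network (the layer sizes and the added channel axis are arbitrary), and training=True matches how I call the model:

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in for the real segmentation network; the layer
    # sizes are arbitrary, but like the real model it contains a
    # BatchNormalization layer.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(4, 3, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
    ])

    imgs = np.random.rand(8, 224, 224, 1).astype("float32")

    # img0 predicted alone vs. inside a batch of 8.
    pred_single = model(imgs[:1], training=True)  # input shape (1, 224, 224, 1)
    pred_batch = model(imgs, training=True)       # input shape (8, 224, 224, 1)

    # The two predictions for img0 do not match:
    print(np.max(np.abs(pred_single[0].numpy() - pred_batch[0].numpy())))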


Solution

  • This behavior was coming from the batch normalization layers in my model: I was using training=True in my calls to the model.

    As a result, batch normalization normalized each batch using that batch's own mean and variance, and those statistics change with the batch size and its contents (the sketch after this answer demonstrates this).

    Therefore, this is normal behavior!
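
To make the effect concrete, here is a minimal sketch using a single standalone BatchNormalization layer (the input shapes are arbitrary). With training=True the layer normalizes with the statistics of the current batch, whereas with training=False it uses its stored moving mean and variance, so the output for a given sample no longer depends on the rest of the batch:

    import numpy as np
    import tensorflow as tf

    bn = tf.keras.layers.BatchNormalization()
    x = np.random.rand(8, 4).astype("float32")

    # training=True: each batch is normalized with its *own* mean and
    # variance, so the output for x[0] depends on what else is in the batch.
    out_single = bn(x[:1], training=True)
    out_batch = bn(x, training=True)
    print(np.max(np.abs(out_single[0].numpy() - out_batch[0].numpy())))  # > 0

    # training=False: the layer uses its stored moving mean/variance
    # instead, so the output for x[0] is the same for any batch size.
    out_single = bn(x[:1], training=False)
    out_batch = bn(x, training=False)
    print(np.max(np.abs(out_single[0].numpy() - out_batch[0].numpy())))  # ~0.0

So if batch-size-independent predictions are needed at test time, calling the model with training=False (or using model.predict, which does this for you) avoids the issue.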