python, tensorflow, deep-learning

What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?


In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pooling.

According to A guide to convolution arithmetic for deep learning, there is no padding in the pooling operator, i.e. it only uses what TensorFlow calls 'VALID'. But what is 'SAME' padding for max pooling in TensorFlow?
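
To make my understanding concrete, here is a small sketch of the output-size formulas from the TensorFlow docs (out = ceil((in - k + 1) / stride) for 'VALID', out = ceil(in / stride) for 'SAME'); the helper name out_size is just mine:

    import math

    def out_size(in_size, k, stride, padding):
        """Spatial output size of a pool/conv per TensorFlow's documented formulas."""
        if padding == 'VALID':
            return math.ceil((in_size - k + 1) / stride)
        return math.ceil(in_size / stride)  # 'SAME'

    # e.g. a width-3 input with a 2x2 kernel and stride 2:
    print(out_size(3, 2, 2, 'VALID'))  # 1
    print(out_size(3, 2, 2, 'SAME'))   # 2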


Solution

  • I'll give an example to make it clearer:

    x here is a [2, 3] input image with a single channel, and both pools use a 2x2 kernel with stride 2. The output shapes are: [1, 1] for 'VALID', since there is no padding, and [1, 2] for 'SAME', where the input is first padded out to [2, 4] (conceptually with -inf, so the padding never affects the max):


    import tensorflow as tf

    x = tf.constant([[1., 2., 3.],
                     [4., 5., 6.]])

    x = tf.reshape(x, [1, 2, 3, 1])  # [batch, height, width, channels], the shape tf.nn.max_pool expects

    # 2x2 window, stride 2 in both spatial dimensions ([1, 2, 2, 1] = [batch, height, width, channels])
    valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
    same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')

    valid_pad.get_shape() == [1, 1, 1, 1]  # valid_pad is [5.]
    same_pad.get_shape() == [1, 1, 2, 1]   # same_pad is  [5., 6.]
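
    If you are on TensorFlow 2.x with eager execution, a minimal equivalent sketch (using tf.nn.max_pool2d, where a scalar ksize/strides applies to both spatial dimensions) prints the actual values rather than just the shapes:

    import tensorflow as tf

    x = tf.reshape(tf.constant([[1., 2., 3.],
                                [4., 5., 6.]]), [1, 2, 3, 1])

    # 2x2 window, stride 2 in height and width
    valid_pad = tf.nn.max_pool2d(x, ksize=2, strides=2, padding='VALID')
    same_pad = tf.nn.max_pool2d(x, ksize=2, strides=2, padding='SAME')

    # 'SAME' conceptually pads the width-3 input to width 4 before pooling,
    # which is why the 6. survives in the second window
    print(tf.squeeze(valid_pad).numpy())  # 5.0
    print(tf.squeeze(same_pad).numpy())   # [5. 6.]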