tensorflow deep-learning keras-layer tensorflow-layers

Keras custom layer not returning weights, unlike normal layer


I am trying to get the weights of a layer. This works as expected when a plain Keras layer is used and an input is connected to it. However, when I wrap that layer inside my custom layer, get_weights() no longer returns anything. Is this a bug, or what am I missing?

Edit: considerations:

I read that one can define the trainable variables of a custom layer in its build() method. However, since my custom layer consists of the Keras layer Dense (and potentially more Keras layers later), those layers should already have their own trainable variables and weight/bias initializers. (I do not see a way to overwrite them, in __init__() of TestLayer, with variables defined in build() of TestLayer.)

from tensorflow.keras import layers

class TestLayer(layers.Layer):
    def __init__(self):
        super(TestLayer, self).__init__()
        self.test_nn = layers.Dense(3)

    def build(self, input_shape):
        super(TestLayer, self).build(input_shape)

    def call(self, inputs, **kwargs):
        test_out = self.test_nn(inputs)  # pass the layer's input through the wrapped Dense layer
        return test_out


test_in = layers.Input((2,))
test_nn = layers.Dense(3)
print(test_nn.get_weights()) # empty, since no connection to the layer
test_out = test_nn(test_in)
print(test_nn.get_weights()) # layer returns weights+biases

testLayer = TestLayer()
features = testLayer(test_in)
print(testLayer.get_weights()) # Problem: still empty, even though connected to input.

Solution

  • The documentation says that the build() method should include calls to add_weight(), which yours does not:

    Should have the calls to add_weight(), and then call the super's build()

    You also don't need to define a dense layer inside your class if you are subclassing layers.Layer. This is how you should subclass it:

    import tensorflow as tf
    from tensorflow.keras import layers
    
    class TestLayer(layers.Layer):
        def __init__(self, outshape=3):
            super(TestLayer, self).__init__()
            self.outshape = outshape
    
        def build(self, input_shape):
            # create the layer's trainable variable; add_weight() registers it with the layer
            self.kernel = self.add_weight(name='kernel',
                                          shape=(int(input_shape[1]), self.outshape),
                                          trainable=True)
            super(TestLayer, self).build(input_shape)

        def call(self, inputs, **kwargs):
            # apply the learned linear transform
            return tf.matmul(inputs, self.kernel)
    
    test_in = layers.Input((2,))
    
    testLayer = TestLayer()
    features = testLayer(test_in)
    print(testLayer.get_weights())
    #[array([[-0.68516827, -0.01990592,  0.88364804],
    #       [-0.459718  ,  0.19161093,  0.39982545]], dtype=float32)]
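
    Note that add_weight() registers the created variable with the layer, which is why get_weights() now returns it (and why it will be updated during training).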
    

    Here are some more examples of subclassing the Layer class.
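
    For instance, here is a minimal sketch of a Dense-like layer that also learns a bias term (the class name BiasedLinear and its parameters are illustrative, not taken from those examples):

    import tensorflow as tf
    from tensorflow.keras import layers

    class BiasedLinear(layers.Layer):
        def __init__(self, units=3):
            super(BiasedLinear, self).__init__()
            self.units = units

        def build(self, input_shape):
            # both variables are created via add_weight(), so the layer tracks them
            self.kernel = self.add_weight(name='kernel',
                                          shape=(int(input_shape[1]), self.units),
                                          trainable=True)
            self.bias = self.add_weight(name='bias',
                                        shape=(self.units,),
                                        initializer='zeros',
                                        trainable=True)
            super(BiasedLinear, self).build(input_shape)

        def call(self, inputs, **kwargs):
            return tf.matmul(inputs, self.kernel) + self.bias

    test_in = layers.Input((2,))
    biased = BiasedLinear()
    features = biased(test_in)
    print(biased.get_weights())  # kernel of shape (2, 3), then bias of shape (3,)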

    However, if you insist on implementing it your way and want to use get_weights(), you have to override it (in that case you could just as well create a class without subclassing):

    import tensorflow as tf
    from tensorflow.keras import layers
    
    class TestLayer(layers.Layer):
        def __init__(self, outshape=3):
            super(TestLayer, self).__init__()
            self.test_nn = layers.Dense(outshape)
            self.outshape = outshape
    
        def build(self, input_shape):
            super(TestLayer, self).build(input_shape)
    
        def call(self, inputs, **kwargs):
            return self.test_nn(inputs)
    
        def get_weights(self):
            # TF 1.x graph mode: run the variable initializers in a temporary
            # session, then fetch their values (note this returns freshly
            # initialized values on every call)
            with tf.Session() as sess:
                sess.run([x.initializer for x in self.test_nn.trainable_variables])
                return sess.run(self.test_nn.trainable_variables)
    
    test_in = layers.Input((2,))
    
    testLayer = TestLayer()
    features = testLayer(test_in)
    print(testLayer.get_weights())
    #[array([[ 0.5692867 ,  0.726858  ,  0.37790012],
    #       [ 0.2897135 , -0.7677493 , -0.58776844]], dtype=float32),
    # array([0., 0., 0.], dtype=float32)]
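
    As a side note (beyond the original answer): in more recent tf.keras versions (TF 2.x), sublayers assigned as attributes in __init__() are tracked automatically, so a wrapper like the one in the question returns the nested Dense layer's weights once the layer has been built, without overriding get_weights(). A minimal sketch, assuming eager-mode TF 2.x:

    import tensorflow as tf
    from tensorflow.keras import layers

    class TestLayer(layers.Layer):
        def __init__(self, outshape=3):
            super(TestLayer, self).__init__()
            # the nested Dense layer is tracked automatically as a sublayer
            self.test_nn = layers.Dense(outshape)

        def call(self, inputs, **kwargs):
            return self.test_nn(inputs)

    testLayer = TestLayer()
    _ = testLayer(tf.zeros((1, 2)))  # calling the layer builds the nested Dense layer
    print(testLayer.get_weights())   # kernel and bias of the nested Dense layer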