Tags: python, tensorflow, keras, keras-rl

keras rl - dqn model update


I am reading through the DQN implementation in keras-rl (rl/agents/dqn.py) and see that in the compile() step essentially three Keras models are instantiated: model, target_model, and trainable_model.

The only model on which train_on_batch() is ever called is trainable_model; however - and this is what I don't understand - this also updates the weights of model.

In the definition of trainable_model one of the output tensors y_pred is referencing the output from model:

        y_pred = self.model.output
        y_true = Input(name='y_true', shape=(self.nb_actions,))
        mask = Input(name='mask', shape=(self.nb_actions,))
        loss_out = Lambda(clipped_masked_error, output_shape=(1,), name='loss')([y_true, y_pred, mask])
        ins = [self.model.input] if type(self.model.input) is not list else self.model.input
        trainable_model = Model(inputs=ins + [y_true, mask], outputs=[loss_out, y_pred])

When trainable_model.train_on_batch() is called, the weights of BOTH trainable_model and model change. This surprises me: even though the two models reference the same output tensor object (trainable_model's y_pred is model.output), shouldn't instantiating trainable_model = Model(...) also instantiate a new set of weights?

Thanks for the help!


Solution

  • This small example shows that when you instantiate a new keras.models.Model() from the input and output tensors of another model, the weights of the two models are shared - they do not get re-initialized.

    # keras version: 2.2.4
    import numpy as np
    
    from keras.models import Sequential, Model
    from keras.layers import Dense, Input
    from keras.optimizers import SGD
    
    np.random.seed(123)
    
    # model1: a single linear unit with known initial weights
    model1 = Sequential()
    model1.add(Dense(1, input_dim=1, activation="linear", name="model1_dense1",
                     weights=[np.array([[10.]]), np.array([10.])]))
    model1.compile(optimizer=SGD(), loss="mse")
    
    # model2 is built from model1's input/output tensors, so it reuses
    # model1's layers (and therefore model1's weights)
    model2 = Model(inputs=model1.input, outputs=model1.output)
    model2.compile(optimizer=SGD(), loss="mse")
    
    x = np.random.normal(size=2000)
    y = 2 * x + np.random.normal(size=2000)
    
    print("model 1 weights", model1.get_weights())
    print("model 2 weights", model2.get_weights())
    
    # training model2 also changes model1's weights
    model2.fit(x, y, epochs=3, batch_size=32)
    print("model 1 weights", model1.get_weights())
    print("model 2 weights", model2.get_weights())
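    The reason the weights are shared is that building a Model from another model's input/output tensors reuses the same Layer objects, so there is only one set of weight variables. A minimal sketch (using the functional API and a hypothetical layer name `shared_dense`) makes this visible:

    ```python
    from keras.layers import Input, Dense
    from keras.models import Model

    inp = Input(shape=(1,))
    dense = Dense(1, name="shared_dense")
    model1 = Model(inputs=inp, outputs=dense(inp))

    # Rebuilding a Model from model1's tensors picks up the same layer objects
    model2 = Model(inputs=model1.input, outputs=model1.output)

    # Identical Layer instance in both models, hence one set of weights
    print(model1.get_layer("shared_dense") is model2.get_layer("shared_dense"))  # True
    print(model1.weights[0] is model2.weights[0])  # True: same kernel variable
    ```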
    

    Definitely something to keep in mind - it wasn't intuitive to me.
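
    If you actually want an independent copy (as keras-rl does for target_model), keras.models.clone_model builds fresh layers whose weights are not shared. A sketch under that assumption:

    ```python
    import numpy as np
    from keras.layers import Input, Dense
    from keras.models import Model, clone_model

    inp = Input(shape=(1,))
    out = Dense(1, activation="linear")(inp)
    model1 = Model(inputs=inp, outputs=out)
    model1.set_weights([np.array([[10.0]]), np.array([10.0])])

    # clone_model creates new layers with freshly initialized weights;
    # the weight values must be copied over explicitly
    model2 = clone_model(model1)
    model2.set_weights(model1.get_weights())

    # updating model2 no longer touches model1
    model2.set_weights([np.array([[5.0]]), np.array([5.0])])
    print(model1.get_weights()[0])  # [[10.]]
    print(model2.get_weights()[0])  # [[5.]]
    ```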