Tags: tensorflow, keras, keras-layer, tensorflow2.0, multiview

TensorFlow 2.0: How to share parameters among convolutional layers?


I am trying to re-implement Multi-View CNN (MVCNN) in TensorFlow 2.0. However, from what I see, Keras layers do not have a reuse=True|False option like the layers in tf.layers did. Is there any way I can define layers that share parameters using the new API? Or do I need to build my model in a TF v1 fashion?

Thank you very much!


Solution

  • To share the parameters of a model, you just have to use the same model. This is the new paradigm introduced in TensorFlow 2.0. In TF 1.x we used a graph-oriented approach, where we needed to re-use the same graph to share the variables, but now we can simply re-use the same tf.keras.Model object with different inputs.

    The model is the object that carries its own variables.

    Using a Keras model and tf.GradientTape, you can easily train a model that shares its variables, as shown in the example below.

    
    import tensorflow as tf
    
    # This is your model definition
    model = tf.keras.Sequential(...)
    
    # input_1 and input_2 are two different inputs to the same model
    
    with tf.GradientTape() as tape:
      a = model(input_1)
      b = model(input_2)
      # You can then compute the loss from both outputs
      loss = a + b
    
    # Use the tape to compute the gradients of the loss
    # w.r.t. the model trainable variables
    grads = tape.gradient(loss, model.trainable_variables)
    
    # opt is an optimizer object, e.g. tf.optimizers.Adam;
    # use it to apply the update rule, following the gradient direction
    opt.apply_gradients(zip(grads, model.trainable_variables))
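Applied to the MVCNN question, the same idea looks like this: build one CNN and call it on every view, then pool the per-view features. This is a minimal sketch, not the original MVCNN architecture; the input size (32x32x3), layer sizes, and the element-wise max view pooling are illustrative assumptions.

```python
import tensorflow as tf

# One shared CNN: calling it on several inputs reuses the same variables
# (illustrative layer sizes, not the original MVCNN architecture).
shared_cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalMaxPooling2D(),
])

# Two views of the same batch of objects (random data for the sketch).
view_1 = tf.random.normal((4, 32, 32, 3))
view_2 = tf.random.normal((4, 32, 32, 3))

# Both calls go through the same layers, hence the same parameters.
features_1 = shared_cnn(view_1)
features_2 = shared_cnn(view_2)

# View pooling: element-wise max across the per-view features.
pooled = tf.maximum(features_1, features_2)
```

Because `shared_cnn` is a single object, `shared_cnn.trainable_variables` contains each parameter exactly once, and gradients computed through both views accumulate into the same variables.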