python tensorflow deep-learning deconvolution

Load a TensorFlow model inside another one, and concatenate two models


The idea is to create a deconvolution model that follows my convolution model, in order to see the importance of the learned pixels.

[Image: overview of the model]

I am running into problems that I cannot explain. The first one: when I create my 3 models, train the convolution model, and then test my auto model, everything works fine. But if I create my 3 models, load my previously trained convolution model, and then test my auto model, everything behaves as if I had never loaded the weights of the convolution model. My first question is: how do I load the weights into my convolution model so that they are also taken into account by my auto model?
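To make the first problem concrete, the failing scenario looks roughly like this (retyped from memory, so the exact load call may differ slightly; CNN_1.h5 is the checkpoint written by the training function further down):

model_auto, model_conv, model_deconv = CNN()
model_conv.load_weights("CNN_1.h5")          # weights saved by the ModelCheckpoint below
reconstruction = model_auto.predict(test_X)  # output looks as if model_conv were untrained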

The second problem is perhaps related to the first: everything works well when I use predict on my auto model, but if I decompose it, it does not work. By decomposing I mean taking x_test, predicting it with the convolution model, then using what I get to predict with the deconv model. I get an error when I give the result of the convolution model to the deconv model: "Invalid argument: You must feed a value for placeholder tensor 'input_CNN' with dtype float"
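Concretely, the decomposition that fails looks like this (the reshape is only my attempt to match the (1,10) input shape of the deconv model):

out_conv = model_conv.predict(x_test)        # works, shape (n,10) from the final softmax
out_conv = out_conv.reshape((-1,1,10))       # to match the deconv model's input
out_deconv = model_deconv.predict(out_conv)  # raises the 'input_CNN' placeholder error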

To create the auto model I do:

inputs = layers.Input(shape=(128,128,1),name='input_CNN')
model_auto = models.Model(inputs,model_deconv(model_conv(inputs)))

I can give you more details if needed.

Edit:

# Imports as I assume them (I'm retyping this from memory):
from keras import layers, models
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
import tensorflow as tf

def CNN():
   inputs = layers.Input(shape=(128,128,1),name="input_CNN")
   layers_CNN = CNN_1_layers(inputs)
   model_conv = models.Model(inputs,layers_CNN,name='CNN_1')
   model_conv.compile(loss='categorical_crossentropy',optimizer=Adam(), metrics=["accuracy"])

   inputs_deconv = layers.Input(shape=(1,10),name="input_CNN_deconv")
   layers_CNN_deconv = CNN_1_deconv_layers(inputs_deconv)
   model_deconv = models.Model(inputs_deconv, layers_CNN_deconv,name="CNN_1_deconv")
   model_deconv.compile(loss='categorical_crossentropy',optimizer=Adam(), metrics=["accuracy"])

   model_auto = models.Model(inputs,model_deconv(model_conv(inputs)))
   model_auto.compile(loss='categorical_crossentropy',optimizer=Adam(), metrics=["accuracy"])

   return model_auto, model_conv, model_deconv

The last layers of my model_conv:

def CNN_1_layers(inputs):
   ...   # earlier convolution layers omitted
   x = layers.Flatten(input_shape=(1,1,10))(x)
   x = layers.Dense(10,activation='softmax')(x)

   return x

And the deconv layers:

def CNN_1_deconv_layers(inputs_deconv):
   x = layers.Reshape((1,1,10))(inputs_deconv)
   # 4x4 transposed convolution: (1,1,10) -> (4,4,128)
   w1 = tf.get_variable("w1",shape=[4,4,128,10],dtype=tf.float32)
   x = layers.Lambda(lambda x: tf.nn.conv2d_transpose(x,w1,output_shape=[1,4,4,128],strides=(1,1,1,1),padding='VALID'),name='deconv_0')(x)
   ...
   return x
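
For what it's worth, I think the same operation can be written with a regular Keras layer instead of the Lambda plus tf.get_variable, so that the kernel is tracked by the model and the batch size is not hard-coded in output_shape (a sketch, assuming a Keras version that has Conv2DTranspose; I have not run this version):

def CNN_1_deconv_layers_alt(inputs_deconv):
   # Same (1,1,10) -> (4,4,128) transposed convolution as deconv_0 above
   x = layers.Reshape((1,1,10))(inputs_deconv)
   x = layers.Conv2DTranspose(128,(4,4),strides=(1,1),padding='valid',name='deconv_0')(x)
   ...
   return x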


def apprentissage(model,nb_epoch=100):
   # train_X, train_y, test_X, test_y are defined elsewhere in my script
   checkpoint = ModelCheckpoint(filepath="CNN_1.h5",monitor='val_acc', save_best_only=True,save_weights_only=False,mode='auto')
   hist = model.fit(train_X, train_y, batch_size=128, nb_epoch=nb_epoch, validation_data=(test_X,test_y), callbacks=[checkpoint])
   return hist
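
I train only the convolution model, which is what writes CNN_1.h5 (minimal usage, as I call it):

model_auto, model_conv, model_deconv = CNN()
hist = apprentissage(model_conv,nb_epoch=100)   # best weights go to CNN_1.h5
# in the same session, model_auto then behaves as expected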

I'm coding on another computer that is off the network, so I have to retype everything here by hand, which is why I'm summarizing a bit.


Solution

  • Well, I just gave up and updated all my libraries to the latest versions; it seems to work now.