tensorflow, keras, conv-neural-network, slice, siamese-network

How to access embeddings for triplet loss


I am trying to create a siamese network with triplet loss, and I am using a GitHub example to help me. I am fairly new to this and am having trouble understanding how to extract the embeddings from the output of the model. Below is the architecture:

[architecture diagram]

The code to extract the embeddings that I have found on several pages is this:

def triplet_loss(y_true, y_pred):
    # y_pred holds the three embeddings concatenated along axis 1:
    # [anchor | positive | negative], each slice of width emb_size
    anchor, positive, negative = y_pred[:, :emb_size], y_pred[:, emb_size:2*emb_size], y_pred[:, 2*emb_size:]
    positive_dist = tf.reduce_mean(tf.square(anchor - positive), axis=1)
    negative_dist = tf.reduce_mean(tf.square(anchor - negative), axis=1)
    return tf.maximum(positive_dist - negative_dist + alpha, 0.)  # alpha is the margin

What has me confused is that I find it difficult to visualise the matrix, and I don't understand why the anchor is y_pred[:, :emb_size], the positive is y_pred[:, emb_size:2*emb_size] and the negative is y_pred[:, 2*emb_size:].

The full code, if more context is needed: https://github.com/pranjalg2308/siamese_triplet_loss/blob/master/Siamese_With_Triplet_Loss.ipynb


Solution

  • In this snippet from the full code:

    from tensorflow.keras.layers import Input, concatenate
    from tensorflow.keras.models import Model

    # one input per image in the triplet
    in_anc = Input(shape=(105,105,1))
    in_pos = Input(shape=(105,105,1))
    in_neg = Input(shape=(105,105,1))

    # the same embedding_model (shared weights) embeds all three inputs
    em_anc = embedding_model(in_anc)
    em_pos = embedding_model(in_pos)
    em_neg = embedding_model(in_neg)

    # the three embeddings are concatenated along axis 1 into one output tensor
    out = concatenate([em_anc, em_pos, em_neg], axis=1)

    siamese_net = Model(
        [in_anc, in_pos, in_neg],
        out
    )

    The anchor, positive and negative embeddings are concatenated along axis 1 into a single output tensor of shape (batch_size, 3*emb_size). Slicing that tensor back into thirds is what the loss function does, so the anchor is y_pred[:, :emb_size], the positive is y_pred[:, emb_size:2*emb_size] and the negative is y_pred[:, 2*emb_size:].
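
    A quick way to see this (a toy sketch with made-up numbers, assuming emb_size = 2) is to build the concatenated tensor by hand and slice it back apart:

    import numpy as np

    emb_size = 2  # toy embedding size, for illustration only

    # pretend these are the embeddings of one triplet (batch of 1)
    anchor   = np.array([[1.0, 2.0]])
    positive = np.array([[1.1, 2.1]])
    negative = np.array([[9.0, 9.0]])

    # mimics concatenate([em_anc, em_pos, em_neg], axis=1)
    y_pred = np.concatenate([anchor, positive, negative], axis=1)
    print(y_pred.shape)                      # (1, 6), i.e. (batch_size, 3*emb_size)

    # the slices used in triplet_loss recover the three embeddings
    print(y_pred[:, :emb_size])              # anchor:   [[1. 2.]]
    print(y_pred[:, emb_size:2*emb_size])    # positive: [[1.1 2.1]]
    print(y_pred[:, 2*emb_size:])            # negative: [[9. 9.]]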

    And embedding_model.predict(np.expand_dims(anchor_image[3], axis=0)) would give you the embedding for a single image.
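
    For example (a sketch using hypothetical single images img_a and img_b of shape (105, 105, 1)), you could embed two images with the trained embedding_model and compare them by Euclidean distance:

    import numpy as np

    # img_a and img_b are hypothetical images of shape (105, 105, 1)
    emb_a = embedding_model.predict(np.expand_dims(img_a, axis=0))[0]
    emb_b = embedding_model.predict(np.expand_dims(img_b, axis=0))[0]

    # a smaller distance means the network considers the images more similar
    distance = np.linalg.norm(emb_a - emb_b)
    print(distance)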