I am using the code below to get a 128-d embedding vector
for a face image.
import cv2  # OpenCV's dnn module can load the Torch model

embedder = cv2.dnn.readNetFromTorch('openface_nn4.small2.v1.t7')
embedder.setInput(face_blob)  # face_blob is the blob of the face image
vec = embedder.forward()      # vec is the 128-d embedding
How can I get a 256-d embedding vector for a face image
in the same way? Thanks.
You have to build your own network, or modify an existing one, so that its last layer returns 256-d instead of 128-d. The pretrained OpenFace model you are loading has a fixed 128-d output, so you cannot get 256-d from it directly. The change could be as simple as replacing `Dense(128, ...)` with `Dense(256, ...)`, or as complicated as retraining the whole network after that replacement.
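As a minimal sketch of the idea, here is a toy PyTorch model where the size of the final linear layer determines the embedding dimension. This is not the OpenFace architecture; the layer sizes and names are made up for illustration, and a real 256-d embedder would need training before its outputs are useful:

```python
import torch
import torch.nn as nn

class FaceEmbedder(nn.Module):
    """Toy face-embedding network; only the last layer sets the output size."""

    def __init__(self, embed_dim=256):
        super().__init__()
        # Placeholder feature extractor (a real model would be much deeper).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # This layer decides the embedding size: 128 -> 256 is one line here,
        # but the new weights are untrained until you retrain the network.
        self.embedding = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.embedding(self.features(x))

model = FaceEmbedder(embed_dim=256)
face = torch.rand(1, 3, 96, 96)  # dummy aligned face image
vec = model(face)
print(tuple(vec.shape))  # (1, 256)
```

After retraining, you would export the model and load it through `cv2.dnn` (or run it directly in its own framework) in place of the 128-d `.t7` file.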