I have a trained sequential Keras model.
The last layer is a Dense layer with softmax activation function:
model = keras.models.Sequential()
model.add(...)
model.add(...)
model.add(...)
model.add(keras.layers.Dense(50, activation='softmax'))
How can I get the output of the model before the softmax, without changing the model architecture? The model is already trained, and I can't change or retrain it.
I have tried with:
probs = model.predict(X_train)
logits = probs - np.log(np.sum(np.exp(probs), axis=-1, keepdims=True))
But it seems that if I run softmax on these logits, it gives me different results from probs:
def softmax(x):
    # subtract the row-wise max for numerical stability
    e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e_x / e_x.sum(axis=1, keepdims=True)
probabilities = softmax(logits)
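A small NumPy sketch (with made-up logits standing in for the real pre-softmax outputs) reproduces the mismatch: the attempted formula computes a log-softmax of probs, and since softmax is invariant to a per-row additive shift, feeding that back through softmax gives softmax(probs), not probs:

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e_x / e_x.sum(axis=1, keepdims=True)

# hypothetical logits for illustration
true_logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(true_logits)

# the attempted "inversion": log-softmax applied to probs
recovered = probs - np.log(np.sum(np.exp(probs), axis=-1, keepdims=True))

# softmax ignores a per-row constant shift, so softmax(recovered)
# equals softmax(probs) -- which is generally NOT probs again
print(np.allclose(softmax(recovered), probs))           # False
print(np.allclose(softmax(recovered), softmax(probs)))  # True
```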
There is actually no need to invert the logits.
You can create the same model architecture in a new keras.models.Sequential
instance, without the softmax activation in the last layer, load the weights of the original model into the new model (using model.load_weights
), and then you have a model without a softmax at the end, on which you can make predictions.
model = keras.models.Sequential()
model.add(...)
model.add(...)
model.add(...)
model.add(keras.layers.Dense(50, activation='linear'))
model.load_weights('model.h5')
# Now predicts logits.
model.predict(some_input)
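As an aside, if logits only up to an additive constant are enough (which is all softmax determines anyway), np.log of the probabilities works, provided none of them has underflowed to exactly zero. A small NumPy sketch with made-up logits:

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e_x / e_x.sum(axis=1, keepdims=True)

# hypothetical logits for illustration
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 0.5, 3.0]])
probs = softmax(logits)

# log(probs) equals the logits up to a per-row additive constant...
shift = np.log(probs) - logits
print(np.allclose(shift, shift[:, :1]))  # True: constant within each row

# ...and that constant is invisible to softmax
print(np.allclose(softmax(np.log(probs)), probs))  # True
```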