I am trying to understand how to make a confusion matrix and ROC curve for my multilabel classification problem. I am building a neural network. Here are my classes:
mlb = MultiLabelBinarizer()
ohe = mlb.fit_transform(as_list)
# loop over each of the possible class labels and show them
for (i, label) in enumerate(mlb.classes_):
print("{}. {}".format(i + 1, label))
[INFO] class labels:
1. class1
2. class2
3. class3
4. class4
5. class5
6. class6
My labels are transformed:
ohe
array([[0, 1, 0, 0, 1, 1],
       [0, 1, 1, 1, 1, 0],
       [1, 1, 1, 0, 1, 0],
       [0, 1, 1, 1, 0, 1],
       ...])
Training data:
array([[[[ 1.93965047e+04, 8.49532852e-01],
         [ 1.93965047e+04, 8.49463479e-01],
         [ 1.93965047e+04, 8.49474722e-01],
         ...,
Model:
model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])
H = model.fit(trainX, trainY, batch_size=BS,
              validation_data=(testX, testY),
              epochs=EPOCHS, verbose=1)
I am able to get percentages, but I am a bit clueless about how to calculate a confusion matrix or ROC curve, or get a classification report. Here are the percentages:
proba = model.predict(testX)
idxs = np.argsort(proba)[::-1][:2]
for i in proba:
    print('\n')
    for (label, p) in zip(mlb.classes_, i):
        print("{}: {:.2f}%".format(label, p * 100))
class1: 69.41%
class2: 76.41%
class3: 58.02%
class4: 63.97%
class5: 48.91%
class6: 58.28%
class1: 69.37%
class2: 76.42%
class3: 58.01%
class4: 63.92%
class5: 48.88%
class6: 58.26%
How can I do this, preferably with an example?
From v0.21 onwards, scikit-learn includes a multilabel confusion matrix; adapting the example from the docs for 5 classes:
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix
y_true = np.array([[1, 0, 1, 0, 0],
                   [0, 1, 0, 1, 1],
                   [1, 1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 1],
                   [0, 1, 1, 1, 0],
                   [1, 1, 1, 0, 0]])
multilabel_confusion_matrix(y_true, y_pred)
# result:
array([[[1, 0],
        [0, 2]],

       [[1, 0],
        [0, 2]],

       [[0, 1],
        [1, 1]],

       [[2, 0],
        [0, 1]],

       [[0, 1],
        [2, 0]]])
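Each 2x2 block above is simply the binary confusion matrix for the corresponding label, laid out as [[TN, FP], [FN, TP]]. If you want the raw counts for a single label, you can unpack them directly; for instance, for the third label (index 2):
# each per-label matrix is [[tn, fp], [fn, tp]]
tn, fp, fn, tp = multilabel_confusion_matrix(y_true, y_pred)[2].ravel()
print(tn, fp, fn, tp)
# 0 1 1 1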
The usual classification_report also works fine:
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred))
# result
              precision    recall  f1-score   support

           0       1.00      1.00      1.00         2
           1       1.00      1.00      1.00         2
           2       0.50      0.50      0.50         2
           3       1.00      1.00      1.00         1
           4       0.00      0.00      0.00         2

   micro avg       0.75      0.67      0.71         9
   macro avg       0.70      0.70      0.70         9
weighted avg       0.67      0.67      0.67         9
 samples avg       0.72      0.64      0.67         9
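As a side note, classification_report accepts a target_names argument if you would rather see label names than column indices; a small sketch with made-up names for the 5 toy columns (with your own data you could pass list(mlb.classes_) instead):
names = ['classA', 'classB', 'classC', 'classD', 'classE']  # hypothetical names for the toy example
print(classification_report(y_true, y_pred, target_names=names))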
Regarding ROC, you can take some ideas from the Plot ROC curves for the multilabel problem example in the docs (not quite sure the concept itself is very useful though).
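The gist of that example is to treat each label column as its own binary problem and feed the probabilities (not the thresholded predictions) to roc_curve; a minimal sketch with toy data (y_test and y_score here are made-up stand-ins for your binarized ground truth and predicted probabilities):
import numpy as np
from sklearn.metrics import roc_curve, auc

# made-up binarized ground truth and predicted probabilities, shape (n_samples, n_labels)
y_test = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.8, 0.1, 0.7],
                    [0.3, 0.9, 0.6],
                    [0.7, 0.4, 0.2],
                    [0.2, 0.6, 0.9]])

# one ROC curve (and AUC) per label, treating each column as a binary problem
for i in range(y_test.shape[1]):
    fpr, tpr, _ = roc_curve(y_test[:, i], y_score[:, i])
    print("label {}: AUC = {:.2f}".format(i, auc(fpr, tpr)))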
Confusion matrix and classification report require hard class predictions (as in the example); ROC requires the predictions as probabilities.
To convert your probabilistic predictions to hard classes, you need a threshold. Usually (and implicitly) this threshold is taken to be 0.5, i.e. predict 1 if y_pred > 0.5, else predict 0. Nevertheless, this is not always the case, and it depends on the particular problem. Once you have set such a threshold, you can easily convert your probabilistic predictions to hard classes with a list comprehension; here is a simple example:
import numpy as np
y_prob = np.array([[0.9, 0.05, 0.12, 0.23, 0.78],
                   [0.11, 0.81, 0.51, 0.63, 0.34],
                   [0.68, 0.89, 0.76, 0.43, 0.27]])
thresh = 0.5
y_pred = np.array([[1 if i > thresh else 0 for i in j] for j in y_prob])
y_pred
# result:
array([[1, 0, 0, 0, 1],
       [0, 1, 1, 1, 0],
       [1, 1, 1, 0, 0]])
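Putting it together with your own model, a rough sketch (assuming testY holds the binarized ground-truth labels for testX, produced by the same MultiLabelBinarizer):
from sklearn.metrics import multilabel_confusion_matrix, classification_report

proba = model.predict(testX)            # probabilities, shape (n_samples, 6)
y_pred = (proba > 0.5).astype(int)      # hard 0/1 predictions via the 0.5 threshold

print(multilabel_confusion_matrix(testY, y_pred))
print(classification_report(testY, y_pred, target_names=list(mlb.classes_)))
# for ROC, use the probabilities in proba directly, not the thresholded y_pred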