Below is the code for the multi-class classifier from Chapter 4 of 'Deep Learning with Python' by François Chollet. The textbook says this code should reach >95% training accuracy, but in my environment it yields a very low accuracy of <50%.
Keras version - 3.6
TensorFlow - 2.18
Hardware - Apple M1 Pro
import keras
from tensorflow.keras.datasets import reuters
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        for j in sequence:
            results[i, j] = 1.
        return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = to_categorical(train_labels)
y_test = to_categorical(test_labels)
model = keras.Sequential([
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(46, activation="softmax")
])
model.compile(
    optimizer="rmsprop",
    loss="categorical_crossentropy",
    metrics=["accuracy"]
)
# setting aside validation set
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]
# training the model
history = model.fit(
    partial_x_train,
    partial_y_train,
    epochs=10,
    batch_size=512,
    validation_data=(x_val, y_val)
)
# plotting training & validation accuracy
history_dict = history.history
loss_values = history_dict["loss"]
val_loss_values = history_dict["val_loss"]
epochs = range(1, len(loss_values) + 1)
acc = history_dict["accuracy"]
val_acc = history_dict["val_accuracy"]
plt.plot(epochs, acc, "bo", label="Training acc")
plt.plot(epochs, val_acc, "b", label="Validation acc")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
Firstly, there is an indentation problem: the return in vectorize_sequences is indented incorrectly, inside the loop, so the function returns early. It should be:
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        for j in sequence:
            results[i, j] = 1.
    return results
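To see why this matters, here is a small illustrative check (the vectorize_buggy/vectorize_fixed names and the sample list are mine, not from the book): with the return inside the loop, only the first sequence gets vectorized, so almost every training row is all zeros and the model can do little better than guess the most frequent class.
import numpy as np

def vectorize_buggy(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        for j in sequence:
            results[i, j] = 1.
        return results  # returns after the first sequence only

def vectorize_fixed(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        for j in sequence:
            results[i, j] = 1.
    return results  # returns after all sequences are processed

sample = [[1, 5, 9], [2, 3], [7, 8, 4]]
print(vectorize_buggy(sample).sum(axis=1))  # [3. 0. 0.] -- only row 0 is filled
print(vectorize_fixed(sample).sum(axis=1))  # [3. 2. 3.] -- every row is filled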
Secondly, make sure layers is imported:
from tensorflow.keras import layers
Thirdly, the labels need to be one-hot encoded to match the categorical_crossentropy loss:
y_train = to_categorical(train_labels)
y_test = to_categorical(test_labels)
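Alternatively (just a sketch reusing the variables from your script, not something your code requires), you can keep the integer labels and switch the loss to sparse_categorical_crossentropy, which is equivalent here:
model.compile(
    optimizer="rmsprop",
    loss="sparse_categorical_crossentropy",  # works directly with integer labels
    metrics=["accuracy"]
)
history = model.fit(
    partial_x_train,
    train_labels[1000:],  # integer labels, no to_categorical needed
    epochs=10,
    batch_size=512,
    validation_data=(x_val, train_labels[:1000])
)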
Then you should get a validation accuracy in the mid 80s, as stated in the textbook. I looked at the textbook, and the ~95% figure is for state-of-the-art methods, not the naïve approach defined in the book. (Unless you were referring to training accuracy, in which case, yes, >95% is correct.)
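As a quick check (a minimal sketch using the history object already in your script), print the final training and validation accuracy so you can see which number you are comparing against the book:
final_train_acc = history.history["accuracy"][-1]
final_val_acc = history.history["val_accuracy"][-1]
print(f"Final training accuracy:   {final_train_acc:.3f}")
print(f"Final validation accuracy: {final_val_acc:.3f}")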