The `predict` method of a `tf.keras.Model` takes the following arguments:
predict(
x,
batch_size=None,
verbose='auto',
steps=None,
callbacks=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False
)
What is the point of specifying the `batch_size`? In what ways does it impact the predictions?
`batch_size` controls how many samples are pushed through the model in each forward pass. It affects CPU/GPU memory usage and throughput, not the predicted values: in inference mode each sample is processed independently, so the outputs are the same regardless of batch size (up to negligible floating-point differences).
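A minimal pure-Python sketch of the idea (not the actual Keras implementation): `predict` slices the input into batches, runs the model on each batch, and concatenates the results, so the outputs are independent of the batch size. Here `linear_model` is a hypothetical stand-in for a deterministic trained network.

```python
def linear_model(batch):
    # Toy deterministic "model": y = 2x + 1 for each sample.
    return [2 * x + 1 for x in batch]

def predict(x, batch_size=None):
    """Mimic Model.predict: slice x into batches, run the model on
    each batch, and concatenate the per-batch outputs."""
    if batch_size is None:
        batch_size = 32  # Keras' documented default
    out = []
    for start in range(0, len(x), batch_size):
        out.extend(linear_model(x[start:start + batch_size]))
    return out

samples = list(range(10))
# Same predictions whatever the batch size; only peak memory
# per forward pass (and speed) differ.
assert predict(samples, batch_size=1) == predict(samples, batch_size=10)
```

In the real `tf.keras` case the trade-off is the same: a larger `batch_size` means fewer, bigger forward passes (faster, more memory), a smaller one means more, smaller passes (slower, less memory).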