python, tensorflow, keras, batchsize

How do I use batch_size in Keras?


Good morning!

I just want to clarify: is the batch_size parameter in model.fit the number of samples that are passed in at a time, or is the number of samples passed in at a time x / batch_size, where x is the total number of samples?

That is, suppose we have 20,000 samples and set a batch size of 100. Does that mean 200 samples are passed in at a time (i.e., 100 batches), or 100 samples at a time (i.e., 200 batches)?

I ask because https://deeplizard.com/learn/video/Skc8nqJirJg says "If we passed our entire training set to the model at once (batch_size=1), then the process we just went over for calculating the loss will occur at the end of each epoch during training", implying that the whole training set is one batch. However, the name batch_size suggests something different, so I wanted to clarify.

Thank you!

Note: there is another question like this, but it wasn't answered: How BatchSize in Keras works? LSTM-WithState-Time Series

That question also asks: how are those samples chosen?


Solution

  • From the TensorFlow documentation:

    batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

    So it's the number of samples used per gradient update. If batch_size is equal to 1, there will be one gradient update for each sample (and therefore num_samples gradient updates per epoch).

    For instance, in the example you cited: if we have 20,000 samples and a batch size of 100, then 100 samples are passed in at a time, giving 20,000 / 100 = 200 batches (and 200 gradient updates) per epoch, as the sketch below shows.
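    For a concrete check, here is a minimal sketch with dummy random data and an arbitrary two-layer model (the input width, layer sizes, and loss are made up purely for illustration). The Keras progress bar reports 200/200 steps for the epoch, i.e. one gradient update per batch of 100 samples:

        import numpy as np
        import tensorflow as tf

        # Dummy data: 20,000 samples with 10 features each, binary labels.
        num_samples = 20_000
        x = np.random.rand(num_samples, 10).astype("float32")
        y = np.random.randint(0, 2, size=(num_samples,)).astype("float32")

        # Arbitrary small model, just to have something to fit.
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(10,)),
            tf.keras.layers.Dense(16, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")

        # batch_size=100 -> 20,000 / 100 = 200 steps (gradient updates) per
        # epoch; the progress bar shows "200/200".
        model.fit(x, y, batch_size=100, epochs=1)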

    As for the follow-up question: how are those samples chosen?

    It depends on whether the shuffle argument of the fit method is True. If it is (the default), samples are drawn at random until all of them have been used, which marks the end of the epoch. If it's not, they are taken sequentially, in the order they appear in the data.
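    Continuing the same sketch, both modes can be requested explicitly. One caveat, per the Keras documentation for fit: shuffle is ignored when x is a generator or a tf.data.Dataset, since those produce batches themselves.

        # shuffle=True (the default): the training data is reshuffled before
        # each epoch, so each batch of 100 is drawn from a fresh permutation.
        model.fit(x, y, batch_size=100, epochs=2, shuffle=True)

        # shuffle=False: batches are taken sequentially (samples 0-99, then
        # 100-199, and so on), in the same order every epoch.
        model.fit(x, y, batch_size=100, epochs=2, shuffle=False)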