Tags: tensorflow, keras, imagenet

Keras slicing index out of range beginner question


I am using Keras and TensorFlow for the first time, and I am trying to get VGG16 to train on the imagenette dataset.

I'm not sure why I am getting an index out of range error. Maybe I am doing something obviously wrong. I've been messing with this for a while now, so any help would be great!

The error and source code are attached below.

I resized my images to the correct size based on the VGG16 documentation: https://keras.io/api/applications/vgg/#vgg16-function

```python
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.applications.vgg16 import preprocess_input

tfds_name = 'imagenette'
(ds_train, ds_validation), ds_info= tfds.load(
    name=tfds_name,
    split=['train', 'validation'],
    with_info=True,
    as_supervised=True)

# model from assignment link https://keras.io/api/applications/vgg/#vgg16-function
ourModel = tf.keras.applications.VGG16(
    include_top=True,                 # include the 3 fully-connected layers on top
    weights="imagenet",               # use ImageNet pretrained weights
    input_tensor=None,                # optional tensor to use as image input
    input_shape=None,                 # only set if include_top is False
    pooling=None,                     # only used when include_top is False
    classes=1000,                     # number of classes; keep the ImageNet value
    classifier_activation="softmax",  # can only be None or "softmax" with pretrained weights
)

# optionally freeze all layers except the last
#for layer in ourModel.layers[:-1]:
#  layer.trainable = False

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
ourModel.compile(optimizer="adam",
                 loss=loss_fn,
                 metrics=['accuracy'])

def reshape(img, label):
  img = tf.cast(img, tf.float32)
  img = tf.image.resize(img, (224, 224))
  resize_image = tf.reshape(img, [-1, 224, 224, 3])
  resize_image = preprocess_input(resize_image)
  return resize_image, label

ds_train = ds_train.map(reshape)
ds_validation = ds_validation.map(reshape)
ourModel.fit(ds_train,
             epochs=10,
             validation_data=ds_validation)

```

Error:

```
ValueError: in user code:

    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1051, in train_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1040, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1030, in run_step  **
        outputs = model.train_step(data)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 890, in train_step
        loss = self.compute_loss(x, y, y_pred, sample_weight)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 949, in compute_loss
        y, y_pred, sample_weight, regularization_losses=self.losses)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 212, in __call__
        batch_dim = tf.shape(y_t)[0]

    ValueError: slice index 0 of dimension 0 out of bounds. for '{{node strided_slice}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](Shape, strided_slice/stack, strided_slice/stack_1, strided_slice/stack_2)' with input shapes: [0], [1], [1], [1] and with computed input tensors: input[1] = <0>, input[2] = <1>, input[3] = <1>.
```

Solution

  • Your `reshape` function is creating a single batch for the image data but not for the label data, so Keras receives a scalar (rank-0) label tensor. When computing the loss, Keras reads the batch size with `tf.shape(y_t)[0]` (see `compile_utils.py` in the traceback), and slicing dimension 0 of a rank-0 tensor is exactly the "slice index 0 of dimension 0 out of bounds" error above. So `return resize_image, label[None, ...]` should fix the above bug.

    But the right way to batch the train (and validation) datasets is to call `ds_train = ds_train.batch(BATCH_SIZE)` and then remove the `tf.reshape(img, [-1, 224, 224, 3])` line from the map function; see the sketch below.
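A minimal sketch of that corrected pipeline, assuming a hypothetical `BATCH_SIZE` of 32 (any value that fits in memory works):

```python
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.applications.vgg16 import preprocess_input

BATCH_SIZE = 32  # hypothetical value; use whatever fits your memory budget

def preprocess(img, label):
  # Per-example transform: no manual batch dimension needed.
  img = tf.cast(img, tf.float32)
  img = tf.image.resize(img, (224, 224))
  img = preprocess_input(img)
  return img, label

(ds_train, ds_validation), ds_info = tfds.load(
    name='imagenette',
    split=['train', 'validation'],
    with_info=True,
    as_supervised=True)

# Batch AFTER mapping so both images and labels get a batch dimension;
# Keras can then read tf.shape(y)[0] without the out-of-bounds slice.
ds_train = ds_train.map(preprocess).batch(BATCH_SIZE)
ds_validation = ds_validation.map(preprocess).batch(BATCH_SIZE)

ourModel = tf.keras.applications.VGG16(weights="imagenet")  # defaults match the question's call
ourModel.compile(optimizer="adam",
                 loss=tf.keras.losses.SparseCategoricalCrossentropy(),
                 metrics=['accuracy'])
ourModel.fit(ds_train, epochs=10, validation_data=ds_validation)
```

Batching with `tf.data` keeps the map function a simple per-example transform and gives both images and labels a real batch dimension; the `label[None, ...]` workaround also runs, but it effectively locks training to a batch size of 1.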