Tags: python, tensorflow, keras, spp

How to give different-size images to a CNN model in Keras/TensorFlow


I am confused about how to feed pictures of two different sizes into the model, and I cannot use resize or crop. I have seen this question, but it was not resolved either. This is my code, but I get the following error:

StopIteration: 'NoneType' object cannot be interpreted as an integer

I hope you can give me some advice.

# Imports assumed from tf.keras; adjust if using standalone Keras.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, Activation, MaxPooling2D,
                                     GlobalAveragePooling2D, Dense)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint

model = Sequential()
model.add(Conv2D(filters=6,kernel_size=(5,5),padding='same',input_shape=(None,None,3)))
model.add(Activation('tanh'))  
model.add(MaxPooling2D(pool_size=(2,2))) 

model.add(Conv2D(filters=16,kernel_size=(5,5),padding='same'))  
model.add(Activation('tanh')) 
model.add(GlobalAveragePooling2D())
model.add(Dense(1))  
model.add(Activation('sigmoid'))
#sgd = optimizers.RMSprop(lr=0.01, clipvalue=0.5)
model.compile(loss='binary_crossentropy',  # or 'categorical_crossentropy' for multi-class
              optimizer='sgd',
              metrics=['accuracy'],
              )
train_datagen = ImageDataGenerator(rescale=1./255,
                                   vertical_flip=True,
                                   horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary')  # or 'categorical'

validation_generator = test_datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary')

early_stopping = EarlyStopping(monitor='val_acc',patience=10,mode='max')
model.fit_generator(train_generator,
                    steps_per_epoch=nb_train_samples//batch_size,
                    epochs=nb_epoch,
                    validation_data=validation_generator,
                    validation_steps=nb_validation_samples,
                    callbacks=[early_stopping,
                               TensorBoard(log_dir='C:\\Users\\ccri\\Desktop\\new\\iou30\\426\\lenet\\log', write_images=True),
                               ModelCheckpoint(filepath='C:\\Users\\ccri\\Desktop\\new\\iou30\\426\\lenet\\canshu\\weights.{epoch:02d}-{val_loss:.2f}.h5', 
                               monitor='val_acc',                                   
                               save_best_only=True,
                               mode='auto')]
)

Solution

  • The only limitation is that a single numpy array (one batch) cannot hold images of different sizes.

    You can get around this either by using batch_size=1, so the arrays in a batch never have incompatible shapes,

    or by manually grouping all images of the same size into one array, training that array as one big batch, and then doing the same for the other sizes. A sketch of both approaches follows below.
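A minimal sketch of both ideas, assuming the fully convolutional model from the question (input_shape=(None, None, 3)) and hypothetical lists train_paths / train_labels holding the image file paths and binary labels. Keras' load_img and img_to_array keep each image at its native size, so no resizing or cropping is involved.

import numpy as np
from collections import defaultdict
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def single_image_generator(paths, labels):
    # Approach 1: batch_size=1 -- yield one image per step, each at its
    # own size, so a batch never has to hold two different shapes.
    while True:
        for path, label in zip(paths, labels):
            img = img_to_array(load_img(path)) / 255.0        # native size, rescaled
            yield np.expand_dims(img, axis=0), np.array([label])

# model.fit_generator(single_image_generator(train_paths, train_labels),
#                     steps_per_epoch=len(train_paths), epochs=nb_epoch)

def train_by_size_groups(model, paths, labels, epochs=1):
    # Approach 2: group images that share the same (height, width) and
    # train each group as one big batch with train_on_batch.
    groups = defaultdict(list)
    for path, label in zip(paths, labels):
        img = img_to_array(load_img(path)) / 255.0
        groups[img.shape[:2]].append((img, label))
    for _ in range(epochs):
        for samples in groups.values():
            x = np.stack([s[0] for s in samples])             # identical shapes stack cleanly
            y = np.array([s[1] for s in samples])
            model.train_on_batch(x, y)

With batch_size=1 you give up the usual batching speed-up, while the grouping approach ties each batch to one image size, so the choice depends on how your image sizes are distributed.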