google-colaboratory huggingface-transformers language-model gpt-2

OOM while fine-tuning the medium-sized DialoGPT model on Colab


I am trying to fine-tune the medium-sized DialoGPT model, but I am getting a CUDA out-of-memory error during the training phase. I reduced the batch size from 4, but the error still persists. My parameters are:

        #self.output_dir = 'output-small'
        self.output_dir = 'output-medium'
        self.model_type = 'gpt2'
        #self.model_name_or_path = 'microsoft/DialoGPT-small'
        self.model_name_or_path = 'microsoft/DialoGPT-medium'
        #self.config_name = 'microsoft/DialoGPT-small'
        self.config_name = 'microsoft/DialoGPT-medium'
        #self.tokenizer_name = 'microsoft/DialoGPT-small'
        self.tokenizer_name = 'microsoft/DialoGPT-medium'
        self.cache_dir = 'cached'
        self.block_size = 512
        self.do_train = True
        self.do_eval = True
        self.evaluate_during_training = False
        self.per_gpu_train_batch_size = 2
        self.per_gpu_eval_batch_size = 2
        self.gradient_accumulation_steps = 1
        self.learning_rate = 5e-5
        self.weight_decay = 0.0
        self.adam_epsilon = 1e-8
        self.max_grad_norm = 1.0
        self.num_train_epochs = 5
        self.max_steps = -1
        self.warmup_steps = 0
        self.logging_steps = 1000
        self.save_steps = 3500
        self.save_total_limit = None
        self.eval_all_checkpoints = False
        self.no_cuda = False
        self.overwrite_output_dir = True
        self.overwrite_cache = True
        self.should_continue = False
        self.seed = 42
        self.local_rank = -1
        self.fp16 = False
        self.fp16_opt_level = 'O1'

The allocated GPU is a Tesla P100-PCIE with 16 GB of memory. Please let me know how to resolve this issue. Any suggestion is appreciated.
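
For reference, the assigned GPU and its total memory can be verified in the Colab runtime with PyTorch's standard CUDA utilities (a minimal check, assuming `torch` is installed):

    import torch

    # Print the name of the GPU Colab assigned and its total memory in GiB.
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_properties(0).total_memory / 1024**3, "GiB")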


Solution

  • Just reduce the tokenizer's input max_len from 1024 to 512; it worked perfectly for me.
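
A minimal sketch of applying this with the Hugging Face tokenizer (assuming your examples are encoded with `tokenizer.encode`; `truncation` and `max_length` are standard tokenizer arguments, and 512 matches the `block_size` in the question):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-medium')

    # Truncate each encoded example to 512 tokens instead of the model's
    # full 1024-token context; shorter sequences need far less GPU memory.
    input_ids = tokenizer.encode(
        "Does money buy happiness?",
        truncation=True,
        max_length=512,
    )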