python, pipeline, huggingface-transformers, text-classification, huggingface

Huggingface - Pipeline with a fine-tuned pre-trained model errors


I have the pre-trained model facebook/bart-large-mnli, which I fine-tuned on my own dataset using the Trainer.

from transformers import BartForSequenceClassification

model = BartForSequenceClassification.from_pretrained("facebook/bart-large-mnli", num_labels=14, ignore_mismatched_sizes=True)
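For context, the fine-tuning itself followed the usual Trainer pattern; a minimal sketch, where the output directory and train_ds dataset are hypothetical placeholders, not the original training code:

from transformers import Trainer, TrainingArguments

# Minimal sketch of the Trainer setup; output_dir and train_ds are
# placeholders standing in for the real training configuration
training_args = TrainingArguments(output_dir="bart-mnli-finetuned")
trainer = Trainer(model=model, args=training_args, train_dataset=train_ds)
trainer.train()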

After training it, I try to create a pipeline with the fine-tuned model:

# Import the pipeline factory from Transformers
from transformers import pipeline

# Initialize a zero-shot classifier from the fine-tuned model
classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer, id2label=id2label)

I get the following error from it:

Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to -1. Define a descriptive label2id mapping in the model config to ensure correct outputs.

I tried searching the web for a solution but couldn't find anything; you can refer to my previous question, where I had trouble training the model, here


How to solve the first error:

As the error message itself suggests, defining a descriptive label2id/id2label mapping in the model config resolves the first error.
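A minimal sketch of what that looks like, assuming id2label is the label dictionary from training (mapping integer IDs to label names):

# Hedged sketch: attach the label mappings to the model config so the
# zero-shot pipeline can resolve the entailment label id; the pipeline
# identifies the entailment class by a label name starting with "entail"
model.config.id2label = id2label
model.config.label2id = {label: idx for idx, label in id2label.items()}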

Second error:

I'm getting the following error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)

I tried deleting my custom metrics, which fixed it for a while, but the error keeps coming back.

The error is coming from here:

sequences = "Some text sequence"
classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
classifier(sequences, list(id2label.values()), multi_label=False)
# id2label is a dictionary mapping each integer ID to its label

I also tried trainer.save_model(actual_model), but it saved only part of the model, and when I loaded it back it behaved as if it had never been trained at all.
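For reference, the standard save-and-reload pattern looks roughly like this ("my-finetuned-bart" is a hypothetical directory name):

# Hedged sketch: save the fine-tuned weights, config, and tokenizer,
# then reload them from the same directory
trainer.save_model("my-finetuned-bart")
tokenizer.save_pretrained("my-finetuned-bart")

from transformers import AutoTokenizer, BartForSequenceClassification
model = BartForSequenceClassification.from_pretrained("my-finetuned-bart")
tokenizer = AutoTokenizer.from_pretrained("my-finetuned-bart")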


If I change the line to:

classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer) # OLD

classifier = pipeline("zero-shot-classification", model=model.to('cpu'), tokenizer=tokenizer) # NEW

It works fine, but if I change it to:

classifier = pipeline("zero-shot-classification", model=model.to('cuda'), tokenizer=tokenizer)

I get the same error again. My model was trained on a GPU cluster and I want to test it on the GPU as well. Is that possible, or am I missing something?

From what I checked, the options the to() function accepts are: cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, privateuseone
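A quick way to confirm where the model currently lives is to check the device of one of its parameters:

# Prints the device of the model's first parameter; after GPU training
# this typically reports cuda:0
print(next(model.parameters()).device)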


Solution

  • After training, your model seems to still be placed on your GPU. The error message you receive:

    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)

    is thrown because the input tensors generated by the pipeline are still on the CPU. That is also why the pipeline works as expected when you move the model to the CPU with model.to('cpu').

    By default, the pipeline performs its work on the CPU; you can change that behavior by specifying the device parameter.

    # cuda (device=0 selects the first GPU, i.e. cuda:0)
    classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer, device=0)
    
    # cpu
    classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer, device="cpu")
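    With device=0, the pipeline places its input tensors on cuda:0 to match the model, so the original call works unchanged:

    sequences = "Some text sequence"
    classifier(sequences, list(id2label.values()), multi_label=False)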