python, pytorch, huggingface-transformers, pythonanywhere, sentence-transformers

Running sentence transformers on PythonAnywhere


I am trying to run a HuggingFace model for computing vector embeddings, as explained here, on PythonAnywhere (it worked just fine locally on my laptop, under Ubuntu on WSL2).

The installation went fine:

pip install -U sentence-transformers

However, when I run the following code:

from sentence_transformers import SentenceTransformer
import time

def ms_now():
    # Wall-clock time in whole milliseconds; integer division avoids
    # float rounding on large nanosecond values
    return time.time_ns() // 1_000_000

class Timer:
    def __init__(self):
        self.start = ms_now()

    def stop(self):
        # Elapsed milliseconds since the timer was created
        return ms_now() - self.start

sentences = ["This is an example sentence each sentence is converted"] * 10

timer = Timer()
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
print("Model initialized", timer.stop())
for _ in range(10):
    timer = Timer()
    embeddings = model.encode(sentences)
    print(timer.stop())

I get the error:

Traceback (most recent call last):
  File "/home/DrMeir/test/test.py", line 17, in <module>
    model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 95, in __init__
    modules = self._load_sbert_model(model_path)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 840, in _load_sbert_model
    module = module_class.load(os.path.join(model_path, module_config['path']))
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 137, in load
    return Transformer(model_name_or_path=input_path, **config)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 29, in __init__
    self._load_model(model_name_or_path, config, cache_dir)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 49, in _load_model
    self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
  File "/home/DrMeir/.local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2903, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/DrMeir/.local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3061, in _load_pretrained_model
    id_tensor = id_tensor_storage(tensor) if tensor.device != torch.device("meta") else id(tensor)
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta

PythonAnywhere has torch 1.8.1+cpu. On my laptop, it's 2.0.1.
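
For reference, the installed versions can be checked with the standard __version__ attributes:

import torch
import transformers
import sentence_transformers

print(torch.__version__)              # 1.8.1+cpu on PythonAnywhere
print(transformers.__version__)
print(sentence_transformers.__version__)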

What is the reason for the error, and how can I get this to work?


Solution

  • As mentioned in the comments, the meta device was added in PyTorch 1.9, while PythonAnywhere comes with PyTorch 1.8.1, which cannot parse "meta" as a device string (see the repro snippet below).

    Downgrading the transformers library to 4.6.0, which was released on May 12, 2021 (before torch 1.9 came out), solved this issue.
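
    Why the version matters: the traceback ends in transformers calling torch.device("meta"), and torch 1.8.1 cannot parse "meta" at all. A one-line repro of the root cause (illustration only, not part of the fix):

    import torch

    # Succeeds on torch >= 1.9; on 1.8.1 this raises the same
    # "Expected one of cpu, cuda, ..." RuntimeError seen in the traceback
    device = torch.device("meta")

    To pin the older release, something like this should work (double-check that your installed sentence-transformers version accepts transformers 4.6.0):

    pip install transformers==4.6.0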