python, huggingface-transformers, large-language-model, deepseek

Where to import the models for DeepSeek VL2 from?


I am trying to run the DeepSeek VL2 model (tiny version) locally, using the following code found on Hugging Face:

import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
from deepseek_vl.utils.io import load_pil_images


# specify the path to the model
model_path = "deepseek-ai/deepseek-vl2-tiny"  # replaced 'small' with 'tiny'
vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

## single image conversation example
conversation = [
    {
        "role": "<|User|>",
        "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>.",
        "images": ["./images/visual_grounding.jpeg"],
    },
    {"role": "<|Assistant|>", "content": ""},
]

# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True,
    system_prompt=""
).to(vl_gpt.device)

# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# run the model to get the response
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)

However, I am unable to import

from deepseek_vl.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
from deepseek_vl.utils.io import load_pil_images

I tried to install deepseek_vl through pip, but the packages listed there with similar names seem to be something completely different. Then I thought it might be necessary to download the DeepSeek-VL repository from GitHub. But that repository does not contain classes named DeepseekVLV2Processor or DeepseekVLV2ForCausalLM in its models directory. The load_pil_images function can be found under utils.io, though.

I know that the installation guideline on Hugging Face states that the dependencies are to be installed by running pip install -e ., but when do I have to run this command? After downloading the repository? Since I am new to running such models locally, I must be doing something completely wrong. How am I supposed to import the requirements and run the code above correctly?


Solution

  • I think there is a mistake on Hugging Face: maybe they forgot to add the 2 to vl.

    It mixes old import code for DeepSeek-VL (from deepseek_vl. ...)
    with the code for DeepSeek-VL2 (... import DeepseekVLV2Processor).

    But there is also a link to the DeepSeek-VL2 repository, which shows the code with vl2 instead of vl:

    from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
    from deepseek_vl2.utils.io import load_pil_images

    and this should be the correct code.

    You have to download DeepSeek-VL2 instead of DeepSeek-VL to use DeepseekVLV2Processor, because the first version doesn't have it (see the setup sketch below).
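
    To answer the "when do I run pip install -e ." part of the question: you run it after cloning the repository, from the repository root, so that pip installs the local deepseek_vl2 package in editable mode. A minimal setup sketch, assuming the standard GitHub URL of the DeepSeek-VL2 repository:

    # clone the DeepSeek-VL2 repository, then install it in editable mode
    # from inside the repository root
    git clone https://github.com/deepseek-ai/DeepSeek-VL2
    cd DeepSeek-VL2
    pip install -e .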


    I sent this problem to the author(s) of the Hugging Face page
    as Update README.md - typo in example code.
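
    Once the editable install has succeeded, the imports from the example should resolve. A quick sanity check (it only verifies that the package is importable; the class and function names are the ones used in the example above):

    # verify that the deepseek_vl2 package and the names used in the
    # Hugging Face example are importable after `pip install -e .`
    from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
    from deepseek_vl2.utils.io import load_pil_images

    print(DeepseekVLV2Processor.__name__, DeepseekVLV2ForCausalLM.__name__, load_pil_images.__name__)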