python · large-language-model · huggingface

Does unsloth support cache directory for models?


I want to download a model from Hugging Face to be used with unsloth for training:

from unsloth import FastLanguageModel

max_seq_length = 16384
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)

However, this call doesn't seem to allow any sort of local caching: it downloads the whole model from Hugging Face every time.

My question: How can I load an unsloth model from the local hard drive?
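As a side note: unsloth loads weights through Hugging Face's `transformers` under the hood, so the standard `HF_HOME` environment variable should control where downloads are cached. A minimal sketch, assuming a hypothetical persistent directory (e.g. a mounted drive in Colab); set the variable before importing unsloth or transformers:

```python
import os

# Assumption: unsloth delegates downloads to Hugging Face's transformers,
# which respects the HF_HOME cache variable. Point it at a persistent
# directory so repeated from_pretrained() calls reuse the cached files
# instead of re-downloading. The path below is a hypothetical example.
os.environ["HF_HOME"] = "/content/drive/MyDrive/hf_cache"

# Import unsloth only AFTER setting HF_HOME, then load as usual:
# from unsloth import FastLanguageModel
# model, tokenizer = FastLanguageModel.from_pretrained(...)
print(os.environ["HF_HOME"])
```

Without `HF_HOME`, the cache defaults to `~/.cache/huggingface`, which is wiped between sessions in ephemeral environments like Colab.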


Solution

  • Turns out it is actually really simple: you point `from_pretrained` at a local path instead of a model name:

    from unsloth import FastLanguageModel
    
    model, tokenizer = FastLanguageModel.from_pretrained(
        "/content/model"
    )
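To create that local copy in the first place, one option is to fetch the repo once with `huggingface_hub`'s `snapshot_download` and reuse the folder on later runs. A sketch, assuming `huggingface_hub` is installed and `/content/model` is the target directory from the snippet above:

```python
from huggingface_hub import snapshot_download

# Download the model repo once into a local folder; later runs can pass
# this path straight to FastLanguageModel.from_pretrained() and skip the
# network entirely.
local_dir = snapshot_download(
    repo_id="unsloth/Llama-3.2-1B-Instruct",
    local_dir="/content/model",
)
```

Alternatively, after loading the model once, `model.save_pretrained(...)` and `tokenizer.save_pretrained(...)` to the same directory should produce an equivalent local copy.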