I installed the Llama 3.1 8B model through Meta's GitHub page, but I can't get their example code to work. I'm running the following code in the same directory as the Meta-Llama-3.1-8B folder:
import transformers
import torch
pipeline = transformers.pipeline(
    "text-generation",
    model="Meta-Llama-3.1-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
The error is
OSError: Meta-Llama-3.1-8B does not appear to have a file named config.json
Where can I get config.json?
I've installed the latest transformers module, and I understand that I can access the remote model on Hugging Face. But I'd rather use my local model. Is this possible?
The issue isn't on your end. The confusion comes from Meta not clearly distinguishing between its two distributions: the weights fetched via download.sh (or the GitHub repo) are in Meta's original checkpoint format (consolidated .pth files, params.json, tokenizer.model), while the Hugging Face repo ships the converted checkpoints together with the config.json and tokenizer.json that transformers expects.
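You can verify this by listing the contents of the folder you downloaded; a quick check (folder name taken from your snippet, expected filenames are the typical original-format layout) might look like this:
import os

# A download.sh-style folder holds the raw checkpoint, not the Hugging Face
# layout, so no config.json is present.
print(sorted(os.listdir("Meta-Llama-3.1-8B")))
# Typically something like: ['consolidated.00.pth', 'params.json', 'tokenizer.model']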
To resolve this, you can download the model files using the Hugging Face CLI:
huggingface-cli download meta-llama/Meta-Llama-3.1-8B --local-dir Meta-Llama-3.1-8B
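Note that the meta-llama repositories on Hugging Face are gated, so you may first need to authenticate with an access token from an account that has accepted Meta's license:
huggingface-cli login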
This method will provide you with the config.json and tokenizer.json files.
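Once the download finishes, you can point your existing pipeline at the local folder instead of a hub ID. A minimal sketch, assuming the files landed in ./Meta-Llama-3.1-8B as in the command above:
import transformers
import torch

# Load from the local directory that now contains config.json, tokenizer.json,
# and the weights, so nothing needs to be fetched from the hub.
pipeline = transformers.pipeline(
    "text-generation",
    model="./Meta-Llama-3.1-8B",  # local path rather than "meta-llama/..."
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
print(pipeline("The key to life is")[0]["generated_text"])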
Additionally, you can try downloading other versions manually. For instance, someone shared a link to the configuration file on Hugging Face: