I am confused about how the downloading and caching of model files works. I am currently working with a Stable Diffusion model, and every time I run the code it re-downloads files that are 3 to 4 GB in size.
This is the code I was trying to run at first:
from torch import autocast
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token=True
).to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]
image.save("astronaut_rides_horse.png")
When I run the code, the following appears in my shell:
Fetching 16 files: 0%| | 0/16 [00:00<?, ?it/s]
vae/diffusion_pytorch_model.safetensors: 0%| | 0.00/335M [00:00<?, ?B/s]
unet/diffusion_pytorch_model.safetensors: 0%| | 0.00/3.44G [00:00<?, ?B/s]
safety_checker/model.safetensors: 0%| | 0.00/1.22G [00:00<?, ?B/s]
text_encoder/model.safetensors: 0%| | 0.00/492M [00:00<?, ?B/s]
and this happens every single time I run the code.
I also tried cloning the whole git repo (I honestly don't know why I did that, since I knew it wouldn't change anything!). I searched many forums for this issue as well, but found not a single clue; maybe it's because of my inexperienced approach.
There are two possible solutions. By default, the downloaded weights are cached under ~/.cache/huggingface, so if they are re-downloaded on every run, that cache is probably not persisting between runs (for example, in an ephemeral environment); either way, pointing the pipeline at a directory you control fixes it.
1. Saving the model manually:
from diffusers import StableDiffusionPipeline
model = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token=True,
)
model.save_pretrained("./my_model_directory/")  # only needed on the first run
model = StableDiffusionPipeline.from_pretrained("./my_model_directory/")  # later runs: load from disk
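If you want a single script that works on both the first run and every later run, you can branch on whether the saved directory already exists. A minimal sketch (the directory name is just an example):

import os
from diffusers import StableDiffusionPipeline

model_dir = "./my_model_directory/"

if os.path.isdir(model_dir):
    # later runs: load entirely from disk, nothing is downloaded
    model = StableDiffusionPipeline.from_pretrained(model_dir)
else:
    # first run: download the weights once, then save them locally
    model = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        use_auth_token=True,
    )
    model.save_pretrained(model_dir)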
2. Using a cache directory:
from diffusers import StableDiffusionPipeline
model = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    cache_dir="./my_model_directory/",  # downloads land here and are reused
    use_auth_token=True,
)
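With cache_dir, the first run downloads into ./my_model_directory/ and later runs reuse those files automatically. If you want to be certain that nothing is fetched from the network on later runs, you can additionally pass local_files_only=True (a standard from_pretrained argument), which raises an error instead of re-downloading if the cached files are missing:

from diffusers import StableDiffusionPipeline

# later runs: resolve everything from the local cache, never hit the network
model = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    cache_dir="./my_model_directory/",
    local_files_only=True,
)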