I am trying to run a Hugging Face AI model, but it gives me an error when I try to import from the `diffusers` module. I took this model from here: Huggingface Text to image generation model. Error log:

```
cannot import name 'AutoPipelineForText2Image' from 'diffusers'
```

Code used to run this AI model:
```
import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
adapter_id = "latent-consistency/lcm-lora-sdxl"

pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# load and fuse lcm lora
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# disable guidance_scale by passing 0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
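The import error usually just means the installed `diffusers` version is too old to contain `AutoPipelineForText2Image`, so the first thing to try is upgrading the package (and restarting the runtime on Colab):

```
pip install -U diffusers
```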
I have also found an alternative way to launch this model on Google Colab; here it is:
```
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load LoRAs
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut")

# combine LoRAs (this is the line the traceback below refers to)
pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8])

prompt = "itachi crying with sharingan, 4k image, high quality"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, blurry, blur, bad quality, low quality, glitch, deformed, mutated, ugly, disfigured"

generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=20, guidance_scale=3, negative_prompt=negative_prompt, generator=generator).images[0]
image
```
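With both adapters loaded, you can also re-balance them without reloading anything, via the same `set_adapters` call used above; a minimal sketch, assuming the `pipe`, `prompt`, and `negative_prompt` from the script:

```
# Re-weight the adapters, e.g. keep LCM at full strength and push the
# papercut style up to full strength as well
pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 1.0])
image2 = pipe(prompt, num_inference_steps=20, guidance_scale=3, negative_prompt=negative_prompt, generator=torch.manual_seed(0)).images[0]
```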
This whole script will load the model for you. The free Google Colab GPU is not quite enough for a model this size, but it still loads almost 90 percent of the weights, so don't get confused if you see a warning like this (it will work despite it):
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
```
pip install accelerate
```
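Installing `accelerate` also unlocks model CPU offloading, which can help when the free Colab GPU runs short of memory; a minimal sketch, assuming the same `pipe` as above:

```
# Call this instead of pipe.to("cuda"): submodules stay on the CPU and are
# moved to the GPU only while they run (slower, but far less GPU memory)
pipe.enable_model_cpu_offload()
```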
You may also hit the error below once the script reaches the `set_adapters` call:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-d34e644404d0> in <cell line: 15>()
     13
     14 # Combine LoRAs
---> 15 pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8])

1 frames
/usr/local/lib/python3.10/dist-packages/diffusers/loaders.py in set_adapters(self, adapter_names, weights)
    721         """
    722         if not USE_PEFT_BACKEND:
--> 723             raise ValueError("PEFT backend is required for `set_adapters()`.")
    724
    725         adapter_names = [adapter_names] if isinstance(adapter_names, str) else adapter_names

ValueError: PEFT backend is required for `set_adapters()`.
```
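That `ValueError` means `diffusers` could not find the PEFT backend that `set_adapters()` relies on; installing `peft` (and restarting the runtime) should make it go away:

```
pip install -U peft
```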
Here is my Google Colab notebook for this model as reference.