CogVideoX LoRA - Zlikwid/ZlikwidCogVideoXLoRa

Model description

These are Zlikwid/ZlikwidCogVideoXLoRa LoRA weights for THUDM/CogVideoX-2b.

The weights were trained using the CogVideoX Diffusers trainer.

Was LoRA for the text encoder enabled? No.

Download model

Download the *.safetensors LoRA in the Files & versions tab.

Use it with the 🧨 diffusers library

from diffusers import CogVideoXPipeline
import torch

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("Zlikwid/ZlikwidCogVideoXLoRa", weight_name="pytorch_lora_weights.safetensors", adapter_name="cogvideox-lora")

# The LoRA scale is determined by the hyperparameters used during training.
# Here we assume `--lora_alpha` was 32 and `--rank` was 64.
# The scale can be set lower or higher than the training value to weaken or
# amplify the LoRA's effect, up to a tolerance beyond which the effect
# disappears or the outputs degrade.
pipe.set_adapters(["cogvideox-lora"], [32 / 64])

# Replace the prompt with your own text; no validation prompt is provided with this LoRA.
video = pipe(prompt="<your prompt here>", guidance_scale=6, use_dynamic_cfg=True).frames[0]

For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
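Conceptually, fusing or merging a LoRA folds its low-rank update into the base weight using the same alpha/rank scale passed to `set_adapters` above: W' = W + (alpha / rank) * B @ A. A minimal pure-Python sketch with toy shapes and hypothetical values (real CogVideoX layers are far larger):

```python
# Toy LoRA fuse: W' = W + (alpha / rank) * B @ A
# Shapes and values below are illustrative only, not taken from this model.
rank, alpha = 1, 2
scale = alpha / rank  # the same scale set_adapters applies

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (d_out x d_in)
A = [[1.0, 0.0]]               # LoRA down-projection (rank x d_in)
B = [[1.0], [0.0]]             # LoRA up-projection (d_out x rank)

def matmul(X, Y):
    # Plain nested-list matrix multiply.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

delta = matmul(B, A)
W_fused = [[w + scale * d for w, d in zip(w_row, d_row)]
           for w_row, d_row in zip(W, delta)]
# W_fused is [[3.0, 0.0], [0.0, 1.0]]
```

After fusing, the adapter's contribution lives inside the base weights, so inference runs without the extra LoRA matmuls; `pipe.fuse_lora()` performs the same fold on every adapted layer.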

License

Please adhere to the licensing terms as described here and here.

Intended uses & limitations

How to use

# TODO: add an example code snippet for running this diffusion pipeline

Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

Training details

[TODO: describe the data used to train the model]
