---
license: mit
base_model: warp-ai/wuerstchen-prior
datasets:
- dongOi071102/meme-image-no-text
tags:
- wuerstchen
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---

# LoRA Finetuning - dongOi071102/wuerstchen-prior-meme-image-no-text-lora-v1

This pipeline was finetuned from **warp-ai/wuerstchen-prior** on the **dongOi071102/meme-image-no-text** dataset. Below are some example images generated with the finetuned pipeline using the prompt "a cartoon character sitting at a desk with a computer":

![val_imgs_grid](./val_imgs_grid.png)

## Pipeline usage

You can use the pipeline like so:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float32
)
# Load the LoRA weights into the prior pipeline:
pipeline.prior_pipe.load_lora_weights(
    "dongOi071102/wuerstchen-prior-meme-image-no-text-lora-v1",
    torch_dtype=torch.float32,
)

prompt = "a cartoon character sitting at a desk with a computer"
image = pipeline(prompt=prompt).images[0]
image.save("my_image.png")
```

## Training info

These are the key hyperparameters used during training:

* LoRA rank: 8
* Epochs: 100
* Learning rate: 0.0001
* Batch size: 8
* Gradient accumulation steps: 1
* Image resolution: 512
* Mixed precision: fp16

More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/2111818-no/text2image-fine-tune/runs/w0wd57a9).
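
## Half-precision inference

Since training used fp16 mixed precision, the pipeline can also be run in half precision on a GPU, which roughly halves memory use relative to fp32. Below is a minimal sketch; the fp16 dtype and CUDA device are illustrative assumptions, not settings from this card:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumption: a CUDA GPU is available. fp16 matches the mixed precision
# used during training and reduces memory compared to fp32 inference.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")
pipeline.prior_pipe.load_lora_weights(
    "dongOi071102/wuerstchen-prior-meme-image-no-text-lora-v1"
)

prompt = "a cartoon character sitting at a desk with a computer"
image = pipeline(prompt=prompt).images[0]
image.save("my_image_fp16.png")
```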