---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co./Guizmus/SDArt_synesthesia768/resolve/main/showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
---

# SDArt : Synesthesia (version based on 2.1 768px)

![Showcase](https://huggingface.co./Guizmus/SDArt_synesthesia768/resolve/main/showcase.jpg)

## Theme

Dear @Challengers,

> In a world where colors have flavor, and sounds have texture,
> The senses intertwine to create a sensory rapture.
> Welcome to "Synesthesia", where the ordinary is extraordinary,
> And art takes on a meaning that's truly visionary!

Synesthesia /ˌsɪn.əsˈθiː.ʒə/: an anomalous blending of the senses in which the stimulation of one modality simultaneously produces sensation in a different modality. Synesthetes hear colors, feel sounds, and taste shapes.

* Create an image that captures the sensory essence of synesthesia! Explore the intersections of sound, color, and texture. What is it like to taste a melody, feel a scent, or see colored words?
* What would it look like to taste the sound of a thunderstorm? How would you visualize the sensation of the sun's warmth on your skin? Or how about tasting a particular color, such as a bright red apple or a cool blueberry?
* Explore the possibilities of synesthesia and its many interpretations!

## Model description

This model is related to the "Picture of the Week" contest on the Stable Diffusion Discord. I make a model out of all the submissions so that people can keep enjoying the theme after the event and see a little of their designs in other people's creations. The token stays "SDArt", and I keep the learning on the low side so that the model doesn't simply replicate the original creations.

The dataset is made of 39 pictures in total. It was trained on [Stable Diffusion 2.1 768px](https://huggingface.co./stabilityai/stable-diffusion-2-1).
I used [EveryDream](https://github.com/victorchall/EveryDream2trainer) for the training, with 100 total repeats per picture. The pictures were tagged with the token "SDArt" plus an arbitrary token I chose for each participant. The dataset is provided below, as well as a list of usernames and their corresponding tokens.

The recommended sampling is k_Euler_a or DPM++ 2M Karras, 20 steps, CFG scale 7.5.

[The model is also available here](https://huggingface.co./Guizmus/SDArt_synesthesia) in a version trained on 1.5 as a base.

## Trained tokens

* SDArt
* dyce
* bnp
* keel
* aten
* fcu
* lpg
* mth
* elio
* gani
* pfa
* kprc
* cpec
* kuro
* asot
* psst
* sqm
* irgc
* cq
* utm
* guin
* crit
* mlas
* isch
* vedi
* dds
* acu
* oxi
* kohl
* maar
* mako
* mds
* mert
* mgt
* miki
* minh
* mohd
* mss
* muc
* mwf

## Download links

* [SafeTensors](https://huggingface.co./Guizmus/SDArt_synesthesia768/resolve/main/SDArt_synesthesia768.safetensors)
* [CKPT](https://huggingface.co./Guizmus/SDArt_synesthesia768/resolve/main/SDArt_synesthesia768.ckpt)
* [Config (yaml)](https://huggingface.co./Guizmus/SDArt_synesthesia768/resolve/main/SDArt_synesthesia768.yaml)
* [Dataset](https://huggingface.co./Guizmus/SDArt_synesthesia768/resolve/main/dataset.zip)

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co./docs/diffusers/api/pipelines/stable_diffusion).

You can also export the model to [ONNX](https://huggingface.co./docs/diffusers/optimization/onnx), [MPS](https://huggingface.co./docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "Guizmus/SDArt_synesthesia768"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Prompt with the base token plus one of the trained participant tokens
prompt = "SDArt minh"
image = pipe(prompt).images[0]

image.save("./SDArt.png")
```