Radamés Ajna

radames

AI & ML interests

None yet

Recent Activity

liked a model 1 day ago
city96/FLUX.1-dev-gguf
liked a model 1 day ago
city96/FLUX.1-schnell-gguf
liked a model 2 days ago
alimama-creative/FLUX.1-Turbo-Alpha

Organizations

Spaces-explorers, CVPR Demo Track, MONAI, Gradio-Blocks-Party, Webhooks Explorers (BETA), Open Access AI Collective, The Team Ten, Open-Source AI Meetup, mangoes AI, Stable Diffusion concepts library, Stable Diffusion Dreambooth Concepts Library, Daily, DragGan, meta-private, temp-org, Blog-explorers, Editing Images, leditsplusplus, sci-blender, Lilac AI, Latent Consistency, rtemp, ZeroGPU Explorers, cvmistralhackathon, Shizuku, Journalists on Hugging Face, Hugging Face - Visual Blocks, Social Post Explorers, +RAIN film festival

Posts 10

Post
5700
Thanks to @OzzyGT for pushing the new Anyline preprocessor to https://github.com/huggingface/controlnet_aux. Now you can use the TheMistoAI/MistoLine ControlNet fully within Diffusers.

Here's a demo for you: radames/MistoLine-ControlNet-demo
Super-resolution version: radames/Enhance-This-HiDiffusion-SDXL

from PIL import Image
from controlnet_aux import AnylineDetector

# Load the Anyline edge detector weights from the MistoLine repo
anyline = AnylineDetector.from_pretrained(
    "TheMistoAI/MistoLine", filename="MTEED.pth", subfolder="Anyline"
).to("cuda")

# Run Anyline on a source image to get the line-art conditioning map
source = Image.open("source.png")
result = anyline(source, detect_resolution=1280)
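
And here's a minimal sketch of feeding that Anyline map into the MistoLine ControlNet through Diffusers. Treat it as illustrative: the SDXL base checkpoint, prompt, and conditioning scale are placeholders, and it assumes the TheMistoAI/MistoLine repo hosts Diffusers-format ControlNet weights.

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Load the MistoLine ControlNet (assumes Diffusers-format weights in the repo)
controlnet = ControlNetModel.from_pretrained(
    "TheMistoAI/MistoLine", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Condition generation on the Anyline line map computed above
image = pipe(
    "a cozy reading nook, detailed, photorealistic",  # placeholder prompt
    image=result,
    controlnet_conditioning_scale=0.5,
).images[0]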
Post
6473
At Google I/O 2024, we're collaborating with the Google Visual Blocks team (https://visualblocks.withgoogle.com) to release custom Hugging Face nodes. Visual Blocks for ML is a browser-based tool that lets users build machine learning pipelines through a visual interface. We're launching client-side nodes powered by Transformers.js that run models directly in the browser, as well as server-side nodes that run Transformers pipeline tasks and LLMs via our hosted inference. With @Xenova @JasonMayes

You can learn more about it here https://huggingface.co./blog/radames/hugging-face-google-visual-blocks

Source code for the custom nodes:
https://github.com/huggingface/visual-blocks-custom-components
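
For context, the server-side nodes essentially wrap standard Transformers pipeline tasks. Here's a minimal Python sketch of the kind of call such a node performs; the task and model ID below are illustrative placeholders, not the exact nodes shipped with the integration.

from transformers import pipeline

# Illustrative only: the sort of pipeline task a server-side Visual Blocks
# node wraps. The model ID is a placeholder, not the one used by the nodes.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("source.png")[0]["generated_text"])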