Introducing the Synthetic Data Generator, a user-friendly application that takes a no-code approach to creating custom datasets with Large Language Models (LLMs). The best part: a simple step-by-step process that makes dataset creation a non-technical breeze, so anyone can create datasets and models in minutes without writing a single line of code.
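If you would rather run it locally than on Spaces, here is a minimal sketch, assuming the synthetic-dataset-generator package exposes a launch() entry point and picks up your Hugging Face token from the environment (both are assumptions, check the project README for the exact usage):

```python
# pip install synthetic-dataset-generator
import os

# Placeholder token; the app is assumed to read HF_TOKEN for LLM inference.
os.environ["HF_TOKEN"] = "hf_your_token_here"

from synthetic_dataset_generator import launch

launch()  # assumed helper that starts the step-by-step dataset creation UI locally
```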
Open Preference Dataset for Text-to-Image Generation by the 🤗 Community
Open Image Preferences is an Apache 2.0 licensed dataset for text-to-image generation. It contains 10K text-to-image preference pairs across common image generation categories, covering different model families and varying prompt complexities.
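To poke at the data yourself, a minimal sketch with the datasets library (the repo id below is an assumption; check the dataset card on the Hub for the canonical name):

```python
from datasets import load_dataset

# Assumption: the dataset lives under this repo id on the Hub.
ds = load_dataset("data-is-better-together/open-image-preferences-v1", split="train")

print(ds)            # overview of the columns (prompt, candidate images, preference, ...)
print(ds[0].keys())  # inspect a single preference pair
```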
This is amazing for cheap model fine-tunes without the hassle of actual deployment! TIL: LoRA fine-tunes for models on the Hub can be used directly for inference!
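As a rough sketch of what that looks like with diffusers (the LoRA repo id below is hypothetical; swap in any adapter trained on top of the base model):

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model once...
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# ...then attach a LoRA adapter straight from the Hub (hypothetical repo id).
pipe.load_lora_weights("your-username/your-flux-lora")

image = pipe("a watercolor painting of a lighthouse at sunrise").images[0]
image.save("lighthouse.png")
```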
Black Forest Labs Flux Dev vs. Stability AI Stable Diffusion 3.5 Large
Together with the data-is-better-together community, we've worked on an Apache 2.0 licensed open image preference dataset based on the fal.ai imgsys prompts dataset. Thanks to the awesome community, we managed to collect 5K preference pairs in less than 2 days. The annotation agreement among annotators is great too.
Aashish Kumar won a month of Hugging Face Pro by making the most contributions! Congrats from the entire team!
The best thing?! We are not done yet! Let's keep the annotations coming for 5K more in the second part of the sprint (with more prizes to go around)!
Let's make a generation of amazing image-generation models
The best image generation models are trained on human preference datasets, where annotators have selected the best image from a choice of two. Unfortunately, many of these datasets are closed source so the community cannot train open models on them. Let's change that!
The community can contribute image preferences for an open-source dataset that could be used for building AI models that convert text to image, like the Flux or Stable Diffusion families. The dataset will be open source so everyone can use it to train models that we can all use.
For anyone who struggles with NER or information extraction with LLMs.
We showed an efficient workflow for token classification, including zero-shot suggestions and model fine-tuning, with Argilla, GLiNER, the NuMind NuExtract LLM, and SpanMarker. @argilla
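To give a feel for the zero-shot suggestion step, here is a minimal GLiNER sketch (the checkpoint and label set are assumptions; the predicted spans can then be logged to Argilla as suggestions for annotators to accept or correct):

```python
from gliner import GLiNER

# Assumption: using one of the public GLiNER checkpoints on the Hub.
model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")

text = "Maria Lopez joined Hugging Face in Paris as a machine learning engineer in 2023."
labels = ["person", "organization", "location", "job title", "date"]  # hypothetical label set

# Zero-shot span predictions: each entity has text, label, character offsets, and a score.
for entity in model.predict_entities(text, labels, threshold=0.5):
    print(entity["start"], entity["end"], entity["label"], "->", entity["text"])
```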
Import any dataset from the Hub and configure your labeling tasks without needing any code!
Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows. We would love to hear your feedback and ideas, so don't be shy and reach out!
You can now build a custom text classifier without days of human labeling!
👍 LLMs work reasonably well as text classifiers. 👎 They are expensive to run at scale and their performance drops in specialized domains.
👍 Purpose-built classifiers have low latency and can potentially run on CPU. 👎 They require labeled training data.
Combine the best of both worlds: the automatic labeling capabilities of LLMs and the high-quality annotations from human experts to train and deploy a specialized model.
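One way to sketch the "LLM suggests, humans curate" loop with off-the-shelf tools (the labels and model choice below are assumptions, not a prescribed setup):

```python
from transformers import pipeline

# Step 1: let a zero-shot classifier propose labels at scale.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

texts = [
    "The refund never arrived on my credit card.",
    "How do I upgrade to the annual plan?",
]
candidate_labels = ["billing", "account", "technical issue"]  # hypothetical domain labels

for text in texts:
    result = classifier(text, candidate_labels)
    print(result["sequence"], "->", result["labels"][0], round(result["scores"][0], 2))

# Step 2 (not shown): push these suggestions into Argilla so domain experts only
# confirm or correct them, then train a small purpose-built classifier
# (e.g. with SetFit) on the curated labels so it runs cheaply at scale.
```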
The Synthetic Data Generator now directly integrates with Argilla, so you can generate and curate your own high-quality datasets from pure natural language!
Up next -> adding dataset generation for text classification. Other suggestions? Let us know.
On Thursday 10 October at 17:00 CEST, I will show a good way to get started with a text classification project on the Hugging Face Hub with Argilla and SetFit.
Why is argilla/FinePersonas-v0.1 great for synthetic data generation? It can be used to synthesize realistic and diverse data for the customer personas your company is interested in!
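A minimal sketch of how you might seed prompts with personas (the column name and prompt template are assumptions; the generation itself can go through whatever LLM you prefer):

```python
from datasets import load_dataset

# Stream to avoid downloading the full dataset.
# Assumption: the persona description lives in a "persona" column.
personas = load_dataset("argilla/FinePersonas-v0.1", split="train", streaming=True)

persona = next(iter(personas))["persona"]

prompt = (
    f"You are the following customer: {persona}\n"
    "Write a short, realistic support ticket about a problem with your last order."
)
print(prompt)  # feed this to any LLM to synthesize diverse, persona-grounded examples
```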
We've got a number of great community meetups coming up again where we'll be discussing the basics of getting started and using Argilla for TextCat, TokenCat/NER and RAG. We will walk you through common scenarios and everything you might need to know to get your projects started.
The first meetup coming up: setting up a text classification project using Argilla and SetFit!
- Deploy Argilla on Spaces
- Vibe check your dataset
- Configure and create an Argilla dataset
- Add records
- Add zero-shot suggestions
- Evaluate model suggestions in Argilla
- Train a SetFit model
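To preview that last step, here is a minimal SetFit training sketch, assuming setfit>=1.0 and a small curated dataset with "text" and "label" columns (e.g. exported from Argilla); the example data and label names are placeholders:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny stand-in for the records you curated in Argilla (hypothetical labels).
train_ds = Dataset.from_dict({
    "text": [
        "The invoice amount is wrong.",
        "I cannot log into my account.",
        "Why was I charged twice?",
        "The app crashes on startup.",
        "Please reset my password.",
        "The export button does nothing.",
    ],
    "label": [0, 1, 0, 2, 1, 2],  # 0=billing, 1=account, 2=technical issue
})

# Few-shot friendly: SetFit fine-tunes a sentence-transformer body plus a lightweight head.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    labels=["billing", "account", "technical issue"],  # maps predictions back to names
)

args = TrainingArguments(batch_size=8, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

print(model.predict(["I was billed for a plan I cancelled."]))
```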
Hope to see all of you there, and looking forward to your questions and AI use cases. Don't be shy about bringing your own issues and questions to the table; we would love to answer them.