Datasets Maintainers

Activity Feed

lhoestq posted an update 13 days ago
Made an HF Dataset editor à la Google Sheets, here: lhoestq/dataset-spreadsheets

With Dataset Spreadsheets:
✏️ Edit datasets in the UI
🔗 Share a link with collaborators
🐍 Use locally in DuckDB or Python

Available for the 100,000+ Parquet datasets on HF :)
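
For the local route, DuckDB can query a dataset's Parquet files directly from the Hub. A minimal sketch, assuming a placeholder dataset id and the standard hf:// path layout:

```python
# Read a Hub dataset's Parquet files straight from DuckDB.
# "username/my-dataset" is a placeholder, not a real repo.
import duckdb

rows = duckdb.sql(
    "SELECT * FROM 'hf://datasets/username/my-dataset/**/*.parquet' LIMIT 10"
).df()
print(rows)
```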
cfahlgren1 posted an update 22 days ago
You can just ask things 🗣️

"show me messages in the coding category that are in the top 10% of reward model scores"

Download really high-quality instructions from the Llama 3.1 405B synthetic dataset 🔥

argilla/magpie-ultra-v1.0
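
For reference, that question compiles to roughly the SQL below, shown here runnable locally with DuckDB; the `category` and `score` column names are assumptions about the dataset's schema:

```python
# Roughly the SQL behind the question above; "category" and "score"
# are assumed column names, not verified against the dataset.
import duckdb

top_coding = duckdb.sql("""
    SELECT * EXCLUDE (pr)
    FROM (
        SELECT *, percent_rank() OVER (ORDER BY score) AS pr
        FROM 'hf://datasets/argilla/magpie-ultra-v1.0/**/*.parquet'
        WHERE category = 'coding'
    )
    WHERE pr >= 0.90
""").df()
```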

cfahlgren1 posted an update 24 days ago
We just dropped an LLM inside the SQL Console 🤯

The amazing new Qwen/Qwen2.5-Coder-32B-Instruct model can now write SQL for any Hugging Face dataset ✨

It's 2025; you shouldn't be hand-writing SQL! This is a big step toward letting anyone do in-depth analysis on a dataset. Let us know what you think 🤗
cfahlgren1 posted an update about 1 month ago
observers 🔭 - automatically log all OpenAI-compatible requests to a dataset 💽

• supports any OpenAI-compatible endpoint 💪
• supports DuckDB, Hugging Face Datasets, and Argilla as stores

> pip install observers

No complex framework. Just a few lines of code to start sending your traces somewhere. Let us know what you think! @davidberenstein1957 and I will continue iterating!

Here's an example dataset that was logged to Hugging Face from Ollama: cfahlgren1/llama-3.1-awesome-chatgpt-prompts
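
Those few lines look roughly like this; this is a sketch from memory of the library's wrapper entry point, so treat the `wrap_openai` import path and name as assumptions:

```python
# Hedged sketch: wrap an OpenAI client so request/response pairs are
# logged to a store (Hugging Face Datasets by default, per the post).
# wrap_openai is an assumption about the library's actual API.
from openai import OpenAI
from observers.observers import wrap_openai

client = wrap_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hi!"}],
)
```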
cfahlgren1 posted an update about 1 month ago
You can create charts, leaderboards, and filters on top of any Hugging Face dataset in less than a minute

• ASCII Bar Charts 📊
• Powered by DuckDB WASM ⚡
• Download results to Parquet 💽
• Embed and share results with friends 📬

Do you have any interesting queries?
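
Here's one to get started: DuckDB ships a built-in bar() function, which is what makes ASCII charts like these a one-liner. The dataset id and "language" column below are placeholders:

```python
# DuckDB's built-in bar() renders ASCII bars; dataset id and the
# "language" column are placeholders.
import duckdb

duckdb.sql("""
    SELECT language,
           count(*) AS n,
           bar(count(*), 0, 10000, 30) AS chart
    FROM 'hf://datasets/username/my-dataset/**/*.parquet'
    GROUP BY language
    ORDER BY n DESC
""").show()
```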
albertvillanova posted an update about 1 month ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now you can compare models not only by performance, but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
cfahlgren1 posted an update about 1 month ago
You can clean and format datasets entirely in the browser with a few lines of SQL.

In this post, I replicate the process @mlabonne used to clean the new microsoft/orca-agentinstruct-1M-v1 dataset.

The cleaning process consists of:
- Joining the separate splits together and adding a split column
- Converting the stringified messages into a list of structs
- Removing empty system prompts

https://huggingface.co./blog/cfahlgren1/the-beginners-guide-to-cleaning-a-dataset

Here's his new cleaned dataset: mlabonne/orca-agentinstruct-1M-v1-cleaned
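
A condensed DuckDB sketch of those three steps; the split file paths are assumptions about the repo layout and the JSON cast stands in for the full string-to-struct conversion, so see the blog post for the exact query:

```python
# Hedged sketch of the cleaning steps; only two splits are shown, and
# the split file names are assumptions about the repo layout.
import duckdb

cleaned = duckdb.sql("""
    WITH joined AS (
        SELECT *, 'analysis' AS split
        FROM 'hf://datasets/microsoft/orca-agentinstruct-1M-v1/data/analysis*.parquet'
        UNION ALL
        SELECT *, 'creative' AS split
        FROM 'hf://datasets/microsoft/orca-agentinstruct-1M-v1/data/creative*.parquet'
        -- ...one UNION ALL per remaining split...
    )
    SELECT split, messages::JSON AS messages  -- stringified list -> JSON
    FROM joined
    WHERE messages NOT LIKE '%"role": "system", "content": ""%'
""").df()
```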
cfahlgren1 posted an update about 1 month ago
Why use Google Drive when you can have:

• Free storage with generous limits 🆓
• Dataset Viewer (Sorting, Filtering, FTS) 🔍
• Third-Party Library Support
• SQL Console 🟧
• Security 🔒
• Community, Reach, and Visibility 📈

It's a no-brainer!

Check out our post on what you get instantly out of the box when you create a dataset.
https://huggingface.co./blog/researcher-dataset-sharing
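
And the third-party library support means getting your data there is a one-liner; a quick sketch with the datasets library, using a placeholder repo id:

```python
# Share a local CSV as a Hub dataset; the repo id is a placeholder.
from datasets import load_dataset

ds = load_dataset("csv", data_files="results.csv")
ds.push_to_hub("username/my-research-data")
```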
albertvillanova posted an update about 2 months ago
🚀 New feature in the 🤗 Open LLM Leaderboard Comparator: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator now and see how models stack up against their base versions and derivatives, to understand fine-tuning and other adjustments. Check it out here: open-llm-leaderboard/comparator 🌐
asoria posted an update about 2 months ago
🚀 Exploring Topic Modeling with BERTopic 🤖

When you come across an interesting dataset, you often wonder:
Which topics frequently appear in these documents? 🤔
What is this data really about? 📊

Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.

I've been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity makes BERTopic powerful, letting you swap in your preferred algorithm for each component. It also handles large datasets efficiently by merging models with the BERTopic.merge_models approach. 🔗

🔍 How do we make this work?
Here's the stack we're using (wired together in the sketch at the end of this post):

📂 Data Source ➡️ Hugging Face datasets with DuckDB for retrieval
🧠 Text Embeddings ➡️ Sentence Transformers (all-MiniLM-L6-v2)
⚡ Dimensionality Reduction ➡️ RAPIDS cuML UMAP for GPU-accelerated performance
🔍 Clustering ➡️ RAPIDS cuML HDBSCAN for fast clustering
✂️ Tokenization ➡️ CountVectorizer
🔧 Representation Tuning ➡️ KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
🌍 Visualization ➡️ Datamapplot library
Check out the space and see how you can quickly generate topics from your dataset: datasets-topics/topics-generator

Powered by @MaartenGr - BERTopic
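
And here is the promised sketch of how those components snap together; the glue code is illustrative rather than the Space's actual source, and it assumes a GPU environment with RAPIDS cuML installed:

```python
# Illustrative wiring of the stack above (not the Space's real source).
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from cuml.manifold import UMAP      # GPU-accelerated UMAP
from cuml.cluster import HDBSCAN    # GPU-accelerated HDBSCAN

docs = ["..."]  # placeholder: a text column pulled from an HF dataset

topic_model = BERTopic(
    embedding_model=SentenceTransformer("all-MiniLM-L6-v2"),
    umap_model=UMAP(n_components=5, n_neighbors=15, min_dist=0.0),
    hdbscan_model=HDBSCAN(min_cluster_size=15),
    vectorizer_model=CountVectorizer(stop_words="english"),
    representation_model=KeyBERTInspired(),
)
topics, probs = topic_model.fit_transform(docs)
```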
albertvillanova posted an update about 2 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs, all in one place. Ready to level up your model comparison game?
cfahlgren1 posted an update about 2 months ago
If you're like me, you like to find up-and-coming datasets and Spaces before everyone else.

I made a trending-repos Space, cfahlgren1/trending-repos, which shows:

- New up-and-coming Spaces in the last day
- New up-and-coming Datasets in the last 2 weeks

It's a really good way to find some new gems before they become popular. For example, someone is working on a way to dynamically create assets inside a video game here: gptcall/AI-Game-Creator

albertvillanova posted an update 2 months ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? 📊 Compare models: open-llm-leaderboard/comparator
albertvillanova posted an update 2 months ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔

If the model you’re interested in is evaluated on the Hugging Face Open LLM Leaderboard, there’s an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator
Let's walk through an example 👇

Let’s compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model that’s great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller in size! 📊

This is a great example of how parameter size isn’t everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.

Looking for other comparisons? Drop your model suggestions below! 👇
albertvillanova posted an update 2 months ago
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the πŸ€— Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to hone in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions! ✅

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR) and then a Subtask (e.g., Murder Mystery) and then press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model’s outputs.

5/ With this tool, it’s never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you’re a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
asoria posted an update 3 months ago
πŸ“ I wrote a tutorial on how to get started with the fine-tuning process using Hugging Face tools, providing an end-to-end workflow.

The tutorial covers creating a new dataset using the new SQL Console πŸ›’ and fine-tuning a model with SFT, guided by the Notebook Creator App πŸ“™.

πŸ‘‰ You can read the full article here:
https://huggingface.co./blog/asoria/easy-fine-tuning-with-hf
asoria/auto-notebook-creator
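
For flavor, the SFT step in a generated notebook boils down to a few lines with trl; a minimal sketch with placeholder model and dataset ids, not the article's exact code:

```python
# Minimal SFT sketch with trl; model and dataset ids are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("username/my-dataset", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # any small causal LM works for a demo
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output", max_steps=100),
)
trainer.train()
```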
asoria posted an update 3 months ago
🚀 Excited to share the latest update to the Notebook Creator Tool!

Now with basic fine-tuning support via Supervised Fine-Tuning (SFT)! 🎯

How it works:
1️⃣ Choose your Hugging Face dataset and notebook type (SFT)
2️⃣ Automatically generate your training notebook
3️⃣ Start fine-tuning with your data!

Link to the app 👉 https://lnkd.in/e_3nmWrB
💡 Want to contribute new notebooks? 👉 https://lnkd.in/eWcZ92dS