It's 2025, you shouldn't be hand-writing SQL! This is a big step toward letting anyone do in-depth analysis on a dataset. Let us know what you think!
observers - automatically log all OpenAI-compatible requests to a dataset
- Supports any OpenAI-compatible endpoint
- Supports DuckDB, Hugging Face Datasets, and Argilla as stores
> pip install observers
No complex framework. Just a few lines of code to start sending your traces somewhere. Let us know what you think! @davidberenstein1957 and I will continue iterating!
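Conceptually, the wrapping works something like the toy sketch below: intercept each request/response pair and append it to a local store. This is illustrative only, not the observers library's actual API — the `wrap_client` helper, the `FakeClient`, and the SQLite store here are stand-ins (the real library wraps your OpenAI client and writes to DuckDB, Hugging Face Datasets, or Argilla).

```python
import json
import sqlite3

def wrap_client(client, db_path=":memory:"):
    """Toy observer: log every create() call's request and response."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS traces (request TEXT, response TEXT)")

    original = client.create

    def traced_create(**kwargs):
        response = original(**kwargs)
        # Store the full request/response pair as JSON rows.
        conn.execute(
            "INSERT INTO traces VALUES (?, ?)",
            (json.dumps(kwargs), json.dumps(response)),
        )
        conn.commit()
        return response

    client.create = traced_create
    client._traces = conn  # expose the store for inspection
    return client

class FakeClient:
    """Stand-in for an OpenAI-compatible client (hypothetical)."""
    def create(self, **kwargs):
        return {"choices": [{"message": {"content": "hi"}}]}

client = wrap_client(FakeClient())
client.create(model="some-model", messages=[{"role": "user", "content": "hello"}])
rows = client._traces.execute("SELECT COUNT(*) FROM traces").fetchone()
print(rows[0])
```

The appeal of the pattern is that your calling code doesn't change at all — only the client construction does.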
How green is your model? Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research! open-llm-leaderboard/comparator Now you can compare models not only by performance, but also by their environmental footprint!
The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
The cleaning process consists of:
- Joining the separate splits together and adding a split column
- Converting string messages into a list of structs
- Removing empty system prompts
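Assuming hypothetical field names and two splits, the three cleaning steps above might look like this sketch:

```python
import json

# Hypothetical raw records from two splits; "messages" arrives as a JSON string.
splits = {
    "train": [{"messages": '[{"role": "system", "content": ""}, {"role": "user", "content": "hi"}]'}],
    "test":  [{"messages": '[{"role": "user", "content": "bye"}]'}],
}

cleaned = []
for split_name, rows in splits.items():
    for row in rows:
        # 1. Join the splits together, recording the origin in a "split" column.
        record = {"split": split_name}
        # 2. Convert the string messages into a list of structs.
        messages = json.loads(row["messages"])
        # 3. Remove empty system prompts.
        record["messages"] = [
            m for m in messages
            if not (m["role"] == "system" and not m["content"].strip())
        ]
        cleaned.append(record)

print(len(cleaned), cleaned[0]["messages"])
```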
New feature of the Open LLM Leaderboard Comparator: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!
Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!
Ready to dive in? Try the Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator
When you come across an interesting dataset, you often wonder: Which topics frequently appear in these documents? What is this data really about?
Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.
I've been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity is what makes BERTopic powerful, letting you swap components for your preferred algorithms. It also handles large datasets efficiently by merging models with the BERTopic.merge_models approach.
How do we make this work? Here's the stack we're using:
- Data source: Hugging Face Datasets with DuckDB for retrieval
- Text embeddings: Sentence Transformers (all-MiniLM-L6-v2)
- Dimensionality reduction: RAPIDS cuML UMAP for GPU-accelerated performance
- Clustering: RAPIDS cuML HDBSCAN for fast clustering
- Tokenization: CountVectorizer
- Representation tuning: KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
- Visualization: Datamapplot library

Check out the space and see how you can quickly generate topics from your dataset: datasets-topics/topics-generator
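After the clustering stage in the stack above, each topic needs a keyword representation. The core idea behind BERTopic's representation step (class-based TF-IDF) can be sketched in plain Python — a toy illustration with tiny documents and cluster labels assumed to come from the embedding → UMAP → HDBSCAN stages, not BERTopic's actual implementation:

```python
import math
from collections import Counter

# Toy corpus with hypothetical cluster assignments from an earlier stage.
docs = ["cats purr and meow", "dogs bark loudly", "cats chase mice", "dogs fetch balls"]
labels = [0, 1, 0, 1]

# Concatenate each cluster's documents into one "class document".
class_docs = {}
for doc, label in zip(docs, labels):
    class_docs.setdefault(label, []).append(doc)

# Term frequency per class.
tf = {label: Counter(" ".join(texts).split()) for label, texts in class_docs.items()}

n_classes = len(tf)

def score(word, label):
    # Down-weight words that appear across many clusters.
    classes_with_word = sum(1 for counts in tf.values() if word in counts)
    return tf[label][word] * math.log(1 + n_classes / classes_with_word)

# Top-3 keywords per topic.
topics = {
    label: sorted(counts, key=lambda w: score(w, label), reverse=True)[:3]
    for label, counts in tf.items()
}
print(topics)
```

Words that dominate one cluster but rarely appear in others rise to the top, which is what makes the extracted keywords read as "topics".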
Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
- New up-and-coming Spaces in the last day
- New up-and-coming Datasets in the last 2 weeks
It's a really good way to find some new gems before they become popular. For example, someone is working on a way to dynamically create assets inside a video game here: gptcall/AI-Game-Creator
Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids the MATH performance loss! Why? Can they follow the format in examples? Compare models: open-llm-leaderboard/comparator
Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best?
If the models you're interested in are evaluated on the Hugging Face Open LLM Leaderboard, there's an easy way to compare them: the model Comparator tool: open-llm-leaderboard/comparator. Let's walk through an example.
Let's compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)
For an assistant, you want a model thatβs great at instruction following. So, how do these two models stack up on the IFEval task?
What about other evaluations? Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, despite its smaller size!
This is a great example of how parameter count isn't everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models on certain tasks.
Looking for other comparisons? Drop your model suggestions below!
We've just released a new tool to compare the performance of models on the Open LLM Leaderboard: the Comparator. open-llm-leaderboard/comparator
Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2.
1/ Load the Models' Results
- Go to the Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button.
Ready to dive into the results!
2/ Compare Metric Results in the Results Tab
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, color-coded with a gradient to highlight performance differences: greener is better!
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.
3/ Check Config Alignment in the Configs Tab
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions!
4/ Compare Predictions by Sample in the Details Tab
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.
5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or an enthusiast, you can instantly visualize improvements and dive into detailed comparisons.
Try the Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
Excited to share the latest update to the Notebook Creator Tool!
Now with basic fine-tuning support via Supervised Fine-Tuning (SFT)!
How it works:
1. Choose your Hugging Face dataset and notebook type (SFT)
2. Automatically generate your training notebook
3. Start fine-tuning with your data!
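Under the hood, the generation step amounts to filling a notebook (nbformat v4 JSON) template with the chosen dataset. A simplified sketch — the helper name, cell contents, and dataset id below are illustrative, not the tool's actual template:

```python
import json

def build_sft_notebook(dataset_id: str) -> dict:
    """Assemble a minimal Jupyter notebook (nbformat v4) for SFT.

    Simplified sketch: the real tool generates much richer cells.
    """
    cells = [
        {"cell_type": "markdown", "metadata": {},
         "source": [f"# Supervised fine-tuning on `{dataset_id}`"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None, "outputs": [],
         "source": [
             "from datasets import load_dataset\n",
             f'dataset = load_dataset("{dataset_id}")\n',
         ]},
        {"cell_type": "code", "metadata": {}, "execution_count": None, "outputs": [],
         "source": [
             "# Placeholder: configure an SFT trainer (e.g. trl's SFTTrainer) here.\n",
         ]},
    ]
    return {"nbformat": 4, "nbformat_minor": 5, "metadata": {}, "cells": cells}

# Hypothetical dataset id for illustration.
nb = build_sft_notebook("my-org/my-dataset")
print(json.dumps(nb)[:80])
```

The resulting dict serializes directly to a `.ipynb` file, which is why templating notebooks this way is straightforward.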
Link to the app: https://lnkd.in/e_3nmWrB
Want to contribute new notebooks? https://lnkd.in/eWcZ92dS