Albert Villanova del Moral

albertvillanova

AI & ML interests

ML Engineer @ Hugging Face: Evaluations (Science)

Recent Activity

Organizations

Language Technology Research Group at the University of Helsinki, Hugging Face, AI4Bharat, WMT: Workshop on Statistical Machine Translation, DAIR.AI, BigScience Workshop, Neuropark, Hugging Face Internal Testing Organization, superb, OSCAR, GEM benchmark, Tmp Test, Col·lectivaT, Wikimedia, BigScience Catalogue Data, tmp avm 1, Softcatalà, PubMed Central, Speech Recognition Community Event Version 2, BIG-bench, I Hackathon Somos NLP: PLN en Español, BigScience Biomedical Datasets, OpenSLR, BigScience Data, The UIT Natural Language Processing Group, Evaluation datasets, WebNLG, SomosNLP, Data, Datasets Maintainers, Open-Source AI Meetup, EuroPython 2022, FEVER, BigLAM: BigScience Libraries, Archives and Museums, BigCode, Hugging Face H4, Center for AI Safety, Hugging Face OSS Metrics, BigBang, OPUS, Aiinnova, Research Computing Center of Lomonosov Moscow State University, Open LLM Leaderboard, University of Edinburgh - Institute for Language, Cognition and Computation, EdinburghNLP - Natural Language Processing Group at the University of Edinburgh, Datasets examples, Demo leaderboard with an integrated backend, La Leaderboard, Paris AI Running Club, HuggingFaceEval, Legacy Datasets, Department of Cognitive Science @ JHU, Google Research Datasets, Defunct Datasets, ADE Benchmark Corpus, Natural Language Processing Group - Athens University of Economics and Business, OMILab, The Open University of Israel, hotpotqa, Universidad de Sevilla - Departamento de Lenguajes y Sistemas Informáticos, AILAB-VNUHCM, tweets-hate-speech-detection, Software Evolution and Architecture Lab, GRIT ID, National Center for Sign Language and Gesture Resources, ParaPat, Center for SuperIntelligence, cornell_movie_dialog, Abuelkhair Corpus, dataset-org, Consumer Financial Protection Bureau, Project Ben-Yehuda - פרויקט בן-יהודה, Maluuba, ParaCrawl, boschresearch, uestc-swahili, Language Technology Group (TU Darmstadt), ufldl-stanford, Statistical and Neural Machine Translation, Linguateca, sonos-nlu-benchmark, Department of Computer Science and Technology (University of Cambridge), Jeopardy Datasets, ptb-text-only, BnL Open Data, china-ai-law-challenge, hover-nlp, WHUIR, cornell-movie-review-data, Centre for Speech Technology Research - University of Edinburgh, webnlg-challenge, Building Educational Applications 2019 Shared Task, bookcorpus, convai-challenge, Large Text Compression Benchmark, GermanEval, PKU-TANGENT, Narodowego Korpusu Języka Polskiego, Arabic Language Technologies - Qatar Computing Research Institute, ubuntu-dialogs-corpus, Korea Maritime and Ocean University, scan-tasks, TruthfulQA, conceptnet5, li2017dailydialog, zalando-datasets, hirupert, Tokyo Metropolitan University Natural Language Processing Group, Machine Reading for Question Answering Workshop, Clinc: Conversational AI Technology, CMU Festvox Project, corona-tweet, SemEval, quora-competitions, Electricity Transformer Dataset (ETDataset), ParlAI, emotone-ar-cicling2017, SpellOnYou, Shanasai LLC, hate-speech-filipino, code-search-net, esnli, dravidianlangtech, kmi-linguistics, jnlpba, Read The Web - Carnegie Mellon University, Iowa State University, HDLTex, peoples-daily-ner, timit-asr, ArXiv Community, UCSD-AI4H, Ixa - HiTZ, Large Scale Visual Recognition Challenge, SNOW - Natural Language Processing Laboratory, Nagaoka University of Technology, KorQuAD, LSDSem, nfL6, Community Datasets, ontonotes, factckbr, QAngaroo, Centre for Text Technology - Humanities - NWU, halabi2016, dki-lab, billion-word-benchmark, Universal Morphology, hate-speech-portuguese, senti-lex, universal-dependencies, achrafothman, lince-benchmark, eraser-benchmark, Turkic Interlingua - TIL, Center for Language Technologies - De La Salle University, Wongnai, open-llm-leaderboard-react, Prompt Leaderboard

albertvillanova's activity

posted an update about 1 month ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now, you can not only compare models by performance, but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
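The emissions figure behind such a comparison can be approximated from evaluation energy use. A minimal sketch of that arithmetic (the formula and the 0.4 kg/kWh carbon-intensity default are illustrative assumptions, not the Comparator's actual implementation):

```python
def co2_kg(gpu_power_kw: float, hours: float,
           carbon_intensity_kg_per_kwh: float = 0.4) -> float:
    """Estimate CO2 emitted by an evaluation run.

    energy (kWh) = average power draw (kW) * runtime (h)
    CO2 (kg)     = energy (kWh) * grid carbon intensity (kg CO2 per kWh)

    The 0.4 kg/kWh default is a rough global-average figure,
    used here purely for illustration.
    """
    return gpu_power_kw * hours * carbon_intensity_kg_per_kwh

# A 0.3 kW GPU evaluating for 10 hours on a 0.4 kg/kWh grid:
print(round(co2_kg(0.3, 10.0), 2))  # → 1.2
```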
posted an update about 2 months ago
🚀 New feature of the Comparator of the 🤗 Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🌐
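The base-vs-derivative comparison boils down to per-task score deltas. A pure-Python sketch of the idea (the scores here are hypothetical; the real Comparator pulls results from the Open LLM Leaderboard):

```python
def compare(base: dict, derived: dict) -> dict:
    """Per-task score deltas between a base model and a derivative.

    Positive values mean the derivative improved on that task.
    """
    return {task: round(derived[task] - base[task], 2)
            for task in base.keys() & derived.keys()}

# Hypothetical scores, for illustration only:
base = {"IFEval": 42.0, "MATH-Hard": 18.5}
finetune = {"IFEval": 55.3, "MATH-Hard": 17.9}
print(compare(base, finetune))  # e.g. IFEval +13.3, MATH-Hard -0.6
```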
posted an update about 2 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
posted an update 2 months ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? 📊 Compare models: open-llm-leaderboard/comparator
posted an update 2 months ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔

If the model you’re interested in is evaluated on the Hugging Face Open LLM Leaderboard, there’s an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator
Let’s walk through an example👇

Let’s compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model that’s great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller in size! 📊

This is a great example of how parameter size isn’t everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.
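One rough way to quantify "parameter size isn't everything" is score per billion parameters. A sketch with hypothetical averaged scores (illustrative numbers, not actual leaderboard results):

```python
def score_per_billion_params(score: float, params_b: float) -> float:
    """Benchmark score normalized by model size, in points per billion params."""
    return round(score / params_b, 1)

# Hypothetical averaged leaderboard scores, for illustration only:
models = {
    "Qwen2.5-1.5B-Instruct": (26.0, 1.5),
    "gemma-2-2b-it": (26.5, 2.5),
}
for name, (score, params_b) in models.items():
    print(name, score_per_billion_params(score, params_b))
```

With near-equal raw scores, the smaller model comes out well ahead on this per-parameter view.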

Looking for other comparisons? Drop your model suggestions below! 👇
posted an update 2 months ago
🚨 We’ve just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let’s walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you’ll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to hone in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you’re comparing apples to apples, head to the Configs tab.
- Review both models’ evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it’s good to know before drawing conclusions! ✅
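The "apples to apples" check in this step is easy to automate: diff the two evaluation configs and flag any setting on which they disagree. A minimal sketch (field names are illustrative assumptions):

```python
def config_mismatches(cfg_a: dict, cfg_b: dict) -> dict:
    """Return the settings on which two evaluation configs disagree,
    mapped to the (model A, model B) pair of values."""
    keys = cfg_a.keys() | cfg_b.keys()
    return {k: (cfg_a.get(k), cfg_b.get(k))
            for k in keys if cfg_a.get(k) != cfg_b.get(k)}

a = {"task": "MMLU-Pro", "n_shot": 5, "metric": "acc"}
b = {"task": "MMLU-Pro", "n_shot": 0, "metric": "acc"}
print(config_mismatches(a, b))  # → {'n_shot': (5, 0)}
```

An empty result means the runs are directly comparable; anything else is worth knowing before drawing conclusions.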

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR) and then a Subtask (e.g., Murder Mystery) and then press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model’s outputs.

5/ With this tool, it’s never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you’re a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
posted an update 3 months ago
posted an update 7 months ago
posted an update 8 months ago
Recently, the Hugging Face 🤗 datasets team met with the Language Technologies team led by Marta Villegas ( @mvillegas ) at Barcelona Supercomputing Center @BSC-LT . We are eager to collaborate to promote AI across the Catalan, Spanish, Basque, and Galician languages and to share open-source datasets and models. 🤝 #AI #LanguageTech #OpenSource
posted an update 8 months ago
🚀 We recently released datasets 2.19.0! 📦

🔥 What's New:
- Polars integration 🐻‍❄️
- fsspec support for conversion to JSON, CSV, and Parquet
- Mode parameter for Image feature
- CLI function to convert script-datasets to Parquet
- Dataset.take and Dataset.skip

Plus, a bunch of general improvements & bug fixes!

Check out the release notes: https://github.com/huggingface/datasets/releases/tag/2.19.0

Upgrade now and power up your data workflows! 💥
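The new `Dataset.take` and `Dataset.skip` follow familiar head/tail semantics. A pure-Python sketch of what they mean (not the library's implementation):

```python
from itertools import islice

def take(rows, n):
    """First n rows, mirroring what Dataset.take(n) returns."""
    return list(islice(rows, n))

def skip(rows, n):
    """Everything after the first n rows, mirroring Dataset.skip(n)."""
    return list(islice(rows, n, None))

data = [{"id": i} for i in range(5)]
print(take(data, 2))  # → [{'id': 0}, {'id': 1}]
print(skip(data, 3))  # → [{'id': 3}, {'id': 4}]
```

Together they make quick debugging slices (take) and train/holdout-style splits (skip) one-liners on a loaded dataset.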
reacted to Wauplin's post with 🚀 8 months ago
🚀 Just released version 0.23.0 of the huggingface_hub Python library!

Exciting updates include:
📁 Seamless download to local dir!
💡 Grammar and Tools in InferenceClient!
🌐 Documentation fully translated into Korean!
👥 User API: get likes, upvotes, nb of repos, etc.!
🧩 Better model cards and encoding for ModelHubMixin!

Check out the full release notes for more details:
Wauplin/huggingface_hub#6
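The "seamless download to local dir" item refers to `local_dir` downloads placing the real file directly under the target directory rather than behind a cache symlink. A sketch of the resulting layout (the helper below is hypothetical, for illustration; the actual call is shown in the comment):

```python
from pathlib import Path

def expected_local_path(local_dir: str, filename: str) -> Path:
    # With the new local_dir behavior, the downloaded file lands
    # directly at local_dir/filename (no symlink indirection).
    return Path(local_dir) / filename

# The actual download (requires network access and huggingface_hub):
#   from huggingface_hub import hf_hub_download
#   hf_hub_download(repo_id="gpt2", filename="config.json",
#                   local_dir="gpt2-local")
print(expected_local_path("gpt2-local", "config.json"))
```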
reacted to julien-c's post with 🤗❤️ 11 months ago
📣 NEW on HF

the Dataset Viewer is now available on *private datasets* too

You need to be a PRO or an Enterprise Hub user. 🔥

Great work from our Datasets team 🥰: @lhoestq @severo @polinaeterna @asoria @albertvillanova and the whole team 🥰