
Yacine Jernite

yjernite

AI & ML interests

Technical, community, and regulatory tools of AI governance @HuggingFace

Recent Activity

liked a model 6 days ago
PatronusAI/glider
liked a Space 6 days ago
PatronusAI/GLIDER
liked a model 6 days ago
answerdotai/ModernBERT-large

Articles

Organizations

Hugging Face · Society & Ethics · BigScience Workshop · GEM benchmark · BigScience Catalogue Data · BigScience Data · HF Task Exploration · HuggingFaceM4 · BigCode · Stable Bias · Hugging Face H4 · 🤗 H4 Community · BigCode Data · Stable Diffusion Bias Eval · Librarian Bots · Blog-explorers · Evaluating Social Impacts of Generative AI · llm-values · Bias Leaderboard Development · AI Energy Score Project · Journalists on Hugging Face · Social Post Explorers

yjernite's activity

reacted to merve's post with πŸ‘€ 8 days ago
Apollo is a new family of open-source video language models by Meta, where the 3B model outperforms most 7B models and the 7B outperforms most 30B models 🧶

✨ the models come in 1.5B https://huggingface.co./Apollo-LMMs/Apollo-1_5B-t32, 3B https://huggingface.co./Apollo-LMMs/Apollo-3B-t32 and 7B https://huggingface.co./Apollo-LMMs/Apollo-7B-t32 sizes under the Apache 2.0 license, based on Qwen1.5 & Qwen2
✨ the authors also release a benchmark dataset https://huggingface.co./spaces/Apollo-LMMs/ApolloBench

The paper includes a lot of experiments (they trained 84 models!) on what makes video LMs work ⏯️

Try the demo for the best setup here: https://huggingface.co./spaces/Apollo-LMMs/Apollo-3B
They evaluate sampling strategies, scaling laws for models and datasets, video representation, and more!
> The authors find that design decisions validated on small models also scale properly when the model and dataset are scaled up 📈, though scaling the dataset has diminishing returns for smaller models
> They evaluate frame sampling strategies and find that FPS sampling is better than uniform sampling, with 8-32 tokens per frame being optimal
> They also compare image encoders, trying a variety of models from shape-optimized SigLIP to DINOv2, and find google/siglip-so400m-patch14-384 to be the most powerful 🔥
> They also compare freezing different parts of the models; training all stages with some parts frozen gives the best results

They eventually release three models, where Apollo-3B outperforms most 7B models and Apollo 7B outperforms 30B models πŸ”₯
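The difference between the two frame-sampling strategies the paper compares can be illustrated in a few lines. This is a minimal sketch, not the paper's implementation; the function names and parameters are my own:

```python
def fps_sample(total_frames: int, video_fps: float, target_fps: float) -> list[int]:
    """FPS sampling: pick frames at a fixed rate, so longer videos
    yield proportionally more frames."""
    step = video_fps / target_fps
    indices = []
    i = 0.0
    while round(i) < total_frames:
        indices.append(round(i))
        i += step
    return indices


def uniform_sample(total_frames: int, num_frames: int) -> list[int]:
    """Uniform sampling: pick a fixed number of evenly spaced frames,
    regardless of video length."""
    if num_frames >= total_frames:
        return list(range(total_frames))
    step = total_frames / num_frames
    return [int(i * step) for i in range(num_frames)]


# A 10-second clip at 30 fps, sampled at 2 fps -> 20 frames.
print(len(fps_sample(300, 30.0, 2.0)))  # 20
# Uniform sampling always returns the requested count, so a clip twice
# as long would be sampled twice as sparsely.
print(len(uniform_sample(300, 20)))  # 20
```

The key distinction is that FPS sampling keeps the temporal density of frames constant, while uniform sampling keeps the token budget constant, which is what the paper's comparison is about.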
reacted to fdaudens's post with πŸ‘€ 8 days ago
Did a fun experiment: What are the main themes emerging from the 100+ Nieman Journalism Lab predictions for 2025?

I used natural language processing to cluster and map them, which really helps spot patterns that weren't obvious when reading the predictions one by one. So what will shape journalism next year? A lot of AI and US politics (surprise!), but there's also a horizontal axis that spans from industry strategies to deep reflections on how to talk to the public.

Click any dot to explore the original prediction. What themes surprise/interest you the most?

πŸ‘‰ fdaudens/nieman_lab_2025_predictions_visualization

P.S.: I discovered that Nieman Lab's content is under a Creative Commons license!
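The post doesn't share its clustering code; as a rough illustration of the general idea, here is a stdlib-only sketch using bag-of-words vectors and a greedy similarity threshold. The texts, threshold, and method are all my own stand-ins; a real pipeline would use neural sentence embeddings and a proper clustering algorithm:

```python
import math
from collections import Counter

# Made-up stand-ins for the Nieman Lab prediction texts.
predictions = [
    "AI tools will reshape newsroom workflows and reporting",
    "Newsrooms will adopt AI assistants for daily reporting",
    "Local journalism must rebuild trust with the public",
    "Talking honestly to the public will rebuild trust in journalism",
]

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding' for illustration only."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Greedy clustering: attach each text to the first cluster whose seed
# text is similar enough, otherwise start a new cluster.
clusters: list[list[str]] = []
for text in predictions:
    for cluster in clusters:
        if cosine(embed(text), embed(cluster[0])) > 0.3:
            cluster.append(text)
            break
    else:
        clusters.append([text])

for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {cluster}")
```

On these toy texts the two AI-themed predictions group together and the two trust-themed ones form a second cluster, which is the kind of pattern the visualization surfaces at scale.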
upvoted an article 8 days ago
posted an update 13 days ago
πŸ‡ͺπŸ‡Ί Policy Thoughts in the EU AI Act Implementation πŸ‡ͺπŸ‡Ί

There is a lot to like in the first draft of the EU GPAI Code of Practice, especially regarding its transparency requirements. The Systemic Risks section, on the other hand, is concerning for both smaller developers and external stakeholders.

I wrote more on this topic ahead of the next draft. TL;DR: more attention to immediate large-scale risks and to collaborative, evidence-supported solutions can help everyone, as long as developers disclose sufficient information about their design choices and deployment contexts.

Full blog here, based on our submitted response with @frimelle and @brunatrevelin :

https://huggingface.co./blog/yjernite/eu-draft-cop-risks#on-the-proposed-taxonomy-of-systemic-risks
upvoted an article 13 days ago
πŸ‡ͺπŸ‡ΊβœοΈ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍️πŸ‡ͺπŸ‡Ί

By yjernite β€’
β€’ 11
reacted to dvilasuero's post with ❀️πŸ”₯ 19 days ago
🌐 Announcing Global-MMLU: an improved MMLU Open dataset with evaluation coverage across 42 languages, built with Argilla and the Hugging Face community.

Global-MMLU is the result of months of work with the goal of advancing Multilingual LLM evaluation. It's been an amazing open science effort with collaborators from Cohere For AI, Mila - Quebec Artificial Intelligence Institute, EPFL, Massachusetts Institute of Technology, AI Singapore, National University of Singapore, KAIST, Instituto Superior TΓ©cnico, Carnegie Mellon University, CONICET, and University of Buenos Aires.

🏷️ Over 200 contributors used Argilla to label MMLU questions where regional, dialect, or cultural knowledge was required to answer correctly. 85% of the questions required Western-centric knowledge!

Thanks to this annotation process, the open dataset contains two subsets:

1. πŸ—½ Culturally Agnostic: no specific regional, cultural knowledge is required.
2. βš–οΈ Culturally Sensitive: requires dialect, cultural knowledge or geographic knowledge to answer correctly.

Moreover, we provide high-quality translations for 25 of the 42 languages, thanks again to the community and to professional annotators leveraging Argilla on the Hub.

I hope this will ensure a better understanding of the limitations and challenges of making open AI useful for many languages.

Dataset: CohereForAI/Global-MMLU
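The two released subsets amount to a split on the annotators' cultural-knowledge flags. A minimal sketch with made-up records; the field names are assumptions for illustration, not the actual dataset schema:

```python
# Hypothetical annotation records; "needs_cultural_knowledge" is an
# illustrative field name, not the real Global-MMLU schema.
annotations = [
    {"question": "What is the chemical symbol for gold?",
     "needs_cultural_knowledge": False},
    {"question": "Which amendment to the US Constitution abolished slavery?",
     "needs_cultural_knowledge": True},
    {"question": "What is the derivative of x**2?",
     "needs_cultural_knowledge": False},
]

# Split into the two subsets described in the post.
culturally_agnostic = [a for a in annotations if not a["needs_cultural_knowledge"]]
culturally_sensitive = [a for a in annotations if a["needs_cultural_knowledge"]]

print(len(culturally_agnostic), len(culturally_sensitive))  # 2 1
```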
reacted to fdaudens's post with ❀️ 21 days ago
πŸ“ˆπŸ‘€ Just dropped: visualization mapping Hugging Face's most liked & downloaded models from 2022 to now. Small models are clearly on the rise - fascinating shift in both likes and download patterns.

Check it out: huggingface/open-source-ai-year-in-review-2024
reacted to AdinaY's post with ❀️ 21 days ago
The 2023 & 2024 top downloaded (all-time) open models on the Hub are both from the Chinese community 👀

2023 πŸ‘‰ Bge base by BAAI
BAAI/bge-base-en-v1.5
2024 πŸ‘‰ Qwen 2.5 by Alibaba Qwen
Qwen/Qwen2.5-1.5B-Instruct

Can’t wait to see what incredible models the Chinese community will bring in 2025πŸš€

✨ Follow https://huggingface.co./zh-ai-community to get the latest updates from the Chinese community
✨ Explore the 2024 Year in Review huggingface/open-source-ai-year-in-review-2024