Hugging Face Science

Hugging Face Research

The science team at Hugging Face is dedicated to advancing machine learning research in ways that maximize value for the whole community. Our work focuses on three core areas: tooling, datasets, and open models.

This is the release timeline of 2024 so far:

Jan: 🔥 Warming up
Feb: ⚙️ Nanotron Release, ⭐️ The Stack v2, ⭐️ StarCoder2
Mar: 🪁 Zephyr Gemma, 🪐 Cosmopedia
Apr: 🍷 FineWeb, 🕵️ JAT Agent, 🪁 Zephyr Mixtral, 🐶 Idefics 2
May: 🦾 LeRobot Release, 📈 WSD Analysis
Jun: 🍷 FineWeb Report, 🍷 FineWeb-Edu, 🌺 Florence 2 Blog, 👩‍🏫 Stanford CS25
Jul: 🦾 LeRobot TeleOps, 🥇 Win AIMO, 🐶 Docmatix, 🤏 SmolLM
Aug: 🦾 LeRobot Tutorial, 🐶 Idefics 3, 🤏 Instant SmolLM
Sep: 🦾 LeRobot Video, 🎥 FineVideo
Oct: 🗺️ FineTasks
Nov: 🤏 SmolLM2, 🤓 SmolVLM

🛠️ Tooling & Infrastructure

Tooling and infrastructure are the foundation of ML research. We work on a range of tools such as datatrove, nanotron, TRL, LeRobot, and lighteval.
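
For a taste of what working with these libraries looks like, here is a minimal supervised fine-tuning sketch with TRL. The model and dataset IDs are just illustrative picks from the Hub, and the exact SFTTrainer signature and dataset split names vary across TRL versions, so treat this as a starting point rather than a recipe:

```python
# Minimal SFT sketch with TRL (illustrative model/dataset IDs; API details
# depend on the installed TRL version).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# no_robots is a small, human-written instruction dataset; the split name
# may differ depending on the dataset revision.
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # any causal LM checkpoint on the Hub
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-no-robots-sft", max_steps=100),
)
trainer.train()
```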

📑 Datasets

High-quality datasets are the powerhouse of LLMs, and building them requires special care and skill. We focus on building datasets such as no-robots, FineWeb, The Stack, and FineVideo.
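
As a small illustration of how these datasets are meant to be consumed, the sketch below streams a few FineWeb documents straight from the Hub. The repo ID and the "sample-10BT" config come from the public dataset card; double-check them there before running:

```python
# Stream a handful of FineWeb documents without downloading the full corpus.
from datasets import load_dataset

fw = load_dataset(
    "HuggingFaceFW/fineweb",
    name="sample-10BT",   # small sample config; the full dumps are much larger
    split="train",
    streaming=True,       # iterate lazily instead of materializing to disk
)

for i, doc in enumerate(fw):
    # Each record carries the raw web text plus metadata such as the URL.
    print(doc["text"][:200].replace("\n", " "))
    if i == 2:
        break
```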

🤖 Open Models

The datasets and training recipes of most state-of-the-art models are not released. We build cutting-edge models such as Zephyr, StarCoder2, and SmolLM2, and release the full training pipeline as well, fostering more innovation and reproducibility.
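
Because these models ship as ordinary Hub checkpoints, trying one out only takes a few lines of transformers code. The sketch below assumes the SmolLM2-1.7B-Instruct checkpoint and standard chat templating; adjust the model ID and dtype to your hardware:

```python
# Quick generation check with a SmolLM2 instruct checkpoint (illustrative ID).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Summarize what FineWeb is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```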

🌸 Collaborations

Research and collaboration go hand in hand. That's why we like to organize and participate in large open collaborations such as BigScience and BigCode, as well as lots of smaller partnerships such as Leaderboards on the Hub.

โš™๏ธ Infrastructre

The research team is organized into small sub-teams of typically fewer than four people, and the science cluster consists of 96 nodes with 8 H100 GPUs each (768 GPUs in total), plus an auto-scaling CPU cluster for dataset processing. In this setup, even a small research team can build and push out impactful artifacts.

📖 Educational material

Besides writing tech reports for our research projects, we also like to write more educational content to help newcomers get started in the field and to support practitioners. For example, we built the alignment handbook, the evaluation guidebook, the pretraining tutorial, and the FineWeb blog post.

🤗 Join us!

We are actively hiring for both full-time positions and internships. Check out hf.co/jobs.
