
ben burtenshaw

burtenshaw

AI & ML interests

None yet

Recent Activity

- updated a dataset agents-course/certificates less than a minute ago
- updated a dataset agents-course/certificates 14 minutes ago
- updated a dataset agents-course/certificates 21 minutes ago

Organizations

Hugging Face, Hugging Face Course, Argilla, Blog-explorers, MLX Community, distilabel-internal-testing, Data Is Better Together, Social Post Explorers, Hugging Face Discord Community, argilla-internal-testing, Open Human Feedback, Argilla Warehouse, open/ acc, Data Is Better Together Contributor, Open Source AI Research Community, FeeL (Feedback Loop), Hugging Face Agents Course, Agents Course Students, Agents Course Finishers, Open R1, Hugging Face Reasoning Course

burtenshaw's activity

posted an update 1 day ago
I’m super excited to work with @mlabonne to build the first practical example in the reasoning course.

🔗 https://huggingface.co./reasoning-course

Here's a quick walkthrough of the first drop of material that works toward the use case:

- A fundamental introduction to reinforcement learning, answering questions like 'what is a reward?' and 'how do we create an environment for a language model?'

- Then it focuses on DeepSeek R1 by walking through the paper and highlighting key aspects. This is an old-school way to learn ML topics, but it always works.

- Next, it takes you to Transformers Reinforcement Learning (TRL) and demonstrates potential reward functions you could use. This is cool because it uses Marimo notebooks to visualise the reward.

- Finally, Maxime walks us through a real training notebook that uses GRPO to reduce generation length. I'm really into this because it works, and Maxime took the time to validate it and share assets and logging from his own runs for you to compare with. (A minimal reward sketch follows below.)
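
To give a flavour of what a length-based reward can look like before you open the notebook, here's a minimal sketch using TRL's GRPOTrainer. The model name, target length, and toy dataset are my own illustrative assumptions, not the configuration Maxime uses in the course.

```python
# Minimal sketch of a length-based reward with TRL's GRPOTrainer.
# The model, target length, and toy dataset are illustrative assumptions.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_shorter_completions(completions, **kwargs):
    # Higher reward the closer a completion stays to ~200 characters,
    # nudging the model toward shorter generations.
    return [-abs(200 - len(completion)) / 200 for completion in completions]

train_dataset = Dataset.from_list(
    [{"prompt": "Explain what a reward function is."},
     {"prompt": "What does GRPO optimise?"}]
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",      # any small instruct model works for a demo
    reward_funcs=reward_shorter_completions,
    args=GRPOConfig(output_dir="grpo-length-demo"),
    train_dataset=train_dataset,
)
trainer.train()
```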

Maxime’s work and notebooks have been a major part of the open source community over the last few years. I, like everyone, have learnt so much from them.
replied to their post 7 days ago
posted an update 7 days ago
I made a real-time voice agent with FastRTC, smolagents, and Hugging Face Inference Providers. Check it out in this Space:

🔗 burtenshaw/coworking_agent
posted an update 9 days ago
Now the Hugging Face agents course is getting real, with frameworks like smolagents, LlamaIndex, and LangChain.

🔗 Follow the org for updates https://huggingface.co./agents-course

This week we are releasing the first framework unit in the course and it’s on smolagents. This is what the unit covers:

- why should you use smolagents vs another library?
- how to build agents that use code (see the sketch after this list)
- build multi-agent systems
- use vision language models for browser use
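
As a taste of the code-agent part, here's a minimal sketch in the spirit of the smolagents quickstart; the CodeAgent, HfApiModel, and DuckDuckGoSearchTool names are assumptions that may differ slightly from the version the unit uses.

```python
# Minimal sketch of a code-writing agent with smolagents.
# Class names follow the smolagents quickstart and may differ by version.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # the agent writes Python that calls this search tool
    model=HfApiModel(),              # hosted model via Hugging Face inference providers
)

agent.run("How many seconds would it take a leopard at full speed to run the length of Pont des Arts?")
```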

The team has been working flat out on this for a few weeks, led by @sergiopaniego and supported by smolagents author @m-ric.
replied to their post 15 days ago

Thanks for the heads up. It's fixed now. Just go to the quiz app and you'll get a certificate directly.

posted an update 16 days ago
AGENTS + FINE-TUNING! This week Hugging Face Learn has a whole pathway on fine-tuning for agentic applications. You can follow these two courses to level up your agent game beyond prompts:

1️⃣ New Supervised Fine-tuning unit in the NLP Course https://huggingface.co./learn/nlp-course/en/chapter11/1
2️⃣ New Fine-tuning for agents bonus unit in the Agents Course https://huggingface.co./learn/agents-course/bonus-unit1/introduction

Fine-tuning will squeeze more out of your model for your specific use case than any prompt can.
reacted to sayakpaul's post with ❤️ 16 days ago
Inference-time scaling meets Flux.1-Dev (and others) 🔥

Presenting a simple re-implementation of "Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps" by Ma et al.

I did the simplest random search strategy, but results can potentially be improved with better-guided search methods.

Supports Gemini 2 Flash & Qwen2.5 as verifiers for "LLMGrading" 🤗

The steps are simple. For each round:

1. Start by sampling 2 starting noises with different seeds.
2. Score the generations w.r.t. a metric.
3. Keep the best generation from the current round.

If you have more compute budget, go to the next search round: scale the noise pool (2 ** search_round) and repeat steps 1-3.

This constitutes the random search method as done in the paper by Google DeepMind.
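
For intuition, here's a rough sketch of that loop; generate() and score() are hypothetical stand-ins for the diffusion pipeline call and the verifier, not functions from the repository.

```python
# Rough sketch of the random-search loop described above.
# generate() and score() are hypothetical stand-ins for the diffusion
# pipeline call and the verifier (e.g. an LLM grader), not repo functions.
import random

def random_search(prompt, num_rounds, generate, score):
    best_image, best_score = None, float("-inf")
    for search_round in range(1, num_rounds + 1):
        # Round 1 samples 2 noises; each later round scales the pool to 2 ** search_round.
        seeds = [random.randrange(2**32) for _ in range(2**search_round)]
        for seed in seeds:
            image = generate(prompt, seed=seed)      # sample with this starting noise
            candidate_score = score(prompt, image)   # verifier score w.r.t. the chosen metric
            if candidate_score > best_score:
                best_image, best_score = image, candidate_score
    return best_image, best_score
```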

Code, more results, and a bunch of other stuff are in the repository. Check it out here: https://github.com/sayakpaul/tt-scale-flux/ 🤗
posted an update 17 days ago
NEW COURSE! We’re cooking hard on Hugging Face courses, and it’s not just agents. The NLP course is getting the same treatment with a new chapter on Supervised Fine-Tuning!

👉 Follow to get more updates https://huggingface.co./nlp-course

The new SFT chapter will guide you through these topics:

1️⃣ Chat Templates: Master the art of structuring AI conversations for consistent and helpful responses.

2️⃣ Supervised Fine-Tuning (SFT): Learn the core techniques to adapt pre-trained models to your specific outputs.

3️⃣ Low Rank Adaptation (LoRA): Discover efficient fine-tuning methods that save memory and resources.

4️⃣ Evaluation: Measure your model's performance and ensure top-notch results.

This is the first update in a series, so follow along if you’re upskilling in AI.
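
To make the SFT and LoRA topics above concrete, here's a minimal sketch with TRL and PEFT; the model, dataset, and hyperparameters are illustrative assumptions rather than the chapter's exact configuration.

```python
# Minimal sketch of supervised fine-tuning with a LoRA adapter (TRL + PEFT).
# Model, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Chat-formatted dataset; the tokenizer's chat template is applied by the trainer.
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",    # small model, fits on free GPUs
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft-lora"),
    peft_config=LoraConfig(r=8, lora_alpha=16, target_modules="all-linear"),  # low-rank adapters
)
trainer.train()
```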
posted an update 20 days ago
Hey, I’m Ben and I work at Hugging Face.

Right now, I’m focusing on educational stuff and getting loads of new people to build open AI models using free and open source tools.

I’ve made a collection of some of the tools I’m building and using for teaching. Stuff like quizzes, code challenges, and certificates.

burtenshaw/tools-for-learning-ai-6797453caae193052d3638e2
posted an update 24 days ago
The Hugging Face agents course is finally out!

👉 https://huggingface.co./agents-course

This first unit of the course sets you up with all the fundamentals to become a pro in agents.

- What's an AI Agent?
- What are LLMs?
- Messages and Special Tokens
- Understanding AI Agents through the Thought-Action-Observation Cycle
- Thought, Internal Reasoning and the Re-Act Approach
- Actions, Enabling the Agent to Engage with Its Environment
- Observe, Integrating Feedback to Reflect and Adapt
posted an update 28 days ago
SmolLM2 paper is out! 😊

😍 Why do I love it? Because it facilitates teaching and learning!

Over the past few months I've engaged with (no joke) thousands of students using material based on SmolLM.

- People have run inference on, fine-tuned, aligned, and evaluated this smol model.
- People used their own machines and free tools like Colab, Kaggle, and Spaces.
- People tackled use cases in their job, for fun, in their own language, and with their friends.

Upvote the paper: SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model (2502.02737)
posted an update about 1 month ago
Manic few days in open-source AI, with game-changing developments all over the place. Here's a round-up of the resources:

- The science team at @huggingface reproduced and open-sourced DeepSeek R1 as Open R1: https://github.com/huggingface/open-r1
- @qwen released a series of models with 1 million token context! https://qwenlm.github.io/blog/qwen2.5-1m/
- SmolVLM got even smaller with completely new variants at 256M and 500M https://huggingface.co./blog/smolervlm

There's so much you could do with these developments, especially combining them into agentic applications or fine-tuning them for your use case.
reacted to merve's post with 👀🤗🔥 about 1 month ago
Oof, what a week! 🥵 So many things have happened, let's recap! merve/jan-24-releases-6793d610774073328eac67a9

Multimodal 💬
- We have released SmolVLM -- the tiniest VLMs, which come in 256M and 500M, with its retrieval models ColSmol for multimodal RAG 💗
- UI-TARS are new models by ByteDance to unlock agentic GUI control 🤯 in 2B, 7B and 72B
- Alibaba DAMO lab released VideoLlama3, new video LMs that come in 2B and 7B
- MiniMaxAI released MiniMax-VL-01, where the decoder is based on the MiniMax-Text-01 456B MoE model with long context
- Dataset: Yale released a new benchmark called MMVU
- Dataset: CAIS released Humanity's Last Exam (HLE), a new challenging multimodal benchmark

LLMs 📖
- DeepSeek-R1 & DeepSeek-R1-Zero: gigantic 660B reasoning models by DeepSeek, and six distilled dense models, on par with o1 with MIT license! 🤯
- Qwen2.5-Math-PRM: new math models by Qwen in 7B and 72B
- NVIDIA released AceMath and AceInstruct, new family of models and their datasets (SFT and reward ones too!)

Audio 🗣️
- Llasa is a new speech synthesis model based on Llama that comes in 1B, 3B, and 8B
- TangoFlux is a new audio generation model trained from scratch and aligned with CRPO

Image/Video/3D Generation ⏯️
- Flex.1-alpha is a new 8B pre-trained diffusion model by ostris similar to Flux
- Tencent released Hunyuan3D-2, a new model for 3D asset generation from images
posted an update about 1 month ago
Hey 👋

I'm helping out on some community research to learn about the AI community. If you want to join in the conversation, head over here where I started a community discussion on the most influential model since BERT.

OSAIResearchCommunity/README#2
posted an update about 1 month ago
📣 Teachers and Students! Here's a handy quiz app if you're preparing your own study material.

TL;DR: it's a quiz app that uses a dataset to make questions and save answers.

Here's how it works:

- make a dataset of multiple choice questions (a minimal sketch follows this list)
- duplicate the space and set the dataset repo
- log in and do the quiz
- submit your answers to create a new dataset
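
Here's a minimal sketch of the first step, building a question dataset and pushing it to the Hub; the column names and repo id are assumptions, so check burtenshaw/exam_questions for the exact schema the app expects.

```python
# Minimal sketch of building a multiple-choice question dataset for the quiz app.
# Column names and the repo id are assumptions; see burtenshaw/exam_questions
# for the exact schema the app expects.
from datasets import Dataset

questions = [
    {
        "question": "What does an agent use to act on its environment?",
        "choices": ["Tools", "Only its weights", "A fixed lookup table", "Luck"],
        "answer": "Tools",
    },
]

Dataset.from_list(questions).push_to_hub("your-username/my-exam-questions")  # hypothetical repo id
```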

I made this to get ready for the agents course, but I hope it's useful for your projects too!

Quiz app: burtenshaw/dataset_quiz

Dataset with questions: burtenshaw/exam_questions

Agents course we're working on: https://huggingface.co./agents-course
posted an update about 1 month ago
AI was built on side projects!
posted an update about 1 month ago
🚧 Work in Progress! 🚧

👷‍♀️ We're working hard on getting the official agents course ready for the 50,000 students who have signed up.

If you want to contribute to the discussion, I started these community posts. Looking forward to hearing from you:

- smolagents unit in the agents course - agents-course/README#7
- LlamaIndex Unit in the agents course - agents-course/README#6
- LangChain and LangGraph unit in the agents course - agents-course/README#5
- Real world use cases in the agents course - agents-course/README#8