Natural Language Processing with Transformers

AI & ML interests

This organization contains all the models and datasets covered in the book "Natural Language Processing with Transformers".

Recent Activity

lewtun
posted an update 9 days ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test time (a minimal best-of-N sketch follows after this list).

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs and built for speed with vLLM
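To make the idea concrete, here is a minimal sketch of verifier-guided weighted best-of-N, the simplest of the search strategies discussed in the post. Everything here is a toy stand-in: `generate`, `prm_score` and `extract_answer` are hypothetical placeholders for a small LLM, a process reward model, and an answer parser, not the search-and-learn API.

```python
import random
from collections import defaultdict

def generate(problem):
    """Hypothetical stand-in for sampling one step-by-step solution
    from a small LLM (the real recipe uses vLLM for generation)."""
    answer = random.choice(["4", "4", "5"])
    return f"Step 1: add the two numbers.\nStep 2: the answer is {answer}"

def prm_score(problem, step):
    """Hypothetical stand-in for a process reward model (PRM)
    scoring a single reasoning step between 0 and 1."""
    return random.random()

def extract_answer(solution):
    return solution.rsplit(" ", 1)[-1]

def weighted_best_of_n(problem, n=16):
    # Score each sampled solution by its weakest step (a common PRM
    # aggregation), then sum scores per final answer and return the best.
    totals = defaultdict(float)
    for _ in range(n):
        solution = generate(problem)
        steps = [s for s in solution.split("\n") if s.strip()]
        totals[extract_answer(solution)] += min(prm_score(problem, s) for s in steps)
    return max(totals, key=totals.get)

print(weighted_best_of_n("What is 2 + 2?"))
```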

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
thomwolf
posted an update 17 days ago
We are proud to announce HuggingFaceFW/fineweb-2: A sparkling update to HuggingFaceFW/fineweb with 1000s of 🗣️ languages.

We applied the same data-driven approach that led to SOTA English performance in 🍷 FineWeb to thousands of languages.

🥂 FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.

The dataset is released under the permissive 📜 ODC-By 1.0 license, and the 💻 code to reproduce it and our evaluations is public.
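For reference, here is a minimal sketch of streaming one language from the dataset with 🤗 datasets; the config name "fra_Latn" and the "text" field are assumptions about the per-language layout, so check the dataset card for the exact names.

```python
from datasets import load_dataset

# Stream a single language config so we never download the full 8TB
fw2 = load_dataset("HuggingFaceFW/fineweb-2", name="fra_Latn",
                   split="train", streaming=True)
for doc in fw2.take(3):
    print(doc["text"][:200])
```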

We will very soon announce a big community project, and are working on a 📝 blogpost walking you through the entire dataset creation process. Stay tuned!

In the meantime, come ask us questions in our discussion space: HuggingFaceFW/discussion

H/t @guipenedo @hynky @lvwerra as well as @vsabolcec Bettina Messmer @negar-foroutan and @mjaggi
thomwolf
posted an update 2 months ago
Parents in the 1990s: Teach the kids to code
Parents now: Teach the kids to fix the code when it starts walking around 🤖✨
thomwolf
posted an update 7 months ago
[New crazy blog post alert] We are releasing an extensive blog post on the science of creating high-quality web-scale datasets, detailing all the steps and learnings behind our recent 15-trillion-token 🍷 FineWeb release

Inspired by the distill.pub interactive graphics papers, we set out to write the most extensive, enjoyable and in-depth tech report we could, so prepare for a 45-min read with interactive graphics and all.

And that's not all: in this article we also introduce 📚 FineWeb-Edu, a filtered subset of Common Crawl with 1.3T tokens containing only web pages with very high educational content. To our knowledge, FineWeb-Edu outperforms all openly released web-scale datasets by a significant margin on knowledge- and reasoning-intensive benchmarks like MMLU, ARC, and OpenBookQA
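As a rough illustration, here is a hedged sketch of streaming FineWeb-Edu and keeping only the most educational pages; the `score` field and the 4.0 threshold are assumptions for illustration, not the blog post's actual filtering setup.

```python
from datasets import load_dataset

# Stream the dataset and keep documents the educational classifier rates highly
edu = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)
highly_educational = edu.filter(lambda doc: doc["score"] >= 4.0)  # assumed field
for doc in highly_educational.take(2):
    print(doc["score"], doc["text"][:120])
```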

We also make a number of surprising observations on the "quality" of the internet itself, which may challenge some of the general assumptions on web data (not saying more, I'll let you draw your own conclusions ;)

HuggingFaceFW/blogpost-fineweb-v1
thomwolf
posted an update 8 months ago
Is it time for the open-source AI robot revolution 🚀?

With @haixuantao and @Leyo we’ve been playing with a low-cost DJI robot controlled by three local open-source AI models (Whisper, Idefics2, Parler-TTS – all Apache 2.0) and orchestrated by dora-rs (a stub-level sketch of the loop follows after the links).

Links to find all the hardware/software we used in the demo:
- robot control framework – dora-rs: https://github.com/dora-rs/dora
- speech-to-text model – whisper: openai/whisper-base
- vision-text model – Idefics2: HuggingFaceM4/idefics2-8b-AWQ
- text-to-speech model – ParlerTTS mini: parler-tts/parler_tts_mini_v0.1
- robot: https://dji.com/robomaster-s1
- code gist: https://gist.github.com/haixuanTao/860e1740245dc2c8dd85b496150a9320
- Larger codebase: dora-rs/dora-idefics2
- laptop/pc: any with a recent GPU (ours has an RTX 4090)
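For readers who want the shape of the system without the hardware, here is a stub-level sketch of the perception-reasoning-action loop; the real demo wires these stages as dora-rs dataflow nodes, and every function below is a hypothetical placeholder for the model named in its comment.

```python
# Each stub stands in for one node of the dora-rs graph used in the demo.
def transcribe(audio: bytes) -> str:
    # openai/whisper-base in the actual demo
    return "move forward and describe what you see"

def see_and_plan(frame: bytes, command: str) -> tuple[str, str]:
    # HuggingFaceM4/idefics2-8b-AWQ in the actual demo
    return "forward 0.5m", "I see an open corridor, moving ahead."

def speak(text: str) -> None:
    # parler-tts/parler_tts_mini_v0.1 in the actual demo
    print(f"[TTS] {text}")

def act(action: str) -> None:
    # command sent to the RoboMaster S1
    print(f"[ROBOT] {action}")

audio, frame = b"", b""  # placeholder microphone/camera inputs
command = transcribe(audio)
action, reply = see_and_plan(frame, command)
act(action)
speak(reply)
```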

Enjoy!
lewtun
posted an update 9 months ago
Introducing Zephyr 141B-A35B 🪁:

HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1

Yesterday, Mistral released their latest base model (via magnet link of course 😅) and the community quickly converted it to transformers format and pushed it to the Hub: mistral-community/Mixtral-8x22B-v0.1

Early evals of this model looked extremely strong, so we teamed up with Argilla and KAIST AI to cook up a Zephyr recipe with a few new alignment techniques that came out recently:

🧑‍🍳 Align the base model with Odds Ratio Preference Optimisation (ORPO). This novel algorithm, developed by @JW17, @nlee-208 and @j6mes, does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO (a sketch of the objective follows below).

🦫 Use a brand new dataset of 7k high-quality, multi-turn preferences that has been developed by our friends at Argilla. To create this dataset, they took the excellent Capybara SFT dataset from @LDJnr LDJnr/Capybara and converted it into a preference dataset by augmenting the final turn with responses from new LLMs that were then ranked by GPT-4.
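For the curious, here is a minimal sketch of the ORPO objective in PyTorch: the usual SFT loss on the chosen response plus a log-odds-ratio penalty that rewards ranking chosen above rejected. The inputs are length-normalised sequence log-probs, and the weight `lam` is an assumed value for illustration.

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen, logp_rejected, nll_chosen, lam=0.1):
    # odds(y) = p / (1 - p); compute the log-odds ratio in log space
    log_odds = (logp_chosen - logp_rejected) - (
        torch.log1p(-torch.exp(logp_chosen))
        - torch.log1p(-torch.exp(logp_rejected))
    )
    # The SFT term keeps the model fluent; the penalty separates the pair
    return (nll_chosen - lam * F.logsigmoid(log_odds)).mean()

# Toy check with fake per-sequence log-probs
logp_c = torch.tensor([-0.5, -0.8])
logp_r = torch.tensor([-1.5, -1.2])
print(orpo_loss(logp_c, logp_r, nll_chosen=-logp_c))
```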

What we find especially neat about this approach is that training on 7k samples only takes ~1.3h on 4 H100 nodes, yet produces a model that is very strong on chat benchmarks like IFEval and BBH.

Kudos to @alvarobartt @JW17 and @nlee-208 for this very nice and fast-paced collab!

For more details on the paper and dataset, check out our collection: HuggingFaceH4/zephyr-orpo-6617eba2c5c0e2cc3c151524
thomwolf
posted an update 9 months ago
Very interesting model just released by MyShell: jetmoe/jetmoe-8b. It's an 8B-parameter MoE LLM with only 2.2B active parameters – really efficient.

Main characteristics:
- impressive performance for its size (beating meta-llama/Llama-2-7b and huggyllama/llama-13b)
- combines Mixture of Attention heads (MoA) and Mixture of MLP Experts (MoE) – 8 experts with 2 active for each token (see the toy routing sketch after this list)
- trained on a rather limited 1.25T tokens from publicly available datasets – the training recipe follows MiniCPM's two-phase method => first time I've seen this for a 2B+ model
- $100k to train
- open weights - open sharing of recipes - open dataset - open code => ♡
- still interesting room to improve performance (be it only by training longer)
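To illustrate the 8-experts/2-active idea, here is a toy top-2 MoE layer in PyTorch; it is a generic sketch of sparse routing, not JetMoE's actual implementation (which also applies the same idea to attention heads via MoA).

```python
import torch
import torch.nn.functional as F
from torch import nn

class Top2MoE(nn.Module):
    """Toy sparse MoE layer: 8 expert MLPs, top-2 routing per token."""
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):  # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)
        topw, topi = weights.topk(self.k, dim=-1)
        topw = topw / topw.sum(dim=-1, keepdim=True)  # renormalise over top-k
        out = torch.zeros_like(x)
        for slot in range(self.k):  # only k experts run per token
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e
                if mask.any():
                    out[mask] += topw[mask, slot, None] * expert(x[mask])
        return out

print(Top2MoE()(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```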

Links:
- report: https://research.myshell.ai/jetmoe
- model: jetmoe/jetmoe-8b
- code: https://github.com/myshell-ai/JetMoE

Note: I actually detailed all of the MiniCPM schedule, Mixture-of-Experts (MoE) and many of the datasets used in this work in my recent little guide to building LLMs in 2024, so feel free to check it out if you want to learn more about these topics: https://www.youtube.com/watch?v=2-SPH9hIKT8
thomwolf
posted an update 9 months ago
Little-known gem: the Open-source Cookbook

A collection of notebooks for building practical AI applications using open-source tools and models: https://lnkd.in/e6m6Jmwu

Doc: https://lnkd.in/e3FE6TUq

Currently contains 16 notebooks in English (and some in Chinese):
1. Using LLM-as-a-judge 🧑‍⚖️ for an automated and versatile evaluation
2. Create a legal preference dataset
3. Suggestions for Data Annotation with SetFit in Zero-shot Text Classification
4. Implementing semantic cache to improve a RAG system
5. Building A RAG Ebook β€œLibrarian” Using LlamaIndex
6. Stable Diffusion Interpolation
7. Building A RAG System with Gemma, MongoDB and Open Source Models
8. Prompt Tuning with PEFT Library
9. Migrating from OpenAI to Open LLMs Using TGI’s Messages API
10. Automatic Embeddings with TEI through Inference Endpoints
11. Simple RAG for GitHub issues using Hugging Face Zephyr and LangChain
12. Embedding multimodal data for similarity search using 🤗 transformers, 🤗 datasets and FAISS
13. Fine-tuning a Code LLM on Custom Code on a single GPU
14. RAG Evaluation Using Synthetic data and LLM-As-A-Judge
15. Advanced RAG on HuggingFace documentation using LangChain
16. Detecting Issues in a Text Dataset with Cleanlab
thomwolf
posted an update 9 months ago
A Little guide to building Large Language Models in 2024

This is a recording of a 75-minute lecture I gave two weeks ago on how to train an LLM from scratch in 2024. I tried to keep it short and comprehensive – focusing on concepts that are crucial for training good LLMs but often hidden in tech reports.

In the lecture, I introduce the students to all the important concepts/tools/techniques for training high-performance LLMs:
* finding, preparing and evaluating web scale data
* understanding model parallelism and efficient training
* fine-tuning/aligning models
* fast inference

There are of course many things and details missing that I should have added, so don't hesitate to tell me your most frustrating omission and I'll add it in a future part. In particular I think I'll add more focus on how to filter topics well and extensively, and maybe more practical anecdotes and details.

Now that I've recorded it, I've been thinking this could be part 1 of a two-part series, with a 2nd fully hands-on video on how to run all these steps with some libraries and recipes we've released recently at HF around LLM training (and which could easily be adapted to other frameworks anyway; a hedged datatrove sketch follows this list):
* datatrove for all things web-scale data preparation: https://github.com/huggingface/datatrove
* nanotron for lightweight 4D-parallelism LLM training: https://github.com/huggingface/nanotron
* lighteval for in-training fast parallel LLM evaluations: https://github.com/huggingface/lighteval
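As a taste of that hands-on part, here is a hedged sketch of a small datatrove filtering pipeline; the class names follow the repo's examples, but exact arguments may differ between versions, so treat it as an outline rather than a drop-in script.

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import GopherQualityFilter, LanguageFilter
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.writers import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("data/raw"),           # one JSONL document per web page
        LanguageFilter(languages=["en"]),  # keep English documents only
        GopherQualityFilter(),             # Gopher-style quality heuristics
        JsonlWriter("data/filtered"),      # write the surviving documents
    ],
    tasks=4,  # number of parallel local workers
)
executor.run()
```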

Here is the link to watch the lecture on Youtube: https://www.youtube.com/watch?v=2-SPH9hIKT8
And here is the link to the Google slides: https://docs.google.com/presentation/d/1IkzESdOwdmwvPxIELYJi8--K3EZ98_cL6c5ZcLKSyVg/edit#slide=id.p

Enjoy, and I'm happy to hear feedback on it and on what to add, correct, or extend in a second part.
lewtun
posted an update 10 months ago
Can we align code generation models to be good at chat without compromising their base capabilities 🤔?

This was the question the H4 team asked itself when BigCode released StarCoder2 a bit over a week ago. We knew that code models like deepseek-ai/deepseek-coder-6.7b-instruct and m-a-p/OpenCodeInterpreter-DS-33B get impressive scores on code benchmarks like HumanEval, but they tend to score poorly on chat benchmarks like MT Bench and IFEval. We also knew that the Zephyr recipe we applied to Mistral 7B produced a strong chat model, so we wondered: could it be tweaked to produce a strong coding assistant?

It turns out the answer is yes and I'm happy to share StarChat2, a DPO fine-tune of StarCoder2 15B that scores highly on both HumanEval and MT Bench / IFEval 🌟!

The most interesting lesson for me was that you get better models by blending in more code/math data than chat during the SFT step - in terms of tokens, we found a ratio of 3:1 worked best.
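As a toy illustration of that blend, the sketch below interleaves two stand-in datasets at roughly 3:1 with 🤗 datasets; note that the real recipe balances by token count, whereas sampling probabilities here only control the example ratio.

```python
from datasets import Dataset, interleave_datasets

# Hypothetical stand-ins for the real code/math and chat SFT datasets
code = Dataset.from_dict({"text": ["def add(a, b):\n    return a + b"] * 100})
chat = Dataset.from_dict({"text": ["User: hi\nAssistant: hello!"] * 100})

mixed = interleave_datasets(
    [code, chat],
    probabilities=[0.75, 0.25],  # ~3 code/math examples per chat example
    seed=42,
)
print(mixed[0]["text"])
```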

Anyway, here's a demo of the model, along with all the code and datasets we used to train it:

* Demo: HuggingFaceH4/starchat2-playground
* Collection: HuggingFaceH4/starchat2-15b-65f068417b330fafad751fce
* Recipe: https://github.com/huggingface/alignment-handbook

Hope it's useful to others!