Introduction
Welcome to this first Bonus Unit, where you’ll learn to fine-tune a Large Language Model (LLM) for function calling.
When it comes to LLMs, function calling is quickly becoming a must-know technique.
The idea is that, rather than relying only on prompt-based approaches as we did in Unit 1, function calling trains your model to take actions and interpret observations during the training phase, making your AI more robust.
When should I do this Bonus Unit?
This section is optional and more advanced than Unit 1, so feel free to tackle it now or revisit it later, once your knowledge has grown over the course.
But don't worry: this Bonus Unit is designed to contain all the information you need, so we'll walk you through every core concept of fine-tuning a model for function calling even if you haven't yet learned the inner workings of fine-tuning.
The best way to follow this Bonus Unit is to:

- Know how to fine-tune an LLM with Transformers; if that's not the case, check this.
- Know how to use SFTTrainer to fine-tune our model; to learn more about it, check this documentation (a minimal sketch follows this list).
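To give you a feel for what that looks like, here is a minimal supervised fine-tuning sketch with trl's SFTTrainer. The exact arguments vary across trl versions, and the model and dataset names below are placeholders, not the ones used in this unit.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any text or conversational dataset in a format SFTTrainer understands;
# "trl-lib/Capybara" is just a placeholder example.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-2-2b-it",             # placeholder base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output"),  # where checkpoints are written
)
trainer.train()
```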
What You’ll Learn
Function Calling
How modern LLMs structure their conversations, effectively letting them trigger Tools.
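To make that concrete, here is one common (OpenAI-style) way such a conversation can be structured. The exact schema differs between providers and chat templates, so treat the field names and the get_weather tool as illustrative.

```python
# A tool-calling conversation as a list of messages.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        # Instead of answering directly, the model emits a structured
        # function call for the runtime to execute.
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"location": "Paris"}',
            },
        }],
    },
    # The tool's result is fed back as a new message, which the model
    # then turns into a natural-language answer.
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 18}'},
]
```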
LoRA (Low-Rank Adaptation)
A lightweight and efficient fine-tuning method that cuts down on computational and storage overhead. LoRA makes training large models faster, cheaper, and easier to deploy.
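As a preview, here is a minimal sketch of attaching LoRA adapters with the peft library. The hyperparameters and target_modules are illustrative and depend on the model architecture.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which linear layers get adapters
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights is trainable
```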
The Thought → Act → Observe Cycle in Function Calling models
A simple but powerful approach for structuring how your model decides when (and how) to call functions, track intermediate steps, and interpret the results from external Tools or APIs.
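As a sketch, one turn of that cycle might look like this in the model's raw output. The marker names here are illustrative; the unit introduces its own set later on.

```text
<think>The user wants the weather, so I should call the weather tool.</think>
<tool_call>{"name": "get_weather", "arguments": {"location": "Paris"}}</tool_call>
<tool_response>{"temp_c": 18, "condition": "sunny"}</tool_response>
It is currently 18°C and sunny in Paris.
```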
New Special Tokens
We’ll introduce special markers that help the model distinguish between:
- Internal “chain-of-thought” reasoning
- Outgoing function calls
- Responses coming back from external tools
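For illustration, registering such markers with a Transformers tokenizer might look like the following minimal sketch. The token names are placeholders, and the model's embedding matrix has to be resized to account for the new vocabulary entries.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# Register new special tokens so they are never split by the tokenizer.
tokenizer.add_special_tokens({
    "additional_special_tokens": [
        "<think>", "</think>",
        "<tool_call>", "</tool_call>",
        "<tool_response>", "</tool_response>",
    ],
})

# Grow the embedding matrix to cover the newly added tokens.
model.resize_token_embeddings(len(tokenizer))
```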
By the end of this bonus unit, you’ll be able to:
- Understand the inner workings of APIs when it comes to Tools.
- Fine-tune a model using the LoRA technique.
- Implement and modify the Thought → Act → Observe cycle to create robust and maintainable Function-calling workflows.
- Design and utilize special tokens to seamlessly separate the model’s internal reasoning from its external actions.
And you’ll have fine-tuned your own model to do function calling. 🔥
Let’s dive into function calling!