Transformers
Transformers is a library of pretrained natural language processing, computer vision, audio, and multimodal models for inference and training. Use Transformers to train models on your data, build inference applications, and generate text with large language models.
Explore the Hugging Face Hub today to find a model and use Transformers to help you get started right away.
Features
Transformers provides everything you need for inference or training with state-of-the-art pretrained models. Some of the main features include:
- Pipeline: Simple and optimized inference class for many machine learning tasks, such as text generation, image segmentation, automatic speech recognition, and document question answering (see the first sketch after this list).
- Trainer: A comprehensive trainer for PyTorch models that supports features such as mixed precision, torch.compile, and FlashAttention, as well as distributed training (second sketch).
- generate: Fast text generation with large language models (LLMs) and vision language models (VLMs), including support for streaming and multiple decoding strategies (third sketch).
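A Pipeline can run inference in a few lines. This is a minimal sketch; the checkpoint name is an illustrative assumption, and any text generation model from the Hub works in its place.

```python
from transformers import pipeline

# Example checkpoint; swap in any text generation model from the Hub.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
result = generator("The secret to baking a good cake is")
print(result[0]["generated_text"])
```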
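Trainer wraps the PyTorch training loop. Here is a minimal fine-tuning sketch, assuming the distilbert/distilbert-base-uncased checkpoint and the rotten_tomatoes dataset purely for illustration:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert/distilbert-base-uncased"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Tokenize an example sentiment classification dataset.
dataset = load_dataset("rotten_tomatoes")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    fp16=True,  # mixed precision; requires a GPU, drop this line on CPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),  # pad each batch dynamically
)
trainer.train()
```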
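generate is available on models with a language modeling head. A minimal sketch of streamed, sampled generation; the checkpoint is again only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

checkpoint = "Qwen/Qwen2.5-0.5B-Instruct"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is", return_tensors="pt")

# Stream decoded tokens to stdout as they are generated; sampling with a
# temperature is one of several decoding strategies generate supports.
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=30, do_sample=True, temperature=0.7, streamer=streamer)
```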
Design
Read our Philosophy to learn more about Transformers’ design principles.
Transformers is designed for developers, machine learning engineers, and researchers. Its main design principles are:
- Fast and easy to use: Every model is implemented from only three main classes (configuration, model, and preprocessor) and can be quickly used for inference or training with Pipeline or Trainer (see the sketch after this list).
- Pretrained models: Reduce your carbon footprint, compute cost, and time by using a pretrained model instead of training an entirely new one. Each pretrained model matches the original as closely as possible and offers state-of-the-art performance.
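To make the three-class design concrete, here is a minimal sketch using the Auto classes; the BERT checkpoint is an illustrative assumption:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

checkpoint = "google-bert/bert-base-uncased"  # example checkpoint

config = AutoConfig.from_pretrained(checkpoint)        # configuration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # preprocessor
model = AutoModel.from_pretrained(checkpoint)          # model
```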
Join us on the Hugging Face Hub, Discord, or forum to collaborate and build models, datasets, and applications together.