Does anyone know the minimum hardware requirements to fine-tune this Flan-T5-Large model?

#16 opened by LeandroArg

Or what hardware did you use to fine-tune it?

Are 2 NVIDIA A30 GPUs with 24GB each sufficient? 🤔

Hi @LeandroArg
If you use LoRA or QLoRA, this should be more than sufficient. By fine-tuning only the adapter weights you drastically reduce the number of trainable parameters, which makes it possible to fine-tune large models on consumer-grade hardware.
Please have a look at https://huggingface.co./docs/transformers/peft or the examples at https://github.com/huggingface/peft/tree/main/examples to learn how to use PEFT to fine-tune large models at low cost.
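
For reference, a minimal sketch of attaching a LoRA adapter to Flan-T5-Large with PEFT might look like this. The checkpoint name is the standard Hub one; the rank, alpha, dropout, and target modules below are illustrative defaults, not a tuned recipe:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Illustrative LoRA settings; tune r / lora_alpha / lora_dropout for your task.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5's query/value attention projections
)

model = get_peft_model(model, lora_config)
# Only the adapter weights are trainable, typically well under 1% of the model.
model.print_trainable_parameters()
```

The wrapped model can then be passed to a regular `Seq2SeqTrainer` or a custom training loop as usual. For QLoRA, you would additionally load the base model quantized (e.g. with `BitsAndBytesConfig(load_in_4bit=True)` from transformers) before attaching the adapters, which reduces memory usage even further.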
