Samuel Lima Braz (samuellimabraz)

I wrote an article on Parameter-Efficient Fine-Tuning (PEFT), exploring techniques for fine-tuning LLMs efficiently, along with their implementations and variations.

The study is based on the article "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" and on the PEFT library, which integrates with Hugging Face's Transformers.

Article: https://huggingface.co./blog/samuellimabraz/peft-methods
Notebook: https://colab.research.google.com/drive/1B9RsKLMa8SwTxLsxRT8g9OedK10zfBEP?usp=sharing
Collection: samuellimabraz/service-summary-6793ccfe774073328ea9f8df

Analyzed methods (a minimal PEFT/LoRA sketch follows the list):
- Adapters: Soft Prompts (Prompt Tuning, Prefix Tuning, P-tuning), IA³.
- Reparameterization: LoRA, QLoRA, LoHa, LoKr, X-LoRA, Intrinsic SAID, and variations of initializations (PiSSA, OLoRA, rsLoRA, DoRA).
- Selective Tuning: BitFit, DiffPruning, FAR, FishMask.
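
To make this concrete, here is a minimal sketch of the reparameterization family using the PEFT library. The base model (gpt2), rank, and target modules are illustrative assumptions on my part, not values from the article:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; any causal LM on the Hub works the same way
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes W and trains a low-rank update, so the effective weight
# becomes W + (lora_alpha / r) * B @ A
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank matrices A and B
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    # Variations from the list map to config flags, e.g.
    # use_rslora=True, use_dora=True, init_lora_weights="pissa" or "olora"
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
# trainable params: 294,912 || all params: 124,734,720 || trainable%: 0.2364
```

The base weights stay frozen and only the injected A/B matrices train, which is where the parameter efficiency comes from. The other families plug in the same way by swapping the config class (e.g. PromptTuningConfig or IA3Config).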

I'm just starting out in generative AI; my background is more in computer vision and robotics. Just sharing here 🤗
