Abhishek Patnia (appliedml42)

AI & ML interests

SMOL LLMs, PEFT, GPU Optimization, Natural Language Processing, Trust & Safety

Organizations

None yet

Posts (1)

I am trying to find resources that explain how to protect against instruction-following capability degradation caused by LoRA fine-tuning.

For example, I fine-tuned Llama 3.2 3B Instruct on the cornell-movie-review-data/rotten_tomatoes dataset and saw a significant degradation in IFEval benchmark scores.
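
For context, the training setup looks roughly like the sketch below (transformers + peft; the rank, alpha, target modules, and prompt format here are illustrative choices on my part, not a claim about the right configuration):

```python
# Minimal sketch: LoRA fine-tune of Llama 3.2 3B Instruct on rotten_tomatoes.
# Hyperparameters, target modules, and the prompt template are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# LoRA adapter on the attention projections; r/alpha values are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

train_ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="train")

def tokenize(example):
    # Cast the sentiment task as a chat turn so the instruct template is kept.
    label = "positive" if example["label"] == 1 else "negative"
    text = tokenizer.apply_chat_template(
        [
            {"role": "user", "content": f"Classify the sentiment: {example['text']}"},
            {"role": "assistant", "content": label},
        ],
        tokenize=False,
    )
    return tokenizer(text, truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, remove_columns=train_ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-rotten-tomatoes",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-rotten-tomatoes")
```

To measure the before/after gap, the adapter can be evaluated on IFEval with lm-evaluation-harness, which (as far as I know) accepts a PEFT adapter path via the `peft` model arg:

```
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-3.2-3B-Instruct,peft=lora-rotten-tomatoes \
    --tasks ifeval
```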

I would appreciate any pointers 🙏🏽

models

None public yet

datasets

None public yet