Post 612
Fine-tuning on the edge: pushing the MI100 to its limits. QWQ-32B 4-bit QLoRA fine-tuning. VRAM usage: 31.498 GB / 31.984 GB :D
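The reported ~31.5 GB on a 32 GB card is plausible. A back-of-the-envelope sketch (my own arithmetic, not from the post; the helper name is made up for illustration) of where the memory goes:

```python
# Back-of-the-envelope VRAM arithmetic for 4-bit QLoRA on a ~32B-parameter
# model. Illustrative only: real usage also depends on quantization block
# overhead, activations, sequence length, LoRA adapters, and optimizer state.

def quantized_weights_gb(n_params: float, bits: int) -> float:
    """GB needed for the frozen base weights stored at `bits` per parameter."""
    return n_params * bits / 8 / 1e9

# ~32e9 parameters at 4 bits/param -> roughly 16 GB just for the base
# weights, leaving the rest of the MI100's 32 GB for adapters, optimizer
# state, activations, and gradients during training.
base_weights = quantized_weights_gb(32e9, 4)
print(f"{base_weights:.1f} GB")  # 16.0 GB
```

This is why a 32B model that would need ~64 GB in fp16 can be QLoRA-tuned on a single 32 GB card, with only a few hundred MB to spare, as the post's numbers show.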
Post 1906
-UPDATED- 4-bit inference is working! The blogpost has been updated with a code snippet and requirements.txt: https://devquasar.com/uncategorized/all-about-amd-and-rocm/ -UPDATED-
I've played around with an MI100 and ROCm and collected my experience in a blogpost: https://devquasar.com/uncategorized/all-about-amd-and-rocm/
Unfortunately, I could not make inference or training work with the model loaded in 8-bit, or use BnB, but I did everything else and documented my findings.
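The working snippet and requirements.txt are in the linked blogpost. Purely as a generic illustration of what a 4-bit load looks like with the mainline transformers + bitsandbytes API (which, per the original post, was exactly the part that initially failed on ROCm), a sketch with a placeholder model id:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Generic 4-bit loading sketch using the standard transformers/bitsandbytes
# API -- NOT the post's actual snippet (that lives in the linked blogpost),
# and the model id below is only a placeholder.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",              # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```

Whether the ROCm fix ultimately used this exact stack or a ROCm-specific bitsandbytes build is detailed in the blogpost, not here.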
csabakecskemeti/bert-base-case-yelp5-tuned-experiment
Text Classification • Updated Apr 5, 2024 • 15