sometimesanotion
AI & ML interests

Agentic LLM services, model merging, finetunes, distillation

Recent Activity

liked a model about 1 hour ago: jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
updated a model about 4 hours ago: sometimesanotion/Lamarck-14B-v0.7
liked a model about 5 hours ago: bunnycore/Phi-4-Model-Stock-v4

Organizations

Hugging Face Discord Community

Posts
I've managed a #1 average score of 41.22% for 14B-parameter models on the Open LLM Leaderboard. As of this writing, sometimesanotion/Lamarck-14B-v0.7 is #8 among all models up to 70B parameters.

It took a custom toolchain around Arcee AI's mergekit to manage the complex merges, gradients, and LoRAs required to make this happen. I really like seeing features of many quality finetunes in one solid generalist model.
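For context, mergekit merges are driven by declarative YAML configs. Below is a minimal sketch of what a slerp merge with layer-wise gradients can look like; the model names are placeholders, and this is not the actual Lamarck recipe or toolchain:

```yaml
# Hypothetical example only — placeholder model names, not the Lamarck config.
# Interpolates two 14B finetunes layer by layer, with different gradient
# schedules for attention and MLP weights across the depth of the model.
merge_method: slerp
base_model: placeholder/finetune-A-14B
slices:
  - sources:
      - model: placeholder/finetune-A-14B
        layer_range: [0, 48]
      - model: placeholder/finetune-B-14B
        layer_range: [0, 48]
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.3, 0.5, 0.7, 1.0]  # gradient: favor A early, B late
    - filter: mlp
      value: [1.0, 0.7, 0.5, 0.3, 0.0]  # inverse gradient for MLP weights
    - value: 0.5                        # default for all other tensors
dtype: bfloat16
```

The gradient lists are interpolated across layers, which is what lets a merge blend one model's attention behavior with another's feed-forward behavior.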

Datasets

None public yet