Ricardo Malagon Jerez
rjmalagon
5 followers · 39 following
AI & ML interests
None yet
Recent Activity
liked a model 10 days ago: wanlige/li-14b-v0.4
liked a model 2 months ago: Danielbrdz/Barcenas-10b
reacted to mkurman's post with 🔥 3 months ago
We built a new small language model, SmolLM2-MedIT-Upscale-2B, based on SmolLM2-1.7B-Instruct from Hugging Face. The premise was simple: increasing the vector dimensions in the attention layers should positively impact the model's capabilities.

What did we prove? In total, not much, really, since we don't have an original trained under the same conditions as our upscale. However:

1. We scaled up the model without losing its quality.
2. We confirmed that the method we devised works.
3. After extremely short fine-tuning, the model achieved much better results on IFEval than the original (53.68 vs 64.29) and a higher overall average score on the Open LLM Leaderboard (14.75 vs 15.17).

I consider this a big success, since surpassing the original on metrics is often very time-consuming, generates high costs, and doesn't always work out.

Meanwhile, we're moving forward, training SmolLM2 400M Instruct as an upscale of the 136M model. We're curious how increasing the base and intermediate vector dimensions will affect the model's quality. We'll compare it to the original and to the 360M Instruct version released by Hugging Face.

License: Apache 2.0
https://huggingface.co./meditsolutions/SmolLM2-MedIT-Upscale-2B
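Since the upscaled model is published on the Hub (see the URL above), it can be tried with the standard transformers text-generation workflow. The sketch below is only illustrative: the prompt, chat-template usage, and generation parameters are assumptions, not part of the original post.

```python
# Minimal sketch: loading the upscaled model with the standard transformers API.
# Prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meditsolutions/SmolLM2-MedIT-Upscale-2B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instruct-tuned SmolLM2 variants ship a chat template; apply it to a single user turn.
messages = [{"role": "user", "content": "Explain model upscaling in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```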
Organizations
None yet
models (1)
rjmalagon/Nxcode-CQ-7B-orpo-Q8_0-GGUF • Text Generation • Updated May 9, 2024 • 13 • 2
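The model listed above is a Q8_0 GGUF quantization, so it targets llama.cpp-style runtimes rather than transformers. A minimal sketch using llama-cpp-python follows; the filename pattern passed to the loader is an assumption and should be checked against the files on the model page.

```python
# Minimal sketch: running a Q8_0 GGUF quantization with llama-cpp-python.
# repo_id comes from the listing above; the filename glob is an assumed pattern.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rjmalagon/Nxcode-CQ-7B-orpo-Q8_0-GGUF",
    filename="*Q8_0.gguf",  # assumed filename pattern; verify on the model page
    n_ctx=4096,
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```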
datasets
None public yet