Training Jamba (LR etc.)

#1
by ptrdvn - opened

Hey, great model! I'm Peter from Lightblue, and we have a Jamba finetune too (lightblue/Jamba-v0.1-chat-multilingual).

Just looking at your LR, it might be an order of magnitude too low: we trained with LR = 2e-4 (0.0002) and it worked pretty well.

The whole training setup is here if you're interested:

https://huggingface.co./lightblue/Jamba-v0.1-chat-multilingual#training

Let me know if I can help at all :)

Owner

Hey @ptrdvn ! Thank you so much for the input and for sharing your training! This is incredibly promising and makes me feel hopeful about the power of Jamba. Your model also has great outputs! Can't wait to see more iterations and/or new models you cook up.

After seeing your results with the LR and hyperparameters, I'm definitely going to use your advice on my next training run. Hopefully I can get a new one trained and pushed over the next few days if the resources are there.

I appreciate the insight and willingness to help! I'll let you know how it goes and reach out for sure.

No worries, good luck!

Severian changed discussion status to closed
