m-ric 
posted an update Oct 10
Rhymes AI drops Aria: small Multimodal MoE that beats GPT-4o and Gemini-1.5-Flash ⚡️

A new player has entered the game! Rhymes AI has just launched and unveiled Aria – a multimodal powerhouse that punches above its weight.

Key insights:

🧠 Mixture-of-Experts architecture: 25.3B total params, but only 3.9B active per token (see the tiny routing sketch after this list).

🌈 Multimodal: text/image/video → text.

📚 Novel “multimodal-native” training approach: multimodal data is used from the very start of pre-training rather than tacked on later.

📏 Long 64K-token context window.

🔓 Apache 2.0 license, with weights, code, and demos all open (a minimal inference sketch follows the benchmark notes below).
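For intuition on the MoE numbers (25.3B total, only ~3.9B active), here's a tiny, generic top-k routing sketch in PyTorch. It's just an illustration of the Mixture-of-Experts idea, not Aria's actual layer: a router picks a couple of experts per token, so only a fraction of the expert parameters run on any given forward pass.

```python
# Generic top-k MoE routing sketch (illustration only, NOT Aria's real layer).
# A router scores experts per token and only the top_k experts actually run,
# which is why active parameters are a small fraction of total parameters.
import torch
import torch.nn as nn


class TinyMoELayer(nn.Module):
    def __init__(self, dim: int = 64, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)  # token -> expert scores
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.router(x)                           # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)    # keep top_k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = idx[:, k] == e                     # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out


layer = TinyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```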

⚡️ On the benchmark side, Aria leaves some big names in the dust.

- It beats Pixtral 12B and Llama-3.2-11B on several vision benchmarks such as MMMU and MathVista.
- It even beats the much bigger GPT-4o on long-video tasks and outshines Gemini 1.5 Flash at parsing lengthy documents.
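If you want to poke at the open weights yourself, here's a minimal inference sketch with transformers. It assumes the checkpoint lives on the Hub under an id like rhymes-ai/Aria, that it ships custom code (trust_remote_code), and that its processor accepts a text prompt plus a PIL image; check the model card for the exact prompt format and arguments before copying this.

```python
# Minimal inference sketch. Assumptions (check the model card!): the Hub id is
# "rhymes-ai/Aria", the model ships custom code (trust_remote_code=True), and
# its processor accepts text + images with an "<image>" placeholder in the prompt.
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "rhymes-ai/Aria"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Any image will do; this URL is a placeholder.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)
prompt = "<image>\nSummarize this chart in two sentences."  # placeholder prompt format

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```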

But Rhymes AI isn't just showing off benchmarks. They've already got Aria powering a real-world augmented-search app called “BeaGo”, which handles even recent events with great accuracy!

And they partnered with AMD to make it much faster than competitors like Perplexity or Gemini search.

Read the Aria paper 👉 Aria: An Open Multimodal Native Mixture-of-Experts Model (2410.05993)

Try BeaGo 🐶 👉 https://rhymes.ai/blog-details/introducing-beago-your-smarter-faster-ai-search

BeaGo is fine! But why only English?
