Are you gonna revive the LinkedIn newsletter someday? I think it was cool 😭😭
https://www.linkedin.com/newsletters/7233909926606053377/
Aurélien-Morgan CLAUDON (Aurelien-Morgan)
AI & ML interests: None yet
Recent Activity
upvoted a paper 2 days ago: nGPT: Normalized Transformer with Representation Learning on the Hypersphere
new activity 2 days ago in huggingface/HuggingDiscussions: [FEEDBACK] Follow
Aurelien-Morgan's activity

upvoted a paper 2 days ago: nGPT: Normalized Transformer with Representation Learning on the Hypersphere

[FEEDBACK] Follow • 6 • #14 opened over 1 year ago by victor

[bot] Conversion to Parquet • #1 opened 3 days ago by parquet-converter

Article: Common AI Model Formats • 27

Article: Trace & Evaluate your Agent with Arize Phoenix • 29
I've seen many astonishing results from people posting examples of this model in action.

reacted to AdinaY's post with 🤯 • 9 days ago
Post • 2700
Wan2.1 🔥📹 new OPEN video model by Alibaba Wan team!
Model: Wan-AI/Wan2.1-T2V-14B
Demo: Wan-AI/Wan2.1
✨Apache 2.0
✨8.19GB VRAM, runs on most GPUs
✨Multi-Tasking: T2V, I2V, Video Editing, T2I, V2A
✨Text Generation: Supports Chinese & English
✨Powerful Video VAE: Encode/decode 1080P w/ temporal precision
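Not from the original post, but to make the "runs on most GPUs" claim concrete: a minimal text-to-video sketch using the diffusers library. The repo id (a diffusers-format variant of Wan-AI/Wan2.1-T2V-14B), the frame count, and the output handling are assumptions to verify against the Wan2.1 model card.

```python
# Hedged sketch, not from the post: assumes a diffusers-format Wan2.1 checkpoint.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Assumed repo id; check the Hub for the exact diffusers-compatible variant.
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trades speed for VRAM so it fits on consumer GPUs

result = pipe(
    prompt="A corgi surfing a small wave at sunset, cinematic lighting",
    num_frames=33,  # short clip to keep memory and runtime modest
)
export_to_video(result.frames[0], "wan_t2v.mp4", fps=16)
```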

reacted to freddyaboulton's post with 🤗🔥 • 12 days ago
Post • 3105
Getting WebRTC and WebSockets right in Python is very tricky. If you've tried to wrap an LLM in a real-time audio layer, then you know what I'm talking about.
That's where FastRTC comes in! It makes WebRTC and WebSocket streams super easy, with minimal code and overhead.
Check out our org: hf.co/fastrtc
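As a rough illustration (my addition, not from the post): the kind of minimal audio echo stream FastRTC is built around. The argument names and the Stream/ReplyOnPause API are taken from the FastRTC docs as I recall them, so treat them as assumptions to check against hf.co/fastrtc.

```python
# Minimal FastRTC-style audio echo sketch (API details assumed from the docs).
import numpy as np
from fastrtc import Stream, ReplyOnPause


def echo(audio: tuple[int, np.ndarray]):
    # `audio` is a (sample_rate, samples) tuple; we simply send it back,
    # but this is where an STT -> LLM -> TTS chain would go.
    yield audio


stream = Stream(
    handler=ReplyOnPause(echo),  # runs `echo` each time the speaker pauses
    modality="audio",
    mode="send-receive",
)

if __name__ == "__main__":
    stream.ui.launch()  # serves a ready-made UI over WebRTC
```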

upvoted an article 12 days ago
Article: FastRTC: The Real-Time Communication Library for Python • 130

reacted to lysandre's post with ❤️ • 14 days ago
Post • 5494
SmolVLM-2 and SigLIP-2 are now part of transformers, in dedicated releases!
They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.
This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).
Starting with SmolVLM-2 & SigLIP-2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.
Going forward, continue expecting software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements, and bug fixes. Accompanying these software releases, we'll release tags offering brand-new models as fast as possible, to make them accessible to all immediately.
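For context (my addition, not part of lysandre's post): installing from one of those tags is a plain pip-from-git install, and the tag then ships the new architecture. The checkpoint id and the exact auto classes in the loading sketch below are assumptions to double-check against the SmolVLM-2 model card.

```python
# Install transformers from the dedicated tag described in the post:
#   pip install "git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2"
#
# Loading sketch (checkpoint id and image URL are illustrative assumptions).
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user",
     "content": [
         {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
         {"type": "text", "text": "Describe this image."},
     ]},
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)  # cast floating inputs to match the model

generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```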

reacted to jsulz's post with 🚀❤️ • 16 days ago
Post • 3380
Time flies!
Six months after joining Hugging Face the Xet team is kicking off the first migrations from LFS to our storage for a number of repositories on the Hub.
More on the nitty-gritty details behind the migration soon, but here are the big takeaways:
🤖 We've successfully completed the first migrations from LFS -> Xet to test the infrastructure and prepare for a wider release
✅ No action on your part needed - you can work with a Xet-backed repo like any other repo on the Hub (for now - major improvements on their way!)
👀 Keep an eye out for the Xet logo to see if a repo you know is on our infra! See the screenshots below to spot the difference 👇
⏩ ⏩ ⏩ Blazing uploads and downloads coming soon. We're gearing up for a full integration with the Hub's Python library that will make building on the Hub faster than ever - special thanks to @celinah and @Wauplin for their assistance.
🎉 Want Early Access? If you’re curious and want to test it out the bleeding edge that will power the development experience on the Hub, we’d love to partner with you. Let me know!
This is the culmination of a lot of effort from the entire team. Big round of applause to @sirahd @brianronan @jgodlewski @hoytak @seanses @assafvayner @znation @saba9 @rajatarya @port8080 @yuchenglow
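To make the "no action needed" point concrete (my sketch, not from jsulz's post): the usual huggingface_hub calls work unchanged whether a repo sits on LFS or Xet storage. The repo id below is a made-up placeholder.

```python
# Sketch only: a Xet-backed repo behaves like any other Hub repo, so the
# standard huggingface_hub API is all you need. The repo id is a placeholder.
from huggingface_hub import snapshot_download, upload_file

# Download every file in the repo (Xet vs. LFS storage is transparent here).
local_dir = snapshot_download(repo_id="your-org/your-xet-backed-model")

# Uploads work the same way, too.
upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="your-org/your-xet-backed-model",
    commit_message="update weights",
)
```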

reacted to fdaudens's post with ❤️ • 16 days ago
Post • 5802
🎯 Perplexity drops their FIRST open-weight model on Hugging Face: A decensored DeepSeek-R1 with full reasoning capabilities. Tested on 1000+ examples for unbiased responses.
Check it out: perplexity-ai/r1-1776
Blog post: https://perplexity.ai/hub/blog/open-sourcing-r1-1776
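One way to try it (my addition, heavily hedged): r1-1776 is a full-size DeepSeek-R1 derivative, so in practice it needs a large multi-GPU setup, a quantized build, or a hosted endpoint. The sketch below only shows the standard transformers chat flow; nothing here is taken from Perplexity's release notes.

```python
# Hedged sketch: perplexity-ai/r1-1776 is DeepSeek-R1 sized, so loading it
# locally assumes serious multi-GPU hardware. trust_remote_code is passed in
# case your transformers version lacks native support for the architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "perplexity-ai/r1-1776"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shards the model across available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Give a factual summary of the 1989 Tiananmen Square protests."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```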