After some heated discussion, we'd like to clarify our intent regarding storage limits on the Hub.
TL;DR:
- Public storage is free and, barring blatant abuse, unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible.
- Private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise).
We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community.
Multimodal
> Google shipped PaliGemma 2, a new iteration of PaliGemma in more sizes (3B, 10B, and 28B), with pre-trained and captioning variants
> OpenGVLab released InternVL2, seven new vision LMs in different sizes, with SOTA checkpoints under an MIT license
> The Qwen team at Alibaba released the base models of Qwen2-VL in 2B, 7B, and 72B checkpoints
LLMs
> Meta released a new iteration of Llama 70B, Llama 3.3 70B, trained further
> EuroLLM-9B-Instruct is a new multilingual LLM for European languages with an Apache 2.0 license
> Dataset: CohereForAI released GlobalMMLU, a multilingual version of MMLU covering 42 languages, with an Apache 2.0 license
> Dataset: QwQ-LongCoT-130K is a new dataset for training reasoning models
> Dataset: FineWeb2 just landed with a multilinguality update: nearly 8TB of pretraining data in many languages!
Image/Video Generation
> Tencent released HunyuanVideo, a new photorealistic video generation model
> OminiControl is a new editing/control framework for image generation models like Flux
Audio
> Indic-Parler-TTS is a new text-to-speech model built by the community
Keeping up with open-source AI in 2024 = overwhelming.
Here's help: We're launching our Year in Review on what actually matters, starting today!
Fresh content dropping daily until year end. Come along for the ride - first piece out now with @clem's predictions for 2025.
Think of it as your end-of-year AI chocolate calendar.
Kudos to @BrigitteTousi, @clefourrier, @Wauplin, and @thomwolf for making it happen. We teamed up with aiworld.eu for awesome visualizations to make this digestible; it's a charm to work with their team.
It's the 2nd of December, and here's your Cyber Monday present!
We're cutting prices on Hugging Face Inference Endpoints and Spaces!
Our folks at Google Cloud are treating us to a 40% price cut on GCP Nvidia A100 GPUs for the next 3 months. We have other reductions on all instances, ranging from 20% to 50%.
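As a quick illustration, applying a percentage cut to an hourly rate is simple arithmetic. The rate below is hypothetical, not a quoted GCP price:

```python
def discounted_rate(hourly_rate: float, discount: float) -> float:
    """Return the hourly rate after a fractional discount (e.g. 0.40 for 40%)."""
    if not 0 <= discount < 1:
        raise ValueError("discount must be in [0, 1)")
    return hourly_rate * (1 - discount)

# e.g. an A100 instance at a hypothetical $4.00/hour with the 40% cut:
print(discounted_rate(4.00, 0.40))  # → 2.4
```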
We just released Transformers.js v3.1 and you're not going to believe what's now possible in the browser with WebGPU! Let's take a look:
> Janus from DeepSeek for unified multimodal understanding and generation (Text-to-Image and Image-Text-to-Text)
> Qwen2-VL from Qwen for dynamic-resolution image understanding
> JinaCLIP from Jina AI for general-purpose multilingual multimodal embeddings
> LLaVA-OneVision from ByteDance for Image-Text-to-Text generation
> ViTPose for pose estimation
> MGP-STR for optical character recognition (OCR)
> PatchTST & PatchTSMixer for time series forecasting
That's right, everything running 100% locally in your browser (no data sent to a server)! Huge for privacy!
If you use Google Kubernetes Engine to host your ML workloads, I think this series of videos is a great way to kickstart your journey of deploying LLMs in less than 10 minutes! Thank you @wietse-venema-demo!
I'd like to share a bit more about the Deep Learning Containers (DLCs) we built with Google Cloud to transform the way you build AI with open models on the platform!
With pre-configured, optimized environments for PyTorch Training (GPU) and Inference (CPU/GPU), Text Generation Inference (GPU), and Text Embeddings Inference (CPU/GPU), the Hugging Face DLCs offer:
> Optimized performance on Google Cloud's infrastructure, with TGI, TEI, and PyTorch acceleration.
> Hassle-free environment setup: no more dependency issues.
> Seamless updates to the latest stable versions.
> Streamlined workflows, reducing development and maintenance overhead.
> Robust Google Cloud security features.
> Fine-tuned for optimal performance, integrated with GKE and Vertex AI.
> Community examples for easy experimentation and implementation.
> TPU support for PyTorch Training/Inference and Text Generation Inference coming soon!
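To make this concrete, here is a minimal sketch (not the official guide) of what deploying a model with the TGI DLC on Vertex AI can look like via the google-cloud-aiplatform SDK. The container image URI, machine type, and accelerator choice are placeholders you'd take from the current documentation:

```python
def tgi_env(model_id: str, num_shard: int = 1) -> dict:
    """Environment variables read by Text Generation Inference at startup."""
    return {"MODEL_ID": model_id, "NUM_SHARD": str(num_shard)}

def deploy_tgi_model(project: str, location: str, image_uri: str, model_id: str):
    """Upload and deploy a model served by the TGI DLC on Vertex AI.

    Requires the google-cloud-aiplatform package and GCP credentials.
    """
    from google.cloud import aiplatform  # imported here: needs GCP setup

    aiplatform.init(project=project, location=location)
    model = aiplatform.Model.upload(
        display_name=model_id.split("/")[-1],
        serving_container_image_uri=image_uri,  # the Hugging Face TGI DLC image
        serving_container_environment_variables=tgi_env(model_id),
    )
    # Machine/accelerator choice depends on the model size.
    return model.deploy(
        machine_type="g2-standard-12",
        accelerator_type="NVIDIA_L4",
        accelerator_count=1,
    )
```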
IBM & NASA just released an open-source AI model for weather & climate on Hugging Face.
Prithvi WxC offers insights beyond forecasting, tackling challenges from local weather to global climate. Potential apps: targeted forecasts, severe weather detection & more. https://huggingface.co./Prithvi-WxC
This is impressive. Check out this comparison of Hurricane Ida between ground truth and the AI model's prediction.
Serving Meta Llama 3.1 405B on Google Cloud is now possible via the Hugging Face Deep Learning Containers (DLCs) for Text Generation Inference (TGI).
Thanks to the Hugging Face DLCs for TGI and Google Cloud Vertex AI, deploying a high-performance text generation container for serving Large Language Models (LLMs) has never been easier. And we're not going to stop here; stay tuned as we enable more experiences to build AI with open models on Google Cloud!
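Once deployed, a TGI container exposes a `/generate` route that accepts a JSON body with `inputs` and `parameters`. A hedged sketch, assuming a placeholder endpoint URL for your own deployment:

```python
import json
import urllib.request

def build_generate_payload(prompt: str, max_new_tokens: int = 128,
                           temperature: float = 0.7) -> dict:
    """JSON body for TGI's /generate route."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

def generate(endpoint_url: str, prompt: str) -> str:
    """Send a single generation request to a live TGI endpoint."""
    req = urllib.request.Request(
        f"{endpoint_url}/generate",
        data=json.dumps(build_generate_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; needs a deployment
        return json.load(resp)["generated_text"]
```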
Just crossed 200,000 free public AI datasets shared by the community on Hugging Face! Text, image, video, audio, time-series & many more... Thanks everyone!