Ali El Filali

alielfilali01

AI & ML interests

AI Psychometrician? | NLP (mainly for Arabic) | Other interests include Reinforcement Learning and Cognitive Science

Recent Activity

updated a dataset about 6 hours ago
OALL/requests
liked a Space about 9 hours ago
osanseviero/gemini-coder
liked a model about 10 hours ago
Qwen/QVQ-72B-Preview

Organizations

- Gradio-Themes-Party
- Arabic Machine Learning
- BigLAM: BigScience Libraries, Archives and Museums
- Stable Diffusion Dreambooth Concepts Library
- Blog-explorers
- ASAS AI
- Nt3awnou
- Qwen
- Mixed Arabic Datasets
- ZeroGPU Explorers
- 2A2I Legacy Models & Datasets
- AtlasIA
- 2A2I
- Open Arabic LLM Leaderboard
- MLX Community
- Social Post Explorers
- C4AI Community
- Dev Mode Explorers
- Chinese LLMs on Hugging Face
- ThinkAI
- KABOUR
- Hugging Face Discord Community
- llmc
- Arabic Translation Prompt Engineering
- Inception
- Dataset Tools
- ml-fw-prerelease
- Data Is Better Together Contributor
- Donut Earthers 🍩
- QudraTech

alielfilali01's activity

replied to their post 12 days ago
posted an update 12 days ago
Unpopular opinion: Open Source takes courage!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a superpower!

Cheers to the heroes here who see this!
reacted to takarajordan's post with 🔥❤️ 12 days ago
I'm super excited to release my first open-source text dataset:

WorldScenario 20K is a novel dataset of 20,000 synthetically generated multi-stakeholder scenarios designed to simulate real-world decision-making processes. Each scenario explores a unique environmental, societal, or economic issue.

I used the brand new meta-llama/Llama-3.3-70B-Instruct model to generate this dataset, and I put it through some post-processing to clean it and evaluate its diversity.

I'd appreciate some feedback and thoughts on my new release! Thanks!

takarajordan/WorldScenario_20K
reacted to AdinaY's post with 🤗 14 days ago
Updates from the Chinese community last week 🔥

LLM:
✨ Sailor2, a multilingual model supporting 10+ South-East Asian languages, by Sea AI Lab. https://huggingface.co./sailor2

MLLM:
✨InternVL 2.5 , new open multimodal LLM by OpenGVLab
https://huggingface.co./collections/OpenGVLab/internvl-25-673e1019b66e2218f68d7c1c
✨Qwen2-VL 2B/7B/72B base model, the latest iteration of our Qwen-VL model by Alibaba Qwen
Qwen/qwen2-vl-66cee7455501d7126940800d

Video model:
✨HunyuanVideo , 13B open video model by Tencent
tencent/HunyuanVideo

Reasoning model:
✨ LLaMA-O1 🦙 base & supervised model; pretrain & finetune datasets and demo all released
zh-ai-community/reasoning-models-67409fb3aa1ed78f10087cd7

Audio model:
✨Fish Speech 1.5, Text-to-speech in 13 languages, trained on 1M+ hours of audio by FishAudio
fishaudio/fish-speech-1.5
✨ClearVoice, An advanced voice processing framework by Alibaba Tongyi SpeechAI https://huggingface.co./alibabasglab

More details 👉 https://huggingface.co./zh-ai-community
posted an update 16 days ago
Apparently I forgot to put this here!

Well, this is a bit late, but consider giving our recent blog post a read if you are interested in evaluation.

You don't have to be into Arabic NLP to read it; the main contribution we are introducing is a new evaluation measure for NLG. We made the first application of this measure on Arabic for now, and we will be working with colleagues from the community to expand it to other languages.

Blog:
Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard
https://huggingface.co./blog/leaderboard-3c3h-aragen

Space:
inceptionai/AraGen-Leaderboard

Give it a read and let me know your thoughts 🤗
reacted to dvilasuero's post with ❤️🔥 18 days ago
🌐 Announcing Global-MMLU: an improved MMLU Open dataset with evaluation coverage across 42 languages, built with Argilla and the Hugging Face community.

Global-MMLU is the result of months of work with the goal of advancing Multilingual LLM evaluation. It's been an amazing open science effort with collaborators from Cohere For AI, Mila - Quebec Artificial Intelligence Institute, EPFL, Massachusetts Institute of Technology, AI Singapore, National University of Singapore, KAIST, Instituto Superior Técnico, Carnegie Mellon University, CONICET, and University of Buenos Aires.

🏷️ +200 contributors used Argilla to identify MMLU questions where regional, dialect, or cultural knowledge was required to answer correctly. 85% of the questions required Western-centric knowledge!

Thanks to this annotation process, the open dataset contains two subsets:

1. 🗽 Culturally Agnostic: no specific regional or cultural knowledge is required.
2. ⚖️ Culturally Sensitive: requires dialect, cultural, or geographic knowledge to answer correctly.

Moreover, we provide high-quality translations for 25 of the 42 languages, thanks again to the community and to professional annotators leveraging Argilla on the Hub.

I hope this will ensure a better understanding of the limitations and challenges for making open AI useful for many languages.

Dataset: CohereForAI/Global-MMLU
reacted to stas's post with ❤️ 21 days ago
If you remember my work on MAMF (finding the realistically achievable TFLOPS ceiling), the Intel AI team has shared their measurements and they scored ...

an incredible 99.4% TFLOPS efficiency for Gaudi 2!

That's quite amazing! Your ROI on these accelerators will be very high.

The full table is here: https://github.com/stas00/ml-engineering/tree/master/compute/accelerator#maximum-achievable-matmul-flops-comparison-table

Since we have seen competitors' achievable efficiency get worse with each new generation, I'm looking forward to seeing whether Gaudi 3 will keep the bar high!

Thanks to Avi Rubin, Lakshman Chari, Imtiaz Sajwani, Ramy J and Zhiqi Tao for helping to get these numbers to the community.
reacted to AdinaY's post with ❤️ 21 days ago
2023 & 2024 Top Downloaded (all time) Open Models on the hub are both from the Chinese community 👀

2023 👉 Bge base by BAAI
BAAI/bge-base-en-v1.5
2024 👉 Qwen 2.5 by Alibaba Qwen
Qwen/Qwen2.5-1.5B-Instruct

Can’t wait to see what incredible models the Chinese community will bring in 2025🚀

✨ Follow https://huggingface.co./zh-ai-community to get the latest updates from the Chinese community
✨ Explore the 2024 Year in Review huggingface/open-source-ai-year-in-review-2024
reacted to clem's post with ❤️ 25 days ago
Hugging Face is becoming the best place to share the most viral AI apps with Spaces.

Kolors Virtual Try-on just crossed 6,000,000 unique visitors & is now the #5 most popular space. Congrats to the Kwai Kolors team!

Kwai-Kolors/Kolors-Virtual-Try-On
reacted to ariG23498's post with 🚀 28 days ago
reacted to vincentg64's post with 🧠 about 1 month ago
There is no such thing as a Trained LLM https://mltblog.com/3CEJ9Pt

What I mean here is that traditional LLMs are trained on tasks irrelevant to what they will do for the user. It's like training a plane to operate efficiently on the runway, but not to fly. In short, it is almost impossible to train an LLM, and evaluating one is just as challenging. Then again, training is not even necessary. In this article, I dive into all these topics.

➡️ Training LLMs for the wrong tasks

Since the beginnings with BERT, training an LLM typically consists of predicting the next tokens in a sentence, or removing some tokens and then having your algorithm fill in the blanks. You optimize the underlying deep neural networks to perform these supervised learning tasks as well as possible. Typically, it involves growing the list of tokens in the training set to billions or trillions, increasing the cost and time to train. Recently, however, there has been a tendency to work with smaller datasets, by distilling the input sources and token lists. After all, out of one trillion tokens, 99% are noise and do not contribute to improving the results for the end user; they may even contribute to hallucinations. Keep in mind that human beings have a vocabulary of about 30,000 keywords, and that the number of potential standardized prompts on a specialized corpus (and thus the number of potential answers) is less than a million.
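
The fill-in-the-blanks objective described above can be made concrete with a short sketch. This is a hypothetical illustration of how a BERT-style masked training example might be built; the `mask_tokens` helper and the 15% masking rate are my own assumptions, not code from the article:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Build a fill-in-the-blanks training example: replace a fraction of
    tokens with a mask symbol and keep the originals as prediction targets."""
    rng = random.Random(seed)  # seeded for reproducibility
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            inputs.append(mask_token)
            labels.append(tok)    # the model is trained to recover this token
        else:
            inputs.append(tok)
            labels.append(None)   # unmasked positions are ignored by the loss
    return inputs, labels

inputs, labels = mask_tokens("the plane taxis on the runway".split(), seed=3)
```

The model never sees the masked tokens at input time; the loss is computed only at the masked positions, which is exactly the "fill the blanks" setup mentioned above.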

➡️ Read the full article at https://mltblog.com/3CEJ9Pt, also featuring issues with evaluation metrics and the benefits of untrained LLMs.
reacted to malhajar's post with 🔥 about 1 month ago
🇫🇷 Official launch of the OpenLLM French Leaderboard: an open-source initiative to track the evaluation of French-language LLMs

After a lot of effort and sweat with Alexandre Lavallee, we are thrilled to announce that the OpenLLMFrenchLeaderboard is live on Hugging Face (space url: le-leadboard/OpenLLMFrenchLeaderboard), the very first platform dedicated to evaluating large language models (LLMs) in French. 🇫🇷✨

This long-term project is above all a labor of passion, but more importantly an absolute necessity. It is becoming urgent and vital to work toward more transparency in the strategic field of so-called multilingual LLMs. The first building block is therefore a systematic and systemic evaluation of current and future models.

Is your French AI model ready to stand out? Submit it in our space and see how it compares to the other models.

❓ How it works:
Submit your French LLM for evaluation, and we will test it on reference benchmarks specifically adapted for the French language. Our benchmark suite includes:

- BBH-fr: complex reasoning
- IFEval-fr: instruction following
- GPQA-fr: advanced knowledge
- MUSR-fr: narrative reasoning
- MATH_LVL5-fr: mathematical abilities
- MMMLU-fr: multitask understanding

The process is still manual, but we are working on automating it, with the support of the Hugging Face community.

@clem, shall we get ready for a Space upgrade? 😏👀

This is not just about numbers; it is about building an AI that truly reflects our language, our culture, and our values. OpenLLMFrenchLeaderboard is our personal contribution to shaping the future of LLMs in France.
reacted to elliesleightholm's post with 🤗❤️ about 1 month ago
reacted to LukeNeumann's post with 🤯 about 1 month ago
reacted to monsoon-nlp's post with ❤️ about 1 month ago
Great to see Tatta Bio release an embeddings version of their DNA/protein language model 🧬: tattabio/gLM2_650M_embed
reacted to m-ric's post with 🔥 about 1 month ago
Great feature alert: You can now use any Space as a tool for your transformers.agent! 🛠️🔥🔥

This lets you take the coolest spaces, like FLUX.1-dev, and use them in agentic workflows with a few lines of code! 🧑‍💻

In the video below, I set up my fake vacation pictures where I'm awesome at surfing (I'm really not) 🏄

Head to the doc to learn this magic 👉 https://huggingface.co./docs/transformers/main/en/agents_advanced#import-a-space-as-a-tool-
reacted to maxiw's post with ❤️ about 1 month ago
I was curious to see what people post here on HF so I created a dataset with all HF Posts: maxiw/hf-posts

Some interesting stats:

Top 5 Authors by Total Impressions:
-----------------------------------
@merve : 171,783 impressions (68 posts)
@fdaudens : 135,253 impressions (81 posts)
@singhsidhukuldeep : 122,591 impressions (81 posts)
@akhaliq : 119,526 impressions (78 posts)
@MonsterMMORPG : 112,500 impressions (45 posts)

Top 5 Users by Number of Reactions Given:
----------------------------------------
@osanseviero : 1278 reactions
@clem : 910 reactions
@John6666 : 899 reactions
@victor : 674 reactions
@samusenps : 655 reactions

Top 5 Most Used Reactions:
-------------------------
❤️: 7048 times
🔥: 5921 times
👍: 4856 times
🚀: 2549 times
🤗: 2065 times
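
As a rough illustration of how stats like "top authors by total impressions" could be computed, here is a minimal sketch. The records and field names (`author`, `impressions`) are hypothetical placeholders; the real maxiw/hf-posts schema may differ:

```python
from collections import defaultdict

# Hypothetical records mimicking the kind of fields a posts dataset might carry.
posts = [
    {"author": "merve", "impressions": 1200},
    {"author": "fdaudens", "impressions": 900},
    {"author": "merve", "impressions": 800},
    {"author": "akhaliq", "impressions": 500},
]

def top_authors_by_impressions(posts, k=5):
    """Rank authors by summed impressions; also report their post counts."""
    totals = defaultdict(int)
    counts = defaultdict(int)
    for p in posts:
        totals[p["author"]] += p["impressions"]
        counts[p["author"]] += 1
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return [(author, total, counts[author]) for author, total in ranked]

for author, total, n in top_authors_by_impressions(posts, k=3):
    print(f"@{author}: {total:,} impressions ({n} posts)")
```

The same group-sum-sort pattern produces each of the leaderboards shown above once the real dataset is loaded.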