Manel ALOUI

Manel-Hik

AI & ML interests

NLP, recommender systems, machine learning

Recent Activity

updated a dataset about 1 month ago
OALL/ALRAGE

Organizations

🤗 Course Team AI Law Assistant · LangChainDatasets · FreedomAI · fastai X Hugging Face Group 2022 · Arabic Machine Learning · Open Arabic LLM Leaderboard · Data Is Better Together Contributor

Manel-Hik's activity

reacted to joylarkin's post with 🚀 3 months ago
💬 Chat as a way to query SQL! The Airtrain AI team is happy to share a new Hugging Face Space that lets you interact with Hugging Face Hub datasets using a natural language chatbot. 🤗

Start Exploring 👉 airtrain-ai/hf-dataset-chat-to-sql

This Space is forked from davidberenstein1957/text-to-sql-hub-datasets by @davidberenstein1957 and features chat capability with improved table naming. The tool works with Hugging Face's recently released in-browser DuckDB-based SQL query engine for datasets.
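To get a feel for the engine the Space builds on, here is a minimal editor's sketch (not the Space's own code) of querying a Hub dataset with DuckDB from Python. It assumes a recent DuckDB whose hf:// protocol can read Hub files directly; the dataset path is illustrative, and @~parquet refers to the Hub's auto-converted Parquet branch.

```python
import duckdb  # pip install duckdb; hf:// paths need a recent DuckDB

# Illustrative dataset path: swap in any public Hub dataset. The `@~parquet`
# ref targets the Hub's auto-converted Parquet files for that dataset.
rel = duckdb.sql(
    "SELECT * FROM 'hf://datasets/OALL/ALRAGE@~parquet/**/*.parquet' LIMIT 5"
)
print(rel.fetchall())
```

The Space wraps this kind of query behind a chatbot that translates natural-language questions into the SQL.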



reacted to Salama1429's post with 👍 4 months ago
📚 Introducing the 101 Billion Arabic Words Dataset

🌍 Exciting Milestone in Arabic Language Technology! #NLP #ArabicLLM #LanguageModels

🚀 Why It Matters:
1. 🌟 Large Language Models (LLMs) have brought transformative changes, primarily in English. It's time for Arabic to shine!
2. 🎯 This project addresses the critical challenge of bias in Arabic LLMs due to reliance on translated datasets.

🔍 Approach:
1. 💪 Undertook a massive data mining initiative focusing exclusively on Arabic from Common Crawl WET files.
2. 🧹 Employed state-of-the-art cleaning and deduplication processes to maintain data quality and uniqueness (both steps are sketched below).
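
As a rough illustration of this two-step recipe (an editor's sketch, not the project's actual pipeline), the snippet below pulls mostly-Arabic records out of a locally downloaded Common Crawl WET file with the warcio library and drops exact duplicates by content hash. The filename, the 0.8 Arabic-ratio threshold, and the exact-hash deduplication (production pipelines typically use fuzzy methods such as MinHash) are all illustrative choices.

```python
import hashlib
import re

from warcio.archiveiterator import ArchiveIterator  # pip install warcio

ARABIC_CHAR = re.compile(r'[\u0600-\u06FF]')  # basic Arabic Unicode block

def arabic_ratio(text: str) -> float:
    """Fraction of alphabetic characters that are Arabic letters."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(bool(ARABIC_CHAR.match(c)) for c in letters) / len(letters)

seen = set()  # exact content-hash dedup; a stand-in for MinHash/LSH at scale

with open('CC-MAIN-example.warc.wet.gz', 'rb') as stream:  # illustrative file
    for record in ArchiveIterator(stream):     # warcio handles the gzip layer
        if record.rec_type != 'conversion':    # WET plain-text records
            continue
        text = record.content_stream().read().decode('utf-8', errors='ignore')
        if arabic_ratio(text) < 0.8:           # keep mostly-Arabic documents
            continue
        digest = hashlib.sha1(text.encode('utf-8')).hexdigest()
        if digest not in seen:                 # drop exact duplicates
            seen.add(digest)
            print(record.rec_headers.get_header('WARC-Target-URI'))
```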

📈 Impact:
1. 🏆 Created the largest Arabic dataset to date, with 101 billion words.
2. 📝 Enables the development of Arabic LLMs that are linguistically and culturally accurate.
3. 🌍 Sets a global benchmark for future Arabic language research.


🔗 Paper: https://lnkd.in/dGAiaygn
🔗 Dataset: https://lnkd.in/dGTMe5QV

🔄 Share your thoughts and let's drive the future of Arabic NLP together!

#DataScience #MachineLearning #ArtificialIntelligence #Innovation #ArabicData
New activity in silma-ai/silma-ar-custom-eval 4 months ago

Technical Report

#2 opened 4 months ago by Manel-Hik
reacted to alielfilali01's post with 🤗 7 months ago
I'm officially considered #gpu_poor 💀
But I'm #data_rich 😎
upvoted an article 7 months ago

Introducing the Open Arabic LLM Leaderboard

upvoted an article 8 months ago

🦙⚗️ Using Llama3 and distilabel to build fine-tuning datasets

By dvilasuero
reacted to dvilasuero's post with ❤️ 12 months ago
👋 Hi there!

This is my very first post.

I'll use it to share some old news: a math preference dataset for DPO!

I created this dataset some time ago while we were developing distilabel (https://github.com/argilla-io/distilabel).

A few days ago we found out people are actually using it! So I'll use this post to explain how I built it, in case it's useful for the community.

1. I used distilabel's SelfInstruct-inspired task to generate instructions about different math topics. I curated the instructions with Argilla (on Spaces!).
2. Then I used a distilabel Pipeline to build a preference dataset using gpt3.5 as generator and gpt4 as labeller. If I recall correctly, I used our JudgeLM implementation (see https://distilabel.argilla.io/latest/technical-reference/tasks/#judgelmtask).

(see the screenshot with the dataset in the Argilla UI)

3. Then I just binarized into chosen/rejected pairs (a quick sketch follows below) and voilà:

argilla/distilabel-math-preference-dpo
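
As a minimal sketch of that binarization step (my own illustration; the field names `instruction`, `generations`, and `ratings` are assumptions about the intermediate dataset's schema, not the exact one used here):

```python
# Editor's sketch: turn rated generations into DPO-style chosen/rejected pairs.
# Assumes each example carries parallel `generations` and `ratings` lists
# produced by the labelling step; the field names are illustrative.
def binarize(example: dict) -> dict:
    ranked = sorted(
        zip(example["ratings"], example["generations"]),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return {
        "prompt": example["instruction"],
        "chosen": ranked[0][1],     # highest-rated generation
        "rejected": ranked[-1][1],  # lowest-rated generation
    }

# With Hugging Face datasets, this could run as: dataset.map(binarize)
```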

The funny thing is that I used this to do a second DPO run over Notus-7B. I hoped to see an improvement on math/reasoning skills but it actually improved in STEM and Humanities and did worse on Math 🤣.

In conclusion, this dataset was only a quick experiment. I'm happy to see the community found it useful. Data for DPO and fine-tuning is still a mystery; let's unveil these mysteries in 2024 together!

Follow me for the most exciting datasets for LLMs (and maybe some great, small, efficient models). I plan to announce all Argilla open-source work here!