Prithiv Sakthi (prithivMLmods)

AI & ML interests

computer vision, multimodality, adapters @strangerzonehf @strangerguardhf

Recent Activity

updated a collection about 6 hours ago
Net Classifications

Organizations

Stanford AI, DataScienceEngineering, AI FILMS, Samsung Electronics, MISATO-dataset, GEM benchmark, OpenGVLab, MusicAI, BigScience Biomedical Datasets, OpenVINO Toolkit, LLMs, ONNXConfig for all, Gradio-Themes-Party, scikit-learn, lora concepts library, Open-Source AI Meetup, Kornia AI, Université Dauphine-PSL, Platzi Community, Tune a video concepts library, Keras Dreambooth Event, Stable Diffusion Dreambooth Concepts Library, The Waifu Research Department, Musika, Blog-explorers, OpenSky, AI Tamil Nadu, OpenLLM France, huggingPartyParis, Team Tonic, That Time I got Reincarnated as a Hugging Face Organization, LocalLLaMA, Major TOM, MLX Community, C4AI Community, M4-ai, Chinese LLMs on Hugging Face, ONNX Community, Dataset Tools, Nerdy Face, Stranger Zone, open/ acc, Data Is Better Together Contributor, None yet, Taiwan Llama, Doge Face, Stranger Guard, Twinkle AI

prithivMLmods's activity

replied to their post 2 days ago

@JLouisBiz

But the model is licensed under Llama 3.2, which the base model is also built on. The License Rights and Redistribution section grants the right to create derivative works of and make modifications to the Llama Materials, provided that 'Built with Llama' is displayed prominently wherever the model is used. I believe I have credited it properly and have not violated anything in the license.

Provide a copy of the license, include 'Llama' at the beginning of the model's name, and mention in the model's 'About' section that it is built on Llama.

" If you use the Llama Materials or any outputs or results of the Llama Materials to ๐—ฐ๐—ฟ๐—ฒ๐—ฎ๐˜๐—ฒ, ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป, ๐—ณ๐—ถ๐—ป๐—ฒ ๐˜๐˜‚๐—ป๐—ฒ, ๐—ผ๐—ฟ
๐—ผ๐˜๐—ต๐—ฒ๐—ฟ๐˜„๐—ถ๐˜€๐—ฒ ๐—ถ๐—บ๐—ฝ๐—ฟ๐—ผ๐˜ƒ๐—ฒ ๐—ฎ๐—ป ๐—”๐—œ ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น, ๐˜„๐—ต๐—ถ๐—ฐ๐—ต ๐—ถ๐˜€ ๐—ฑ๐—ถ๐˜€๐˜๐—ฟ๐—ถ๐—ฏ๐˜‚๐˜๐—ฒ๐—ฑ ๐—ผ๐—ฟ ๐—บ๐—ฎ๐—ฑ๐—ฒ ๐—ฎ๐˜ƒ๐—ฎ๐—ถ๐—น๐—ฎ๐—ฏ๐—น๐—ฒ, ๐˜†๐—ผ๐˜‚ ๐˜€๐—ต๐—ฎ๐—น๐—น ๐—ฎ๐—น๐˜€๐—ผ ๐—ถ๐—ป๐—ฐ๐—น๐˜‚๐—ฑ๐—ฒ โ€œ๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎโ€
at the beginning of any such AI model name. "

Please refer to the Llama 3.2 License [ https://huggingface.co./meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt ], specifically the License Rights and Redistribution section, clauses (a) and (b).

posted an update 3 days ago
Luna, a single-speaker text-to-speech model, features a radio/ATCOSIM-style sound with a female voice. It offers authentic radio-podcast noise and empathetic speech generation, and is fine-tuned from Orpheus, a state-of-the-art Llama-based speech generation model. 🎙️

+ Model : prithivMLmods/Llama-3B-Mono-Luna
+ Collection : prithivMLmods/clean-radio-mono-voice-67e76fe1b3a87cc3bccef803
+ Reference ft : https://github.com/canopyai/Orpheus-TTS
+ Base Model : canopylabs/orpheus-3b-0.1-ft

I also tried some other clean-voice single-speaker models based on Orpheus. If you're interested, check out the collection.

🔉 Try the Mono Luna demo here: http://colab.research.google.com/drive/1K0AAIOKDE5XE0znxXaiiUJvPSpFveteK
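
For anyone who wants to skip the notebook, here's a minimal local-inference sketch, assuming the Orpheus-TTS Python package (orpheus_tts) and that Mono-Luna keeps the base model's generate_speech() API; the voice name is an assumption too:

```python
# Minimal sketch, assuming the Orpheus-TTS package's API and that the
# Mono-Luna finetune loads like the base Orpheus model. The "luna" voice
# name is an assumption; check the model card for the exact tag.
import wave
from orpheus_tts import OrpheusModel

model = OrpheusModel(model_name="prithivMLmods/Llama-3B-Mono-Luna")

prompt = "Welcome back to the midnight radio show <sigh> let's take your calls."
with wave.open("luna.wav", "wb") as wf:
    wf.setnchannels(1)       # mono
    wf.setsampwidth(2)       # 16-bit PCM
    wf.setframerate(24000)   # Orpheus decodes audio at 24 kHz
    for chunk in model.generate_speech(prompt=prompt, voice="luna"):
        wf.writeframes(chunk)  # streamed raw PCM chunks
```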
reacted to AdinaY's post with 🔥 4 days ago
Let's check out the latest releases from the Chinese community in March!

๐Ÿ‘‰ https://huggingface.co./collections/zh-ai-community/march-2025-releases-from-the-chinese-community-67c6b479ebb87abbdf8e2e76


✨ MLLM
> R1-Omni by Alibaba Tongyi - 0.5B
> Qwen2.5-Omni by Alibaba Qwen - 7B, Apache 2.0

🖼️ Video
> CogView-4 by ZhipuAI - Apache 2.0
> HunyuanVideo-I2V by Tencent Hunyuan
> Open-Sora 2.0 - 11B, Apache 2.0
> Step-Video-TI2V by StepFun AI - 30B, MIT license

🎵 Audio
> DiffRhythm - Apache 2.0
> Spark-TTS by SparkAudio - 0.5B

⚡️ Image/3D
> Hunyuan3D-2mv/2mini (0.6B) by @TencentHunyuan
> FlexWorld by ByteDance - MIT license
> Qwen2.5-VL-32B-Instruct by Alibaba Qwen - Apache 2.0
> TripoSG (1.5B)/TripoSF by VastAIResearch - MIT license
> InfiniteYou by ByteDance
> LHM by Alibaba AIGC team - Apache 2.0
> SpatialLM by ManyCore

🧠 Reasoning
> QwQ-32B by Alibaba Qwen - Apache 2.0
> Skywork-R1V - 38B, MIT license
> RWKV7-G1 by RWKV AI - 0.1B pure RNN reasoning model, Apache 2.0
> Fin-R1 by SUFE AIFLM Lab - financial reasoning

🔍 LLM
> DeepSeek-V3-0324 by DeepSeek - MIT license
> Babel by Alibaba DAMO - 9B/83B, 25 languages
reacted to AdinaY's post with 🔥 6 days ago
A new OPEN Omni model just dropped by @Alibaba_Qwen on the hub 🔥🤯

Qwen2.5-Omni: a 7B end-to-end multimodal model
Qwen/Qwen2.5-Omni-7B

✨ Thinker-Talker architecture
✨ Real-time voice & video chat
✨ Natural speech generation
✨ Handles text, image, audio & video
reacted to tomaarsen's post with 🔥 6 days ago
โ€ผ๏ธSentence Transformers v4.0 is out! You can now train and finetune reranker models with multi-GPU training, bf16 support, loss logging, callbacks & much more. I also prove that finetuning on your domain helps much more than you might think.

1๏ธโƒฃ Reranker Training Refactor
Reranker models can now be trained using an extensive trainer with a lot of powerful features:
- MultiGPU Training (Data Parallelism (DP) and Distributed Data Parallelism (DDP))
- bf16 training support; loss logging
- Evaluation datasets + evaluation loss
- Improved callback support + an excellent Weights & Biases integration
- Gradient checkpointing, gradient accumulation
- Model card generation
- Resuming from a training checkpoint without performance loss
- Hyperparameter Optimization
and much more!

Read my detailed blogpost to learn about the components that make up this new training approach: https://huggingface.co./blog/train-reranker
Notably, the release is fully backwards compatible: all deprecations are soft, meaning that they still work but emit a warning informing you how to upgrade.

2๏ธโƒฃ New Reranker Losses
- 11 new losses:
- 2 traditional losses: BinaryCrossEntropy and CrossEntropy
- 2 distillation losses: MSE and MarginMSE
- 2 in-batch negatives losses: MNRL (a.k.a. InfoNCE) and CMNRL
- 5 learning to rank losses: Lambda, p-ListMLE, ListNet, RankNet, ListMLE

3๏ธโƒฃ New Reranker Documentation
- New Training Overview, Loss Overview, API Reference docs
- 5 new, 1 refactored training examples docs pages
- 13 new, 6 refactored training scripts
- Migration guides (2.x -> 3.x, 3.x -> 4.x)

4๏ธโƒฃ Blogpost
Alongside the release, I've written a blogpost where I finetune ModernBERT on a generic question-answer dataset. My finetunes easily outperform all general-purpose reranker models, even models 4x as big. Finetuning on your domain is definitely worth it: https://huggingface.co./blog/train-reranker

See the full release notes here: https://github.com/UKPLab/sentence-transformers/releases/v4.0.1
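
For a taste of the new API, here's a minimal reranker-finetuning sketch, assuming the v4 CrossEncoderTrainer surface; the base model choice and the tiny dataset are illustrative:

```python
# Minimal sketch of reranker finetuning with Sentence Transformers v4,
# assuming the CrossEncoderTrainer API; dataset and column names below
# are illustrative, not from the release itself.
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("answerdotai/ModernBERT-base", num_labels=1)

# (query, passage) pairs with binary relevance labels
train_dataset = Dataset.from_dict({
    "query": ["how do rerankers work?"],
    "passage": ["A reranker scores each query-passage pair directly."],
    "label": [1.0],
})

args = CrossEncoderTrainingArguments(
    output_dir="reranker-finetune",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    bf16=True,           # new in v4: bf16 training support
    logging_steps=100,   # new in v4: loss logging
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=BinaryCrossEntropyLoss(model),  # one of the traditional losses
)
trainer.train()
```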
replied to clem's post 6 days ago
posted an update 6 days ago
Dropping some new Journey Art and Realism adapters for Flux.1-Dev, including Thematic Arts, 2021 Memory Adapters, Thread of Art, Black of Art, and more. For more details, visit the model card on Stranger Zone HF 🤗

+ Black-of-Art-Flux : strangerzonehf/Black-of-Art-Flux
+ Thread-of-Art-Flux : strangerzonehf/Thread-of-Art-Flux
+ 2021-Art-Flux : strangerzonehf/2021-Art-Flux
+ 3d-Station-Toon : strangerzonehf/3d-Station-Toon
+ New-Journey-Art-Flux : strangerzonehf/New-Journey-Art-Flux
+ Casual-Pencil-Pro : strangerzonehf/Casual-Pencil-Pro
+ Realism-H6-Flux : strangerzonehf/Realism-H6-Flux

- Repository Page : https://huggingface.co./strangerzonehf

The best dimensions and inference settings for optimal results are as follows: A resolution of 1280 x 832 with a 3:2 aspect ratio is recommended for the best quality, while 1024 x 1024 with a 1:1 aspect ratio serves as the default option. For inference, the recommended number of steps ranges between 30 and 35 to achieve optimal output.
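
As a quick reference, here's a minimal diffusers sketch at those recommended settings; the adapter choice is illustrative, and any of the models listed above should load the same way:

```python
# Minimal sketch: running one of these adapters with diffusers' FluxPipeline
# at the recommended settings (1280 x 832, ~30-35 steps).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("strangerzonehf/New-Journey-Art-Flux")  # illustrative pick

image = pipe(
    "journey art, a lighthouse on a cliff at dusk",
    width=1280, height=832,     # recommended 3:2 aspect ratio
    num_inference_steps=30,     # recommended range: 30-35
).images[0]
image.save("journey-art.png")
```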
posted an update 8 days ago
Dropping downstream image-classification tasks that use newly initialized classifier parameters and weights ([classifier.bias & classifier.weight]) to support domain-specific image classification. Based on siglip2-base-patch16-224 and DomainNet (single-domain, multi-source adaptation), with Fashion-MNIST & more for experimental testing. 🧤☄️

Fashion-Mnist : prithivMLmods/Fashion-Mnist-SigLIP2
Age-Classification : prithivMLmods/Age-Classification-SigLIP2
Mnist-Digits : prithivMLmods/Mnist-Digits-SigLIP2
Multisource-121 : prithivMLmods/Multisource-121-DomainNet
Painting-126 : prithivMLmods/Painting-126-DomainNet
Sketch-126 : prithivMLmods/Sketch-126-DomainNet
Clipart-126 : prithivMLmods/Clipart-126-DomainNet

Models are trained with different parameter settings for experimental purposes only, with the intent of further development. Refer to the model pages for instructions on running them with Transformers 🤗, or see the quick sketch below.
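
A minimal sketch using the transformers image-classification pipeline; the image path is illustrative:

```python
# Quick sketch: zero-setup inference with the transformers pipeline.
# The checkpoint is one of the models listed above.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="prithivMLmods/Fashion-Mnist-SigLIP2",
)
print(classifier("shirt.jpg"))  # [{'label': ..., 'score': ...}, ...]
```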

Collection : prithivMLmods/domainnet-0324-67e0e3c934c03cc40c6c8782

Citations : SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features https://arxiv.org/pdf/2502.14786 & Moment Matching for Multi-Source Domain Adaptation : https://arxiv.org/pdf/1812.01754

posted an update 12 days ago
Play with Orpheus TTS, a Llama-based Speech-LLM designed for high-quality, empathetic text-to-speech generation. This model has been fine-tuned to deliver human-level speech synthesis 🔥🗣️

👉 GitHub [ Demo ] : https://github.com/PRITHIVSAKTHIUR/Orpheus-TTS-Edge

The demo supports both text-to-speech and LLM responses rendered as speech.

> voice: tara, dan, emma, josh
> emotion: <laugh>, <chuckle>, <sigh>, <cough>, <sniffle>, <groan>, <yawn>, <gasp>.

🥠 Orpheus-3b-0.1-ft
Model Page: canopylabs/orpheus-3b-0.1-ft
Colab Inference Notebook: https://colab.research.google.com/drive/1KhXT56UePPUHhqitJNUxq63k-pQomz3N?usp=sharing

🥠 Finetune [ orpheus-3b-0.1-pretrained ]
Resource: https://github.com/canopyai/Orpheus-TTS/tree/main/finetune

🥠 Model-releases:
https://canopylabs.ai/model-releases
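
A tiny sketch of how the voice and emotion tags compose into a prompt; the "voice: text" format here is assumed from the demo's conventions:

```python
# Sketch of Orpheus-style prompting: pick a voice, then drop emotion tags
# inline where the expression should occur. The exact prompt format is an
# assumption; see the demo repo above for the canonical one.
voice = "tara"  # one of: tara, dan, emma, josh
text = "I stayed up way too late reading <yawn> and I regret nothing <laugh>."
prompt = f"{voice}: {text}"
print(prompt)  # -> "tara: I stayed up way too late reading <yawn> ..."
```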
reacted to jsulz's post with 🤗 13 days ago
If you've been following along with the Xet Team's (https://huggingface.co./xet-team) work, you know we've been working to migrate the Hugging Face Hub from Git LFS to Xet.

Recently, we launched a waitlist to join the movement to Xet (join here! https://huggingface.co./join/xet ) but getting to this point was a journey.

From the initial proof of concept in August, to launching on the Hub internally, to migrating a set of repositories and routing a small chunk of download traffic on the Hub through our infrastructure, every step of the way has been full of challenges, big and small, and well worth the effort.

Over the past few weeks, with real traffic flowing through our services, we've tackled some truly gnarly issues (unusual upload/download patterns, memory leaks, load imbalances, and more) and resolved each without major disruptions.

If you're curious about how this sliver of Hub infrastructure looked as we routed traffic through it for the first time (and want a deep dive full of Grafana and Kibana charts 🤓), I have a post for you.

Here's an inside look into the day of our first migrations and the weeks following, where we pieced together solutions in real time.

https://huggingface.co./blog/xet-on-the-hub
reacted to onekq's post with 🚀 15 days ago
Introducing 🎉 OneSQL-v0.1 🥳, our first text-to-SQL model based on Qwen2.5-Coder. This model has achieved an EX score of 63.33 on the BIRD leaderboard (https://bird-bench.github.io/).

The model family includes 7B and 32B variants,
onekq-ai/onesql-v01-qwen-67d8e3eb1611c5532bb90c5f
and can also be found on Ollama (https://ollama.com/onekq/OneSQL-v0.1-Qwen).
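
Here's a minimal generation sketch with transformers; the exact checkpoint id and prompt format are assumptions, so check the model cards in the collection for the official usage:

```python
# Minimal text-to-SQL sketch with transformers. The checkpoint name and
# schema-plus-question prompt are assumptions for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="onekq-ai/OneSQL-v0.1-Qwen-7B")

schema = "CREATE TABLE employees (id INT, name TEXT, salary INT);"
question = "What are the names of the three highest-paid employees?"
prompt = f"{schema}\n-- {question}\nSELECT"

out = generator(prompt, max_new_tokens=64, return_full_text=False)
print("SELECT" + out[0]["generated_text"])
```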

My goal is to make OneSQL the most usable open-weights model for text-to-SQL. I'm currently working on best practices to help users use this model the right way and avoid pitfalls. After that, I plan to train the next version to push for a higher EX score.

Enjoy this model and feel free to share comments/questions 🤗
reacted to mlabonne's post with 🚀 16 days ago
✂️ Gemma 3 Abliterated

I noticed that Gemma 3 was much more resilient to refusal removal than other models like Qwen 2.5.

I experimented with different recipes and improved the abliteration technique I wrote about last year.

It's still experimental but the refusal rate is super low in my tests. Enjoy!

mlabonne/gemma-3-4b-it-abliterated
mlabonne/gemma-3-12b-it-abliterated
mlabonne/gemma-3-27b-it-abliterated

reacted to Kseniase's post with 🔥 16 days ago
15 types of attention mechanisms

Attention mechanisms allow models to dynamically focus on specific parts of their input when performing tasks. In our recent article, we discussed Multi-Head Latent Attention (MLA) in detail and now it's time to summarize other existing types of attention.

Here is a list of 15 types of attention mechanisms used in AI models:

1. Soft attention (Deterministic attention) -> Neural Machine Translation by Jointly Learning to Align and Translate (1409.0473)
Assigns a continuous weight distribution over all parts of the input. It produces a weighted sum of the input using attention weights that sum to 1.

2. Hard attention (Stochastic attention) -> Effective Approaches to Attention-based Neural Machine Translation (1508.04025)
Makes a discrete selection of some part of the input to focus on at each step, rather than attending to everything.

3. Self-attention -> Attention Is All You Need (1706.03762)
Each element in the sequence "looks" at other elements and "decides" how much to borrow from each of them for its new representation.

4. Cross-Attention (Encoder-Decoder attention) -> Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation (2104.08771)
The queries come from one sequence and the keys/values come from another sequence. It allows a model to combine information from two different sources.

5. Multi-Head Attention (MHA) -> Attention Is All You Need (1706.03762)
Multiple attention "heads" are run in parallel. The model computes several attention distributions (heads), each with its own set of learned projections of queries, keys, and values.

6. Multi-Head Latent Attention (MLA) -> DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model (2405.04434)
Extends MHA by incorporating a latent space where attention heads can dynamically learn different latent factors or representations.

7. Memory-Based attention -> End-To-End Memory Networks (1503.08895)
Involves an external memory and uses attention to read from and write to this memory.
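
To ground the list, here's a minimal NumPy sketch of scaled dot-product self-attention (type 3 above), the core computation most of the other variants build on:

```python
# Illustrative sketch of scaled dot-product self-attention in plain NumPy:
# every position attends to every other position via softmax-normalized scores.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: rows sum to 1
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                          # 5 tokens, d_model=16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (5, 8)
```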

See other types in the comments 👇
posted an update 18 days ago
Hey Guys! One Small Announcement 🤗
Stranger Zone now accepts LoRA requests!

โœ๏ธRequest : strangerzonehf/Request-LoRA [ or ] strangerzonehf/Request-LoRA#1

Page : https://huggingface.co./strangerzonehf

Describe the artistic properties by posting sample images or links to similar images in the request discussion. If the adapters you're asking for are truly creative and safe for work, I'll train and upload the LoRA to the Stranger Zone repo!

Thank you!
posted an update 20 days ago
Gemma-3-4B : Image and Video Inference 🖼️🎥

🧤 Space: prithivMLmods/Gemma-3-Multimodal
🥠 Git : https://github.com/PRITHIVSAKTHIUR/Gemma-3-Multimodal

@gemma3 : {tag + space + 'prompt'}
@video-infer : {tag + space + 'prompt'}

+ Gemma3-4B : google/gemma-3-4b-it
+ By default, it runs : prithivMLmods/Qwen2-VL-OCR-2B-Instruct

Gemma 3 Technical Report : https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf
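
A minimal sketch of plain image inference with transformers; the Space adds the tag routing (@gemma3 / @video-infer) on top of a flow like this, and the image URL is illustrative:

```python
# Minimal sketch of Gemma 3 image inference via transformers'
# image-text-to-text pipeline. The image URL is a placeholder.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/gemma-3-4b-it")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"])
```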
posted an update 21 days ago
reacted to Smooke's post with 🧠 21 days ago
Hallucinations Blog Research Reading List:

Hallucinations Are A Feature of AI, Humans Are The Bug https://hackernoon.com/hallucinations-are-a-feature-of-ai-humans-are-the-bug

Overcome LLM Hallucinations Using Knowledge Bases https://hackernoon.com/overcome-llm-hallucinations-using-knowledge-bases

How to Detect and Minimise Hallucinations in AI Models https://hackernoon.com/how-to-detect-and-minimise-hallucinations-in-ai-models

Predictive Coding, AI: Modeling Placebos in RCTs for Psychedelics and Antidepressants https://hackernoon.com/predictive-coding-ai-modeling-placebos-in-rcts-for-psychedelics-and-antidepressants

A Simple Method to Improving the Accuracy of Your RAG System https://hackernoon.com/say-goodbye-to-ai-hallucinations-a-simple-method-to-improving-the-accuracy-of-your-rag-system

Gen AI Hallucinations: The Good, the Bad, and the Costly https://hackernoon.com/gen-ai-hallucinations-the-good-the-bad-and-the-costly

Why Do LLMs Hallucinate? https://hackernoon.com/why-do-llms-hallucinate

Truth Serum For The AI Age: Factiverse To Fight Fake News And Hallucinations https://hackernoon.com/truth-serum-for-the-ai-age-factiverse-to-fight-fake-news-and-hallucinations

A Secret Technique To Sidestepping LLM Hallucinations https://hackernoon.com/a-secret-technique-to-sidestepping-llm-hallucinations

The Importance of Explainability in AI (XAI) https://hackernoon.com/tackling-ai-hallucinations-the-importance-of-explainability-in-ai-xai

What You Need to Know About Amazon Bedrock's RAG Evaluation and LLM-as-a-Judge for Advancing AI https://hackernoon.com/what-you-need-to-know-about-amazon-bedrocks-rag-evaluation-and-llm-as-a-judge-for-advancing-ai

I Over Relied on AI and Those Shortcuts Cost Me https://hackernoon.com/i-over-relied-on-ai-and-those-shortcuts-cost-me

AI's Non-Determinism, Hallucinations, And... Cats? https://hackernoon.com/ais-non-determinism-hallucinations-and-cats

More to read --> https://hackernoon.com/search?query=hallucinations

reacted to Kseniase's post with 🧠 23 days ago
5 New implementations of Diffusion Models

Diffusion models are widely used for image and video generation but remain underexplored in text generation, where autoregressive models (ARMs) dominate. Unlike ARMs, which produce tokens sequentially, diffusion models iteratively refine noise through denoising steps, offering greater flexibility and speed.
Recent advancements show a shift toward using diffusion models in place of, or alongside, ARMs. Researchers also combine strengths from both methods and integrate autoregressive concepts into diffusion.

Here are 5 new implementations of diffusion models:

1. Mercury family of diffusion LLMs (dLLMs) by Inception Labs -> https://www.inceptionlabs.ai/news
It applies diffusion to text and code data, enabling sequence generation 10x faster than today's top LLMs. Mercury Coder, now available, can run at over 1,000 tokens/sec on NVIDIA H100s.

2. Diffusion of Thoughts (DoT) -> Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models (2402.07754)
Integrates diffusion models with Chain-of-Thought. DoT allows reasoning steps to diffuse gradually over time. This flexibility enables balancing between reasoning quality and computational cost.

3. LLaDA -> Large Language Diffusion Models (2502.09992)
Shows diffusion models' potential in replacing ARMs. Trained with pre-training and SFT, LLaDA masks tokens, predicts them via a Transformer, and optimizes a likelihood bound. LLaDA matches key LLM skills and surpasses GPT-4o in reversal poetry.

4. LanDiff -> The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation (2503.04606)
This hybrid text-to-video model combines autoregressive and diffusion paradigms, introducing a semantic tokenizer, an LM for token generation, and a streaming diffusion model. LanDiff outperforms models like Sora.

5. General Interpolating Discrete Diffusion (GIDD) -> Generalized Interpolating Discrete Diffusion (2503.04482)
A flexible noising process with a novel diffusion ELBO enables combining masking and uniform noise, allowing diffusion models to correct mistakes, where ARMs struggle.
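
To make the masked-diffusion idea concrete, here's a toy sketch of the iterative unmasking loop that LLaDA-style models run at inference time; the random "predictor" is a stand-in for a real Transformer, and only the control flow is the point:

```python
# Toy sketch of LLaDA-style iterative denoising: start fully masked, let a
# predictor propose a token for every masked slot, keep only the most
# confident fills, and repeat. The random predictor is a placeholder.
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
MASK = "<mask>"

def predict(seq):
    # stand-in predictor: (token, confidence) for each masked position
    return {i: (random.choice(VOCAB), random.random())
            for i, t in enumerate(seq) if t == MASK}

seq = [MASK] * 6
for step in range(4):                 # a few denoising steps
    preds = predict(seq)
    if not preds:
        break
    # unmask the most confident half of positions; the rest stay masked
    ranked = sorted(preds, key=lambda i: preds[i][1], reverse=True)
    for i in ranked[: max(1, len(ranked) // 2)]:
        seq[i] = preds[i][0]
    print(f"step {step}: {' '.join(seq)}")
```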
reacted to clem's post with 🔥 24 days ago
I was chatting with @peakji, one of the cofounders of Manus AI, who told me he was on Hugging Face (very cool!).

He shared an interesting insight, which is that agentic capabilities might be more of an alignment problem than a foundational capability issue. Similar to the difference between GPT-3 and InstructGPT, some open-source foundation models are simply trained to 'answer everything in one response regardless of the complexity of the question' - after all, that's the user preference in chatbot use cases. Just a bit of post-training on agentic trajectories can make an immediate and dramatic difference.

As a thank you to the community, he shared 100 invite codes, first come first served. Just use "HUGGINGFACE" to get access!
reacted to davidberenstein1957's post with ❤️ 26 days ago