HindiLLM-Small

HindiLLM-Small is an autoregressive language model for Hindi text in the Devanagari script. With 124 million parameters and a 1024-token context window, it serves as a base model that can be fine-tuned for downstream tasks such as sentiment analysis, multi-class classification, and natural language inference.
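A minimal sketch of loading the model with the Hugging Face transformers library is shown below. The repository id used here is a placeholder assumption, not taken from this card; substitute the actual model id when using it.

```python
# Sketch: loading HindiLLM-Small for text generation with transformers.
# NOTE: MODEL_ID is a hypothetical placeholder; replace it with the
# model's actual Hugging Face repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-namespace/HindiLLM-Small"  # placeholder, not verified
CONTEXT_WINDOW = 1024  # tokens, per the model card


def generate(prompt: str, max_new_tokens: int = 50) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    # Truncate the input so it fits the model's 1024-token context window.
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=CONTEXT_WINDOW)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Example Devanagari prompt ("India is a vast country").
    print(generate("भारत एक विशाल देश है"))
```

Since this is a base (pre-trained) model, outputs are raw continuations of the prompt; the fine-tuned task variants described in the paper would be loaded the same way under their own repository ids.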

Citation

If you find our work helpful, please cite our paper:

@InProceedings{10.1007/978-3-031-78172-8_17,
  author    = "Chouhan, Sanjay and Nath, Shubha Brata and Dutta, Aparajita",
  editor    = "Antonacopoulos, Apostolos and Chaudhuri, Subhasis and Chellappa, Rama and Liu, Cheng-Lin and Bhattacharya, Saumik and Pal, Umapada",
  title     = "HindiLLM: Large Language Model for Hindi",
  booktitle = "Pattern Recognition",
  year      = "2025",
  publisher = "Springer Nature Switzerland",
  address   = "Cham",
  pages     = "255--270",
  abstract  = "The advancements in the Large Language Model (LLM) have helped in solving several problems related to language processing. Most of the researches have focused on the English language only, because of its popularity and abundance on the internet. However, a high-performance language model for Hindi and other Indic languages is lacking in the literature. In this work, we have pre-trained two autoregressive LLM models for the Hindi language, namely HindiLLM-Small and HindiLLM-Medium. We use a two-step process comprising unsupervised pre-training and supervised fine-tuning. First, we create a large and high-quality text corpus for unsupervised pre-training. Next, we train a Byte-Pair Encoding, named HindiLLM tokenizer, using the pre-training text data. We then perform training on the unlabeled data, known as the pre-training step, to get the HindiLLM base models. Furthermore, we perform fine-tuning of the HindiLLM base models for different tasks like sentiment analysis, text classification, natural language inference, and multiple choice question-answer on popular labeled datasets to measure the real-world performance. The evaluation shows that the HindiLLM-based fine-tuned models outperform several models in most of the language related tasks.",
  isbn      = "978-3-031-78172-8"
}