
LLaMA 1.9B - Kazakh Causal Language Model


Model Description

This model is a 1.9-billion-parameter Kazakh version of the LLaMA architecture, trained for causal language modeling. The training dataset contains some Russian text mixed in, which occasionally causes the model to generate Russian output. Despite this, the model shows promising results. Future work may include retraining on a cleaner dataset and fine-tuning the model for downstream NLP tasks.
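
A minimal usage sketch with the Hugging Face transformers library (the repo id comes from the citation below; the prompt and generation parameters are illustrative assumptions, not the authors' recommended settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repository is gated: accept the conditions on the model page and
# authenticate (e.g. `huggingface-cli login`) before downloading.
tokenizer = AutoTokenizer.from_pretrained("nur-dev/llama-1.9B-kaz")
model = AutoModelForCausalLM.from_pretrained("nur-dev/llama-1.9B-kaz")

prompt = "Қазақстан — "  # illustrative Kazakh prompt ("Kazakhstan is ...")
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings below are assumptions for demonstration only.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```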

Training Setup

  • Training Examples: Over 5.3 million examples
  • Training Hardware: Two NVIDIA A100 GPUs (80GB each)
  • Training Status: Ongoing, currently partway through the first epoch
  • Learning-Rate Schedule: Cosine with restarts (see the sketch after this list)
  • Parallelism: Distributed Data Parallel (DDP)
  • Number of Warmup Steps: 8000
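
A rough sketch, under stated assumptions, of how the settings above could be wired together with PyTorch DDP and the transformers scheduler helper. Only the cosine-with-restarts schedule, the 8,000 warmup steps, and DDP on two GPUs are documented; the optimizer choice, learning rate, total step count, cycle count, and data pipeline are hypothetical:

```python
import os

import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import (
    AutoModelForCausalLM,
    get_cosine_with_hard_restarts_schedule_with_warmup,
)

# Assumes a launch such as `torchrun --nproc_per_node=2 train.py`
# (one process per A100 GPU).
torch.distributed.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = AutoModelForCausalLM.from_pretrained("nur-dev/llama-1.9B-kaz").cuda()
ddp_model = DDP(model, device_ids=[local_rank])

# AdamW and the learning rate are assumptions; only the warmup steps
# and the cosine-with-restarts shape are documented above.
optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=3e-4)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=8_000,
    num_training_steps=200_000,  # assumed total steps
    num_cycles=3,                # "with restarts": several cosine cycles
)

# `dataloader` is assumed: a DataLoader over the 5.3M training examples
# with a DistributedSampler, yielding input_ids/attention_mask/labels.
for batch in dataloader:
    outputs = ddp_model(**{k: v.cuda() for k, v in batch.items()})
    outputs.loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```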

Model Authors

Name: Kadyrbek Nurgali

Citation

```bibtex
@misc{nurgali_kadyrbek_2024,
  author    = {NURGALI, Kadyrbek},
  title     = {llama-1.9B-kaz (Revision 299ebbb)},
  year      = 2024,
  url       = {https://huggingface.co./nur-dev/llama-1.9B-kaz},
  doi       = {10.57967/hf/3043},
  publisher = {Hugging Face}
}
```
Model size: 1.94B parameters
Tensor type: F32 (stored in Safetensors format)
