
# bllossom_polyglot-12.8b_0725

This model is a fine-tuned version of EleutherAI/polyglot-ko-12.8b on the LIMA dataset (translated into Korean).
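The checkpoint should load like any other causal LM in 🤗 Transformers. A minimal usage sketch, assuming the repository id matches the card title (the owning namespace is not stated here and must be prepended):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the card title; prepend the owning org/user.
model_id = "bllossom_polyglot-12.8b_0725"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 12.8B parameters; half precision to reduce memory
    device_map="auto",          # requires the accelerate package
)

prompt = "한국어로 자기소개를 해 주세요."  # "Please introduce yourself in Korean."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```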

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200

## Framework versions

- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
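The derived values among these hyperparameters follow directly from the per-step settings; a quick sanity check in Python, assuming a single training device and the usual rounding convention for warmup:

```python
# Effective batch size = per-device batch size x gradient-accumulation steps
# (values copied from the hyperparameter list; single device assumed).
train_batch_size = 4
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size above

# With a constant schedule, warmup_ratio only determines the warmup length.
training_steps = 200
lr_scheduler_warmup_ratio = 0.03
warmup_steps = round(training_steps * lr_scheduler_warmup_ratio)
print(warmup_steps)  # 6
```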
