
# Weights from the Llama-3-8B Self-Align Experiments

[WEIGHTS TO BE UPLOADED ONCE DONE]

## Training Config

The `config.yaml` should be used with `accelerate launch`, and `run.sh` was used to launch training via the StarCoder2 Self-Align training script. A few tweaks were made to fit training into 48GB of VRAM:

- FSDP was used
- `per_device_batch_size` was set to 2
- A learning rate of 3e-6 was used
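For reference, a minimal sketch of what such an Accelerate FSDP `config.yaml` might look like. This is not the card's actual config: the specific choices below (wrap policy, precision, sharded state dicts) are illustrative assumptions, though the keys follow the standard Accelerate FSDP config format.

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
mixed_precision: bf16
num_machines: 1
num_processes: 2  # one process per GPU
fsdp_config:
  fsdp_sharding_strategy: FULL_SHARD          # shard params, grads, and optimizer state
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_offload_params: false
```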

## Environment

- Trained with 2x RTX 4090 GPUs
- 128GB RAM
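A hypothetical `run.sh` along these lines would tie the pieces together; the script name and flag names below are assumptions for illustration, as the actual StarCoder2 Self-Align training script defines its own arguments:

```shell
#!/usr/bin/env bash
# Hypothetical launch sketch: pass the FSDP config to accelerate
# and apply the hyperparameter tweaks noted above.
accelerate launch --config_file config.yaml \
  train.py \
  --per_device_train_batch_size 2 \
  --learning_rate 3e-6
```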