meta-llama/Meta-Llama-3-8B - 2b_2n4m_128bs Compression

This model is a compressed version of meta-llama/Meta-Llama-3-8B, produced with deltazip.

Links: Paper, Compression Tool, Inference Engine (coming soon).

Compression Configuration

  • Base Model: meta-llama/Meta-Llama-3-8B
  • Compression Scheme: 2b_2n4m_128bs
  • Dataset: HuggingFaceH4/ultrachat_200k
  • Dataset Split: train_sft
  • Max Sequence Length: 2048
  • Number of Samples: 256
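The scheme name `2b_2n4m_128bs` presumably encodes 2-bit quantization, 2:4 (n:m) structured sparsity, and a 128 block size; this reading of the name is an assumption, not documented in the card. As an illustration only, a minimal sketch of 2:4 magnitude pruning (keep the 2 largest-magnitude weights in every group of 4):

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Zero the 2 smallest-magnitude entries in every group of 4 (2:4 sparsity)."""
    w = weights.reshape(-1, 4).copy()
    # indices of the two smallest |values| per group of 4
    idx = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, idx, 0.0, axis=1)
    return w.reshape(weights.shape)

row = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.01])
print(prune_2_4(row))  # keeps 0.9, -0.7 in the first group and 0.3, -0.4 in the second
```

This is not deltazip's implementation; it only shows the n:m sparsity pattern the scheme name appears to refer to.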

Sample Output

Prompt:

[{'role': 'user', 'content': 'Who is Alan Turing?'}]

Output:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Who is Alan Turing?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Alan Turing (1912-1954) was a British mathematician, computer scientist, logician, and philosopher who made significant contributions to the development of computer science, artificial intelligence, and cryptography. He is widely considered one of the most influential figures in the history of computer science.

Turing's work had a profound impact on the development of modern computing and artificial intelligence. He is best known for his work on the theoretical foundations of computation, his concept of the universal Turing machine, and his contributions to the development of the first computer programs.

Here are some of
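The output above is the prompt's message list rendered with the Llama-3 chat template (in practice, `tokenizer.apply_chat_template` from Hugging Face transformers produces this formatting). A hand-rolled sketch, for illustration of the special-token layout only:

```python
def format_llama3_prompt(messages):
    """Render a chat message list into the Llama-3 prompt format shown above."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # trailing assistant header cues the model to generate its reply
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([{'role': 'user', 'content': 'Who is Alan Turing?'}])
print(prompt)
```

Running this on the sample prompt reproduces the header lines shown at the start of the output above.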

Evaluation
