---
tags:
- fp8
- vllm
---

# Qwen2-57B-A14B-Instruct-FP8

## Model Overview
- **Model Architecture:** Qwen2-57B-A14B-Instruct
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/17/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen2-57B-A14B-Instruct](https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct).
It achieves an average score of 74.03 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 74.96.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [Qwen2-57B-A14B-Instruct](https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct) to the FP8 data type, ready for inference with vLLM >= 0.5.0.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.

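As a rough back-of-the-envelope check of that ~50% figure (a sketch assuming roughly 57.4B parameters and counting weight storage only; embeddings and `lm_head` actually stay at 16 bits, so real savings are slightly smaller):

```python
# Approximate weight memory footprint before and after FP8 quantization.
n_params = 57.4e9  # assumed total parameter count for Qwen2-57B-A14B

bf16_gb = n_params * 2 / 1e9  # 16 bits = 2 bytes per parameter -> ~115 GB
fp8_gb = n_params * 1 / 1e9   # 8 bits = 1 byte per parameter  -> ~57 GB

print(f"BF16: ~{bf16_gb:.0f} GB, FP8: ~{fp8_gb:.0f} GB ({fp8_gb / bf16_gb:.0%} of original)")
```
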
Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization with 512 sequences of UltraChat.

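To make the scheme concrete, here is a minimal sketch of symmetric per-tensor FP8 (E4M3) quantization. It illustrates the idea rather than reproducing AutoFP8's internals, and assumes PyTorch >= 2.1 for the `float8_e4m3fn` dtype:

```python
import torch

# The FP8 E4M3 format can represent magnitudes up to 448.
FP8_E4M3_MAX = 448.0

def quantize_per_tensor_fp8(tensor: torch.Tensor):
    # One scale for the whole tensor, chosen so the largest magnitude maps
    # to the edge of the representable FP8 range (symmetric around zero).
    scale = tensor.abs().max() / FP8_E4M3_MAX
    qtensor = (tensor / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return qtensor, scale

def dequantize(qtensor: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # The single linear scaling maps FP8 values back to approximations of the originals.
    return qtensor.to(torch.float32) * scale

weight = torch.randn(128, 128)
qweight, scale = quantize_per_tensor_fp8(weight)
error = (dequantize(qweight, scale) - weight).abs().max()
print(f"scale={scale.item():.5f}, max abs error={error.item():.5f}")
```
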
## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Qwen2-57B-A14B-Instruct-FP8"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Format the conversation with the model's chat template; add_generation_prompt
# appends the assistant turn opener so the model responds rather than
# continuing the user's turn.
prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

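As an illustration (not from the original card, and subject to your vLLM version and environment), a typical OpenAI-compatible setup looks like this:

```python
# First start an OpenAI-compatible server (run in a shell):
#   python -m vllm.entrypoints.openai.api_server --model neuralmagic/Qwen2-57B-A14B-Instruct-FP8
# Then query it with the official openai client (pip install openai):
from openai import OpenAI

# The server does not check API keys by default, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2-57B-A14B-Instruct-FP8",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(response.choices[0].message.content)
```
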
## Creation

This model was created by applying [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py), with the MoE gates kept at their original precision, as specified below.
However, note that [auto_fp8/modeling.py](https://github.com/neuralmagic/AutoFP8/blob/main/auto_fp8/modeling.py) had to be adjusted: line 152, ```if re.search(regex_pattern, name):```, was replaced with ```if re.search(regex_pattern, name) and re.search(regex_pattern + "_proj", name) is None:```. This ensures that the ```gate_proj``` layers are quantized while the MoE gate layers remain unquantized (a sketch illustrating this is shown after the script below).
Although AutoFP8 was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoFP8.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "Qwen/Qwen2-57B-A14B-Instruct"
quantized_model_dir = "Qwen2-57B-A14B-Instruct-FP8"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=4096)
tokenizer.pad_token = tokenizer.eos_token

# 512 calibration sequences from UltraChat, formatted with the chat template
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
examples = [tokenizer.apply_chat_template(batch["messages"], tokenize=False) for batch in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda")

quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="static",
    ignore_patterns=["re:.*lm_head", "re:.*gate"],  # keep lm_head and MoE gates unquantized
)

model = AutoFP8ForCausalLM.from_pretrained(
    pretrained_model_dir, quantize_config=quantize_config
)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```

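To see why the adjustment to `auto_fp8/modeling.py` is needed, the sketch below (with hypothetical stand-ins for Qwen2-MoE module names) shows that the pattern `.*gate` from `ignore_patterns` matches both the MoE gate and the `gate_proj` projections, while the adjusted condition ignores only the former:

```python
import re

# Hypothetical layer names, for illustration only.
names = [
    "model.layers.0.mlp.gate",                  # MoE router: should stay unquantized
    "model.layers.0.mlp.experts.0.gate_proj",   # expert MLP: should be quantized
]
regex_pattern = ".*gate"  # derived from ignore_patterns=["re:.*gate"]

for name in names:
    original = bool(re.search(regex_pattern, name))
    adjusted = bool(re.search(regex_pattern, name)) and re.search(regex_pattern + "_proj", name) is None
    print(f"{name}: ignored originally={original}, ignored after adjustment={adjusted}")
```
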
## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2-57B-A14B-Instruct-FP8",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Qwen2-57B-A14B-Instruct</strong></td>
    <td><strong>Qwen2-57B-A14B-Instruct-FP8 (this model)</strong></td>
    <td><strong>Recovery</strong></td>
  </tr>
  <tr>
    <td>MMLU (5-shot)</td>
    <td>75.76</td>
    <td>75.49</td>
    <td>99.64%</td>
  </tr>
  <tr>
    <td>ARC Challenge (25-shot)</td>
    <td>66.89</td>
    <td>65.96</td>
    <td>98.60%</td>
  </tr>
  <tr>
    <td>GSM-8K (5-shot, strict-match)</td>
    <td>80.59</td>
    <td>77.10</td>
    <td>95.66%</td>
  </tr>
  <tr>
    <td>Hellaswag (10-shot)</td>
    <td>85.96</td>
    <td>85.71</td>
    <td>99.70%</td>
  </tr>
  <tr>
    <td>Winogrande (5-shot)</td>
    <td>78.45</td>
    <td>78.14</td>
    <td>99.60%</td>
  </tr>
  <tr>
    <td>TruthfulQA (0-shot)</td>
    <td>62.11</td>
    <td>61.80</td>
    <td>99.50%</td>
  </tr>
  <tr>
    <td><strong>Average</strong></td>
    <td><strong>74.96</strong></td>
    <td><strong>74.03</strong></td>
    <td><strong>98.76%</strong></td>
  </tr>
</table>
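
Recovery is the ratio of the quantized model's score to the unquantized baseline's. As a quick sanity check of the column above, using only the table's own numbers:

```python
# Recompute the Recovery column: quantized score / baseline score.
baseline = {"MMLU": 75.76, "ARC Challenge": 66.89, "GSM-8K": 80.59,
            "Hellaswag": 85.96, "Winogrande": 78.45, "TruthfulQA": 62.11}
quantized = {"MMLU": 75.49, "ARC Challenge": 65.96, "GSM-8K": 77.10,
             "Hellaswag": 85.71, "Winogrande": 78.14, "TruthfulQA": 61.80}

for task in baseline:
    print(f"{task}: {100 * quantized[task] / baseline[task]:.2f}%")

avg_base = sum(baseline.values()) / len(baseline)    # 74.96
avg_quant = sum(quantized.values()) / len(quantized) # 74.03
print(f"Average recovery: {100 * avg_quant / avg_base:.2f}%")  # 98.76%
```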