---
tags:
- fp8
- vllm
license: other
license_name: bigcode-openrail-m
license_link: https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement
---

# starcoder2-3b-FP8

## Model Overview
- **Model Architecture:** starcoder2-3b
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 8/1/2024
- **Version:** 1.0
- **License(s):** [bigcode-openrail-m](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
- **Model Developers:** Neural Magic

Quantized version of [starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b).
<!-- It achieves an average score of 73.19 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.48. -->
It achieves an average score of 35.53 on the [HumanEval+](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark, whereas the unquantized model achieves 35.35.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) to the FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, cutting the disk size and GPU memory requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied: a single linear scale maps the FP8 representations of each quantized weight and activation tensor.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization, with 512 sequences from UltraChat as calibration data.
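
For intuition, here is a minimal sketch of the symmetric per-tensor scheme. It is illustrative only, not the AutoFP8 implementation: the function names are made up for this example, and native `torch.float8_e4m3fn` support (PyTorch >= 2.1) is assumed.

```python
import torch

# Maximum representable magnitude in the FP8 E4M3 format.
FP8_E4M3_MAX = 448.0

def quantize_per_tensor_fp8(x: torch.Tensor):
    """Symmetric per-tensor quantization: one scale for the whole tensor."""
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # The inverse map: a single linear rescaling back to higher precision.
    return x_fp8.to(torch.float32) * scale

# Round-trip a random weight tensor and inspect the quantization error.
w = torch.randn(4096, 4096)
w_fp8, scale = quantize_per_tensor_fp8(w)
print((dequantize_fp8(w_fp8, scale) - w).abs().mean())
```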

<!-- ## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/starcoder2-3b-FP8"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id, trust_remote_code=True, max_model_len=4096)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details. -->
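
The quantized checkpoint can be served with vLLM's offline completion API. A minimal sketch follows (the model id is taken from this card, the prompt is an arbitrary example, and no chat template is applied since starcoder2-3b is a base code model):

```python
from vllm import LLM, SamplingParams

# Model id from this card; context length kept small for the example.
llm = LLM(model="neuralmagic/starcoder2-3b-FP8", max_model_len=4096)

# Plain code completion: starcoder2-3b has no chat template.
prompt = "def fibonacci(n):"
params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate([prompt], params)
print(prompt + outputs[0].outputs[0].text)
```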

## Creation

This model was created by applying [LLM Compressor with calibration samples from UltraChat](https://github.com/vllm-project/llm-compressor/blob/sa/big_model_support/examples/big_model_offloading/big_model_w8a8_calibrate.py), as presented in the code snippet below.
A slight modification was needed for this model's parameter shapes: running the code below as-is raises an IndexError, and replacing the offending line with `max_quant_shape = param.shape[0]` resolves the issue.

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map

# Recipe: static, symmetric, per-tensor FP8 quantization of the weights and
# input activations of every Linear layer, skipping the lm_head.
recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    targets: ["Linear"]
"""

model_stub = "bigcode/starcoder2-3b"
model_name = model_stub.split("/")[-1]

# Spread the model across up to 8 GPUs, offloading any remainder to CPU.
device_map = calculate_offload_device_map(
    model_stub, reserve_for_hessians=False, num_gpus=8, torch_dtype=torch.float16
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_stub, torch_dtype=torch.float16, device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained(model_stub)

output_dir = f"./{model_name}-FP8"

# Calibration data: 512 UltraChat conversations flattened to plain text.
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096

ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    # Join all turns of the conversation into a single text sample.
    return {"text": " ".join([msg["content"] for msg in example["messages"]])}

ds = ds.map(preprocess)

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# One-shot calibration and quantization; the result is saved compressed.
oneshot(
    model=model,
    output_dir=output_dir,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    save_compressed=True,
)
```
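
After `oneshot` finishes, one quick sanity check is to confirm that the export carries a quantization config vLLM can detect. This is a sketch; the exact contents of the config are determined by LLM Compressor's serialization.

```python
from transformers import AutoConfig

# Load the config written alongside the compressed weights.
config = AutoConfig.from_pretrained("./starcoder2-3b-FP8")
# LLM Compressor records its quantization recipe here; vLLM reads it at load time.
print(config.quantization_config)
```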

## Evaluation

The model was evaluated on the [HumanEval+](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark with the [Neural Magic fork](https://github.com/neuralmagic/evalplus) of the [EvalPlus implementation of HumanEval+](https://github.com/evalplus/evalplus) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following commands:
```
python codegen/generate.py --model neuralmagic/starcoder2-3b-FP8 --temperature 0.2 --n_samples 50 --resume --root ~ --dataset humaneval
python evalplus/sanitize.py ~/humaneval/neuralmagic--starcoder2-3b-FP8_vllm_temp_0.2
evalplus.evaluate --dataset humaneval --samples ~/humaneval/neuralmagic--starcoder2-3b-FP8_vllm_temp_0.2-sanitized
```
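
The pass@k numbers below are computed by EvalPlus from the 50 generations per task using the standard unbiased estimator from the HumanEval paper. A reference sketch of that estimator:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples, drawn
    without replacement from n generations of which c are correct, passes
    the unit tests (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g. a task with 16 of 50 generations correct:
print(pass_at_k(50, 16, 1), pass_at_k(50, 16, 10))
```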

### Accuracy

#### HumanEval+ evaluation scores
Recovery is the quantized model's score as a percentage of the unquantized baseline.
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>starcoder2-3b</strong></td>
    <td><strong>starcoder2-3b-FP8 (this model)</strong></td>
    <td><strong>Recovery</strong></td>
  </tr>
  <tr>
    <td>base pass@1</td>
    <td>30.7</td>
    <td>30.8</td>
    <td>100.3%</td>
  </tr>
  <tr>
    <td>base pass@10</td>
    <td>44.9</td>
    <td>45.4</td>
    <td>101.1%</td>
  </tr>
  <tr>
    <td>base+extra pass@1</td>
    <td>26.6</td>
    <td>26.5</td>
    <td>99.6%</td>
  </tr>
  <tr>
    <td>base+extra pass@10</td>
    <td>39.2</td>
    <td>39.4</td>
    <td>100.5%</td>
  </tr>
  <tr>
    <td><strong>Average</strong></td>
    <td><strong>35.35</strong></td>
    <td><strong>35.53</strong></td>
    <td><strong>100.5%</strong></td>
  </tr>
</table>