---
license: mit
language:
- ja
- en
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
library_name: transformers
tags:
- autoawq
---
# kishizaki-sci/phi-4-AWQ-4bit-EN-JP

## model information
A 4-bit quantization of [phi-4](https://huggingface.co/microsoft/phi-4) produced with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). The calibration data used during quantization contained both Japanese and English.
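
The quantization script itself is not included in this repository; the sketch below shows roughly how such a model could be produced with AutoAWQ. The `quant_config` values and the helper name `quantize_phi4` are assumptions for illustration, not the author's actual script.

```python
# Sketch of 4-bit AWQ quantization with AutoAWQ (assumed settings, not the
# author's exact script). The heavy imports live inside the function so this
# file can be loaded without a GPU or the model weights.

# Assumed AWQ settings: 4-bit weights, group size 128, GEMM kernels.
QUANT_CONFIG = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

def quantize_phi4(calib_texts, out_dir="phi-4-AWQ-4bit-EN-JP"):
    """Quantize microsoft/phi-4 to 4 bits, using `calib_texts`
    (a list of Japanese/English strings) as calibration data."""
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model = AutoAWQForCausalLM.from_pretrained("microsoft/phi-4")
    tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
    # AutoAWQ runs the calibration texts through the model to find scales.
    model.quantize(tokenizer, quant_config=QUANT_CONFIG, calib_data=calib_texts)
    model.save_quantized(out_dir)
    tokenizer.save_pretrained(out_dir)
```

Calling `quantize_phi4(texts)` on a GPU machine would write the quantized checkpoint to `out_dir`.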

## usage
### transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the 4-bit AWQ checkpoint and move it to the GPU.
tokenizer = AutoTokenizer.from_pretrained("kishizaki-sci/phi-4-AWQ-4bit-EN-JP")
model = AutoModelForCausalLM.from_pretrained("kishizaki-sci/phi-4-AWQ-4bit-EN-JP")
model.to("cuda")

chat = [
    {"role": "system", "content": "あなたは日本語で応答するAIチャットボットです。ユーザをサポートしてください。"},
    {"role": "user", "content": "plotly.graph_objectsを使って散布図を作るサンプルコードを書いてください。"}
]
prompt = tokenizer.apply_chat_template(
    chat,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to("cuda")
streamer = TextStreamer(tokenizer)  # stream tokens to stdout as they are generated

output = model.generate(**inputs, streamer=streamer, max_new_tokens=1024)
```
This code also runs on [Google Colab](https://colab.research.google.com/drive/1gt0jy1LstgX7orMbT_3MUcJJNak4vy5i?usp=sharing) with an A100 instance.

### vLLM
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="kishizaki-sci/phi-4-AWQ-4bit-EN-JP",
    tensor_parallel_size=1,
    gpu_memory_utilization=0.97,
    quantization="awq"
)
tokenizer = llm.get_tokenizer()

messages = [
    {"role": "system", "content": "あなたは日本語で応答するAIチャットボットです。ユーザをサポートしてください。"},
    {"role": "user", "content": "plotly.graph_objectsを使って散布図を作るサンプルコードを書いてください。"},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.9,
    max_tokens=1024
)

outputs = llm.generate(prompt, sampling_params)
print(outputs[0].outputs[0].text)
```
See this [notebook](https://colab.research.google.com/drive/1GB3xXDmd7C2Cx9rdNEhmIxWr9fUubSiH?usp=sharing) for running it on a Google Colab A100 instance.

## calibration data
512 samples and prompts were drawn from the following datasets, with each sample capped at 350 tokens.
- [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [m-a-p/CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction)
- [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- Original data built from Japanese and English Wikipedia articles, plus original data for avoiding harmful prompts.
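
The selection described above (512 samples, each capped at 350 tokens) can be sketched as follows. This is an illustration, not the author's pipeline: the pooled corpus and the whitespace "tokenizer" are stand-ins, and the real data would come from the datasets listed above and be tokenized with the phi-4 tokenizer.

```python
import random

def build_calib_set(corpus, n_samples=512, max_tokens=350, seed=0):
    """Draw up to `n_samples` texts from `corpus` and cap each at `max_tokens`."""
    rng = random.Random(seed)
    picked = rng.sample(corpus, min(n_samples, len(corpus)))
    truncated = []
    for text in picked:
        tokens = text.split()  # stand-in for real subword tokenization
        truncated.append(" ".join(tokens[:max_tokens]))
    return truncated

# Tiny stand-in corpus mixing English and Japanese lines.
corpus = ["Hello world " * 400, "量子化 の テスト " * 200, "short sample"]
calib = build_calib_set(corpus, n_samples=3)
print(len(calib), max(len(t.split()) for t in calib))  # → 3 350
```

The resulting list of strings is what would be passed to AutoAWQ as calibration data.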

## License
The [MIT License](https://opensource.org/license/mit) applies.
|