upload

- README.md +161 -0
- config.json +23 -0
- pytorch_model.bin +3 -0
- rinna.png +0 -0
- spiece.model +3 -0
- spiece.vocab +0 -0
- tokenizer_config.json +1 -0
README.md
CHANGED
@@ -1,3 +1,164 @@
---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
license: mit
datasets:
- Anthropic/hh-rlhf
inference: false
---

# japanese-gpt-neox-3.6b-instruction-ppo

![rinna-icon](./rinna.png)

# Overview
This repository provides a Japanese GPT-NeoX model of 3.6 billion parameters. The model is based on [`rinna/japanese-gpt-neox-3.6b-instruction-sft-v2`](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2) and has been aligned to serve as an instruction-following conversational agent.

* **Model architecture**

    A 36-layer, 2816-hidden-size transformer-based language model.

* **RLHF**

    Following the [OpenAI InstructGPT paper](https://arxiv.org/abs/2203.02155), **Reinforcement Learning from Human Feedback** (RLHF) has been applied to align the model's behaviour with input instructions. In particular, the model has been trained in two stages: **Supervised Fine-Tuning** (SFT) and [PPO](https://arxiv.org/abs/1707.06347)-based **Reinforcement Learning** (RL).
    * The first SFT stage produces [`rinna/japanese-gpt-neox-3.6b-instruction-sft-v2`](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2).
    * The second RL stage produces this model.

* **PPO vs. SFT evaluation**

    We conducted a human evaluation and a ChatGPT-based automated evaluation on 100 prompts to assess the *performance gain from reinforcement learning*.

    | [PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) vs. [SFT](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2) | win | tie | loss |
    | :---: | :---: | :---: | :---: |
    | Human evaluation | **47**% | 30% | 23% |
    | ChatGPT auto. evaluation | **63**% | 3% | 34% |

* **Reinforcement learning**

    We used [CarperAI/trlx](https://github.com/CarperAI/trlx) and its implementation of the PPO algorithm for the RL stage; a rough sketch follows this list.

    The RL data is a subset of the following dataset, translated into Japanese.
    * [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf)

* **Authors**

    [Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)

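As a rough illustration only (this is not the authors' released training code), a PPO run with trlx's high-level API might look like the sketch below; `reward_model` and `japanese_hh_rlhf_prompts` are hypothetical stand-ins for a learned reward model and the translated prompt set.

~~~python
# Hypothetical sketch of the RL stage, assuming trlx's high-level train API.
import trlx

def reward_fn(samples, **kwargs):
    # Assumed: a learned reward model scores each sampled dialogue continuation.
    return [reward_model.score(s) for s in samples]

trainer = trlx.train(
    # The PPO policy is initialized from the SFT checkpoint, per the two-stage recipe above.
    "rinna/japanese-gpt-neox-3.6b-instruction-sft-v2",
    reward_fn=reward_fn,
    prompts=japanese_hh_rlhf_prompts,  # Japanese-translated Anthropic HH prompts (assumed variable).
)
~~~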

# Limitations
* We found this version of the PPO model tends to generate repeated text more often than its SFT counterpart, so we set `repetition_penalty=1.1` for better generation performance. (*The same generation hyper-parameters were applied to the SFT model in the aforementioned evaluation experiments.*) You can also explore other hyper-parameter combinations that yield higher generation randomness/diversity, e.g. `temperature=0.9, repetition_penalty=1.0`, as in the sketch below.
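A minimal sketch of those alternative sampling settings, assuming `model`, `tokenizer`, and `token_ids` are built as in the "How to use the model" section later in this card:

~~~python
# Higher-diversity alternative to the default temperature=0.7, repetition_penalty=1.1.
output_ids = model.generate(
    token_ids.to(model.device),
    do_sample=True,
    max_new_tokens=128,
    temperature=0.9,         # more sampling randomness/diversity
    repetition_penalty=1.0,  # no repetition penalty
    pad_token_id=tokenizer.pad_token_id,
)
~~~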

# I/O Format
A special format has been adopted to construct inputs.
* An input prompt is formatted as a conversation between `ユーザー` and `システム`.
* Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`).
* The input prompt should end with `"システム: "` to signal the model to generate a response.
* Since the model's tokenizer does not recognize `"\n"`, a special newline symbol `"<NL>"` is used instead.
    * All newlines in input and output utterances should be replaced with `"<NL>"`.
    * All utterances in the input prompt should be separated by `"<NL>"`.

The following is an example of constructing an input from a conversation.
~~~python
prompt = [
    {
        "speaker": "ユーザー",
        "text": "コンタクトレンズを慣れるにはどうすればよいですか?"
    },
    {
        "speaker": "システム",
        "text": "これについて具体的に説明していただけますか?何が難しいのでしょうか?"
    },
    {
        "speaker": "ユーザー",
        "text": "目が痛いのです。"
    },
    {
        "speaker": "システム",
        "text": "分かりました、コンタクトレンズをつけると目がかゆくなるということですね。思った以上にレンズを外す必要があるでしょうか?"
    },
    {
        "speaker": "ユーザー",
        "text": "いえ、レンズは外しませんが、目が赤くなるんです。"
    }
]
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in prompt
]
prompt = "<NL>".join(prompt)
prompt = (
    prompt
    + "<NL>"
    + "システム: "
)
print(prompt)
# "ユーザー: コンタクトレンズを慣れるにはどうすればよいですか?<NL>システム: これについて具体的に説明していただけますか?何が難しいのでしょうか?<NL>ユーザー: 目が痛いのです。<NL>システム: 分かりました、コンタクトレンズをつけると目がかゆくなるということですね。思った以上にレンズを外す必要があるでしょうか?<NL>ユーザー: いえ、レンズは外しませんが、目が赤くなるんです。<NL>システム: "
~~~
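As a convenience (not part of the original card), the formatting rules above can be wrapped in a small helper; the function name `build_prompt` is our own:

~~~python
def build_prompt(conversation):
    """Format a list of {"speaker": ..., "text": ...} dicts into the model's input format."""
    turns = []
    for uttr in conversation:
        # The tokenizer does not recognize "\n"; use the <NL> symbol instead.
        text = uttr["text"].replace("\n", "<NL>")
        turns.append(f"{uttr['speaker']}: {text}")
    # Join utterances with <NL> and end with "システム: " to cue the model to respond.
    return "<NL>".join(turns) + "<NL>システム: "
~~~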

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo")

if torch.cuda.is_available():
    model = model.to("cuda")

token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        do_sample=True,
        max_new_tokens=128,
        temperature=0.7,
        repetition_penalty=1.1,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
output = output.replace("<NL>", "\n")
print(output)
"""それは、コンタクトレンズが目に合わないために起こることがあります。レンズが目の表面に長時間触れ続けることが原因となることがあります。また、コンタクトレンズが汚れている可能性もあります。コンタクトレンズケースを定期的に洗浄したり、コンタクトレンズを正しくフィットさせるようにしたりすることが役立ちます。</s>"""
~~~~
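The decoded text above still contains the end-of-sequence marker `</s>`. If you prefer it stripped, `tokenizer.decode` accepts `skip_special_tokens=True`:

~~~python
# skip_special_tokens=True drops </s> (and any other special tokens) from the decoded text.
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):], skip_special_tokens=True)
~~~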

# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 32,000.
* It uses sentencepiece's byte fallback feature to decompose unknown text pieces into UTF-8 byte pieces and to avoid producing `<UNK>` tokens, as illustrated below.
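For instance, a character outside the vocabulary falls back to its UTF-8 byte pieces (the exact pieces below are illustrative, assuming `ჯ` is not a vocabulary piece):
~~~
print(tokenizer.tokenize("ჯ"))
# byte pieces such as ['<0xE1>', '<0x83>', '<0xAF>'] instead of an <UNK> token
~~~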
* sentencepiece's `--add_dummy_prefix` option was turned off so that a leading whitespace will not be prepended automatically.
~~~
print(tokenizer.tokenize("吾輩は猫である"))
# ['吾', '輩', 'は', '猫', 'である']
# instead of ['▁', '吾', '輩', 'は', '猫', 'である'] as in rinna/japanese-gpt-1b
~~~
* sentencepiece's `--remove_extra_whitespaces` option was turned off so that leading, trailing, and duplicate whitespaces are preserved.
~~~
print(tokenizer.tokenize("  吾輩は  猫である   "))
# ['▁', '▁', '吾', '輩', 'は', '▁', '▁', '猫', 'である', '▁', '▁', '▁']
# instead of ['▁', '吾', '輩', 'は', '▁猫', 'である'] as in rinna/japanese-gpt-1b
~~~
* Don't forget to set `use_fast=False` to make the above features function correctly.
~~~
good_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
bad_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b")

print(good_tokenizer.decode(good_tokenizer.encode("გამარჯობა  吾輩は  猫である   ")))
# 'გამარჯობა  吾輩は  猫である   </s>'
print(bad_tokenizer.decode(bad_tokenizer.encode("გამარჯობა  吾輩は  猫である   ")))
# 'გამარ[UNK]ობა 吾輩は 猫である </s>'
~~~

# License
[The MIT license](https://opensource.org/licenses/MIT)

config.json
ADDED
@@ -0,0 +1,23 @@
{
  "architectures": [
    "GPTNeoXForCausalLM"
  ],
  "bos_token_id": 2,
  "eos_token_id": 3,
  "hidden_act": "gelu",
  "hidden_size": 2816,
  "initializer_range": 0.02,
  "intermediate_size": 11264,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "gpt_neox",
  "num_attention_heads": 22,
  "num_hidden_layers": 36,
  "rotary_emb_base": 10000,
  "rotary_pct": 1.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "use_cache": true,
  "use_parallel_residual": false,
  "vocab_size": 32000
}
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fea9af6902e86fd28343c69ab710a8ba017bbf1c35dbd3b553367c72ca83210e
size 7397399329
rinna.png
ADDED
spiece.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d78ab344146700112cd41628ac7ce54b79c0868fe0c7c201750d8237b54dbb4
size 786216
spiece.vocab
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "[UNK]", "pad_token": "[PAD]", "extra_ids": 0, "additional_special_tokens": [], "sp_model_kwargs": {}, "bos_token": "<s>", "cls_token": "[CLS]", "sep_token": "[SEP]", "mask_token": "[MASK]", "do_lower_case": false, "tokenizer_class": "T5Tokenizer"}