zhzhang93 committed on
Commit a15478f
1 Parent(s): b7642cb

Init commit
README.md CHANGED
@@ -2,4 +2,37 @@
---
license: apache-2.0
---

## Model Introduction
This release is a Chinese SFT version of the mistral-large-instruct-2407 model, produced with additional data processing. Like the original instruct version, it handles Chinese content and emoji more naturally, optimizing both Q&A performance and user experience.

Highlights: improved handling of Chinese text and emoji without compromising the capabilities of the original instruct model. In our tests, this Chinese SFT version outperforms the llama3_1-405B Chinese model on Q&A performance.
![demo](./images/demo.png)
![demo1](./images/demo1.png)

## Training Details
- LoRA rank 128, alpha 256
![detail](./images/detail.png)

## Model Download

Clone the model with Git LFS:

```shell
git lfs install
git clone https://huggingface.co/opencsg/CSG-Wukong-Chinese-Mistral-Large2-123B
```
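
As an alternative to Git LFS, the files can also be fetched with the huggingface_hub library (a minimal sketch, assuming huggingface_hub is installed):

```python
from huggingface_hub import snapshot_download

# Download the whole repository (adapter weights, config, images) into the
# local Hugging Face cache and return the resulting directory path.
local_dir = snapshot_download(repo_id="opencsg/CSG-Wukong-Chinese-Mistral-Large2-123B")
print(local_dir)
```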

## LoRA Weight Merging Guide

To merge the LoRA weights into the base model, use the following Python code:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter to it.
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Large-Instruct-2407")
peft_model_id = "opencsg/CSG-Wukong-Chinese-Mistral-Large2-123B"
model = PeftModel.from_pretrained(base_model, peft_model_id)

# merge_and_unload() returns the merged model rather than merging in place,
# so capture the return value.
model = model.merge_and_unload()
```
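
After merging, the model can be persisted with save_pretrained so it later loads directly through transformers without peft (a continuation of the snippet above; the output path is illustrative):

```python
# Save the merged weights; the result loads with AutoModelForCausalLM alone.
model.save_pretrained("./CSG-Wukong-Chinese-Mistral-Large2-123B-merged")
```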
adapter_config.json ADDED
@@ -0,0 +1,30 @@
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "/data/models/mistralai/Mistral-Large-Instruct-2407",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": [],
  "peft_type": "LORA",
  "r": 128,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "q_proj",
    "v_proj",
    "k_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
}
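
For reference, the shipped configuration corresponds to the following peft LoraConfig (a minimal sketch mirroring adapter_config.json above; only the non-default fields are shown):

```python
from peft import LoraConfig

# Rank-128 LoRA applied to the attention q/k/v projections,
# as specified in the adapter_config.json above.
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "v_proj", "k_proj"],
    task_type="CAUSAL_LM",
)
```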
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f9db1b2db0d219a416d5f20294604e30ca6af3674c8e64d4755b3e2e9de26b5
size 1153507360
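
This is a Git LFS pointer; the actual adapter weights (about 1.15 GB) are fetched by `git lfs`. To confirm a download resolved correctly, the file can be checked against the pointer's digest (a minimal sketch; the local path is illustrative):

```python
import hashlib

# Expected values taken from the LFS pointer above.
EXPECTED_SHA256 = "6f9db1b2db0d219a416d5f20294604e30ca6af3674c8e64d4755b3e2e9de26b5"
EXPECTED_SIZE = 1153507360

def verify(path: str) -> bool:
    """Stream the file and compare its size and SHA-256 digest to the pointer."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return size == EXPECTED_SIZE and digest.hexdigest() == EXPECTED_SHA256

# Illustrative path inside the cloned repository.
print(verify("CSG-Wukong-Chinese-Mistral-Large2-123B/adapter_model.safetensors"))
```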
images/demo.png ADDED
images/demo1.png ADDED
images/detail.png ADDED