dahara1 committed
Commit 50101ba
1 Parent(s): 36bdfd8

Update README.md

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -4,10 +4,13 @@ tags:
 - amd
 - llama3.1
 - RyzenAI
+- translation
 ---
 
 This model is a finetuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), AWQ-quantized and converted to run on an [NPU-equipped Ryzen AI PC](https://github.com/amd/RyzenAI-SW/issues/18), for example one with a Ryzen 9 7940HS processor.
-
+
+Supports translation between English, French, Chinese (Mandarin) and Japanese.
+
 To set up Ryzen AI for LLMs on Windows 11, see [Running LLM on AMD NPU Hardware](https://www.hackster.io/gharada2013/running-llm-on-amd-npu-hardware-19322f).
 
 The following sample assumes that the setup on the above page has been completed.
@@ -121,13 +124,13 @@ if __name__ == "__main__":
     print(translation("Translate Japanese to English.", "1月1日は日本の祝日です。その日は日曜日で、5日ぶりに雨が降りました"))
     print(translation("Translate English to Japanese.", "It’s raining cats and dogs."))
     print(translation("Translate French to Japanese.", "Après la pluie, le beau temps"))
-    print(translation("Translate Mandarin to Japanese.", "说曹操曹操就到"))
+    print(translation("Translate Mandarin to Japanese.", "要功夫深,铁杵磨成针"))
 
 
 
 ```
 
-![chat_image](llama-3.1.png)
+![chat_image](trans-sample.png)
 
 ## Acknowledgements
 - [amd/RyzenAI-SW](https://github.com/amd/RyzenAI-SW)
@@ -135,4 +138,4 @@ Sample Code and Drivers.
 - [mit-han-lab/llm-awq](https://github.com/mit-han-lab/llm-awq)
 Thanks for the AWQ quantization method.
 - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
-[Built with Meta Llama 3](https://llama.meta.com/llama3/license/)
+[Built with Meta Llama 3](https://llama.meta.com/llama3/license/)
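The hunk above shows only the `translation(...)` call sites; the helper itself is defined earlier in the README and is not part of this diff. As a rough, hypothetical sketch of the call pattern (function and parameter names here are assumptions, not the repository's actual code), each call maps an instruction plus source text onto a system/user chat-message pair that a model pipeline would then complete:

```python
# Hypothetical sketch only: the real README code runs an AWQ-quantized model
# on the Ryzen AI NPU. This version just illustrates the prompt structure
# implied by translation("Translate X to Y.", "<source text>").

def build_messages(instruction: str, text: str):
    # The instruction ("Translate English to Japanese.") becomes the system
    # prompt; the text to translate becomes the user message.
    return [
        {"role": "system", "content": instruction},
        {"role": "user", "content": text},
    ]

def translation(instruction: str, text: str, generate=None):
    """Return a translation of `text` following `instruction`.

    `generate` is a placeholder for the model/tokenizer pipeline the README
    sets up earlier; it takes the chat messages and returns the model output.
    """
    if generate is None:
        raise RuntimeError("model pipeline not configured")
    return generate(build_messages(instruction, text))
```

This keeps the prompt assembly separate from the model call, so the same helper works whether the backend is the NPU pipeline from the setup guide or any other chat-completion function.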