## Tokenizer

The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.

- **Model:** Hugging Face Fast Tokenizer using the Unigram byte-fallback model, which requires `tokenizers>=0.14.0`
- **Training algorithm:** SentencePiece Unigram byte-fallback
- **Training data:** A subset of the datasets for model pre-training
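To illustrate what "byte fallback" means here, the following is a minimal conceptual sketch (not the actual `tokenizers` implementation): when a piece is not in the Unigram vocabulary, it is encoded as one token per UTF-8 byte, using `<0xNN>` byte tokens, so no input ever produces an unknown token.

```python
def byte_fallback(piece: str, vocab: set[str]) -> list[str]:
    """Illustrative sketch of byte fallback: return the piece itself if it
    is in the vocabulary, otherwise one <0xNN> token per UTF-8 byte."""
    if piece in vocab:
        return [piece]
    # Fall back to the piece's individual UTF-8 bytes as byte tokens.
    return [f"<0x{b:02X}>" for b in piece.encode("utf-8")]

vocab = {"hello", "world"}
print(byte_fallback("hello", vocab))  # ["hello"]
print(byte_fallback("猫", vocab))     # ["<0xE7>", "<0x8C>", "<0xAB>"]
```

Because every Unicode string decomposes into bytes, this scheme guarantees full coverage of arbitrary text even with a 50k vocabulary.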