---
dataset_info:
  features:
  - name: category
    dtype: string
  - name: author
    dtype: string
  - name: book
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2301319885
    num_examples: 4183
  download_size: 1527950083
  dataset_size: 2301319885
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- zh
---
## Dataset summary
This dataset is designed for Traditional Chinese (zh-tw) and comprises a collection of books from 好讀.
Total tokens: 1.3B (counted with the LLaMA 2 tokenizer).
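The sketch below shows one way such a count could be reproduced. It assumes the gated `meta-llama/Llama-2-7b-hf` repository is used as the LLaMA 2 tokenizer (any tokenizer you have access to can be substituted), and it counts content tokens only.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: the LLaMA 2 tokenizer from the gated meta-llama/Llama-2-7b-hf repo;
# swap in another tokenizer if you only need a rough estimate.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
dataset = load_dataset("benchang1110/Taiwan-book-1B", split="train")

total_tokens = 0
for example in dataset:
    # Skip BOS/EOS so the total reflects only the book text itself.
    total_tokens += len(tokenizer(example["text"], add_special_tokens=False)["input_ids"])

print(f"Total tokens: {total_tokens:,}")
```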
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("benchang1110/Taiwan-book-1B", split="train")
```
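Each record exposes the four string fields declared in the metadata above (category, author, book, text). The following self-contained snippet shows one way to inspect the schema and preview a single record:

```python
from datasets import load_dataset

dataset = load_dataset("benchang1110/Taiwan-book-1B", split="train")

# The schema lists the four string fields: category, author, book, text.
print(dataset.features)

example = dataset[0]
print(example["category"], example["author"], example["book"])
print(example["text"][:200])  # preview the first 200 characters of the book text
```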