Text Generation · PyTorch · causal-lm · rwkv

RWKV-4 World

Model Description

RWKV-4 trained on 100+ world languages (roughly 70% English, 15% multilingual, 15% code).

World = Some_Pile + Some_RedPajama + Some_OSCAR + All_Wikipedia + All_ChatGPT_Data_I_can_find

XXXtuned = a finetune of the World model on MC4, OSCAR, Wikipedia, etc.

How to use:

The differences between World & Raven:

  • set pipeline = PIPELINE(model, "rwkv_vocab_v20230424") instead of 20B_tokenizer.json (use the vocab name exactly as written here; "rwkv_vocab_v20230424" is bundled with the rwkv pip package 0.7.4+). See the loading sketch below.
  • use Question/Answer, User/AI, or Human/Bot role names for chat. DO NOT use Bob/Alice or Q/A.

For the 0.1/0.4/1.5B models, use fp32 for the first layer (it currently overflows in fp16; this will be fixed in the future), or bf16 if you have a 30xx/40xx GPU. Example strategy: cuda fp32 *1 -> cuda fp16
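
A minimal loading sketch, assuming the rwkv pip package (0.7.4+) and a locally downloaded World checkpoint; the checkpoint path is a placeholder, and the strategy string follows the first-layer-fp32 advice above:

```python
import os
os.environ["RWKV_JIT_ON"] = "1"  # optional: enable the package's JIT kernels

from rwkv.model import RWKV
from rwkv.utils import PIPELINE

# Placeholder path: point this at a downloaded rwkv-4-world checkpoint.
model = RWKV(
    model="RWKV-4-World-1.5B-v1-xxx",
    strategy="cuda fp32 *1 -> cuda fp16",  # fp32 for the first layer, fp16 for the rest
)

# Use the bundled world vocab, not 20B_tokenizer.json.
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")
```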

NOTE: the new greedy tokenizer (https://github.com/BlinkDL/ChatRWKV/blob/main/tokenizer/rwkv_tokenizer.py) tokenizes '\n\n' as a single token instead of ['\n', '\n'].
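
To see this directly, a quick check using the pipeline from the loading sketch above (encode is the pipeline's tokenizer helper):

```python
# With the world vocab, '\n\n' maps to a single token id.
print(pipeline.encode('\n\n'))  # one id
print(pipeline.encode('\n'))    # also one id
```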

QA prompt (replace \n\n in xxx with \n):

Question: xxx

Answer:

and

Instruction: xxx

Input: xxx

Response:

A good chat prompt (replace \n\n in xxx with \n):

User: hi

Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.

User: xxx

Assistant:
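
A generation sketch built on the loading example above, using the chat format just shown; the PIPELINE_ARGS values here are illustrative, not recommended settings:

```python
from rwkv.utils import PIPELINE_ARGS

def chat(user_text, token_count=200):
    # Replace \n\n with \n inside user content, as recommended above.
    user_text = user_text.replace('\n\n', '\n').strip()
    prompt = (
        "User: hi\n\n"
        "Assistant: Hi. I am your assistant and I will provide expert full response in full details. "
        "Please feel free to ask any question and I will always answer it.\n\n"
        f"User: {user_text}\n\n"
        "Assistant:"
    )
    args = PIPELINE_ARGS(
        temperature=1.0,          # illustrative sampling settings
        top_p=0.5,
        alpha_frequency=0.4,
        alpha_presence=0.4,
        token_stop=pipeline.encode('\n\n'),  # stop at the single '\n\n' token
    )
    return pipeline.generate(prompt, token_count=token_count, args=args)

print(chat("What is the RWKV architecture?"))
```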