OrcaMaid-v2-FIX-13b-32k

This is the fixed version of OrcaMaid-v2-13b, extended to a 32768-token context length via YaRN. The (now-deleted) v2 model had issues with its merged tokenizer that prevented it from stopping when necessary and caused it to output broken ChatML tokens such as <|im_end.

This is a gradient SLERP merge of Microsoft's Orca-2-13b and Undi and IkariDev's Noromaid-v0.1.1-13b, biased towards Orca.
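
For context, SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, and a gradient merge varies the interpolation factor with layer depth. Below is a minimal NumPy sketch of that core step; the per-layer gradient values at the end are hypothetical illustrations, not the actual recipe used for this merge.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (here, Orca), t=1 returns v1 (here, Noromaid).
    """
    a, b = v0.ravel(), v1.ravel()
    # Angle between the two weight vectors
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    w0 = np.sin((1.0 - t) * theta) / sin_theta
    w1 = np.sin(t * theta) / sin_theta
    return (w0 * a + w1 * b).reshape(v0.shape)

# The "gradient" part: the interpolation factor changes with layer depth.
# Keeping t below 0.5 biases the merge towards v0 (Orca). These numbers
# are illustrative only; Llama-2-13b has 40 transformer layers.
layer_ts = np.linspace(0.1, 0.4, num=40)
```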

Just as with OrcaMaid v1, the overall goal of this merge is to create a model that sounds uniquely human and natural, without sacrificing intelligence.

The prompt format is Alpaca. You can use the standard format shown below, but for best results, you should customize the system prompt to your specific needs.

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{YOUR MESSAGE HERE}

### Response:
{BOT MESSAGE HERE}
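
If you build prompts in code, a small helper like the one below (a hypothetical sketch, not something shipped with the model) fills in the template and makes the system prompt easy to swap out:

```python
DEFAULT_SYSTEM = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def build_prompt(instruction: str, system_prompt: str = DEFAULT_SYSTEM) -> str:
    """Format a user message into the model's Alpaca template."""
    return f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Summarize the plot of Moby-Dick in two sentences."))
```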

Misc. information

  • BOS token is <s>
  • EOS token is </s>
  • Native context length is 32768 tokens via YaRN (the original context length was 4096); see the loading sketch after this list
  • Base model is Llama 2
  • Due to the inclusion of Orca-2-13b, the model is subject to the terms of the Microsoft Research License
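
For a quick smoke test, the model should load like any other Llama 2 fine-tune; the YaRN rope-scaling parameters live in the model's config.json, though older transformers releases did not recognize YaRN scaling natively. This is a minimal sketch, and the generation settings are illustrative assumptions, not the author's recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ddh0/OrcaMaid-v2-FIX-13b-32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are shipped in FP16
    device_map="auto",          # requires the accelerate package
)

# Alpaca-format prompt, as described above
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about the sea.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```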

Thanks
