---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co./Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
base_model: Qwen/Qwen1.5-32B
language:
- en
- zh
pipeline_tag: text-generation
tags:
- chat
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/A9n8EJBDQziJWnXhOYeEE.png)
This is the third in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen1.5 32B](https://huggingface.co./Qwen/Qwen1.5-32B).
## Prompting
The model has been instruct-tuned using ChatML formatting. A typical input looks like this:
```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
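Rather than assembling this string by hand, the tokenizer's built-in chat template can render it for you. Below is a minimal sketch using 🤗 Transformers, assuming the tokenizer ships a ChatML chat template (as Qwen-family tokenizers typically do); `REPO_ID` is a placeholder for this model's actual repository id, and the sampling settings are illustrative only:
```py
# Minimal inference sketch; REPO_ID is a hypothetical placeholder,
# substitute the actual Hugging Face repository id of this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "REPO_ID"

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# apply_chat_template renders the ChatML structure shown above and, with
# add_generation_prompt=True, appends "<|im_start|>assistant\n" so the
# model continues the conversation as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```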
## Credits
- Stheno dataset (filtered)
- [NobodyExistsOnTheInternet/claude_3.5s_single_turn_unslop_filtered](https://huggingface.co./datasets/NobodyExistsOnTheInternet/claude_3.5s_single_turn_unslop_filtered)
- [NobodyExistsOnTheInternet/PhiloGlanSharegpt](https://huggingface.co./datasets/NobodyExistsOnTheInternet/PhiloGlanSharegpt)
- [NobodyExistsOnTheInternet/Magpie-Reasoning-Medium-Subset](https://huggingface.co./datasets/NobodyExistsOnTheInternet/Magpie-Reasoning-Medium-Subset)
- [kalomaze/Opus_Instruct_25k](https://huggingface.co./datasets/kalomaze/Opus_Instruct_25k)
- [Nopm/Opus_WritingStruct](https://huggingface.co./datasets/Nopm/Opus_WritingStruct)
- [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co./datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) (a ~16k-row subset)
This model has been a team effort, and credit goes to all members of Anthracite.
## Training
Training ran for 2 epochs on 8x [NVIDIA H100 Tensor Core](https://www.nvidia.com/en-us/data-center/h100/) GPUs, performing a full-parameter fine-tune of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...