---
base_model: unsloth/qwen2.5-14b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- qingy2024/QwQ-LongCoT-Verified-130K
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/QwQ-14B-Math-v0.2-GGUF

This is a quantized version of [qingy2024/QwQ-14B-Math-v0.2](https://huggingface.co/qingy2024/QwQ-14B-Math-v0.2), created using llama.cpp.
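The GGUF files work with any llama.cpp-based runtime. As a minimal sketch, assuming the `llama-cpp-python` bindings are installed, you might load a quant like this (the `Q4_K_M` filename glob is an assumption; substitute whichever quant file you actually download):

```python
# Minimal sketch: run a GGUF quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/QwQ-14B-Math-v0.2-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical choice of quant level
    n_ctx=4096,               # context length; raise it if memory allows
)

# create_chat_completion applies the model's chat template (ChatML) for us.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve x^2 - 5x + 6 = 0 step by step."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```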
# Original Model Card
# Uploaded model
- **Developed by:** qingy2024
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-bnb-4bit
This model is a fine-tuned version of **Qwen 2.5-14B**, trained on QwQ 32B Preview's responses to questions from the **NuminaMathCoT** dataset.
**Note:** This model uses the standard ChatML template.
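Most runtimes apply this template automatically; if yours does not, here is a minimal sketch of assembling a ChatML prompt by hand (the system message is an assumption):

```python
# Minimal sketch of the ChatML format this model expects.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

print(chatml_prompt("You are a careful math tutor.",  # assumed system message
                    "Prove that the sum of two even integers is even."))
```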
At 500 steps, the loss was plateauing, so I stopped training to prevent overfitting.
---
#### Training Details
- **Base Model**: Qwen 2.5-14B
- **Fine-Tuning Dataset**: A verified subset of **NuminaMathCoT**, filtered with Qwen 2.5 3B Instruct as a judge (the `sharegpt-verified-cleaned` subset of my dataset).
- **QLoRA Configuration** (see the sketch after this list):
  - **Rank**: 32
  - **Rank Stabilization**: Enabled
- **Optimization Settings**:
  - Batch Size: 8
  - Gradient Accumulation Steps: 2 (Effective Batch Size: 16)
  - Warm-Up Steps: 5
  - Weight Decay: 0.01
- **Training Steps**: 500
- **Hardware**: A100-80GB
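As a rough illustration (not the author's actual script), the settings above could be wired up in Unsloth + TRL along these lines. The learning rate, sequence length, LoRA alpha, and the omitted ShareGPT formatting step are assumptions; rank, rank stabilization, batch size, accumulation, warm-up, weight decay, and step count come from the list above. Note that older TRL versions take `tokenizer=` where newer ones use `processing_class=`.

```python
# Hedged sketch of a QLoRA run with the hyperparameters listed above.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-14b-bnb-4bit",
    max_seq_length=4096,   # assumption
    load_in_4bit=True,     # QLoRA: 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,                  # rank from the card
    lora_alpha=32,         # assumption
    use_rslora=True,       # rank stabilization enabled
)

# Verified subset named in the card.
dataset = load_dataset(
    "qingy2024/QwQ-LongCoT-Verified-130K",
    "sharegpt-verified-cleaned",
    split="train",
)
# A formatting step mapping the ShareGPT-style records to ChatML text
# is omitted here for brevity.

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,   # processing_class= in newer TRL versions
    train_dataset=dataset,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=2,   # effective batch size 16
        warmup_steps=5,
        weight_decay=0.01,
        max_steps=500,
        learning_rate=2e-4,              # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```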
---
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)