|
--- |
|
license: other |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
inference: false |
|
tags: |
|
- transformers |
|
- gguf |
|
- imatrix |
|
- QwQ-LCoT-7B-Instruct |
|
--- |
|
Quantizations of https://huggingface.co./prithivMLmods/QwQ-LCoT-7B-Instruct |
|
|
|
### Inference Clients/UIs |
|
* [llama.cpp](https://github.com/ggerganov/llama.cpp) |
|
* [KoboldCPP](https://github.com/LostRuins/koboldcpp) |
|
* [ollama](https://github.com/ollama/ollama) |
|
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui) |
|
* [jan](https://github.com/janhq/jan) |
|
* [GPT4All](https://github.com/nomic-ai/gpt4all) |
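As a quick sketch, any of the quantized `.gguf` files from this repo can be run locally with llama.cpp's `llama-cli`. The filename below is hypothetical; substitute the quantization level you actually downloaded:

```shell
# Hypothetical filename -- use the .gguf quantization you downloaded from this repo
./llama-cli -m QwQ-LCoT-7B-Instruct.Q4_K_M.gguf \
    -p "Explain chain-of-thought reasoning in one paragraph." \
    -n 256 \
    --temp 0.7
```

Lower-bit quantizations (e.g. Q4) reduce memory use at some cost in output quality; larger ones (Q6, Q8) track the original weights more closely.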
|
--- |
|
|
|
# From original readme |
|
|
|
**QwQ-LCoT-7B-Instruct** is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It is built on the **Qwen2.5-7B** base model and fine-tuned on the **amphora/QwQ-LongCoT-130K** dataset, with a focus on chain-of-thought (CoT) reasoning.
|
|
|
### **Training Dataset:** |
|
- **Dataset Name:** [amphora/QwQ-LongCoT-130K](https://huggingface.co./datasets/amphora/QwQ-LongCoT-130K) |
|
- **Size:** 133k examples. |
|
- **Focus:** Chain-of-Thought reasoning for complex tasks. |
|
|
|
--- |
|
|
|
### **Use Cases:** |
|
1. **Instruction Following:** |
|
Handle user instructions effectively, even for multi-step tasks. |
|
|
|
2. **Reasoning Tasks:** |
|
Perform logical reasoning and generate detailed step-by-step solutions. |
|
|
|
3. **Text Generation:** |
|
Generate coherent, context-aware responses. |
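When prompting the model directly (e.g. through llama.cpp without a chat template), the request must be wrapped in the model's conversation format. Qwen2.5-family models typically use ChatML-style tags; this is an assumption based on the base model family, not something stated in this card, so verify against the tokenizer's chat template before relying on it:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt.

    Assumption: Qwen2.5-derived models use <|im_start|>/<|im_end|>
    ChatML tags; confirm with the tokenizer's chat template.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant that reasons step by step.",
    "A train travels 120 km in 2 hours. What is its average speed?",
)
```

Most of the clients listed above apply this template automatically when the GGUF file's metadata includes one.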
|
--- |