Triangle104 committed
Commit 4593e39 · verified · 1 Parent(s): 55ad551

Update README.md

Files changed (1): README.md (+99 −0)

README.md CHANGED
@@ -16,6 +16,105 @@ tags:
 This model was converted to GGUF format from [`prithivMLmods/Bellatrix-Tiny-3B-R1`](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-3B-R1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-3B-R1) for more details on the model.

+ ---
+ Bellatrix is a reasoning-based model designed around DeepSeek-R1 synthetic dataset entries. The pipeline's instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and outperform many of the available open-source options. Bellatrix is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions utilize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF).
+
+ ## Use with transformers
+
+ Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
+
+ Make sure to update your transformers installation via:
+
+ ```bash
+ pip install --upgrade transformers
+ ```
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_id = "prithivMLmods/Bellatrix-Tiny-3B-R1"
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ outputs = pipe(
+     messages,
+     max_new_tokens=256,
+ )
+ print(outputs[0]["generated_text"][-1])
+ ```
+
+ Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generation, quantization, and more at [huggingface-llama-recipes](https://github.com/huggingface/huggingface-llama-recipes).
+
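+ The pipeline example above covers the first route; here is a minimal sketch of the second, generate() via the Auto classes. It assumes the repo's tokenizer ships a chat template, as the pipeline example implies:
+
+ ```python
+ # Sketch of the generate() route via the Auto classes; assumes the
+ # tokenizer provides a chat template, as the pipeline example implies.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "prithivMLmods/Bellatrix-Tiny-3B-R1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ messages = [{"role": "user", "content": "Who are you?"}]
+ # Build the prompt from the chat template and move it to the model's device.
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(inputs, max_new_tokens=256)
+ # Decode only the newly generated tokens.
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+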
+ ## Intended Use
+
+ Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for:
+
+ - Agentic Retrieval: Enabling intelligent retrieval of relevant information in a dialogue or query-response system.
+ - Summarization Tasks: Condensing large bodies of text into concise summaries for easier comprehension (see the sketch after this list).
+ - Multilingual Use Cases: Supporting conversations in multiple languages with high accuracy and coherence.
+ - Instruction-Based Applications: Following complex, context-aware instructions to generate precise outputs in a variety of scenarios.
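+
+ As an illustration of the summarization use case, the same pipe object from the transformers example above can be reused with a summarization-style prompt; the prompt wording here is illustrative, not a prescribed format:
+
+ ```python
+ # Illustrative only: reuses the `pipe` object defined in the example above.
+ article = "GGUF is a binary file format used by llama.cpp to store models for fast local inference."
+ messages = [
+     {"role": "system", "content": "You summarize text in two or three sentences."},
+     {"role": "user", "content": f"Summarize the following text:\n\n{article}"},
+ ]
+ outputs = pipe(messages, max_new_tokens=128)
+ print(outputs[0]["generated_text"][-1])
+ ```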
+
+ ## Limitations
+
+ Despite its capabilities, Bellatrix has some limitations:
+
+ - Domain Specificity: While it performs well on general tasks, its performance may degrade with highly specialized or niche datasets.
+ - Dependence on Training Data: It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies.
+ - Computational Resources: The model’s optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference.
+ - Language Coverage: While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones.
+ - Real-World Contexts: It may struggle with understanding nuanced or ambiguous real-world scenarios not covered during training.
+
+ ---

 ## Use with llama.cpp

 Install llama.cpp through brew (works on Mac and Linux):
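
 A sketch of the install step and a typical invocation, assuming Homebrew is available; the repo id and quant filename in the second command are placeholders, since they are not spelled out here:

 ```bash
 # Install llama.cpp via Homebrew.
 brew install llama.cpp

 # Chat with the GGUF straight from the Hub; substitute the actual repo id
 # and .gguf filename for the placeholders.
 llama-cli --hf-repo <this-repo-id> --hf-file <quant-file>.gguf -p "Who are you?"
 ```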