Triangle104 committed · verified
Commit 850a464 · Parent(s): 0c1021f

Update README.md

Files changed (1): README.md (+49, -0)
README.md CHANGED
@@ -16,6 +16,55 @@ tags:
  This model was converted to GGUF format from [`prithivMLmods/Bellatrix-Tiny-1.5B-R1`](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1.5B-R1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1.5B-R1) for more details on the model.
  
+ ---
+ Bellatrix is a reasoning-based model designed around DeepSeek-R1 synthetic dataset entries. The pipeline's instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and outperform many of the available open-source options. Bellatrix is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF).
+ 
+ ## Use with transformers
+ 
+ Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
+ 
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.
+ 
+ ```python
+ import torch
+ from transformers import pipeline
+ 
+ model_id = "prithivMLmods/Bellatrix-Tiny-1.5B-R1"
+ 
+ # Load the model in bfloat16 and let accelerate place it on the available device(s)
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ 
+ # Chat-style input: a system prompt plus a user turn
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ 
+ outputs = pipe(
+     messages,
+     max_new_tokens=256,
+ )
+ 
+ # The last entry in generated_text is the assistant's reply
+ print(outputs[0]["generated_text"][-1])
+ ```
+ 
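The paragraph above also mentions the second path, the Auto classes with generate(), which the card itself does not demonstrate. A minimal sketch of that path, assuming standard transformers chat-template usage (the prompt formatting and decoding details here are illustrative, not from the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Bellatrix-Tiny-1.5B-R1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat template and tokenize in one step
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```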
+ Note: You can also find detailed recipes for using the model locally, with torch.compile(), assisted generation, quantization, and more in [huggingface-llama-recipes](https://github.com/huggingface/huggingface-llama-recipes).
+ 
+ ## Intended Use
+ 
+ Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for:
+ 
+ - **Agentic Retrieval:** Enabling intelligent retrieval of relevant information in a dialogue or query-response system.
+ - **Summarization Tasks:** Condensing large bodies of text into concise summaries for easier comprehension.
+ - **Multilingual Use Cases:** Supporting conversations in multiple languages with high accuracy and coherence.
+ - **Instruction-Based Applications:** Following complex, context-aware instructions to generate precise outputs in a variety of scenarios.
+ 
+ ## Limitations
+ 
+ Despite its capabilities, Bellatrix has some limitations:
+ 
+ - **Domain Specificity:** While it performs well on general tasks, its performance may degrade on highly specialized or niche datasets.
+ - **Dependence on Training Data:** It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies.
+ - **Computational Resources:** The model's optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference.
+ - **Language Coverage:** While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones.
+ - **Real-World Contexts:** It may struggle with nuanced or ambiguous real-world scenarios not covered during training.
+ 
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
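The diff context cuts off here. A minimal sketch of the brew install step described above, followed by a hypothetical llama-cli invocation (the GGUF repo and file names are placeholders, not taken from this diff):

```bash
# Install llama.cpp via Homebrew (Mac and Linux)
brew install llama.cpp

# Hypothetical invocation: the repo and .gguf file names below are placeholders
llama-cli --hf-repo Triangle104/Bellatrix-Tiny-1.5B-R1-GGUF \
  --hf-file bellatrix-tiny-1.5b-r1-q4_k_m.gguf \
  -p "Who are you?"
```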