Daemontatox committed on
Commit 0c457bc · verified · 1 Parent(s): 2a88104

Update README.md

Files changed (1):
  1. README.md +72 -7
README.md CHANGED
@@ -4,19 +4,84 @@ language:
  - en
  license: apache-2.0
  tags:
- - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  ---

- # Uploaded model

- - **Developed by:** Daemontatox
- - **License:** apache-2.0
- - **Finetuned from model:** HuggingFaceTB/SmolLM2-1.7B-Instruct

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  - en
  license: apache-2.0
  tags:
+ - text-generation
+ - instruction-following
  - transformers
  - unsloth
  - llama
  - trl
  ---
+ ![image](./image.webp)
+ # SmolLM2-1.7B-Instruct

+ **Developed by:** Daemontatox
+
+ **Model Type:** Fine-tuned Language Model (LLM)
+
+ **Base Model (finetuned from):** [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
+
+ **License:** apache-2.0
+
+ **Languages:** en
+
+ **Tags:**
+ - text-generation
+ - instruction-following
+ - transformers
+ - unsloth
+ - llama
+ - trl
+
+ ## Model Description
+
+ SmolLM2-1.7B-Instruct is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct), optimized for general-purpose instruction-following tasks. This model combines the efficiency of the LLaMA architecture with fine-tuning techniques to enhance performance in:
+
+ - Instruction adherence and task-specific prompts.
+ - Creative and coherent text generation.
+ - General-purpose reasoning and conversational AI.
+
+ The fine-tuning process used [Unsloth](https://github.com/unslothai/unsloth) and the Hugging Face TRL library, achieving roughly 2x faster training compared to traditional methods. This efficiency allows for resource-conscious model updates while retaining high-quality performance.
+
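Unsloth's speedups are usually paired with parameter-efficient fine-tuning such as LoRA; the card does not state the exact recipe, so treat this as an illustrative assumption. A minimal, model-free sketch of why low-rank adapters make updates so much cheaper than full fine-tuning:

```python
# Hypothetical illustration of LoRA-style parameter savings (the card does not
# state the exact fine-tuning recipe; LoRA is a common choice with Unsloth).
# Updating a full d x d weight matrix trains d*d values; a rank-r adapter
# W + B @ A trains only d*r + r*d values.

def full_update_params(d: int) -> int:
    # Trainable values when updating the whole d x d matrix.
    return d * d

def lora_update_params(d: int, r: int) -> int:
    # Trainable values for low-rank factors B (d x r) and A (r x d).
    return d * r + r * d

d, r = 2048, 16  # illustrative hidden size and adapter rank
full = full_update_params(d)
lora = lora_update_params(d, r)
print(full, lora, full // lora)  # rank-16 adapter trains 64x fewer values here
```

The ratio scales with `d / (2r)`, which is why adapter training fits on far smaller GPUs than full fine-tuning.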
+ ## Intended Uses
+
+ SmolLM2-1.7B-Instruct is designed for:
+
+ - Generating high-quality text for a variety of applications, such as content creation and storytelling.
+ - Following complex instructions across different domains.
+ - Supporting research and educational use cases.
+ - Serving as a lightweight option for conversational agents.
+
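For conversational use, the prompt format matters: instruct models expect messages wrapped in their chat template. The sketch below assumes a ChatML-style template (common among small instruct models) purely for illustration; in practice, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` so the tokenizer's own template is used.

```python
# Sketch of building an instruction prompt. The exact template is defined by
# the tokenizer's chat template; a ChatML-style format is assumed here only
# for illustration.

def build_chatml_prompt(messages: list[dict]) -> str:
    # Each message is {"role": ..., "content": ...}; the trailing assistant
    # header cues the model to generate its reply.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize photosynthesis in one sentence."},
]
print(build_chatml_prompt(messages))
```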
+ ## Limitations
+
+ While the model performs well on instruction-following tasks, it has certain limitations:
+
+ - It may exhibit biases inherent in the training data.
+ - Robustness is limited in highly technical or specialized domains.
+ - Performance may degrade on overly complex or ambiguous prompts.
+
+ ## How to Use
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "daemontatox/smollm2-1.7b-instruct"  # Replace with the actual model name
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # Example usage
+ prompt = "Explain the importance of biodiversity in simple terms:"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=256)  # cap the response length
+ generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(generated_text)
+ ```
+
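By default, `generate` decodes greedily; for more varied output you would pass sampling options such as `do_sample=True, temperature=..., top_p=...`. To make those knobs concrete without loading the model, here is a self-contained sketch (toy logits, pure Python) of what temperature scaling and top-p (nucleus) filtering do to the next-token distribution:

```python
import math

# Illustration of temperature + top-p (nucleus) filtering, the sampling knobs
# typically passed to model.generate(do_sample=True, temperature=..., top_p=...).

def softmax(logits, temperature=1.0):
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    # Keep the smallest set of tokens whose cumulative probability reaches
    # top_p, zero out the rest, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

logits = [2.0, 1.0, 0.2, -1.0]  # toy next-token logits
probs = softmax(logits, temperature=0.7)
filtered = top_p_filter(probs, top_p=0.9)
print([round(p, 3) for p in filtered])  # low-probability tail is zeroed out
```

With these toy numbers, only the top two tokens survive the 0.9 nucleus; the rest get probability zero, which is why top-p trims incoherent long-tail continuations.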
+ ## Acknowledgements
+
+ Special thanks to the Unsloth team for their tools enabling efficient fine-tuning. The model was developed with the help of open-source libraries and community resources.
+
+ [![Unsloth Logo](https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png)](https://github.com/unslothai/unsloth)