Strela is a language model designed to deliver fast, high-quality responses on low-end devices.
Strela is recommended for the following purposes:
 * Chatbot for dialogue
 * Story writing
 * Song writing
 * Translation between Russian and English
 * Situations where heavier models are impractical

## Description from Strela itself
I am a computer program designed to process and analyze natural language.
I can understand, analyze, and process natural language, which allows me to communicate with people through various channels. My main goal is to help people solve tasks and provide information in response to their requests.
I can be used for many purposes: from automatic text generation and translation between languages to writing your own verses and songs.
## Using the model online
You can try it out [here](https://huggingface.co./spaces/gai-labs/chat-with-strela-q4_k_m).
## Using the model for in-app chat
We recommend [GPT4All](https://gpt4all.io/index.html); it supports GGUF, so you will need to download the [model in GGUF format](https://huggingface.co./gai-labs/strela-GGUF).
## Using the model for Unity chat
We recommend [LLM for Unity](https://assetstore.unity.com/packages/tools/ai-ml-integration/llm-for-unity-273604); it supports GGUF, so you will need to download the [model in GGUF format](https://huggingface.co./gai-labs/strela-GGUF).
## Using the quantized model for chat in Python | Recommended
First, install the [gpt4all](https://docs.gpt4all.io/gpt4all_python.html) package:
```sh
pip install gpt4all
```
Then, download the [GGUF version of the model](https://huggingface.co./gai-labs/strela-GGUF) and move the file to your script's directory.
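If you prefer to script the download, the `huggingface_hub` library can fetch the file directly (a minimal sketch; assumes `pip install huggingface_hub`, and that the repository file is named `strela-q4_k_m.gguf`, matching the script below):
```py
# Minimal sketch: download the quantized model with huggingface_hub
from huggingface_hub import hf_hub_download

# Fetch strela-q4_k_m.gguf from the GGUF repository into the current directory
path = hf_hub_download(
    repo_id="gai-labs/strela-GGUF",
    filename="strela-q4_k_m.gguf",  # Assumed filename; check the repository's file list
    local_dir=".",
)
print(path)  # Local path of the downloaded file
```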
```py
# Library Imports
import os
from gpt4all import GPT4All

# Initializing the model from the strela-q4_k_m.gguf file in the current directory
model = GPT4All(model_name='strela-q4_k_m.gguf', model_path=os.getcwd())

# Callback to stop generation when Strela emits the '#' symbol,
# which marks the beginning of a role declaration (e.g. '### Human:')
def stop_on_token_callback(token_id, token_string):
    # Returning False tells gpt4all to stop generating
    return '#' not in token_string

# System prompt
system_template = """### System:
You are an AI assistant who gives a helpful response to whatever humans ask of you.
"""

# Human and AI prompt
prompt_template = """
### Human:
{0}
### Assistant:
"""

# Chat session
with model.chat_session(system_template, prompt_template):
    print("To exit, enter 'Exit'")
    while True:
        print('')
        user_input = input(">>> ")
        if user_input.lower() != "exit":

            # Streaming generation
            for token in model.generate(user_input, streaming=True, callback=stop_on_token_callback):
                print(token, end='')
        else:
            break
```
```
To exit, enter 'Exit'

>>> Hello
Hello! How can I help you today?
>>> 
```
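Outside a chat session, `generate` can also be called directly with sampling parameters such as `max_tokens` and `temp` (a minimal sketch; the values shown are illustrative, not tuned for Strela):
```py
# Minimal sketch: one-shot generation with explicit sampling parameters
import os
from gpt4all import GPT4All

model = GPT4All(model_name='strela-q4_k_m.gguf', model_path=os.getcwd())

# max_tokens caps the response length; temp controls randomness
print(model.generate("Write a short poem about the sky.", max_tokens=128, temp=0.7))
```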
## Using the full model for chat in Python
```py
# Library Imports
from transformers import AutoTokenizer, AutoModelForCausalLM

# Loading the model
tokenizer = AutoTokenizer.from_pretrained("gai-labs/strela")
model = AutoModelForCausalLM.from_pretrained("gai-labs/strela")

# System prompt
system_prompt = "You are an AI assistant who gives a helpful response to whatever humans ask of you."

# Your prompt
prompt = "Hello!"

# Chat template
chat = f"""### System:
{system_prompt}
### Human:
{prompt}
### Assistant:
"""

# Generation
model_inputs = tokenizer([chat], return_tensors="pt")
generated_ids = model.generate(**model_inputs, max_new_tokens=64) # Adjust the maximum token count for generation
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Remove the chat template from the generated output
output = output.replace(chat, "")

# Output of the generation results
print(output)
```
```
Hello! How can I help?
```
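If a GPU is available, the model can be loaded in half precision and moved to CUDA for faster generation. A minimal sketch of this variant (assumes a CUDA-capable PyTorch install; `torch_dtype` and `.to("cuda")` are standard transformers/PyTorch usage, not Strela-specific):
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gai-labs/strela")
# Load the weights in float16 to halve memory use, then move the model to the GPU
model = AutoModelForCausalLM.from_pretrained("gai-labs/strela", torch_dtype=torch.float16).to("cuda")

chat = """### System:
You are an AI assistant who gives a helpful response to whatever humans ask of you.
### Human:
Hello!
### Assistant:
"""

# Tokenize and move the inputs to the same device as the model
model_inputs = tokenizer([chat], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=64)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0].replace(chat, "")
print(output)
```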
## Using the model for text generation in Python
```py
# Library Imports
from transformers import AutoTokenizer, AutoModelForCausalLM

# Loading the model
tokenizer = AutoTokenizer.from_pretrained("gai-labs/strela")
model = AutoModelForCausalLM.from_pretrained("gai-labs/strela")

# Prompt
prompt = "AI - "

# Generation
model_inputs = tokenizer([prompt], return_tensors="pt")
generated_ids = model.generate(**model_inputs, max_new_tokens=64) # Adjust the maximum token count for generation
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Output of the generation results
print(output)
```
```
AI - is a field of computer science and technology that deals with creating machines capable of "understanding" humans or performing tasks with logic similar to that of humans.
```
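By default `generate` decodes greedily; enabling sampling produces more varied completions. Continuing the script above, which already defines `model`, `tokenizer`, and `model_inputs` (a minimal sketch; the parameter values are illustrative):
```py
# Sampling instead of greedy decoding
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=64,
    do_sample=True,    # Sample from the token distribution instead of taking the argmax
    temperature=0.7,   # Lower values make the output more deterministic
    top_p=0.9,         # Nucleus sampling: keep the smallest token set with cumulative probability >= 0.9
)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```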