PrinceAyush committed
Commit • bb877f3
1 Parent(s): 4abf28e

Upload model

Browse files:
- README.md +1 -44
- adapter_config.json +1 -1
- adapter_model.bin +2 -2
README.md
CHANGED
@@ -1,52 +1,9 @@

Before:

---
library_name: peft
language:
- en
metrics:
- accuracy
pipeline_tag: conversational
---

## Training procedure

To finetune the model to understand human instructions and act on them, the following procedure was followed:

Base Model Selection: The LLaMA 7B model was chosen as the base model for finetuning. Pre-trained on a vast amount of data, it captures a wide range of language patterns, understands context, and provides a solid foundation for instruction tuning.

Data Preparation: A dataset of human instructions paired with the desired responses was collected, then carefully curated and annotated to ensure high-quality training examples. The instructions cover a variety of domains and scenarios to improve the model's versatility.
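To make this concrete, here is a minimal sketch of what one training record and its rendered prompt might look like. The `instruction`/`response` field names and the Alpaca-style template are assumptions inferred from the prompt format used in the How to Run section below, not the published schema:

```python
# Hypothetical example record; the actual dataset schema is not published.
example = {
    "instruction": "Summarize the following sentence in five words.",
    "response": "A concise five-word summary.",
}

# Assumed Alpaca-style template, mirroring the inference prompt further below.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n{response}"
)

def to_training_text(record):
    """Render one instruction/response pair into a single training string."""
    return TEMPLATE.format(**record)

print(to_training_text(example))
```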

Finetuning Process: The base model was finetuned on the collected dataset so that it could learn the specific task of understanding human instructions and generating appropriate responses. Exposure to task-specific data allowed the model to adapt its pre-existing knowledge to this domain.
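Since this repository ships a PEFT adapter (the `fan_in_fan_out` and `bias` fields in `adapter_config.json` below point to a LoRA adapter), the finetuning was presumably parameter-efficient. A minimal sketch of wrapping the base model with LoRA follows; every hyperparameter value here is an illustrative assumption, not the configuration actually used:

```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("llama-model-7b")  # assumed local path to the base weights

# Illustrative LoRA settings; the values used for this adapter are not documented.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```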

Hardware Configuration: Training ran on a high-performance computing setup equipped with a 40 GB NVIDIA A100 GPU, which efficiently handled the computations required by the neural network.

Training Duration: The model was trained for a total of 2 hours, a duration chosen to balance the available computational resources, the dataset size, and the complexity of the task. Longer training might yield incremental improvements, but 2 hours struck a reasonable balance between training cost and model performance.

Optimization Techniques: Training used standard optimization techniques, including backpropagation, gradient descent, and adaptive learning-rate algorithms, which let the model iteratively adjust its parameters and minimize the loss function, improving its ability to understand human instructions and generate appropriate responses.
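As a generic illustration of those techniques, a single PyTorch update step might look like the following. The optimizer choice, learning rate, and schedule are assumptions, since the model card does not specify them; `model` refers to the finetuned model, e.g. the PEFT-wrapped one sketched above:

```python
import torch

# AdamW is a common adaptive-learning-rate optimizer for LLM finetuning (assumed here).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

def training_step(batch):
    """One step: forward pass, backpropagation, gradient-descent update."""
    outputs = model(input_ids=batch["input_ids"], labels=batch["labels"])
    loss = outputs.loss        # causal-LM cross-entropy loss
    loss.backward()            # backpropagation computes the gradients
    optimizer.step()           # gradient-descent parameter update
    scheduler.step()           # adapt the learning rate over time
    optimizer.zero_grad()
    return loss.item()
```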

Evaluation and Iteration: The model was evaluated periodically throughout training, using metrics such as accuracy, precision, and recall to gauge its instruction understanding and response generation. Further iterations and adjustments were made based on the results.
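The card does not say how these metrics were computed; for label-style judgments (e.g. whether a generated response matches the reference), a typical computation would look roughly like this, with scikit-learn used purely as an assumed convenience:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical binary judgments: 1 if the generated response was acceptable, else 0.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```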

By following this procedure, the LLaMA 7B base model was successfully finetuned to understand human instructions and act accordingly. The 40 GB A100 GPU and the 2-hour training budget kept the process efficient while balancing computational cost against model performance.

## How to Run:

```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

tokenizer = LlamaTokenizer.from_pretrained("llama-model-7b")  # change to the location of the base weights
model = LlamaForCausalLM.from_pretrained("llama-model-7b", device_map="auto")
model = PeftModel.from_pretrained(model, "PrinceAyush/Bharat_GPT")

prompt = "Write a poem on sweet carrot"
text = """Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Instruction:\n{}\n### Response:""".format(prompt)

inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_config = GenerationConfig(temperature=0.6, top_p=0.95, repetition_penalty=1.15)

print("Generating...")
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=128,
)
for s in generation_output.sequences:
    print(tokenizer.decode(s))
```
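One practical note on the snippet above: `generation_output.sequences` contains the prompt tokens followed by the newly generated ones, so to print only the model's continuation you can decode `s[input_ids.shape[1]:]` instead of the full sequence.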
### Framework versions

- PEFT 0.4.0

After:

---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0
adapter_config.json
CHANGED
@@ -1,6 +1,6 @@

Before:
{
  "auto_mapping": null,
  "base_model_name_or_path":
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,

After:
{
  "auto_mapping": null,
  "base_model_name_or_path": null,
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
adapter_model.bin
CHANGED
@@ -1,3 +1,3 @@

Before:
version https://git-lfs.github.com/spec/v1
oid sha256:
size

After:
version https://git-lfs.github.com/spec/v1
oid sha256:6d94f4ad41b996ef016208017c07650509ef9811bb4fec295aec7fa0a0bc6143
size 8434765