---
license: apache-2.0
language:
- en
datasets:
- iamtarun/python_code_instructions_18k_alpaca
---

# Model Description

This model is fine-tuned from **microsoft/phi-4** to improve its coding capabilities, particularly in Python: it was fine-tuned on a dataset of 18,000 Python samples formatted with Alpaca prompt instructions.

Please refer to this repository when using the model.
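
The Alpaca prompt format mentioned above can be exercised in plain Python before any model is loaded. This is a minimal sketch: the template text is the one used in the inference example in this card, while the sample instruction ("Write a function that reverses a string.") is purely hypothetical.

```python
# Alpaca-style prompt template (reproduced from the inference example in this card)
alpaca_prompt = """Below is an instruction describing a task, along with an input providing additional context. Your task is to generate a clear, concise, and accurate Python code response that fulfills the given request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# Fill the instruction slot; leave input and response empty for generation
prompt = alpaca_prompt.format("Write a function that reverses a string.", "", "")
print(prompt)
```

The response slot is left empty so the model continues the text after `### Response:`.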

## Inference with the LoRA adapters

To perform inference using these LoRA adapters, use the following code:

````Python
# Installs Unsloth, Xformers (Flash Attention) and all other required packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes
````

````Python
from unsloth import FastLanguageModel
from transformers import TextStreamer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "MouezYazidi/Py-phi-4-coder_LoRA",
    max_seq_length = 2048,
    dtype = None,        # None = auto-detect
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

alpaca_prompt = """Below is an instruction describing a task, along with an input providing additional context. Your task is to generate a clear, concise, and accurate Python code response that fulfills the given request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            # instruction
            "Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. "
            "You may assume that each input has exactly one solution, and you may not use the same element twice. "
            "You can return the answer in any order. If you use any external libraries, import them first.",
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
````

The output is:

````Markdown
Below is an instruction describing a task, along with an input providing additional context. Your task is to generate a clear, concise, and accurate Python code response that fulfills the given request.

### Instruction:
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input has exactly one solution, and you may not use the same element twice. You can return the answer in any order. If you use any external libraries, import them first.

### Input:


### Response:
def twoSum(nums, target):
    # Create a dictionary to store the numbers and their indices
    num_dict = {}
    for i, num in enumerate(nums):
        # Check if the complement of the current number is in the dictionary
        if target - num in num_dict:
            return [num_dict[target - num], i]
        # Add the current number to the dictionary
        num_dict[num] = i<|im_end|>
````
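
As a quick sanity check, the generated `twoSum` function runs correctly as-is once the trailing `<|im_end|>` token is stripped:

```python
def twoSum(nums, target):
    # Map each number seen so far to its index
    num_dict = {}
    for i, num in enumerate(nums):
        # If the complement was seen earlier, we have our pair
        if target - num in num_dict:
            return [num_dict[target - num], i]
        num_dict[num] = i

print(twoSum([2, 7, 11, 15], 9))  # → [0, 1]
print(twoSum([3, 2, 4], 6))       # → [1, 2]
```

This is the standard one-pass hash-map solution, running in O(n) time and space.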

# Uploaded model

- **Developed by:** MouezYazidi

This phi-4 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)