---
license: mit
language:
- en
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
---

![2.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/QcOUgFsZBSnVHBcY6GJKU.png)

---

# **Phi-4 o1 [ Responsible Mathematical Problem Solving & Reasoning Capabilities ]**

`Phi-4 o1 [ Responsible Mathematical Problem Solving & Reasoning Capabilities ]` is a state-of-the-art open model fine-tuned for advanced reasoning tasks. It is based on **Microsoft’s Phi-4**, which was built on a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The primary goal is a small, capable model that excels at **responsible reasoning** and **mathematical problem-solving**, backed by high-quality data.

The **Phi-4 o1** model has undergone robust safety post-training using a combination of **SFT (Supervised Fine-Tuning)** and iterative **DPO (Direct Preference Optimization)**. The safety alignment process draws on both publicly available datasets and proprietary synthetic datasets to improve **helpfulness**, **harmlessness**, and **responsible AI usage**.
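
For context, DPO trains on preference pairs: the same prompt paired with a preferred and a dispreferred response. The actual alignment data for this model is not published, so the record below is purely illustrative:

```python
# Hypothetical DPO preference record -- field names and contents are
# illustrative; the model's real alignment data is not published.
preference_record = {
    "prompt": "Solve for x: 3x - 7 = 8.",
    "chosen": (
        "Step 1: Add 7 to both sides: 3x = 15.\n"
        "Step 2: Divide both sides by 3: x = 5."
    ),
    "rejected": "x = 5",  # correct result, but no stepwise reasoning shown
}
```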

---

## **Dataset Info**

Phi-4 o1 is fine-tuned on a synthetic dataset curated through a specially designed pipeline. The dataset follows the **Math IO (Input-Output)** methodology and a step-by-step problem-solving approach, which makes the model highly effective at:

- **Responsible mathematical problem-solving**
- **Logical reasoning**
- **Stepwise breakdowns of complex tasks**

The dataset design focuses on enabling the model to generate detailed, accurate, and logically coherent solutions for mathematical and reasoning-based tasks.
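
To make the Math IO format concrete, here is a sketch of what one input-output training record might look like. The dataset itself is not published, so all field names and contents below are assumptions for illustration:

```python
# Hypothetical Math IO record -- illustrative only; the real dataset
# is not published. Each record maps a problem to a stepwise solution.
math_io_example = {
    "input": "A rectangle has a perimeter of 30 cm and a width of 5 cm. "
             "Find its length.",
    "output": (
        "Step 1: The perimeter of a rectangle is P = 2(l + w).\n"
        "Step 2: Substitute the known values: 30 = 2(l + 5).\n"
        "Step 3: Divide both sides by 2: 15 = l + 5.\n"
        "Step 4: Subtract 5 from both sides: l = 10. The length is 10 cm."
    ),
}
```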

---

## **Run with Transformers**

To use Phi-4 o1 for text-generation tasks, follow the example below:

### Example Usage

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model; device_map="auto" places weights on the
# available GPU(s), falling back to CPU.
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Phi-4-Math-IO")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Phi-4-Math-IO",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Input prompt
input_text = "Solve the equation: 2x + 3 = 11. Provide a stepwise solution."
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate output
outputs = model.generate(**input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
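
Greedy decoding (the default above) is usually the right choice for math, but for more varied phrasing you can enable sampling via the standard `generate` arguments. The values below are generic starting points, not tuned recommendations for this model:

```python
# Sampling instead of greedy decoding; parameter values are illustrative.
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,
    do_sample=True,    # enable stochastic sampling
    temperature=0.7,   # lower = more deterministic
    top_p=0.9,         # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```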

For structured dialogue generation, you can apply the chat template as follows:

```python
# Structured input for chat-style interaction
messages = [
    {"role": "user", "content": "Explain Pythagoras’ theorem with an example."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn so the model replies
    return_tensors="pt",
    return_dict=True,
).to(model.device)

# Generate response
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
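
For interactive use, output can be streamed token by token with the `TextStreamer` utility that ships with `transformers`:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting
# for the full completion.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**input_ids, max_new_tokens=256, streamer=streamer)
```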

---

## **Intended Use**

Phi-4 o1 is designed for a wide range of **reasoning-intensive** and **math-focused** applications. Below are some key use cases:

### 1. **Responsible Mathematical Problem Solving**
- Solving complex mathematical problems with detailed, step-by-step solutions.
- Assisting students, educators, and researchers in understanding advanced mathematical concepts.

### 2. **Reasoning and Logical Problem Solving**
- Breaking down intricate problems in logic, science, and other fields into manageable steps.
- Providing responsible and accurate reasoning capabilities for critical applications.

### 3. **Educational Tools**
- Supporting educational platforms with explanations, tutoring, and Q&A support.
- Generating practice problems and solutions for students.

### 4. **Content Creation**
- Assisting content creators in generating accurate and logical educational content.
- Helping with technical documentation by providing precise explanations.

### 5. **Customer Support**
- Automating responses to technical queries with logical stepwise solutions.
- Providing accurate, responsible, and coherent information for complex questions.

---

## **Limitations**

While Phi-4 o1 is highly capable in reasoning and mathematics, users should be aware of its limitations:

### 1. **Bias and Fairness**
- Despite rigorous training, the model may still exhibit biases from its training data. Users are encouraged to review outputs carefully, especially for sensitive topics.

### 2. **Contextual Understanding**
- The model may sometimes misinterpret ambiguous or complex prompts, leading to incorrect or incomplete responses.

### 3. **Real-Time Knowledge**
- The model’s knowledge is static, reflecting only the data it was trained on. It does not have real-time information about current events or post-training updates.

### 4. **Safety and Harmlessness**
- Although safety-aligned, the model may occasionally generate responses that require human oversight. Regular monitoring is recommended when deploying it in sensitive domains.

### 5. **Resource Requirements**
- Due to its size, running the model efficiently may require high-end computational resources, particularly for large-scale or real-time applications; a quantized-loading sketch follows below.
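
One common way to lower the hardware requirements is 4-bit quantization with `bitsandbytes`. A minimal sketch (assumes `pip install bitsandbytes`; quantization trades a little output quality for a large memory saving):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Load the model with 4-bit weights to reduce GPU memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Phi-4-Math-IO",
    device_map="auto",
    quantization_config=bnb_config,
)
```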

### 6. **Ethical Considerations**
- The model must not be used for malicious purposes, such as generating harmful content, misinformation, or spam. Users are responsible for ensuring ethical use.

### 7. **Domain-Specific Limitations**
- Although effective in general-purpose reasoning and math tasks, the model may require further fine-tuning for highly specialized domains such as medicine, law, or finance.