Shankar Jayaratnam committed
Commit 4ecbd9d · verified · 1 Parent(s): 936e522

Update README.md

Files changed (1)
1. README.md +75 -29
README.md CHANGED
@@ -1,12 +1,19 @@
  ---
  library_name: transformers
- tags: []
  ---

  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details
@@ -15,23 +22,20 @@ tags: []

  <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses
@@ -41,7 +45,10 @@ This is the model card of a 🤗 transformers model that has been pushed on the

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

  ### Downstream Use [optional]
@@ -53,25 +60,40 @@

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

  ### Recommendations

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

  Use the code below to get started with the model.

- [More Information Needed]

  ## Training Details
@@ -79,7 +101,8 @@ Use the code below to get started with the model.

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

  ### Training Procedure
@@ -87,8 +110,10 @@ Use the code below to get started with the model.

  #### Preprocessing [optional]

- [More Information Needed]

  #### Training Hyperparameters
@@ -110,27 +135,46 @@ Use the code below to get started with the model.

  <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

  #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

  ### Results

- [More Information Needed]

  #### Summary

  ## Model Examination [optional]
@@ -154,7 +198,9 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ### Model Architecture and Objective

- [More Information Needed]

  ### Compute Infrastructure
@@ -162,11 +208,11 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  #### Hardware

- [More Information Needed]

  #### Software

- [More Information Needed]

  ## Citation [optional]
@@ -192,8 +238,8 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
+ license: mit
+ language:
+ - en
+ metrics:
+ - accuracy
+ base_model:
+ - mistralai/Mistral-7B-Instruct-v0.3
+ pipeline_tag: zero-shot-classification
  ---

  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->
+ The Mistral 7B - Cause Analyzer is a fine-tuned large language model for analyzing server logs, categorizing errors, and providing debugging solutions. It is optimized for predictive maintenance tasks and can be integrated into tools like Splunk or Grafana for real-time operational insights.

  ## Model Details

  ### Model Description

  <!-- Provide a longer summary of what this model is. -->

+ This model was fine-tuned on real-world and synthetic log data from Esperanto servers using the LoRA technique. It excels at automating error categorization and debugging recommendations, reducing manual intervention and improving server health monitoring.
+
+ - **Developed by:** Sivakrishna Yaganti, Shankar Jayaratnam
+ - **Funded by [optional]:** Esperanto Technologies
+ - **Shared by [optional]:** Sivakrishna Yaganti
+ - **Model type:** Causal language model
+ - **Finetuned from model [optional]:** Mistral 7B (mistralai/Mistral-7B-Instruct-v0.3)

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

+ - **Repository:** https://huggingface.co/Esperanto/Mistral-7B-CauseAnalyzer

  ## Uses

  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ The model can be used to analyze server logs for error categorization and debugging without additional fine-tuning. It is suitable for:
+ 1. Identifying patterns in server logs.
+ 2. Automating the process of error categorization.
+ 3. Generating debugging recommendations.

  ### Downstream Use [optional]

  ### Out-of-Scope Use

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+ 1. The model is not intended for general text generation tasks unrelated to server log analysis.
+ 2. It may not perform well on logs from domains significantly different from the training data.

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ ### Bias
+ 1. The model's performance is optimized for logs similar to those in the training data; logs with substantially different formats or languages may yield suboptimal results.
+
+ ### Risks
+ 1. Over-reliance on model predictions without validation could lead to incorrect debugging actions.
+ 2. The model may fail to identify new or rare errors that were not part of the training data.
+
+ ### Limitations
+ 1. The model assumes logs are in English.
+ 2. It may struggle with incomplete or highly noisy log data.

  ### Recommendations

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ 1. Validate predictions with domain experts, especially in critical systems.
+ 2. Use the model alongside traditional debugging methods to ensure accuracy.

  ## How to Get Started with the Model

  Use the code below to get started with the model.

+ ### Load the model and tokenizer
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "Esperanto/Mistral-7B-CauseAnalyzer"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+ ```
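+ A minimal usage sketch follows. The prompt wording is an assumption for illustration; the card does not document the exact prompt template used during fine-tuning:
+ ```python
+ # Hypothetical log line for illustration; replace with a real server log entry.
+ log = "kernel: ERROR: failed to allocate DMA buffer on device 0"
+ prompt = (
+     "Analyze the following server log. Identify the error category, "
+     f"root cause, and a debugging solution.\n\nLog: {log}"
+ )
+
+ # Tokenize, generate, and decode the model's analysis.
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```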
 
  ## Training Details

  ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ - **Source:** Real-world logs from Esperanto servers, augmented with synthetic logs generated using GPT-4.
+ - **Size:** ~170 labeled samples after data augmentation.
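+ For illustration, a hedged sketch of how synthetic logs could be generated with GPT-4; the actual augmentation prompts and pipeline are not documented in this card:
+ ```python
+ # Illustrative augmentation sketch (assumed prompt; not the documented pipeline).
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+ seed_log = "systemd[1]: docker.service: Failed with result 'exit-code'."  # hypothetical seed
+
+ response = client.chat.completions.create(
+     model="gpt-4",
+     messages=[{
+         "role": "user",
+         "content": "Generate 5 realistic server error logs similar to the one below. "
+                    "Label each with error type, root cause, and a debugging solution.\n\n"
+                    + seed_log,
+     }],
+ )
+ print(response.choices[0].message.content)
+ ```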
 
  ### Training Procedure

  #### Preprocessing [optional]

+ 1. Logs were structured into fields for error type, root cause, and debugging solution.
+ 2. Missing labels were generated using GPT-4 and verified manually.
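+ As a sketch, one labeled sample might be structured like this; the field names are assumptions, since the card specifies only that logs were structured into error type, root cause, and debugging solution:
+ ```python
+ # Illustrative layout of one labeled training sample (field names assumed).
+ sample = {
+     "log": "mce: [Hardware Error]: CPU 3: Machine Check: 0 Bank 5",
+     "error_type": "Hardware error",
+     "root_cause": "Machine-check exception reported by CPU 3, bank 5.",
+     "debugging_solution": "Inspect `mcelog` output and check DIMM/CPU health.",
+ }
+ ```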
 
+ - **Fine-tuning method:** LoRA (Low-Rank Adaptation)
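+ A minimal LoRA setup sketch using the 🤗 PEFT library; the rank, alpha, dropout, and target modules below are assumptions, since the card does not list the adapter hyperparameters:
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
+ config = LoraConfig(
+     r=16,                                 # assumed adapter rank
+     lora_alpha=32,                        # assumed scaling factor
+     lora_dropout=0.05,                    # assumed dropout
+     target_modules=["q_proj", "v_proj"],  # typical attention projections for Mistral
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, config)
+ model.print_trainable_parameters()  # only the LoRA adapters are trainable
+ ```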
 
  #### Training Hyperparameters

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data

  <!-- This should link to a Dataset Card if possible. -->

+ *Validation set:* 10% of labeled data.

  #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ Model performance was evaluated on:
+ 1. Error categorization accuracy.
+ 2. Cause similarity score: cosine similarity between predicted and ground-truth causes.

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ | Metric | Baseline Mistral 7B | Mistral-7B-CauseAnalyzer |
+ |---|---|---|
+ | Cause similarity score | 51.91 | 67.15 |
+ | Error categorization accuracy | 46.23% | 70% |
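+ For concreteness, a sketch of how a cause similarity score can be computed as cosine similarity between embedded cause strings; the embedding model here is an assumption, since the card does not name the one used:
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
+ predicted = "Failing NVMe drive producing disk I/O errors"
+ reference = "Disk I/O errors caused by a degraded NVMe device"
+
+ # Embed both causes and take the cosine similarity, scaled to 0-100 to match the table.
+ emb = embedder.encode([predicted, reference], convert_to_tensor=True)
+ score = util.cos_sim(emb[0], emb[1]).item() * 100
+ print(f"Cause similarity score: {score:.2f}")
+ ```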
 
  ### Results

+ #### Training and Validation Loss
+ 1. Training loss decreased steadily from ~1.0 to 0.38, as shown in the train/loss graph below.
+ 2. Evaluation loss decreased from 0.6 to 0.3, indicating effective generalization.
+
+ ![Training loss curve](https://cdn-uploads.huggingface.co/production/uploads/6659207a17951b5bd11a91fa/rh3jaw1F-IoIi7KGrgJc8.png)
+
+ ![Evaluation loss curve](https://cdn-uploads.huggingface.co/production/uploads/6659207a17951b5bd11a91fa/YIym3FFHDhVNJWie_vKSS.png)

  #### Summary

+ The fine-tuned Mistral 7B - Cause Analyzer significantly outperforms the baseline model, achieving:
+ 1. A 67.15 similarity score for cause prediction.
+ 2. 70% accuracy in error categorization.
+
+ These results highlight the model's robustness in predictive maintenance tasks and its potential for real-world integration into server health monitoring systems. *Fine-tuning on more data would likely improve results further.*

  ## Model Examination [optional]

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

+ - **Architecture:** Mistral 7B causal language model.
+ - **Objective:** Fine-tuned for error categorization and debugging solutions in server logs.

  ### Compute Infrastructure

  #### Hardware

+ NVIDIA A100 GPUs and Esperanto accelerators.

  #### Software

+ Hugging Face Transformers library.

  ## Citation [optional]

  ## Model Card Authors [optional]

+ Sivakrishna Yaganti, Shankar Jayaratnam

  ## Model Card Contact
