Update README.md
README.md
CHANGED
@@ -2,14 +2,22 @@
 license: mit
 tags:
 - generated_from_trainer
-datasets:
-- squad
 model-index:
 - name: roberta-base-finetuned-squad-v1
   results:
+  - task:
+      type: question-answering # Required. Example: automatic-speech-recognition
+      name: Question Answering # Optional. Example: Speech Recognition
+    dataset:
+      type: squad # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
+      name: SQUAD # Required. A pretty name for the dataset. Example: Common Voice (French)
+    metrics:
+    - type: f1 # Required. Example: wer. Use metric id from https://hf.co/metrics
+      value: 92.296 # Required. Example: 20.90
+    - type: exact_match
+      value: 86.045
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -21,19 +29,9 @@ This model is a fine-tuned version of [roberta-base](https://huggingface.co/robe
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Given a context, the model answers a question by searching the context and extracting the relevant information.
 
-### Training hyperparameters
+## Training hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
@@ -47,13 +45,11 @@ The following hyperparameters were used during training:
 - num_epochs: 3
 - mixed_precision_training: Native AMP
 
-### Training results
+## Training results
 
 - training loss: 0.77257
-- f1 = 92.296
-- exact_match = 86.045
 
-### Framework versions
+## Framework versions
 
 - Transformers 4.27.4
 - Pytorch 1.13.1+cu116