Joserzapata committed on
Commit 0b88ab3
1 Parent(s): f43e131

update model card README.md

Files changed (1)
  1. README.md +45 -1
README.md CHANGED
@@ -4,9 +4,24 @@ tags:
 - generated_from_trainer
 datasets:
 - marsyas/gtzan
+metrics:
+- accuracy
 model-index:
 - name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
-  results: []
+  results:
+  - task:
+      name: Audio Classification
+      type: audio-classification
+    dataset:
+      name: GTZAN
+      type: marsyas/gtzan
+      config: all
+      split: train
+      args: all
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.9
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,6 +30,9 @@ should probably proofread and complete it, then remove this comment. -->
 # ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
 
 This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4717
+- Accuracy: 0.9
 
 ## Model description
 
@@ -44,6 +62,32 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 20
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 0.7581        | 1.0   | 56   | 0.7029          | 0.78     |
+| 0.3942        | 1.99  | 112  | 0.4646          | 0.86     |
+| 0.3298        | 2.99  | 168  | 0.3861          | 0.88     |
+| 0.1227        | 4.0   | 225  | 0.4702          | 0.86     |
+| 0.0774        | 5.0   | 281  | 0.4492          | 0.9      |
+| 0.0039        | 5.99  | 337  | 0.4607          | 0.9      |
+| 0.0014        | 6.99  | 393  | 0.5022          | 0.9      |
+| 0.0022        | 8.0   | 450  | 0.4711          | 0.9      |
+| 0.0193        | 9.0   | 506  | 0.5226          | 0.86     |
+| 0.0004        | 9.99  | 562  | 0.6055          | 0.82     |
+| 0.0003        | 10.99 | 618  | 0.4793          | 0.89     |
+| 0.0002        | 12.0  | 675  | 0.5052          | 0.9      |
+| 0.0002        | 13.0  | 731  | 0.4652          | 0.89     |
+| 0.0001        | 13.99 | 787  | 0.4617          | 0.9      |
+| 0.0001        | 14.99 | 843  | 0.4653          | 0.9      |
+| 0.0001        | 16.0  | 900  | 0.4635          | 0.91     |
+| 0.0001        | 17.0  | 956  | 0.4693          | 0.9      |
+| 0.0001        | 17.99 | 1012 | 0.4697          | 0.9      |
+| 0.0001        | 18.99 | 1068 | 0.4715          | 0.9      |
+| 0.0025        | 19.91 | 1120 | 0.4717          | 0.9      |
+
+
 ### Framework versions
 
 - Transformers 4.31.0.dev0
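
Below is a minimal usage sketch for the checkpoint this card describes, using the Transformers `audio-classification` pipeline. The repo id `Joserzapata/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan` is an assumption pieced together from the committer and model name in this commit (the diff itself does not state the final repo id), and the dataset call mirrors the `marsyas/gtzan`, `config: all`, `split: train` metadata added above.

```python
# Illustrative sketch only; the repo id below is an assumption, not stated in this commit.
from datasets import load_dataset
from transformers import pipeline

MODEL_ID = "Joserzapata/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan"  # assumed repo id

# Build an audio-classification pipeline around the fine-tuned AST checkpoint.
classifier = pipeline("audio-classification", model=MODEL_ID)

# Load one clip from the dataset/config/split named in the card metadata.
gtzan = load_dataset("marsyas/gtzan", "all", split="train")
sample = gtzan[0]["audio"]  # dict with "array" and "sampling_rate"

# The pipeline resamples the clip to the feature extractor's sampling rate
# and returns the highest-scoring genre labels.
predictions = classifier(sample, top_k=3)
print(predictions)  # e.g. [{"label": "pop", "score": ...}, ...]
```

Passing the dataset's audio dict lets the pipeline resample GTZAN's 22,050 Hz clips to the 16 kHz rate the AST feature extractor expects (torchaudio is needed for that resampling step).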