juancopi81 committed
Commit: bfe9086 • Parent(s): f2c092d

Training in progress epoch 0

Files changed:
- README.md (+19, -93)
- config.json (+1, -1)
- tf_model.h5 (+1, -1)
README.md CHANGED
@@ -1,126 +1,52 @@
 ---
 tags:
 - generated_from_keras_callback
-- music
 model-index:
 - name: juancopi81/mutopia_guitar_mmm
   results: []
-datasets:
-- juancopi81/mutopia_guitar_dataset
-widget:
-- text: "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 BAR_START NOTE_ON=43"
-  example_title: "Time signature 4/4, BPM=90, NOTE=G2"
 ---
 
-
-
-Music generation can be approached similarly to language generation: there are many ways to represent music as text so that a language model can be trained to generate it. For encoding MIDI files as text, I am using the excellent [implementation](https://github.com/AI-Guru/MMM-JSB) by Dr. Tristan Behrens of the paper [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
 
-
-I created the notebook as an adaptation of [the one created by Dr. Tristan Behrens](https://huggingface.co/TristanBehrens/js-fakes-4bars).
+# juancopi81/mutopia_guitar_mmm
 
+This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss:
-- Validation Loss:
+- Train Loss: 3.4879
+- Validation Loss: 3.7206
+- Epoch: 0
 
 ## Model description
 
-
-`WhitespaceSplit` pre-tokenizer. The [tokenizer](https://huggingface.co/juancopi81/mutopia_guitar_dataset_tokenizer) is also on the Hugging Face Hub.
+More information needed
 
 ## Intended uses & limitations
 
-
-The main intention of this model is educational. I am creating a [series of notebooks](https://github.com/juancopi81/MMM_Mutopia_Guitar) where I show every step of the process:
-- Collecting the data
-- Pre-processing the data
-- Training a tokenizer from scratch
-- Fine-tuning a GPT-2 model
-- Building a Gradio app for the model
-
-I trained the model using the free version of Colab with a small dataset, and right now it is heavily overfitting. My idea is to build a more extensive dataset of guitar music from Latin America and use it to train a new model, similar to the Mutopia Guitar Model, with more GPU resources.
+More information needed
 
 ## Training and evaluation data
 
-
-The dataset mainly contains guitar music by Western classical composers such as Sor, Aguado, Carcassi, and Giuliani.
+More information needed
 
-
+## Training procedure
 
 ### Training hyperparameters
 
-The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-07, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-07, 'decay_steps': 5726, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-
-The following hyperparameters were used during training (without transposition - first round):
+The following hyperparameters were used during training:
 - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-07, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-07, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-
-The following hyperparameters were used during training (without transposition - second round):
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-07, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-07, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-
-The following hyperparameters were used during training (without transposition, new tokenizer - third round):
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-07, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-07, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-
 - training_precision: mixed_float16
+
 ### Training results
-
-| Train Loss | Validation Loss | Epoch |
-|:----------:|:---------------:|:-----:|
-| 1.0705 | 1.3590 | 0 |
-| 0.8889 | 1.3702 | 1 |
-| 0.7588 | 1.3974 | 2 |
-| 0.7294 | 1.4813 | 3 |
-| 0.6263 | 1.5263 | 4 |
-| 0.5841 | 1.5263 | 5 |
-| 0.5844 | 1.5263 | 6 |
-| 0.5837 | 1.5346 | 7 |
-| 0.5798 | 1.5411 | 8 |
-| 0.5773 | 1.5440 | 9 |
-
-Without transposition (first round):
-| Train Loss | Validation Loss | Epoch |
-|:----------:|:---------------:|:-----:|
-| 0.5503 | 1.5436 | 0 |
-| 0.5503 | 1.5425 | 1 |
-| 0.5476 | 1.5425 | 2 |
-| 0.5467 | 1.5425 | 3 |
-| 0.5447 | 1.5431 | 4 |
-| 0.5418 | 1.5447 | 5 |
-| 0.5418 | 1.5451 | 6 |
-| 0.5401 | 1.5472 | 7 |
-| 0.5386 | 1.5479 | 8 |
-| 0.5365 | 1.5482 | 9 |
-
-Without transposition (second round):
-| Train Loss | Validation Loss | Epoch |
-|:----------:|:---------------:|:-----:|
-| 0.5368 | 1.5482 | 0 |
-| 0.5355 | 1.5480 | 1 |
-| 0.5326 | 1.5488 | 2 |
-| 0.5363 | 1.5493 | 3 |
-| 0.5346 | 1.5488 | 4 |
-| 0.5329 | 1.5502 | 5 |
-| 0.5329 | 1.5514 | 6 |
-| 0.5308 | 1.5514 | 7 |
-| 0.5292 | 1.5536 | 8 |
-| 0.5272 | 1.5543 | 9 |
-
-Without transposition (third round - new tokenizer):
+
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-
-
-| 4.9125 | 4.8956 | 2 |
-| 4.2013 | 4.2778 | 3 |
-| 3.8665 | 4.0330 | 4 |
-| 3.7106 | 3.8956 | 5 |
-| 3.6041 | 3.7995 | 6 |
-| 3.5301 | 3.7485 | 7 |
-| 3.4973 | 3.7323 | 8 |
-| 3.4909 | 3.7323 | 9 |
+| 3.4879 | 3.7206 | 0 |
+
 
 ### Framework versions
-
+
+- Transformers 4.22.2
 - TensorFlow 2.8.2
 - Datasets 2.5.1
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1
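The widget prompt removed from the front matter above shows how the model is driven with MMM-style tokens. A minimal generation sketch, assuming the checkpoint loads as a TF GPT-2 language model and that the tokenizer linked in the card (`juancopi81/mutopia_guitar_dataset_tokenizer`) is the matching one:

```python
# Sketch: generate music tokens from the card's widget prompt.
# Assumptions: the hub checkpoint is a TF GPT-2 LM head model, and the
# tokenizer repo linked in the card pairs with it.
from transformers import AutoTokenizer, TFGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("juancopi81/mutopia_guitar_dataset_tokenizer")
model = TFGPT2LMHeadModel.from_pretrained("juancopi81/mutopia_guitar_mmm")

# One guitar track (INST=0) in 4/4 at 90 BPM, starting on G2 (MIDI note 43).
prompt = ("PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START "
          "INST=0 DENSITY=2 BAR_START NOTE_ON=43")
input_ids = tokenizer(prompt, return_tensors="tf").input_ids

# max_length=350 mirrors the "max_length" entry in config.json below.
output = model.generate(input_ids, max_length=350, do_sample=True, top_k=50)
print(tokenizer.decode(output[0]))
```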
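The optimizer entry kept in the card above is the Keras serialization of a standard `transformers` TF training setup: AdamWeightDecay with a 1,000-step linear WarmUp into a PolynomialDecay (power 1.0, i.e. linear) over 350 decay steps, peaking at a 5e-07 learning rate. A sketch of how such a configuration is typically built with `create_optimizer` (an inference from the serialized field values, not code from the commit):

```python
# Sketch: rebuild the serialized AdamWeightDecay + WarmUp + PolynomialDecay
# configuration from the hyperparameters dict above (values copied from the card).
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=5e-7,            # 'initial_learning_rate'
    num_train_steps=1350,    # warmup steps (1000) + decay steps (350)
    num_warmup_steps=1000,   # 'warmup_steps' of the WarmUp wrapper
    weight_decay_rate=0.01,  # 'weight_decay_rate' of AdamWeightDecay
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

The card's `training_precision: mixed_float16` would be enabled separately, e.g. with `tf.keras.mixed_precision.set_global_policy("mixed_float16")` before building the model.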
config.json CHANGED
@@ -32,7 +32,7 @@
       "max_length": 350
     }
   },
-  "transformers_version": "4.22.
+  "transformers_version": "4.22.2",
   "use_cache": true,
   "vocab_size": 588
 }
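The only change to config.json is the `transformers_version` bump to 4.22.2. A quick sketch for inspecting the stored fields from the Hub (attribute names assumed to follow the JSON keys above):

```python
# Sketch: load and inspect the updated config from the Hub.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("juancopi81/mutopia_guitar_mmm")
print(config.transformers_version)  # expected "4.22.2" after this commit
print(config.vocab_size)            # 588 tokens in the music vocabulary
```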
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:65a22e285ce87a9ea0b11fa05ff56f4ef6faa84566fd03de23b9b7f6ea1003b2
 size 345352296
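tf_model.h5 is tracked with Git LFS, so the diff above only changes the pointer file: `oid sha256:` is the checksum of the actual weights and `size` is their byte count. A sketch for verifying a downloaded copy against the new pointer:

```python
# Sketch: verify a downloaded tf_model.h5 against the LFS pointer fields above.
import hashlib

EXPECTED_OID = "65a22e285ce87a9ea0b11fa05ff56f4ef6faa84566fd03de23b9b7f6ea1003b2"
EXPECTED_SIZE = 345352296  # bytes, from the pointer's `size` line

sha256 = hashlib.sha256()
size = 0
with open("tf_model.h5", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha256.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"unexpected size: {size}"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("tf_model.h5 matches the LFS pointer")
```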