---
license: openrail
---
Experimental Tagalog LoRAs: safe or accurate outputs are not guaranteed (not for production use)!

# lt2_08162023
* Fine-tuned on a small dataset of 14 items, manually edited
* 1 epoch (barely any noticeable results)
* From chat LLaMA-2-7b
* LoRA of chat-tagalog v0.1

# lt2_08162023a
* Fine-tuned on a small dataset of 14 items, manually edited
* 20 epochs (more observable effects)
* From chat LLaMA-2-7b
* LoRA of chat-tagalog v0.1a

# lt2_08162023b
* Fine-tuned on a small dataset of 14 items, manually edited
* 10 epochs
* From chat LLaMA-2-7b
* LoRA of chat-tagalog v0.1b

# lt2_08162023c
* Fine-tuned on a small dataset of 14 items, manually edited
* 50 epochs (overfitted)
* From chat LLaMA-2-7b
* LoRA of chat-tagalog v0.1c

# lt2_08162023d
* Fine-tuned on a small dataset of 14 items, manually edited
* 30 epochs (v0.1a further trained and cut off before overfitting)
* From chat LLaMA-2-7b
* LoRA of chat-tagalog v0.1d
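
For context, a LoRA fine-tune of this kind is typically set up with the `peft` library. The sketch below is only an illustration of that setup, not the author's actual recipe: the base model ID, rank, alpha, dropout, and target modules are all assumptions; only the per-variant epoch counts come from the list above.

```python
# Hypothetical sketch of a LoRA setup like the variants above.
# Assumptions (not from this README): base model ID, r, lora_alpha,
# target_modules, dropout. Only the epoch counts vary per variant.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
lora_cfg = LoraConfig(
    r=8,                                  # assumed rank
    lora_alpha=16,                        # assumed scaling
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
# Train with a standard causal-LM loop over the small dataset, setting
# the epoch count to the variant's value (1, 10, 20, 30, or 50).
```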
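
To try a variant, one way is to apply the adapter to the base chat model with `transformers` and `peft`. A minimal sketch, assuming the gated `meta-llama/Llama-2-7b-chat-hf` base and a local copy of one adapter folder (`lt2_08162023a` here is a placeholder path):

```python
# Minimal loading/generation sketch; the base model ID and adapter
# path are assumptions, not taken from this README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Apply the LoRA weights; replace with the adapter folder you want to try.
model = PeftModel.from_pretrained(base, "lt2_08162023a")

inputs = tokenizer("Kumusta ka?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

As the warning above says, these are experimental adapters, so outputs should be reviewed before any downstream use.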