Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@
A frankenMoE of [heegyu/WizardVicuna-Uncensored-3B-0719](https://huggingface.co/heegyu/WizardVicuna-Uncensored-3B-0719) that has been accidentally aligned against evil. I was trying to train the experts to have an evil alignment, but instead only exponentially increased its alignment towards good, so I named it after the hero of one of my favorite games. [The yml I wrote that caused this alignment is here.](https://huggingface.co/Kquant03/Raiden-16x3.43B/blob/main/Dark.yml)
-
+[My last model](https://huggingface.co/Kquant03/PsychoOrca_32x1.1B_MoE_fp16) was an attempt to improve the overall coherence of TinyLlama models. It failed spectacularly, but I was amused enough by the results to try a frankenMoE with a better model. Although this model didn't achieve the level of unbridled evil I was hoping for, the results were good enough to post, in my opinion. (I do have a theory that, if given something to fight against, it could potentially generate more uncensored output.)
Unlike the last model, this one simply uses the same model 16 times over as its experts. I felt this would make it more coherent, which turned out to be correct.
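FrankenMoE merges of this kind are commonly assembled with mergekit's `mergekit-moe` tool from a yml config like the Dark.yml linked above. Below is a minimal sketch of what such a config might look like, assuming Dark.yml follows mergekit's MoE format; the gate mode and prompts are illustrative placeholders, not the author's actual settings.

```yaml
# Hypothetical sketch of a mergekit-moe config; assumes the linked
# Dark.yml follows mergekit's MoE format. Prompts are placeholders.
base_model: heegyu/WizardVicuna-Uncensored-3B-0719
gate_mode: hidden   # route tokens by hidden-state similarity to each expert's prompts
dtype: float16
experts:
  # the same base model repeated as every expert, as described above;
  # only two of the 16 entries are shown here
  - source_model: heegyu/WizardVicuna-Uncensored-3B-0719
    positive_prompts:
      - "write a dark, villainous story"
  - source_model: heegyu/WizardVicuna-Uncensored-3B-0719
    positive_prompts:
      - "explain a concept step by step"
  # ...14 more entries with the same source_model and their own prompts
```

If that assumption holds, running something like `mergekit-moe dark.yml ./Raiden-16x3.43B` would produce the merged model; treat the command and file names as illustrative rather than the exact ones used here.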