Text Generation
GGUF
English
mixture of experts
Mixture of Experts
4x8B
32 bit enhanced
float 32 quants
Llama MOE
uncensored
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
swearing
rp
horror
mergekit
Inference Endpoints
conversational
Update README.md
README.md CHANGED
@@ -45,7 +45,7 @@ pipeline_tag: text-generation

<I><small> A float 32 high precision M.O.E model, quanted in float 32 with additional upgraded and augmented quants too. </small></i>

-<img src="grand-story
+<img src="grand-dark-story.jpg" style="float:right; width:300px; height:300px; padding:10px;">

It is a Llama3 model, max context of 8k (8192) using mixture of experts to combine FOUR top Llama3 8B
models into one massive powerhouse at 24.9B parameters (equal to 32B - 4 X 8 B).
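For context on how a 4x8B merge like the one this card describes is typically assembled: the repo is tagged mergekit, and mergekit's MoE mode takes a YAML config naming a shared base model plus the expert models and their routing prompts. The sketch below is a hypothetical illustration only; the expert model names and positive_prompts are placeholders, not this model's actual recipe. The stated 24.9B total, versus the nominal 32B of 4 x 8B, is consistent with MoE merges of this kind duplicating only the per-expert feed-forward layers while attention and embeddings stay shared.

```yaml
# Hypothetical mergekit-moe sketch for a 4x8B Llama3 MoE merge.
# All model names and prompts below are placeholders, not this repo's recipe.
base_model: meta-llama/Meta-Llama-3-8B-Instruct   # shared base (assumption)
gate_mode: hidden        # route tokens by hidden-state similarity to the prompts
dtype: float32           # keep the merge in float 32, per the card's precision claim
experts:
  - source_model: example/llama3-8b-storyteller   # placeholder expert
    positive_prompts: ["story", "fiction", "plot"]
  - source_model: example/llama3-8b-horror        # placeholder expert
    positive_prompts: ["horror", "dark scenes"]
  - source_model: example/llama3-8b-romance       # placeholder expert
    positive_prompts: ["romance"]
  - source_model: example/llama3-8b-roleplay      # placeholder expert
    positive_prompts: ["roleplay", "rp"]
```

A config in this shape is consumed by mergekit's `mergekit-moe` command, which writes the merged checkpoint to an output directory; GGUF quants like those tagged above are then produced from such a checkpoint with llama.cpp's conversion tooling.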