BRAINSTORM - 4x - Multi : L3-SthenoMaidBlackroot-8B-V1 (now at 8.68B)

This repo contains 4x quants of L3-SthenoMaidBlackroot-8B-V1 (now at 8.68B) using the "Brainstorm" method of augmenting reasoning in an LLM to increase its performance at the core level for ANY creative use case(s).

This version has 4 "reasoning" centers - one from the original merge, and 3 from the unmerged models (at close to full strength) melded into a 4 layer reasoning center.

The BRAINSTORM process was developed by David_AU.

Some of the core principles behind this process are discussed in this scientific paper: Progressive LLaMA with Block Expansion. However, I went in a completely different direction from what was outlined in this paper.

What is "Brainstorm" ?

The reasoning center of an LLM is taken apart, reassembled, and expanded (in this model, 4x).

Then these centers are individually calibrated. These "centers" also interact with each other, which introduces subtle changes into the reasoning process. The calibrations further adjust - dialing up or down - these "changes". The number of centers (4x, 5x, 8x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The "Multi" reasoning system pulls "reasoning centers" from multiple models and fuses them into one long "chain of reasoning", so to speak. Each one is then calibrated. Each "center" interacts with the other "centers", and the order of the centers further impacts the model's output style - again, roughly speaking.

The core aim of this process is to increase the model's detail, its concept of and connection to the "world", general concept connections, prose quality, and prose length without affecting instruction following. This also enhances any creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar use cases.

Here are some of the enhancements this process brings to the model's performance:

  • Prose generation seems more focused on the moment to moment.
  • Sometimes there will be "preamble" and/or foreshadowing present.
  • Fewer or no "cliches".
  • Better overall prose and/or more complex / nuanced prose.
  • A greater sense of nuance on all levels.
  • Coherence is stronger.
  • Description is more detailed, and connected more closely to the content.
  • Similes and metaphors are stronger and better connected to the prose, story, and character.
  • Sense of "there" / in the moment is enhanced.
  • Details are more vivid, and there are more of them.
  • Prose generation length can be long to extreme.
  • Emotional engagement is stronger.
  • The model will take FEWER liberties than a normal model: it will follow directives more closely but will "guess" less.
  • The MORE instructions and/or details you provide, the more strongly the model will respond.
  • Depending on the model, the "voice" may be more "human" than the original model's "voice".

Other "lab" observations:

  • This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
  • However, a change in "IQ" was not a priority, and was not tested or calibrated for, so to speak.
  • From lab testing, the model seems to ponder and consider its responses more carefully, roughly speaking.
  • You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source files level. The model can then be quantized as a GGUF, EXL2, AWQ, etc.

Other technologies developed by David_AU like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process) can further enhance the performance of the model along with the "Brainstorm" process.

The "Brainstorm" process has been tested on multiple Llama2, Llama3, and Mistral models of various parameter sizes, as well as on "root" models like "Llama3 Instruct" and "Mistral Instruct", and on "merged" / "fine-tuned" models too.

Usage Notice:

You may need to raise the "repeat penalty" from the default of 1.1 to slightly higher levels in some use case(s).
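For context, here is a toy sketch of what a "repeat penalty" does to next-token logits. It mirrors the CTRL-style scheme used by llama.cpp (positive logits divided by the penalty, negative ones multiplied), but the token names and logit values below are made up purely for illustration - this is not the model's actual code:

```python
# Toy illustration of a repeat penalty: tokens that already appeared in the
# output get their logits scaled so that repeating them becomes less likely.

def apply_repeat_penalty(logits, seen_tokens, penalty=1.1):
    """Return a copy of `logits` with previously seen tokens penalized."""
    out = dict(logits)
    for tok in seen_tokens:
        if tok in out:
            # Positive logits are divided, negative ones multiplied,
            # so the penalized token's probability always goes down.
            out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = {"the": 2.0, "cat": 1.0, "and": -0.5}
adjusted = apply_repeat_penalty(logits, seen_tokens={"the", "and"}, penalty=1.2)
# "the" and "and" are now less likely; "cat" is untouched.
```

Raising the penalty (e.g. from 1.1 to 1.15 or 1.2) scales repeated tokens down more aggressively, which is why it helps when the model starts looping.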

Original Model:

For original model specifications, usage information and other important details please see (this is based on models used in "L3-SthenoMaidBlackroot-8B-V1" ):

[ https://huggingface.co./DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF ]

and the original model page:

Special thanks to the model creators at BLUUWHALE for making such a fantastic model:

[ https://huggingface.co./bluuwhale/L3-SthenoMaidBlackroot-8B-V1 ]

Please report any issue(s) and/or feedback via the "Community tab".

This is a LLAMA3 model and requires the Llama3 template, but it may work with other template(s). It has a maximum context of 131k.

Here is the standard LLAMA3 template:

{
  "name": "Llama 3",
  "inference_params": {
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
    "antiprompt": [
      "<|start_header_id|>",
      "<|eot_id|>"
    ]
  }
}
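For reference, the template fields above can be assembled into a raw single-turn prompt string as sketched below. This is a minimal illustration only: the system and user messages are placeholders, and most backends (LM Studio, llama.cpp, etc.) prepend the BOS token `<|begin_of_text|>` for you.

```python
# Sketch: assembling a single-turn Llama3 prompt from the template fields above.
# The special-token strings are copied verbatim from the JSON template.

def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Concatenate the Llama3 chat-template pieces for one user turn."""
    pre_prompt_prefix = "<|start_header_id|>system<|end_header_id|>\n\n"
    pre_prompt_suffix = "<|eot_id|>"
    input_prefix = "<|start_header_id|>user<|end_header_id|>\n\n"
    input_suffix = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    return (pre_prompt_prefix + system_prompt + pre_prompt_suffix
            + input_prefix + user_message + input_suffix)

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Start a 1000 word scene with: ...",
)
# The model's reply is generated after the trailing assistant header.
```

The model then generates its reply immediately after the trailing assistant header, stopping at the antiprompt tokens listed in the template.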

Optional Enhancement:

The following can be used in place of the "system prompt" or "system role" to further enhance the model.

It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. In this case the enhancements will not have as strong an effect as when using the "system prompt" or "system role".

Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.

Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.

You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation and scene continuation functions.

This enhancement WAS NOT used to generate the examples below.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 2" model:

For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:

[ https://huggingface.co./DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]


EXAMPLES PROMPTS and OUTPUT:

Examples are created using quant Q4_K_M, "temp=0", minimal parameters and "LLAMA3" template.

Model has been tested with "temp" from ".1" to ".8".

Below are the least creative outputs, prompt is in BOLD.

Higher quants will have more detail and better quality output.


WARNING: NSFW. Vivid prose. Visceral Details. Violence. Graphic HORROR. Swearing. UNCENSORED.


Start a 1000 word scene with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...

The sky scraper swayed, as she watched the window in front of her on the 21st floor explode. Glass shards rained down like diamonds from a shattered chandelier, showering the office below with an avalanche of sharp-edged ice. She stood frozen, her heart lodged in her throat, as the sound of shattering glass and crunching metal filled her ears.

She had been working late, trying to meet the impossible deadline her boss had set for her, when she heard the first scream. It was a woman's voice, shrill with terror, and it came from the floor above hers. She had turned to look up at the ceiling, wondering what could have caused such a sound of panic, when she saw the window blow out.

Her mind raced as she tried to make sense of what was happening. Was this an attack? A bomb? Some kind of crazy person with a grudge against her company? She didn't know, but she knew she had to get out of there - and fast.

She turned to run towards the door, but it was already too late. The emergency alarm had sounded, and the fire sprinkler system was activating, drenching everything in a fine spray of water. People were running everywhere, some panicking, others trying to stay calm as they made their way towards the exits.

She pushed her way through the crowd, her heart pounding in her chest. She could feel the heat from the flames already licking at the edges of the floor, and she knew she had to get out before it was too late.

As she reached the stairwell, she saw that it was jammed with people trying to escape. She pushed her way to the front of the crowd, her hands numb with fear as she clawed her way past the bodies pressing against her from all sides.

Finally, she made it to the door and burst out onto the street. The cool night air hit her like a slap in the face, and for a moment, she just stood there, gasping for breath as she tried to process what had just happened.

She looked around, trying to take stock of the situation. The building was on fire, flames shooting up from the windows like a hellish inferno. People were running in all directions, some with burns and cuts, others just trying to get as far away from the burning building as possible.

She knew she had to move - she couldn't stay here, not when the building could collapse at any moment. She started to run, her feet pounding against the pavement as she tried to put as much distance between herself and the inferno as possible.

As she ran, she saw a figure standing on the corner of the street ahead of her. It was a man, his face twisted in a mixture of shock and anger. He was pointing at something behind him, and as she got closer, she saw what he was looking at - a crowd of people gathered around a burning car, its tires smoking and its windows shattered.

She ran towards the group, her heart sinking as she realized that this must have been the cause of the explosion in the building. She pushed her way to the front of the crowd, her eyes scanning the faces of the people around her for any sign of someone she knew.

And then she saw him - her boss, his face pale and covered in soot, his clothes singed and charred. He was standing next to the car, his eyes fixed on something inside it. She pushed her way past the others until she reached his side, her heart pounding with fear as she realized what he must have seen.

Inside the burning car, she saw a body - a woman's body, her face frozen in a look of terror and pain. Her boss was pointing at the woman's hand, which lay palm-up on the steering wheel. In her hand, there was a small device - a bomb, its timer counting down to zero.

She felt a wave of nausea wash over her as she realized what had happened. This wasn't an attack or an accident - it was murder. Her boss had been killed, and this woman's body was the proof of it.

She looked up at her boss, her eyes wide with shock and horror. He met her gaze, his own eyes filled with a deep sadness and anger. "I didn't think she'd really do it," he said, his voice hoarse with emotion. "But I guess I was wrong."

She felt her heart break for him - for all of them - as she realized the true extent of what had happened tonight. This wasn't just an explosion or a fire - it was a tragedy, a senseless act of violence that would leave scars on all of them for the rest of their lives.

As she stood there, trying to process everything that had happened, she knew one thing for certain - nothing would ever be the same again. The world had changed tonight, and they were all just trying to figure out how to survive in it.
