Dark Sapling V1.1 7B - 32k Context - Ultra Quality - 32 bit upscale.

Complete remerge and remaster of the incredible Dark Sapling V1.1 7B - 32k Context, rebuilt from the source files.

Registering an impressive perplexity drop of 240 points (lower is better) at Q4KM.

This puts "Q4KM" operating at "Q6" levels, and elevates Q6 and Q8 as well.

Likewise, even Q2K (the smallest quant) will operate at a much higher level than its original source counterpart.
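If you want to check the perplexity numbers yourself, llama.cpp ships a perplexity tool that can score each quant against a test corpus. Below is a minimal sketch that invokes it from Python; the GGUF filenames and the wikitext file are placeholders, and the binary may be named "perplexity" in older llama.cpp builds.

```python
# Hedged sketch: score two quants with llama.cpp's perplexity tool and
# compare the reported values (lower is better). Filenames are placeholders.
import subprocess

for gguf in ["DarkSapling-V1.1-7B-Q4_K_M.gguf", "DarkSapling-V1.1-7B-Q6_K.gguf"]:
    # "llama-perplexity" is built from llama.cpp; -m is the model file,
    # -f is a plain-text test corpus such as wikitext's wiki.test.raw.
    subprocess.run(
        ["./llama-perplexity", "-m", gguf, "-f", "wiki.test.raw"],
        check=True,
    )
```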

RESULTS:

The result is superior performance in instruction following, reasoning, depth, nuance and emotion.

Prompts can be shorter, because the model understands nuance better.

And as a side effect, the smaller prompts leave more context available for output.

Note that there will be an outsized difference between quants, especially for creative and/or "no right answer" use cases.

Because of this, it is suggested you download the highest quant you can run, along with its closest neighbours, so to speak.

For example: Q4KS, Q4KM and Q5KS.
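If you are grabbing several quants at once, the huggingface_hub client can fetch them directly. A minimal sketch follows; the exact GGUF filenames are assumptions, so verify them against the repo's file listing first.

```python
# Minimal sketch: download a quant and its closest neighbours from the repo.
# The filenames below are assumptions -- check the repo's file list first.
from huggingface_hub import hf_hub_download

REPO_ID = "DavidAU/DarkSapling-V1.1-Ultra-Quality-7B-GGUF"

for filename in [
    "DarkSapling-V1.1-Ultra-Quality-7B-Q4_K_S.gguf",
    "DarkSapling-V1.1-Ultra-Quality-7B-Q4_K_M.gguf",
    "DarkSapling-V1.1-Ultra-Quality-7B-Q5_K_S.gguf",
]:
    path = hf_hub_download(repo_id=REPO_ID, filename=filename)
    print(f"Saved to {path}")
```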

Imatrix Plus versions will be uploaded to a separate repo shortly.

Special thanks to "TeeZee", the original model creator:

[ https://huggingface.co./TeeZee/DarkSapling-7B-v1.1 ]

NOTE: Version 1 and Version 2 are also remastered.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:

[ https://huggingface.co./DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
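As a starting point before applying the guide's settings, here is a minimal sketch of loading a quant at the model's full 32k context with llama-cpp-python. The model path is a placeholder, and the sampler values shown are generic illustrations, not the "Class 1" settings; consult the linked page for those.

```python
# Minimal sketch: run a downloaded quant at 32k context with llama-cpp-python.
# The model path is a placeholder and the sampler values are illustrative;
# see the linked parameters/samplers guide for the recommended settings.
from llama_cpp import Llama

llm = Llama(
    model_path="DarkSapling-V1.1-Ultra-Quality-7B-Q4_K_M.gguf",  # placeholder
    n_ctx=32768,      # the remaster supports 32k context
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm(
    "Write a short, atmospheric opening for a dark fantasy tale.",
    max_tokens=256,
    temperature=0.8,    # illustrative sampler values, not the guide's
    top_p=0.95,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```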
