
PARM V2

🧀 Which quant is right for you?

  • Q4: Best for low-end devices such as phones or older laptops thanks to its compact size; quality is okay but fully usable.
  • Q5: Best for reasonably powerful devices, e.g. a GTX 1650 GPU or better, for quick responses. Quality is a bit better than Q4.
  • Q8: The best quant we offer; intended for high-end devices such as an RTX 3070 GPU or better. Responses are very high quality, and it reasons better than Q5 and Q4. (A sketch for downloading your chosen quant follows this list.)
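If you prefer to fetch a quant programmatically rather than through the web UI, here is a minimal sketch using huggingface_hub. The repo id and GGUF filenames below are hypothetical placeholders, so check the repository's file list for the real names.

```python
# Minimal sketch: download a chosen quant with huggingface_hub.
# NOTE: the repo id and filenames are hypothetical placeholders --
# check this repository's file list for the actual names.
from huggingface_hub import hf_hub_download

QUANT_FILES = {
    "q4": "PARM-V2.Q4_K_M.gguf",  # hypothetical filename
    "q5": "PARM-V2.Q5_K_M.gguf",  # hypothetical filename
    "q8": "PARM-V2.Q8_0.gguf",    # hypothetical filename
}

def download_quant(level: str = "q5") -> str:
    """Download the chosen GGUF quant and return the local file path."""
    return hf_hub_download(
        repo_id="Pinkstack/PARM-V2",   # hypothetical repo id
        filename=QUANT_FILES[level],
    )

if __name__ == "__main__":
    print(download_quant("q4"))  # low-end devices: start with Q4
```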

Things you should be aware of when using PARM models (Pinkstack Accuracy Reasoning Models) 🧀

This PARM is based on Phi 3.5 mini, which has been given extra training parameters so that its outputs resemble those of o1 Mini. We trained it with this dataset.

To use this model, you must use a service that supports the GGUF file format. Additionally, the prompt template below follows the Phi-3 format.

```
{{ if .System }}<|system|> {{ .System }}<|end|> {{ end }}{{ if .Prompt }}<|user|> {{ .Prompt }}<|end|> {{ end }}<|assistant|> {{ .Response }}<|end|>
```

Or, if you are using an anti-prompt: <|end|><|assistant|>

We highly recommend using this model with a system prompt.
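As a concrete illustration (not part of the model card itself), here is a small Python sketch that fills in the Phi-3 template above by hand, including a system prompt. The example strings are made up, and the exact whitespace around the special tokens can vary slightly between runtimes.

```python
# Minimal sketch: build a Phi-3-style prompt string by hand, mirroring the
# template above. The system and user strings are example text only.
def build_phi3_prompt(user: str, system: str = "") -> str:
    prompt = ""
    if system:
        prompt += f"<|system|> {system}<|end|> "
    prompt += f"<|user|> {user}<|end|> "
    prompt += "<|assistant|> "  # generation continues here; <|end|> closes the turn
    return prompt

print(build_phi3_prompt(
    user="Explain step by step why 17 is prime.",
    system="You are a careful reasoning assistant.",
))
```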

This model has been tested in the following setups (a sketch of a comparable local configuration follows this list):

  • Msty: 8,192 max token output and 32,000 context (RTX 3080, Q8 model). Very high quality responses.
  • Ollama: 1,000 max token output and 1,000 context (Qualcomm Snapdragon 8 Gen 2, Q5 model). High quality responses.
  • Transformers: 4,096 max token output and 2,048 context (NVIDIA Tesla T4, Q4 model). Medium quality but usable responses.
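For reference, a minimal local sketch roughly mirroring the Ollama test above, using llama-cpp-python (an assumption; any GGUF-capable runtime works). The model path is a hypothetical placeholder, and the 1,000-token limits mirror that test.

```python
# Minimal sketch: run the GGUF model locally with llama-cpp-python,
# roughly mirroring the Ollama test configuration listed above.
from llama_cpp import Llama

llm = Llama(
    model_path="PARM-V2.Q5_K_M.gguf",  # hypothetical local path to the Q5 quant
    n_ctx=1000,                        # context length used in the Ollama test
)

# Prompt formatted with the Phi-3 template shown earlier in this card.
prompt = (
    "<|system|> You are a concise, accurate assistant.<|end|> "
    "<|user|> Summarise what a GGUF quant is in one sentence.<|end|> "
    "<|assistant|> "
)

out = llm(
    prompt,
    max_tokens=1000,   # max token output used in the Ollama test
    stop=["<|end|>"],  # the template's end-of-turn marker / anti-prompt
)
print(out["choices"][0]["text"].strip())
```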

Extra information

  • Developed by: Pinkstack
  • License: apache-2.0
  • Finetuned from model: unsloth/phi-3.5-mini-instruct-bnb-4bit
  • Model size: 3.82B params (GGUF)
  • Architecture: llama

This model was trained with Unsloth and Hugging Face's TRL library.

Used this model? Don't forget to leave a like :)
