# BEE-spoke-data/Mixtral-GQA-400m-v2-GGUF

Quantized GGUF model files for Mixtral-GQA-400m-v2 from BEE-spoke-data
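To fetch a single quant file from this repo, you can use `huggingface_hub`. A minimal sketch follows; the filename is a hypothetical placeholder, so check the repo's file list for the actual names:

```python
# pip install -U huggingface_hub
from huggingface_hub import hf_hub_download

# NOTE: the filename below is a placeholder; pick a real one from the repo's file list.
model_path = hf_hub_download(
    repo_id="afrideva/Mixtral-GQA-400m-v2-GGUF",
    filename="mixtral-gqa-400m-v2.q4_k_m.gguf",  # hypothetical example filename
)
print(model_path)  # local path to the downloaded GGUF file
```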

## Original Model Card: BEE-spoke-data/Mixtral-GQA-400m-v2

### testing code

```python
# !pip install -U -q transformers datasets accelerate sentencepiece
import pprint as pp

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="BEE-spoke-data/Mixtral-GQA-400m-v2",
    device_map="auto",
)
# the model has no dedicated pad token, so reuse the EOS token id
pipe.model.config.pad_token_id = pipe.model.config.eos_token_id

prompt = "My favorite movie is Godfather because"

res = pipe(
    prompt,
    max_new_tokens=256,
    top_k=4,
    penalty_alpha=0.6,  # top_k + penalty_alpha enables contrastive search decoding
    use_cache=True,
    no_repeat_ngram_size=4,
    repetition_penalty=1.1,
    renormalize_logits=True,
)
pp.pprint(res[0])
```
## GGUF model files

- Model size: 2.01B params
- Architecture: llama
- Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
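GGUF files are meant for llama.cpp-compatible runtimes rather than `transformers`. Below is a minimal sketch using `llama-cpp-python`, assuming you have already downloaded one of the quant files; the model path is a placeholder:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Path is a placeholder; point it at whichever quant file you downloaded.
llm = Llama(model_path="./mixtral-gqa-400m-v2.q4_k_m.gguf", n_ctx=2048)

out = llm("My favorite movie is Godfather because", max_tokens=128)
print(out["choices"][0]["text"])
```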
