# GreenBit LLMs

These are GreenBitAI's pretrained low-bit LLMs, which achieve extreme compression while retaining strong performance.

Please refer to our GitHub page for the code to run the model and for more information.
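As a rough orientation, a model from this collection can be loaded like any other Hugging Face checkpoint. The sketch below is an assumption based on the standard `transformers` API, not the official GreenBitAI inference code (which lives in the GitHub repo and may use dedicated low-bit kernels); the repo id is taken from this collection.

```python
REPO_ID = "GreenBitAI/Llama-3-8B-layer-mix-bpw-3.0"  # repo from this collection

def load_model(repo_id: str = REPO_ID):
    """Hypothetical loader via transformers; requires network access and
    the `transformers` package. The official runner may differ."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        trust_remote_code=True,  # low-bit layers may ship as custom code
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("The capital of France is", return_tensors="pt")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```

For the officially supported path, consult the GreenBitAI GitHub repository referenced above.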

| Repository (Llama 3 Family) | Avg Acc. | OpenBQ | ARC-E | Winogr. | HellaS. | ARC-C | PIQA | BoolQ | RACE | ANLI-R1 | ANLI-R2 | ANLI-R3 | WiC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-3-8B-layer-mix-bpw-2.2 | 0.499 | 0.302 | 0.739 | 0.674 | 0.509 | 0.396 | 0.725 | 0.743 | 0.406 | 0.327 | 0.337 | 0.340 | 0.500 |
| Llama-3-8B-layer-mix-bpw-2.5 | 0.506 | 0.298 | 0.760 | 0.684 | 0.513 | 0.418 | 0.744 | 0.756 | 0.389 | 0.335 | 0.335 | 0.335 | 0.509 |
| Llama-3-8B-layer-mix-bpw-3.0 | 0.523 | 0.318 | 0.770 | 0.708 | 0.540 | 0.441 | 0.767 | 0.784 | 0.407 | 0.333 | 0.345 | 0.343 | 0.526 |
| Llama-3-8B-layer-mix-bpw-4.0 | 0.542 | 0.338 | 0.791 | 0.729 | 0.591 | 0.484 | 0.797 | 0.799 | 0.398 | 0.337 | 0.345 | 0.352 | 0.545 |
| Llama-3-8B-instruct-layer-mix-bpw-2.2 | 0.514 | 0.292 | 0.645 | 0.672 | 0.499 | 0.367 | 0.698 | 0.775 | 0.423 | 0.417 | 0.424 | 0.398 | 0.565 |
| Llama-3-8B-instruct-layer-mix-bpw-2.5 | 0.528 | 0.304 | 0.741 | 0.681 | 0.512 | 0.412 | 0.749 | 0.798 | 0.425 | 0.417 | 0.410 | 0.390 | 0.498 |
| Llama-3-8B-instruct-layer-mix-bpw-3.0 | 0.547 | 0.316 | 0.787 | 0.690 | 0.530 | 0.459 | 0.768 | 0.800 | 0.437 | 0.435 | 0.417 | 0.387 | 0.548 |
| Llama-3-8B-instruct-layer-mix-bpw-4.0 | 0.576 | 0.344 | 0.808 | 0.716 | 0.569 | 0.513 | 0.778 | 0.825 | 0.449 | 0.462 | 0.449 | 0.432 | 0.578 |
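The `bpw` suffix in each repository name is the average bits per weight of the layer-mix quantization. A back-of-the-envelope estimate of the weight storage for each variant, assuming roughly 8e9 quantized parameters (an assumption for illustration; real footprints also include quantization scales, zero-points, and any unquantized layers, so treat these as lower bounds):

```python
N_PARAMS = 8.0e9  # assumed parameter count for the 8B model

def weight_gib(bits_per_weight: float, n_params: float = N_PARAMS) -> float:
    """Approximate weight storage in GiB at the given bits per weight."""
    return n_params * bits_per_weight / 8 / 2**30

# Print the estimate for each bpw variant in the table above.
for bpw in (2.2, 2.5, 3.0, 4.0):
    print(f"bpw={bpw}: ~{weight_gib(bpw):.2f} GiB")
```

This makes the trade-off in the table concrete: moving from 4.0 to 2.2 bits per weight nearly halves the weight memory at the cost of a few points of average accuracy.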