q4f16_1 converted model (4-bit quantized) from Llama-2-ko-7b-Chat
This repository contains a 4-bit quantized (q4f16_1) model converted with MLC-LLM, using the weights from kfkas/Llama-2-ko-7b-Chat.
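As a minimal sketch of how such a model can be loaded, the snippet below uses the OpenAI-style Python API from a recent mlc-llm release. The local path is hypothetical and stands in for wherever this repository's q4f16_1 weights are checked out; older MLC-LLM versions used a different `ChatModule` interface, so adjust to the version you have installed.

```python
from mlc_llm import MLCEngine

# Hypothetical local path to this repo's q4f16_1 weights (adjust as needed).
model = "./Llama-2-ko-7b-Chat-q4f16_1-MLC"

engine = MLCEngine(model)

# Stream a chat completion; the base model is Korean-tuned, so we use a Korean prompt.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "안녕하세요! 자기소개를 해 주세요."}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content or "", end="", flush=True)

print()
engine.terminate()
```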