Quantization made by Richard Erkhov.
AutoCoder_S_6.7B - GGUF
- Model creator: https://huggingface.co./Bin12345/
- Original model: https://huggingface.co./Bin12345/AutoCoder_S_6.7B/
| Name | Quant method | Size |
| --- | --- | --- |
| AutoCoder_S_6.7B.Q2_K.gguf | Q2_K | 2.36GB |
| AutoCoder_S_6.7B.IQ3_XS.gguf | IQ3_XS | 2.61GB |
| AutoCoder_S_6.7B.IQ3_S.gguf | IQ3_S | 2.75GB |
| AutoCoder_S_6.7B.Q3_K_S.gguf | Q3_K_S | 2.75GB |
| AutoCoder_S_6.7B.IQ3_M.gguf | IQ3_M | 2.9GB |
| AutoCoder_S_6.7B.Q3_K.gguf | Q3_K | 3.07GB |
| AutoCoder_S_6.7B.Q3_K_M.gguf | Q3_K_M | 3.07GB |
| AutoCoder_S_6.7B.Q3_K_L.gguf | Q3_K_L | 3.35GB |
| AutoCoder_S_6.7B.IQ4_XS.gguf | IQ4_XS | 3.4GB |
| AutoCoder_S_6.7B.Q4_0.gguf | Q4_0 | 3.56GB |
| AutoCoder_S_6.7B.IQ4_NL.gguf | IQ4_NL | 3.59GB |
| AutoCoder_S_6.7B.Q4_K_S.gguf | Q4_K_S | 3.59GB |
| AutoCoder_S_6.7B.Q4_K.gguf | Q4_K | 3.8GB |
| AutoCoder_S_6.7B.Q4_K_M.gguf | Q4_K_M | 3.8GB |
| AutoCoder_S_6.7B.Q4_1.gguf | Q4_1 | 3.95GB |
| AutoCoder_S_6.7B.Q5_0.gguf | Q5_0 | 4.33GB |
| AutoCoder_S_6.7B.Q5_K_S.gguf | Q5_K_S | 4.33GB |
| AutoCoder_S_6.7B.Q5_K.gguf | Q5_K | 4.46GB |
| AutoCoder_S_6.7B.Q5_K_M.gguf | Q5_K_M | 4.46GB |
| AutoCoder_S_6.7B.Q5_1.gguf | Q5_1 | 4.72GB |
| AutoCoder_S_6.7B.Q6_K.gguf | Q6_K | 5.15GB |
| AutoCoder_S_6.7B.Q8_0.gguf | Q8_0 | 6.67GB |
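As a quick sanity check, any file from the table can be run with a GGUF runtime such as llama-cpp-python. The sketch below is an illustration, not an official usage recipe: the Q4_K_M file name comes from the table above, while the context size, token budget, and prompt are assumptions chosen for the example.

```python
# Minimal sketch: running one of the GGUF quants above with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K_M file has been
# downloaded from this repo into the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="AutoCoder_S_6.7B.Q4_K_M.gguf",  # any file from the table works
    n_ctx=4096,  # context window; illustrative, not a tuned value
)

# llama-cpp-python applies the chat template stored in the GGUF metadata.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```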
Original model description:
License: apache-2.0
We introduce a new model designed for the code generation task. Its 33B version's test accuracy on the HumanEval base dataset surpasses that of GPT-4 Turbo (April 2024): 90.9% vs. 90.2%.
Additionally, compared to previous open-source models, AutoCoder offers a new feature: whenever the user wishes to execute the code, it can automatically install the required packages and attempt to run the code until it deems there are no issues.
This is the 6.7B version of AutoCoder. Its base model is deepseek-coder.
See details on the AutoCoder GitHub.
Simple test script:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = ""  # path or Hub id of the model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# Benchmark prompts; loaded here but not used in this minimal snippet.
HumanEval = load_dataset("evalplus/humanevalplus")

Input = ""  # input your question here
messages = [
    {"role": "user", "content": Input},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=False,
    temperature=0.0,
    top_p=1.0,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(answer)
```
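The script loads `evalplus/humanevalplus` but never touches it. One plausible way to wire the dataset into the same generation loop is sketched below, continuing from the variables defined above. This is not the authors' evaluation harness: the `"test"` split and the `"task_id"`/`"prompt"` column names are what the evalplus dataset exposes on the Hub, and proper scoring (sandboxed execution against the plus tests) is intentionally omitted.

```python
# Hedged sketch: feeding HumanEval+ prompts through the same chat pipeline.
# Continues from the script above; real evaluation requires the evalplus harness.
for problem in HumanEval["test"]:
    messages = [{"role": "user", "content": problem["prompt"]}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        inputs,
        max_new_tokens=1024,
        do_sample=False,
        eos_token_id=tokenizer.eos_token_id,
    )
    completion = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
    print(problem["task_id"], completion[:80])  # preview only; no pass/fail scoring here
```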