
The base model of AutoCoder_QW_7B is CodeQwen1.5-7B.

In this version, we fixed the issue where the model would start the code interpreter only when explicitly asked to verify its code; it now decides on its own when execution is needed.

You can try the code interpreter feature in the AutoCoder GitHub repository.
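The complete, supported interpreter integration lives in that repository. Purely as an illustration, the core of such a loop is extracting runnable code from the model's reply and executing it. The following is a minimal sketch; the python-fenced block convention, the helper name, and the single execution round are assumptions for illustration, not AutoCoder's actual protocol:

import re
import subprocess
import tempfile

def run_first_python_block(reply, timeout=30):
    """Run the first python-fenced code block found in a model reply.

    Hypothetical helper for illustration only; AutoCoder's real
    interpreter protocol is defined in its GitHub repository.
    """
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    if match is None:
        return None  # the model chose not to invoke the interpreter
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(match.group(1))
        path = f.name
    result = subprocess.run(["python", path],
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout + result.stderr

In a full loop, the captured output would be appended to the conversation and the model asked to continue from it.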

For simple code generation without the code interpreter, use the following script:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "Bin12345/AutoCoder_QW_7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             device_map="auto")

question = ""  # put your question here

messages = [
    {"role": "user", "content": question}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs,
                         max_new_tokens=1024,
                         do_sample=False,  # greedy decoding
                         num_return_sequences=1,
                         eos_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens, dropping the echoed prompt.
answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(answer)
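
To see tokens as they are produced instead of waiting for the full completion, you can attach a TextStreamer from transformers to the same generate call. A minimal sketch reusing the model, tokenizer, and inputs from the script above:

from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated; skip_prompt
# suppresses the echoed input so only the answer is shown.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs,
               max_new_tokens=1024,
               do_sample=False,
               streamer=streamer,
               eos_token_id=tokenizer.eos_token_id)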
AutoCoder_QW_7B has 7.25B parameters and is distributed as FP16 safetensors.