---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: null
      verified: false
---
# speechless-coder-ds-6.7b
Fine-tuned from deepseek-ai/deepseek-coder-6.7b-base on the datasets below to improve the model's reasoning and planning abilities.

Context window length: 8192 tokens. Training samples were filtered to token counts greater than 128 and less than 8192 (a sketch of this filter follows the dataset list).
Total: 185,193 samples (426 MB)

- ise-uiuc/Magicoder-OSS-Instruct-75K: 75,186 samples
- ise-uiuc/Magicoder-Evol-Instruct-110K: 110,007 samples
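A minimal sketch of that length filter, assuming tokenization with the base model's tokenizer. The column names match ise-uiuc/Magicoder-Evol-Instruct-110K but are otherwise illustrative; the actual preprocessing lives in the speechless repo.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# NOTE: "instruction"/"response" match the Evol-Instruct dataset schema;
# the OSS-Instruct dataset uses different column names. This only
# illustrates the 128 < n < 8192 token-count filter described above.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")

def within_window(sample):
    # Keep samples whose tokenized length n satisfies 128 < n < 8192.
    n = len(tokenizer(sample["instruction"] + sample["response"])["input_ids"])
    return 128 < n < 8192

evol = load_dataset("ise-uiuc/Magicoder-Evol-Instruct-110K", split="train")
evol = evol.filter(within_window)
print(f"{len(evol)} samples kept")
```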
HumanEval decoding settings: 50 samples per problem, temperature=0.2, max_tokens=512, top_p=0.95.
Code: https://github.com/uukuguy/speechless
## How to Prompt the Model
This model accepts the Alpaca instruction format.
For example:
```
You are an intelligent programming assistant.

### Instruction:
Implement a linked list in C++

### Response:
```
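A minimal generation sketch with transformers, assuming the checkpoint id `uukuguy/speechless-coder-ds-6.7b` (the Hub repo id is not stated in this card). The sampling settings mirror the decoding settings listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: repo id assumed; this card does not state the Hub id.
model_id = "uukuguy/speechless-coder-ds-6.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, matching the template above.
prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```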
## HumanEval
| Metric | Value |
|---|---|
| humaneval-python | |

For comparison, CodeLlama scores on humaneval-python:

| Model | pass@1 |
|---|---|
| CodeLlama-34B-Python | 53.29 |
| CodeLlama-34B-Instruct | 50.79 |
| CodeLlama-13B-Instruct | 50.6 |
| CodeLlama-34B | 45.11 |
| CodeLlama-13B-Python | 42.89 |
| CodeLlama-13B | 35.07 |
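pass@1 here is the standard unbiased estimator from the HumanEval (Codex) paper, computed from the 50 samples per problem noted earlier. Below is a self-contained reference implementation of that formula; it is the standard estimator, not necessarily the exact evaluation script used for this card.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval (Codex) paper.

    n: total completions sampled for a problem
    c: completions that pass the unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 50 samples per problem (as above), 12 of which pass.
# For k=1 the estimator reduces to c/n = 12/50 = 0.24.
print(pass_at_k(n=50, c=12, k=1))
```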
## BigCode Eval
humanevalfixtests: 0.314188

| Task | pass@1 |
|---|---|
| cpp | 0.2744 |
| go | 0.2317 |
| java | 0.2561 |
| js | 0.2195 |
| python | 0.2378 |
| rust | 0.1341 |
humanevalsynthesize: 0.390111

| Task | pass@1 |
|---|---|
| cpp | 0.3780 |
| go | 0.2561 |
| java | 0.4512 |
| js | 0.4268 |
| python | 0.5366 |
| rust | 0.2500 |

mbpp pass@1: 0.432
## LMEval
| Metric | Value |
|---|---|
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |