---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 
      verified: false
---

# speechless-coder-ds-6.7b

[4, 5 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co./uukuguy/speechless-coder-ds-6.7b/tree/main/GGUF)
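
For the GGUF builds, here is a minimal llama-cpp-python sketch; the quantized filename is an assumption, so substitute the file you actually downloaded:

```python
# Hypothetical usage of the GGUF builds with llama-cpp-python.
# The filename below is an assumption; use the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="speechless-coder-ds-6.7b.Q4_K_M.gguf", n_ctx=8192)

# Alpaca-style prompt; see "How to Prompt the Model" below.
prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=512, temperature=0.2, top_p=0.95)
print(out["choices"][0]["text"])
```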

The following datasets were used to fine-tune deepseek-ai/deepseek-coder-6.7b to improve the model's reasoning and planning abilities.

- Context window length: 8,192 tokens
- Sample token-length filter: 128 < max_tokens < 8,192 (a filtering sketch follows the dataset list below)

Total: 185,193 samples (426 MB)

- ise-uiuc/Magicoder-OSS-Instruct-75K: 75,186 samples
- ise-uiuc/Magicoder-Evol-Instruct-110K: 110,007 samples
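
A minimal sketch of how such a token-length filter might be applied, assuming the base model's tokenizer; the field names are illustrative assumptions, not the published training pipeline:

```python
# Hypothetical reconstruction of the 128 < tokens < 8192 sample filter.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b")

def within_bounds(sample):
    # Field names are illustrative; adjust to the dataset's actual schema.
    text = sample["instruction"] + "\n" + sample["response"]
    n_tokens = len(tokenizer(text)["input_ids"])
    return 128 < n_tokens < 8192

ds = load_dataset("ise-uiuc/Magicoder-Evol-Instruct-110K", split="train")
ds = ds.filter(within_bounds)
print(f"{len(ds):,} samples kept")
```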


Evaluation sampling settings: 50 samples per problem, temperature=0.2, max tokens=512, top_p=0.95.
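
With 50 samples per problem, pass@1 is presumably computed with the standard unbiased estimator from the Codex paper; a minimal sketch (the estimator itself is standard, but its use here is an assumption):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    1 - C(n - c, k) / C(n, k), where n samples were drawn
    and c of them passed the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 12 of 50 samples pass -> pass@1 = 12/50 = 0.24
print(pass_at_k(50, 12, 1))
```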

Code: https://github.com/uukuguy/speechless

## How to Prompt the Model
This model accepts the Alpaca instruction format.

For example:
```
You are an intelligent programming assistant.

### Instruction:
Implement a linked list in C++

### Response:
```
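
A minimal transformers sketch of this prompt format; the sampling settings mirror the evaluation settings above and are reasonable defaults, not prescribed values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-coder-ds-6.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, as shown above.
prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```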

## HumanEval

| Metric | Value |
| --- | --- |
| humaneval-python |  |

For reference, humaneval-python scores of comparable models on the [Big Code Models Leaderboard](https://huggingface.co./spaces/bigcode/bigcode-models-leaderboard):

| Model | humaneval-python |
| --- | --- |
| CodeLlama-34B-Python | 53.29 |
| CodeLlama-34B-Instruct | 50.79 |
| CodeLlama-13B-Instruct | 50.6 |
| CodeLlama-34B | 45.11 |
| CodeLlama-13B-Python | 42.89 |
| CodeLlama-13B | 35.07 |

## BigCode Eval
Overall average pass@1 across all tasks below: 0.314188

| Task | pass@1 |
| --- | --- |
| humanevalfixtests-cpp | 0.2744 |
| humanevalfixtests-go | 0.2317 |
| humanevalfixtests-java | 0.2561 |
| humanevalfixtests-js | 0.2195 |
| humanevalfixtests-python | 0.2378 |
| humanevalfixtests-rust | 0.1341 |

Average pass@1 across HumanEvalSynthesize and MBPP: 0.390111

| Task | pass@1 |
| --- | --- |
| humanevalsynthesize-cpp | 0.3780 |
| humanevalsynthesize-go | 0.2561 |
| humanevalsynthesize-java | 0.4512 |
| humanevalsynthesize-js | 0.4268 |
| humanevalsynthesize-python | 0.5366 |
| humanevalsynthesize-rust | 0.2500 |
| mbpp | 0.4320 |



## LMEval

[Open LLM Leaderboard](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard)

| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA |  |
| Average |  |