momori-chegg
committed on
Upload README.md with huggingface_hub
README.md
CHANGED
@@ -1,61 +1,5 @@
-
----
-tags:
-- cheggllm
-- code
-metrics:
-- code_eval
-library_name: transformers
-model-index:
-- name: masa-test
-  results:
-  - task:
-      type: text-generation
-    dataset:
-      type: cs_eval_data
-      name: cs_eval_data
-      revision: 971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7
-    metrics:
-    - name: accuracy
-      type: accuracy
-      value: 4.01
-      verified: false
-    source:
-      name: eval_report
-      url: https://huggingface.co/datasets/momori-chegg/cs_evaluation_report/blob/main/2023-01-21/cs_eval_report.csv
-  - task:
-      type: text-generation
-    dataset:
-      type: cs_eval_dataset
-      name: cs_eval_dataset
-      revision: 971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7
-    metrics:
-    - name: structure
-      type: structure
-      value: 30.9
-      verified: false
-    source:
-      name: eval_report
-      url: https://huggingface.co/datasets/momori-chegg/cs_evaluation_report/blob/main/2023-01-21/cs_eval_report.csv
-  - task:
-      type: text-generation
-    dataset:
-      type: cs_testbed_dataset
-      name: cs_testbed_dataset
-      revision: 971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7
-    metrics:
-    - name: structure
-      type: structure
-      value: 25
-      verified: false
-    source:
-      name: eval_report
-      url: https://huggingface.co/datasets/momori-chegg/cs_testbed_evaluation_dataset
----
 
-
-
-[testbed dataset](https://huggingface.co/datasets/momori-chegg/cs_testbed_evaluation_dataset)<br>
-[inference results](https://huggingface.co/datasets/momori-chegg/cs_inference_results)<br>
-
-
+---
+{'license': 'apache-2.0', 'language': ['fr', 'it', 'de', 'es', 'en'], 'tags': ['moe']}---
+
+# Model Card for Mixtral-8x7B
+The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
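The replacement card only names Mixtral-8x7B and its sparse mixture-of-experts architecture. A minimal sketch of loading such a checkpoint with the transformers library follows; the repo id mistralai/Mixtral-8x7B-v0.1 is an assumption, since this card does not specify one.

```python
# Minimal sketch: loading a Mixtral-style checkpoint with transformers.
# The repo id below is an assumption; the card above does not name one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # hypothetical choice for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the model's layers across available devices
# (requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The Mixtral-8x7B model is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```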