---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- gemma-2-9b-it-WPO-HB
---
Quantizations of https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [jan](https://github.com/janhq/jan)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
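
Any of the clients above can load the GGUF files from this repo directly. As a minimal sketch, the example below uses the llama-cpp-python bindings (the Python wrapper around llama.cpp, not itself listed above); the filename and generation settings are placeholder assumptions, not values taken from this repo.

```python
from llama_cpp import Llama

# Placeholder filename: substitute whichever quantized GGUF file you downloaded.
llm = Llama(
    model_path="gemma-2-9b-it-WPO-HB-Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what WPO fine-tuning is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```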
---

# From original readme

gemma-2-9b-it fine-tuned with hybrid WPO, utilizing two types of data:
1. On-policy sampled gemma outputs based on Ultrafeedback prompts.
2. GPT-4-turbo outputs based on Ultrafeedback prompts.

Compared to the preference data construction method in our paper, we switch to RLHFlow/ArmoRM-Llama3-8B-v0.1 to score the outputs, and choose the outputs with the maximum/minimum scores to form a preference pair.
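
For illustration, the sketch below shows the max/min selection step described above. The `score` function and `build_preference_pair` helper are hypothetical stand-ins (the real pipeline scores candidates with RLHFlow/ArmoRM-Llama3-8B-v0.1); none of these names come from the original training code.

```python
from typing import Callable, List, Tuple

def build_preference_pair(
    prompt: str,
    candidates: List[str],               # on-policy gemma samples + GPT-4-turbo output
    score: Callable[[str, str], float],  # hypothetical reward-model scorer (stand-in for ArmoRM)
) -> Tuple[str, str]:
    """Return (chosen, rejected): the highest- and lowest-scoring candidates."""
    scored = [(score(prompt, c), c) for c in candidates]
    chosen = max(scored)[1]    # output with the maximum reward score
    rejected = min(scored)[1]  # output with the minimum reward score
    return chosen, rejected
```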

We provide our training data at [wzhouad/gemma-2-ultrafeedback-hybrid](https://huggingface.co/datasets/wzhouad/gemma-2-ultrafeedback-hybrid).