# Model Card for mncai/mistral-7b-dpo-v6
## Introduction of MindsAndCompany

We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as a Code Assistant, a TOD Chatbot, and LLMOps, and we are in the process of developing Enterprise AGI (Artificial General Intelligence).
## Model Summary

Based on Mistral-7B, tuned with DPO.

## Detail

The first step is a TIES merge, using the following mergekit configuration:
```yaml
models:
  - model: AIDC-ai-business/Marcoroni-7B-v3
    # no parameters necessary for base model
  - model: GreenNode/GreenNodeLM-7B-v1olet
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: viethq188/LeoScorpius-7B-Chat-DPO
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: mncai/mistral-7b-dpo-v5
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
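For reference, a config like this can be applied with mergekit. The sketch below simply shells out to the `mergekit-yaml` CLI; the file and output paths (`config.yml`, `./merged`) are illustrative assumptions, not the exact commands used.

```python
import subprocess

# Assumes the YAML above is saved as config.yml; the merged model is written
# to ./merged. The --cuda flag runs the merge on GPU.
subprocess.run(
    ["mergekit-yaml", "config.yml", "./merged", "--cuda"],
    check=True,
)
```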
The second step is DPO training, using trl's `DPOTrainer`:

```python
from transformers import TrainingArguments
from trl import DPOTrainer

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=5,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    max_steps=1000,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer (model, ref_model, tokenizer, and dataset are prepared beforehand)
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    # peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=2048,
)

# Fine-tune model with DPO
dpo_trainer.train()
```
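The snippet above assumes `model`, `ref_model`, `tokenizer`, `dataset`, and `new_model` are already defined. A minimal sketch of that setup, with illustrative paths and a placeholder preference dataset (any dataset with `prompt`, `chosen`, and `rejected` columns), might look like this:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "path/to/merged-model"  # illustrative: the TIES-merged model from step one
new_model = "mistral-7b-dpo-v6"

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder dataset name; the actual preference data used is not specified here.
dataset = load_dataset("your/dpo-pairs-dataset", split="train")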
## How to Use

Here is an example of how to use our model.

```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/mistral-7b-dpo-v6'

tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    torch_dtype=torch.float16,  # assumed settings; adjust dtype/device to your hardware
    device_map="auto",
)

# Prompt (translated from Korean): "There are two spheres, with diameters of 1
# and 2. How many times do their volumes differ? Please explain as well."
message = ("<|user|>\nThere are two spheres, with diameters of 1 and 2. "
           "How many times do their volumes differ? Please explain as well.\n"
           "<|assistant|>\n")

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
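For this prompt, a correct answer should note that volume scales with the cube of the diameter, so the larger sphere has 2³ = 8 times the volume of the smaller one.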
## Warnings

Currently, the Open LLM Leaderboard is overfitted. This is inevitable because, unlike Kaggle, where private scoring is followed by the end of the competition, here the scores remain continuously open. Even among my own models, the ranking on internal evaluation data was mncai/agiin-13.6B-v0.1 > mncai/agiin-11.1B-v0.1 > mncai/mistral-7b-dpo-v6, yet on the leaderboard mncai/mistral-7b-dpo-v6 has the highest score. When choosing a model from the open LLM leaderboard, it is best to evaluate it with your own private dataset that is not publicly available.
## Detect-Pretrain-Code-Contamination Result Share

Using https://github.com/Mihaiii/detect-pretrain-code-contamination:

```sh
DATASET=truthful_qa python src/run.py --target_model mncai/mistral-7b-dpo-v6 --data $DATASET --output_dir out/$DATASET --ratio_gen 0.4
```

Output:

```
result < 0.1, %: 0.76
```
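The same check can be repeated for other eval sets; a sketch is below (the dataset identifiers are placeholders, so check the repo's README for the exact supported names):

```python
import subprocess

# Placeholder dataset names; see the repo for the identifiers it accepts.
for ds in ["truthful_qa", "gsm8k", "winogrande"]:
    subprocess.run(
        ["python", "src/run.py",
         "--target_model", "mncai/mistral-7b-dpo-v6",
         "--data", ds,
         "--output_dir", f"out/{ds}",
         "--ratio_gen", "0.4"],
        check=True,
    )
```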
## Contact

If you have any questions, please raise an issue or contact us at [email protected].