Kukedlc/NeuralMaxime-7B-DPO
Tags: Text Generation · Transformers · Safetensors · Intel/orca_dpo_pairs · mistral · code · conversational · text-generation-inference · Inference Endpoints
License: apache-2.0
NeuralMaxime 7B DPO
DPO fine-tuning: Intel orca_dpo_pairs
Merge: MergeKit
Merged models: NeuralMonarch & AlphaMonarch (MLabonne)
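A minimal usage sketch with the Hugging Face Transformers text-generation pipeline; the prompt and sampling settings below are illustrative assumptions, not taken from this card.

```python
# Minimal sketch: run Kukedlc/NeuralMaxime-7B-DPO with the Transformers
# text-generation pipeline. The prompt and sampling settings are illustrative
# assumptions; only the model id and FP16 tensor type come from this card.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Kukedlc/NeuralMaxime-7B-DPO",
    torch_dtype=torch.float16,  # card lists FP16 tensors
    device_map="auto",          # requires accelerate; places layers automatically
)

prompt = "Explain direct preference optimization (DPO) in one paragraph."
output = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```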
Downloads last month: 73
Model size: 7.24B params (Safetensors)
Tensor type: FP16
Model tree for Kukedlc/NeuralMaxime-7B-DPO: 2 quantized models
Dataset used to train Kukedlc/NeuralMaxime-7B-DPO: Intel/orca_dpo_pairs (updated Nov 29, 2023)
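For reference, a short sketch of loading that training dataset with the datasets library; the split name and schema are assumptions based on the dataset's own card, not restated on this page.

```python
# Sketch: load Intel/orca_dpo_pairs, the DPO preference dataset referenced above.
# The "train" split and the preference-pair schema (chosen/rejected style fields)
# are assumptions from the dataset card, not from this page.
from datasets import load_dataset

ds = load_dataset("Intel/orca_dpo_pairs", split="train")
print(ds.column_names)  # expect preference-pair fields such as chosen/rejected
print(ds[0])
```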