How we leveraged distilabel to create an Argilla 2.0 Chatbot
SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")
. Does your model not have an ONNX or OpenVINO file yet? No worries - it'll be autoexported for you. Thank me later 😉from_model2vec
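If you want to try it, here's a minimal sketch (assuming sentence-transformers v3.2+ with the ONNX extras installed):

```python
from sentence_transformers import SentenceTransformer

# Load with the ONNX backend; if the repo has no ONNX file yet,
# one is exported automatically on first load.
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")

# Encode as usual; embeddings come back as a NumPy array.
embeddings = model.encode([
    "The weather is lovely today.",
    "It's so sunny outside!",
])
print(embeddings.shape)  # (2, 384) for all-MiniLM-L6-v2
```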
There's also Model2Vec support for blazing-fast static embeddings: load a pre-distilled model with `from_model2vec`, or use `from_distillation` where you do the distillation yourself. It'll only take 5 seconds on GPU & 2 minutes on CPU, no dataset needed.
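A small sketch of what that can look like, assuming the `StaticEmbedding` module from sentence-transformers v3.2+ with the `model2vec` package installed; the model names below are just examples:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import StaticEmbedding

# Option 1: load a pre-distilled Model2Vec model from the Hub.
static = StaticEmbedding.from_model2vec("minishlab/M2V_base_output")

# Option 2: distill your own static model from an existing
# Sentence Transformer (fast, no dataset needed), e.g.:
# static = StaticEmbedding.from_distillation("BAAI/bge-base-en-v1.5", device="cuda")

model = SentenceTransformer(modules=[static])
embeddings = model.encode(["Static embeddings trade a bit of accuracy for a lot of speed."])
```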
On the upload side, there's `huggingface-cli upload-large-folder`, designed for your massive models and datasets. Much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡 Get it with `pip install huggingface_hub==0.25.0`.
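There's a Python counterpart as well; a rough sketch assuming huggingface_hub >= 0.25.0, with placeholder repo id and folder path:

```python
from huggingface_hub import HfApi

api = HfApi()

# Resumable, multi-part upload intended for very large folders.
# CLI equivalent: huggingface-cli upload-large-folder <repo-id> <local-folder> --repo-type=model
api.upload_large_folder(
    repo_id="your-username/llama-70b-finetune",  # placeholder repo id
    folder_path="./llama-70b-finetune",          # placeholder local folder
    repo_type="model",
)
```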