# SPICE

Selective Pretraining for Informed Context Extraction (SPICE): a small but powerful retriever.
> "The future is a thing that is not given. It is a thing that is made." - Leto II (Dune)
## Using the model
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("iamgroot42/spice")

sentences = [
    "Beware of manuals. Manuals create habits.",
    "How easy it is to mistake clear reasoning from correct reasoning!",
    "Show me a completely smooth operation and I'll show you someone who's covering mistakes. Real boats rock.",
]

# Encode the sentences, then compute pairwise similarity scores
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
```
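By default, `model.similarity` returns the matrix of pairwise cosine similarities between the embeddings. A minimal NumPy sketch of that computation (the function name here is illustrative, not part of the library):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    # Normalize each row to unit length, then take pairwise dot products.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    return unit @ unit.T

# Toy 2-D "embeddings" to show the shape of the result.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.round(3))  # diagonal entries are 1.0; orthogonal rows score 0.0
```

For retrieval, you would encode a query the same way and rank documents by their similarity to it.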
Model training details and data will be uploaded soon!
## Evaluation results

Self-reported scores on the MTEB ArguAna (default) test set:

| Metric     | Score  |
|------------|--------|
| nDCG@1     | 49.075 |
| nDCG@3     | 66.383 |
| nDCG@5     | 70.342 |
| nDCG@10    | 72.962 |
| nDCG@20    | 73.723 |
| nDCG@100   | 73.826 |
| nDCG@1000  | 73.826 |
| MAP@1      | 49.075 |
| MAP@3      | 62.138 |
| MAP@5      | 64.354 |
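For readers unfamiliar with the metric, nDCG@k discounts each relevant result by the log of its rank and normalizes by the ideal ranking. A small self-contained sketch (these helper names are illustrative, not from any benchmark library):

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: relevance discounted by log2 of rank position.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (relevance-sorted) ranking.
    ideal = sorted(relevances, reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0

# With a single relevant passage per query, nDCG@1 is simply the fraction
# of queries whose top-ranked hit is the relevant one.
print(ndcg_at_k([1, 0, 0], 1))  # relevant doc ranked first -> 1.0
print(ndcg_at_k([0, 1, 0], 3))  # relevant doc at rank 2 -> 1/log2(3)
```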