Is ModernBERT already fine-tuned for IR tasks?
Hi there, I was wondering if ModernBERT has already been fine-tuned on IR tasks.
Hello!
Yes, about a dozen times. Here is a filter for Sentence Transformers models with the ModernBERT architecture: https://huggingface.co./models?library=sentence-transformers&other=modernbert
I'm not sure which model is best (it also depends on your use case); perhaps https://huggingface.co./NohTow/ModernBERT-base-DPR-fullneg-gte-0.0002 ?
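Any of the models behind that filter should load directly with Sentence Transformers. As a minimal sketch (using the model above as an example, with placeholder query/document texts, so adapt to your own data):

```python
from sentence_transformers import SentenceTransformer

# Load a ModernBERT-based retrieval model from the filter above
# (swap in whichever model fits your use case).
model = SentenceTransformer("NohTow/ModernBERT-base-DPR-fullneg-gte-0.0002")

queries = ["What is dense passage retrieval?"]
documents = [
    "Dense passage retrieval encodes queries and passages into vectors for search.",
    "ModernBERT is an encoder-only transformer with a long context window.",
]

# Encode both sides and score them with the model's similarity function
query_embeddings = model.encode(queries)
document_embeddings = model.encode(documents)
scores = model.similarity(query_embeddings, document_embeddings)
print(scores)
```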
But I suspect that none so far have reached the same performance as some of the models reported on MTEB. We'll have to wait a bit longer for that.
- Tom Aarsen
Thank you for the information and the links!
I was also curious about the base versions, `answerdotai/ModernBERT-base` and `answerdotai/ModernBERT-large`: have these models already been fine-tuned for IR tasks, for example with a contrastive loss?
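For reference, the kind of contrastive fine-tuning I have in mind is roughly the following. This is just a rough sketch using the Sentence Transformers trainer with in-batch-negatives contrastive loss; the dataset and hyperparameters are placeholders, not an established recipe:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Start from the plain base checkpoint; a pooling layer is added automatically
# since this is not yet a Sentence Transformers model.
model = SentenceTransformer("answerdotai/ModernBERT-base")

# Placeholder (query, answer) pair dataset; any anchor/positive pair data works here.
train_dataset = load_dataset("sentence-transformers/natural-questions", split="train")

# Contrastive loss with in-batch negatives
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-base-ir",
    num_train_epochs=1,
    per_device_train_batch_size=32,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```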