SPECTER
SPECTER is a pre-trained language model that generates document-level embeddings of scientific documents. It is pre-trained on a powerful signal of document-level relatedness: the citation graph. Unlike existing pre-trained language models, SPECTER can be applied to downstream applications without task-specific fine-tuning.
If you're coming here because you want to embed papers, note that SPECTER has been superseded by SPECTER2. Use that instead.
Paper: SPECTER: Document-level Representation Learning using Citation-informed Transformers
Original Repo: Github
Evaluation Benchmark: SciDocs
Authors: Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld
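Below is a minimal usage sketch with the Hugging Face `transformers` library, following the approach described in the paper: the title and abstract of each paper are concatenated with the tokenizer's `[SEP]` token, and the final-layer `[CLS]` embedding is taken as the document representation. The example paper data is illustrative only.

```python
from transformers import AutoTokenizer, AutoModel

# Load the SPECTER tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

# Each input is a paper's title and abstract joined by the [SEP] token
# (illustrative example data).
papers = [
    {"title": "BERT", "abstract": "We introduce a new language representation model called BERT."},
    {"title": "Attention is all you need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks."},
]
texts = [p["title"] + tokenizer.sep_token + p.get("abstract", "") for p in papers]

inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
outputs = model(**inputs)

# The [CLS] token embedding of the last layer serves as the document embedding.
embeddings = outputs.last_hidden_state[:, 0, :]
print(embeddings.shape)  # e.g. (2, 768)
```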