Fine-tuned with QuicKB

This is a sentence-transformers model fine-tuned from nomic-ai/modernbert-embed-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 1024 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
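
For clarity, the three modules above amount to: run the ModernBERT encoder, mean-pool the token embeddings over the attention mask, then L2-normalize. Below is a minimal sketch of the equivalent computation with plain transformers (illustrative only; the SentenceTransformer usage in the next section is the supported path):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AdamLucek/modernbert-embed-quickb")
model = AutoModel.from_pretrained("AdamLucek/modernbert-embed-quickb")

inputs = tokenizer(["example sentence"], padding=True, truncation=True,
                   max_length=1024, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# (1) Pooling: mean over non-padding tokens
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit vectors, so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])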

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AdamLucek/modernbert-embed-quickb")
# Run inference
sentences = [
    'Which offeror is mentioned as getting in if there is a points discrepancy?',
    '. at 9:14–19 (“[I]f an offeror does not have the same number of points, if it’s the 130th offeror and it doesn’t have the same number of points as the 90th offeror, then the solicitation says the 90th offeror gets in and the 130th doesn’t.”)',
    '. Since the plaintiff does not address this issue in its sur-reply brief in No. 11-445, and because the plaintiff does not ask the Court to direct the DOJ to produce Document 3 to the plaintiff, the plaintiff does not appear to continue to challenge the DOJ’s decision to withhold Document 3. 140 recorded decision to implement the opinion.” Id. at 32',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
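
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to any of the trained dimensions (512, 256, 128, or 64) with a modest quality trade-off, as the evaluation table below shows. A minimal sketch using the truncate_dim argument:

from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to 256 dimensions (768/512/256/128/64 were all trained)
model = SentenceTransformer("AdamLucek/modernbert-embed-quickb", truncate_dim=256)
embeddings = model.encode(sentences)  # reusing the sentences from above
print(embeddings.shape)
# [3, 256]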

Evaluation

Metrics

Information Retrieval

Metric                dim_768  dim_512  dim_256  dim_128  dim_64
cosine_accuracy@1     0.5821   0.5657   0.5411   0.4887   0.3799
cosine_accuracy@3     0.7495   0.7331   0.7064   0.6581   0.5462
cosine_accuracy@5     0.7957   0.7916   0.7659   0.7177   0.6140
cosine_accuracy@10    0.8573   0.8532   0.8306   0.7803   0.7043
cosine_precision@1    0.5821   0.5657   0.5411   0.4887   0.3799
cosine_precision@3    0.2498   0.2444   0.2355   0.2194   0.1821
cosine_precision@5    0.1591   0.1583   0.1532   0.1435   0.1228
cosine_precision@10   0.0857   0.0853   0.0831   0.0780   0.0704
cosine_recall@1       0.5821   0.5657   0.5411   0.4887   0.3799
cosine_recall@3       0.7495   0.7331   0.7064   0.6581   0.5462
cosine_recall@5       0.7957   0.7916   0.7659   0.7177   0.6140
cosine_recall@10      0.8573   0.8532   0.8306   0.7803   0.7043
cosine_ndcg@10        0.7212   0.7103   0.6839   0.6319   0.5334
cosine_mrr@10         0.6775   0.6645   0.6372   0.5846   0.4797
cosine_map@100        0.6827   0.6695   0.6428   0.5917   0.4878
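
These numbers come from the standard sentence-transformers information-retrieval evaluation run at each Matryoshka dimension. A sketch of how such metrics can be computed with InformationRetrievalEvaluator (the queries, corpus, and relevance mappings below are hypothetical placeholders; the card does not include the evaluation split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("AdamLucek/modernbert-embed-quickb")

# Hypothetical evaluation data: id -> text, and query id -> ids of relevant passages
queries = {"q1": "Which offeror is mentioned as getting in if there is a points discrepancy?"}
corpus = {"d1": "passage relevant to q1", "d2": "unrelated passage"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    truncate_dim=256,  # evaluate at one of the Matryoshka dimensions
    name="dim_256",
)
results = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100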

Training Details

Training Dataset

Unnamed Dataset

  • Size: 8,760 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7 tokens, mean: 15.54 tokens, max: 32 tokens
    • positive: string; min: 4 tokens, mean: 76.24 tokens, max: 169 tokens
  • Samples:
    • anchor: What is being compared in the Circuit's statement?
      positive: .2d at 1389–90. The Circuit rejected this analogy, stating that, in contrast to the CIA Act, the NSA Act “protects not only organizational matters . . . but also ‘any information with respect to the activities’ of the NSA.” Id. at 1390
    • anchor: What type of internal documents used by the CIA in FOIA requests is mentioned?
      positive: . 108 Accordingly, the Court holds that certain specific categories of information withheld by the CIA in this case pursuant to § 403g clearly fall outside that provision’s scope, including (1) internal templates utilized by the CIA in tasking FOIA requests, (2) internal rules, policies and procedures governing FOIA processing, and (7) information about the CIA’s “core functions,” including
    • anchor: How many documents did the CIA withhold under Exemption 2?
      positive: . The CIA states in its declaration that all thirteen documents withheld under 38 The plaintiff previously indicated that it intended to challenge Exemption 2 withholding decisions made by the ODNI as well. See Hackett Decl. Ex. E at 1, ECF No. 29-8. The plaintiff, however, does not pursue that challenge in its opposition to the defendants’ motions for summary judgment in No. 11-445
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
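
For reference, this configuration corresponds to the following loss construction in sentence-transformers (a sketch; the original training script is not included in the card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Apply the ranking loss at each truncated dimension, all weighted equally
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)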
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
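
These settings map directly onto SentenceTransformerTrainingArguments, as in this sketch (an assumed reconstruction of the training setup; output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-quickb",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch positives
)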

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  dim_768_cosine_ndcg@10  dim_512_cosine_ndcg@10  dim_256_cosine_ndcg@10  dim_128_cosine_ndcg@10  dim_64_cosine_ndcg@10
0.5839  10    67.1727        -                       -                       -                       -                       -
1.0     18    -              0.6999                  0.6820                  0.6577                  0.5988                  0.4855
1.1168  20    32.4667        -                       -                       -                       -                       -
1.7007  30    27.9435        -                       -                       -                       -                       -
2.0     36    -              0.7167                  0.7002                  0.6764                  0.6233                  0.5187
2.2336  40    22.2924        -                       -                       -                       -                       -
2.8175  50    20.5125        -                       -                       -                       -                       -
3.0     54    -              0.7190                  0.7080                  0.6824                  0.6318                  0.5339
3.3504  60    18.3621        -                       -                       -                       -                       -
3.8175  68    -              0.7212                  0.7103                  0.6839                  0.6319                  0.5334  *

  • The row marked with * denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.4.0
  • Transformers: 4.48.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
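
To approximate this environment, the versions above can be pinned (a sketch; the CUDA 12.4 build of PyTorch 2.5.1 may need to be installed from the PyTorch index):

pip install sentence-transformers==3.4.0 transformers==4.48.1 accelerate==1.3.0 datasets==3.2.0 tokenizers==0.21.0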

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}