# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a sentence-transformers model fine-tuned from sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet

### Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
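The `Pooling` block above uses mean pooling (`pooling_mode_mean_tokens: True`): the token embeddings produced by the `BertModel` are averaged, with padding positions masked out. For reference, here is a minimal sketch of the same computation using the `transformers` library directly (the variable names are illustrative, and the example sentence is arbitrary):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

batch = tokenizer(["مثال جملة"],  # "example sentence"
                  padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 384)

# Masked mean: average only over real tokens, not over padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()    # (batch, seq_len, 1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 384])
```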

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet")

# Run inference on Arabic sentences:
# 1. "A young man with blond hair sits on a wall reading a newspaper while a woman and a young girl pass by."
# 2. "A young male looks at a newspaper while two women pass beside him."
# 3. "The young man is asleep while the mother leads her daughter to the park."
sentences = [
    'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
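Beyond pairwise similarity, the embeddings can drive semantic search over a corpus. A minimal sketch using `sentence_transformers.util.semantic_search` (the query and corpus strings here are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet")

corpus = [
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',  # "A young male looks at a newspaper while two women pass beside him"
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',     # "The young man is asleep while the mother leads her daughter to the park"
]
query = 'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة'     # "A blond young man sits on a wall reading a newspaper"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])
```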

## Evaluation

### Metrics

The three Semantic Similarity evaluations below report the same metrics at the three Matryoshka dimensionalities (256, 128, and 64); the dimension labels are inferred from the matching spearman_cosine values in the training logs.

#### Semantic Similarity (sts-test-256)

| Metric | Value |
|:-------------------|:-------|
| pearson_cosine | 0.8264 |
| spearman_cosine | 0.8386 |
| pearson_manhattan | 0.8219 |
| spearman_manhattan | 0.8255 |
| pearson_euclidean | 0.8223 |
| spearman_euclidean | 0.8261 |
| pearson_dot | 0.6375 |
| spearman_dot | 0.6287 |
| pearson_max | 0.8264 |
| spearman_max | 0.8386 |

#### Semantic Similarity (sts-test-128)

| Metric | Value |
|:-------------------|:-------|
| pearson_cosine | 0.821 |
| spearman_cosine | 0.8347 |
| pearson_manhattan | 0.8083 |
| spearman_manhattan | 0.8148 |
| pearson_euclidean | 0.8093 |
| spearman_euclidean | 0.8156 |
| pearson_dot | 0.5795 |
| spearman_dot | 0.576 |
| pearson_max | 0.821 |
| spearman_max | 0.8347 |

#### Semantic Similarity (sts-test-64)

| Metric | Value |
|:-------------------|:-------|
| pearson_cosine | 0.8087 |
| spearman_cosine | 0.8218 |
| pearson_manhattan | 0.7876 |
| spearman_manhattan | 0.7969 |
| pearson_euclidean | 0.7903 |
| spearman_euclidean | 0.7988 |
| pearson_dot | 0.495 |
| spearman_dot | 0.4929 |
| pearson_max | 0.8087 |
| spearman_max | 0.8218 |
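These evaluations measure the embeddings truncated to 256, 128, and 64 dimensions, the Matryoshka dimensions the model was trained with. Since sentence-transformers v2.7, such truncation can be requested at load time; a minimal sketch:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns only the first 64 Matryoshka
# dimensions of each embedding instead of the full 384.
model_64 = SentenceTransformer(
    "Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet",
    truncate_dim=64,
)
embeddings = model_64.encode(["مثال جملة"])  # "example sentence"
print(embeddings.shape)  # (1, 64)
```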

## Training Details

### Training Dataset

#### Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 557,850 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
| details | min: 5 tokens<br>mean: 10.33 tokens<br>max: 52 tokens | min: 5 tokens<br>mean: 13.21 tokens<br>max: 49 tokens | min: 5 tokens<br>mean: 15.32 tokens<br>max: 53 tokens |

  • Samples (English glosses in parentheses):

| anchor | positive | negative |
|:-------|:---------|:---------|
| شخص على حصان يقفز فوق طائرة معطلة ("A person on a horse jumps over a broken down airplane") | شخص في الهواء الطلق، على حصان. ("A person is outdoors, on a horse.") | شخص في مطعم، يطلب عجة. ("A person is at a diner, ordering an omelette.") |
| أطفال يبتسمون و يلوحون للكاميرا ("Children smiling and waving at the camera") | هناك أطفال حاضرون ("There are children present") | الاطفال يتجهمون ("The kids are frowning") |
| صبي يقفز على لوح التزلج في منتصف الجسر الأحمر. ("A boy is jumping on a skateboard in the middle of a red bridge.") | الفتى يقوم بخدعة التزلج ("The boy does a skateboarding trick.") | الصبي يتزلج على الرصيف ("The boy skates down the sidewalk.") |
  • Loss: MatryoshkaLoss with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        256,
        128,
        64
    ],
    "matryoshka_weights": [
        1,
        1,
        1
    ],
    "n_dims_per_step": -1
}
```
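A minimal sketch of how this loss configuration is constructed in sentence-transformers (`model` here stands for the SentenceTransformer being fine-tuned):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# MultipleNegativesRankingLoss treats each (anchor, positive) pair as a
# positive and every other in-batch example as a negative; MatryoshkaLoss
# re-applies it at each truncated dimensionality so that prefixes of the
# embedding remain useful on their own.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[256, 128, 64],
    matryoshka_weights=[1, 1, 1],
    n_dims_per_step=-1,
)
```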
    

### Evaluation Dataset

#### Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 6,584 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
| details | min: 5 tokens<br>mean: 21.86 tokens<br>max: 105 tokens | min: 4 tokens<br>mean: 10.22 tokens<br>max: 49 tokens | min: 4 tokens<br>mean: 11.2 tokens<br>max: 33 tokens |

  • Samples (English glosses in parentheses):

| anchor | positive | negative |
|:-------|:---------|:---------|
| امرأتان يتعانقان بينما يحملان حزمة ("Two women are embracing while holding a package") | إمرأتان يحملان حزمة ("Two women are holding a package") | الرجال يتشاجرون خارج مطعم ("The men are fighting outside a restaurant") |
| طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة. ("Two young children in blue jerseys, one numbered 9 and one numbered 2, stand on wooden steps in a bathroom washing their hands in a sink.") | طفلين يرتديان قميصاً مرقماً يغسلون أيديهم ("Two kids in numbered jerseys wash their hands") | طفلين يرتديان سترة يذهبان إلى المدرسة ("Two kids in jackets walk to school") |
| رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس ("A man sells donuts to a customer during a world exhibition held in the city of Angeles") | رجل يبيع الدونات لعميل ("A man sells donuts to a customer") | امرأة تشرب قهوتها في مقهى صغير ("A woman drinks her coffee in a small cafe") |
  • Loss: MatryoshkaLoss with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        256,
        128,
        64
    ],
    "matryoshka_weights": [
        1,
        1,
        1
    ],
    "n_dims_per_step": -1
}
```
    

### Training Hyperparameters

#### Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
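
In sentence-transformers v3, these settings map onto `SentenceTransformerTrainingArguments` roughly as follows (a sketch, not the author's exact training script; the output directory is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/arabic-minilm-matryoshka",  # illustrative path
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    # no_duplicates keeps duplicate sentences out of a batch, which would
    # otherwise act as false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```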

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

### Training Logs

| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-64_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:------:|:------:|
| 0.0229 | 200 | 6.2204 | - | - | - |
| 0.0459 | 400 | 4.9559 | - | - | - |
| 0.0688 | 600 | 4.7835 | - | - | - |
| 0.0918 | 800 | 4.2725 | - | - | - |
| 0.1147 | 1000 | 4.291 | - | - | - |
| 0.1377 | 1200 | 4.0704 | - | - | - |
| 0.1606 | 1400 | 3.7962 | - | - | - |
| 0.1835 | 1600 | 3.7447 | - | - | - |
| 0.2065 | 1800 | 3.569 | - | - | - |
| 0.2294 | 2000 | 3.5373 | - | - | - |
| 0.2524 | 2200 | 3.608 | - | - | - |
| 0.2753 | 2400 | 3.5609 | - | - | - |
| 0.2983 | 2600 | 3.5231 | - | - | - |
| 0.3212 | 2800 | 3.3312 | - | - | - |
| 0.3442 | 3000 | 3.4803 | - | - | - |
| 0.3671 | 3200 | 3.3552 | - | - | - |
| 0.3900 | 3400 | 3.3024 | - | - | - |
| 0.4130 | 3600 | 3.2559 | - | - | - |
| 0.4359 | 3800 | 3.1882 | - | - | - |
| 0.4589 | 4000 | 3.227 | - | - | - |
| 0.4818 | 4200 | 3.0889 | - | - | - |
| 0.5048 | 4400 | 3.0861 | - | - | - |
| 0.5277 | 4600 | 3.0178 | - | - | - |
| 0.5506 | 4800 | 3.231 | - | - | - |
| 0.5736 | 5000 | 3.1593 | - | - | - |
| 0.5965 | 5200 | 3.1101 | - | - | - |
| 0.6195 | 5400 | 3.1307 | - | - | - |
| 0.6424 | 5600 | 3.1265 | - | - | - |
| 0.6654 | 5800 | 3.1116 | - | - | - |
| 0.6883 | 6000 | 3.1417 | - | - | - |
| 0.7113 | 6200 | 3.0862 | - | - | - |
| 0.7342 | 6400 | 2.9652 | - | - | - |
| 0.7571 | 6600 | 2.8466 | - | - | - |
| 0.7801 | 6800 | 2.271 | - | - | - |
| 0.8030 | 7000 | 2.046 | - | - | - |
| 0.8260 | 7200 | 1.9634 | - | - | - |
| 0.8489 | 7400 | 1.8875 | - | - | - |
| 0.8719 | 7600 | 1.7655 | - | - | - |
| 0.8948 | 7800 | 1.6874 | - | - | - |
| 0.9177 | 8000 | 1.7315 | - | - | - |
| 0.9407 | 8200 | 1.6674 | - | - | - |
| 0.9636 | 8400 | 1.6574 | - | - | - |
| 0.9866 | 8600 | 1.6142 | - | - | - |
| 1.0 | 8717 | - | 0.8347 | 0.8386 | 0.8218 |

## Framework Versions

  • Python: 3.9.18
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.0
  • PyTorch: 2.2.2+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1
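
To reproduce this environment, the Python package versions can be pinned at install time (a sketch; the PyTorch 2.2.2+cu121 build is installed separately from the PyTorch index):

```bash
pip install sentence-transformers==3.0.1 transformers==4.40.0 \
    accelerate==0.26.1 datasets==2.19.0 tokenizers==0.19.1
```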

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

## Acknowledgments

The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.

## Citing this Model

If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:

```bibtex
@software{nacar2024,
  author       = {Omer Nacar},
  title        = {Arabic Matryoshka Embeddings Model - Arabic MiniLM L12 v2 All Nli Triplet},
  year         = 2024,
  url          = {https://huggingface.co./Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet},
  version      = {1.0.0},
}
```