SentenceTransformer based on dunzhang/stella_en_1.5B_v5
This is a sentence-transformers model finetuned from dunzhang/stella_en_1.5B_v5. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: dunzhang/stella_en_1.5B_v5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model
  (1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 1536, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
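The final Dense module projects the 1536-dimensional pooled Qwen2 hidden state down to 1024 dimensions. As a quick sanity check (a minimal sketch, assuming the model loads as in the Usage section below):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DrishtiSharma/stella_en_1.5B_v5-obliqa-5-epochs")
print(model.get_sentence_embedding_dimension())  # 1024
print(model[2])  # the Dense module mapping 1536 -> 1024 features
```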
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DrishtiSharma/stella_en_1.5B_v5-obliqa-5-epochs")

# Run inference
sentences = [
    'Are there any anticipated changes to the COBS Rule 17.3 / MIR Rule 3.2.1 that Authorised Persons should be preparing for in the near future? If so, what is the expected timeline for these changes to take effect?',
    'REGULATORY REQUIREMENTS FOR AUTHORISED PERSONS ENGAGED IN REGULATED ACTIVITIES IN RELATION TO VIRTUAL ASSETS\nCapital Requirements\nWhen applying COBS Rule 17.3 / MIR Rule 3.2.1 to an Authorised Person, the FSRA will apply proportionality in considering whether any additional capital buffer must be held, based on the size, scope, complexity and nature of the activities and operations of the Authorised Person and, if so, the appropriate amount of regulatory capital required as an additional buffer. An Authorised Person that the FSRA considers to be high risk may attract higher regulatory capital requirements.\n',
    'In exceptional circumstances, where the Bail-in Tool is applied, the Regulator may exclude or partially exclude certain liabilities from the application of the Write Down or Conversion Power where—\n(a)\tit is not possible to bail-in that liability within a reasonable time despite the reasonable efforts of the Regulator;\n(b)\tthe exclusion is strictly necessary and is proportionate to achieve the continuity of Critical Functions and Core Business Lines in a manner that maintains the ability of the Institution in Resolution to continue key operations, services and transactions;\n(c)\tthe exclusion is strictly necessary and proportionate to avoid giving rise to widespread contagion, in particular as regards Deposits and Eligible Deposits which would severely disrupt the functioning of financial markets, including financial market infrastructures, in a manner that could cause broader financial instability; or\n(d)\tthe application of the Bail-in Tool to those liabilities would cause a destruction of value such that the losses borne by other creditors would be higher than if those liabilities were excluded from bail-in.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
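Building on the example above, a short retrieval-style follow-up (illustrative only): treat the first sentence as the query and rank the two passages against it.

```python
# The passage that actually discusses COBS Rule 17.3 / MIR Rule 3.2.1
# should receive the highest similarity score.
query_embedding = model.encode(sentences[0])
passage_embeddings = model.encode(sentences[1:])
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(f"Best match (score {scores[0, best].item():.4f}): {sentences[1 + best][:80]}...")
```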
Evaluation
Metrics
Information Retrieval
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.6234 |
cosine_accuracy@3 | 0.7636 |
cosine_accuracy@5 | 0.8113 |
cosine_accuracy@10 | 0.8558 |
cosine_precision@1 | 0.6234 |
cosine_precision@3 | 0.2688 |
cosine_precision@5 | 0.1757 |
cosine_precision@10 | 0.0953 |
cosine_recall@1 | 0.5458 |
cosine_recall@3 | 0.6823 |
cosine_recall@5 | 0.7314 |
cosine_recall@10 | 0.7835 |
cosine_ndcg@10 | 0.6893 |
cosine_mrr@10 | 0.7027 |
cosine_map@100 | 0.6455 |
dot_accuracy@1 | 0.3447 |
dot_accuracy@3 | 0.5656 |
dot_accuracy@5 | 0.6639 |
dot_accuracy@10 | 0.7787 |
dot_precision@1 | 0.3447 |
dot_precision@3 | 0.1955 |
dot_precision@5 | 0.1403 |
dot_precision@10 | 0.0855 |
dot_recall@1 | 0.3029 |
dot_recall@3 | 0.5 |
dot_recall@5 | 0.5915 |
dot_recall@10 | 0.7071 |
dot_ndcg@10 | 0.5127 |
dot_mrr@10 | 0.4802 |
dot_map@100 | 0.4464 |
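The figures above come from Sentence Transformers' `InformationRetrievalEvaluator`. Here is a minimal sketch of running such an evaluation yourself; the queries, corpus, and relevance judgments below are hypothetical placeholders, not the actual evaluation split:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data; substitute your own query/corpus/qrels split.
queries = {"q1": "Are there any anticipated changes to COBS Rule 17.3 / MIR Rule 3.2.1?"}
corpus = {
    "d1": "When applying COBS Rule 17.3 / MIR Rule 3.2.1 to an Authorised Person ...",
    "d2": "In exceptional circumstances, where the Bail-in Tool is applied ...",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant document ids

ir_evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = ir_evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
print(results)
```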
Training Details
Training Dataset
Unnamed Dataset
- Size: 22,291 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 1000 samples:

 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 16 tokens, mean: 33.53 tokens, max: 71 tokens | min: 15 tokens, mean: 118.07 tokens, max: 512 tokens |
- Samples (the first three `sentence_0` / `sentence_1` pairs):

Sample 1
- sentence_0: What constitutes a "sufficiently advanced stage of development" for a FinTech Proposal to qualify for a live test under the RegLab framework, as mentioned in criterion (c)?
- sentence_1: Evaluation Criteria. To qualify for authorisation under the RegLab framework, the applicant must demonstrate how it satisfies the following evaluation criteria: (a) the FinTech Proposal promotes FinTech innovation, in terms of the business application and deployment model of the technology. (b) the FinTech Proposal has the potential to: i. promote significant growth, efficiency or competition in the financial sector; ii. promote better risk management solutions and regulatory outcomes for the financial industry; or iii. improve the choices and welfare of clients. (c) the FinTech Proposal is at a sufficiently advanced stage of development to mount a live test. (d) the FinTech Proposal can be deployed in the ADGM and the UAE on a broader scale or contribute to the development of ADGM as a financial centre, and, if so, how the applicant intends to do so on completion of the validity period.

Sample 2
- sentence_0: Are there any upcoming regulatory changes that Authorised Persons should be aware of regarding the handling or classification of Virtual Assets within the ADGM?
- sentence_1: CONCEPTS RELATING TO THE DISCLOSURE OF PETROLEUM ACTIVITIES. Petroleum Projects and materiality. If a Petroleum Reporting Entity discloses estimates that it viewed as material at the time of disclosure, but subsequently forms a view that they are no longer material, the FSRA expects the Petroleum Reporting Entity to make a further disclosure providing the clear rationale for the changed view on materiality. Such reasoning would generally follow the considerations outlined in paragraph 24 above.

Sample 3
- sentence_0: What are the ADGM's requirements for VC Managers regarding the periodic assessment and audit of their compliance frameworks, and who is qualified to conduct such assessments?
- sentence_1: Principle 1 – A Robust and Transparent Risk-Based Regulatory Framework. The framework encompasses a suite of regulations, activity-specific rules and supporting guidance that delivers protection to investors, maintains market integrity and future-proofs against financial stability risks. In particular, it introduces a clear taxonomy defining VAs as commodities within the wider Digital Asset universe and requires the licensing of entities engaged in regulated activities that use VAs within ADGM.
- Loss: `MultipleNegativesRankingLoss` with these parameters: `{"scale": 20.0, "similarity_fct": "cos_sim"}`
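A minimal sketch of instantiating this loss with those parameters via the Sentence Transformers API:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("dunzhang/stella_en_1.5B_v5")
# In-batch negatives: for each (sentence_0, sentence_1) pair, the other
# sentence_1 entries in the same batch act as negatives.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```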
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `multi_dataset_batch_sampler`: round_robin
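These correspond roughly to the following `SentenceTransformerTrainingArguments`; the output directory is a hypothetical placeholder, and passing the sampler as a string assumes the enum coercion in Sentence Transformers 3.x:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/stella-obliqa",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    multi_dataset_batch_sampler="round_robin",
)
```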
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
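Putting the pieces together, here is a condensed sketch of the standard Sentence Transformers 3.x training loop under the settings above; the dataset rows are placeholders for the 22,291 pairs described earlier:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("dunzhang/stella_en_1.5B_v5")

# Placeholder rows; the real dataset holds 22,291 (sentence_0, sentence_1) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What constitutes a 'sufficiently advanced stage of development'?"],
    "sentence_1": ["Evaluation Criteria. To qualify for authorisation under the RegLab framework ..."],
})

loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Pair with the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="output/stella-obliqa",  # hypothetical path
    per_device_train_batch_size=10,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```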
Training Logs
Epoch | Step | Training Loss | cosine_map@100 |
---|---|---|---|
0.0897 | 200 | - | 0.5597 |
0.1794 | 400 | - | 0.5674 |
0.2242 | 500 | 0.7416 | - |
0.2691 | 600 | - | 0.4684 |
0.3587 | 800 | - | 0.5593 |
0.4484 | 1000 | 0.6613 | 0.5502 |
0.5381 | 1200 | - | 0.5740 |
0.6278 | 1400 | - | 0.5398 |
0.6726 | 1500 | 0.5382 | - |
0.7175 | 1600 | - | 0.5820 |
0.8072 | 1800 | - | 0.5770 |
0.8969 | 2000 | 0.4959 | 0.5834 |
0.9865 | 2200 | - | 0.5382 |
1.0 | 2230 | - | 0.3223 |
1.0762 | 2400 | - | 0.5532 |
1.1211 | 2500 | 0.3796 | - |
1.1659 | 2600 | - | 0.5817 |
1.2556 | 2800 | - | 0.5929 |
1.3453 | 3000 | 0.367 | 0.5937 |
1.4350 | 3200 | - | 0.5907 |
1.5247 | 3400 | - | 0.6024 |
1.5695 | 3500 | 0.2877 | - |
1.6143 | 3600 | - | 0.6006 |
1.7040 | 3800 | - | 0.6131 |
1.7937 | 4000 | 0.2818 | 0.6167 |
1.8834 | 4200 | - | 0.6040 |
1.9731 | 4400 | - | 0.6144 |
2.0 | 4460 | - | 0.6225 |
2.0179 | 4500 | 0.2529 | - |
2.0628 | 4600 | - | 0.6196 |
2.1525 | 4800 | - | 0.6222 |
2.2422 | 5000 | 0.1409 | 0.6278 |
2.3318 | 5200 | - | 0.6337 |
2.4215 | 5400 | - | 0.6409 |
2.4664 | 5500 | 0.1213 | - |
2.5112 | 5600 | - | 0.6424 |
2.6009 | 5800 | - | 0.6412 |
2.6906 | 6000 | 0.1218 | 0.6432 |
2.7803 | 6200 | - | 0.6456 |
2.8700 | 6400 | - | 0.6446 |
2.9148 | 6500 | 0.1247 | - |
2.9596 | 6600 | - | 0.6458 |
3.0 | 6690 | - | 0.6455 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.1.0+cu118
- Accelerate: 1.2.0.dev0
- Datasets: 3.1.0
- Tokenizers: 0.20.3
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```