SeanLee97 committed
Commit 04c674a
1 Parent(s): d324ec0

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
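This pooling config selects CLS pooling: the sentence embedding is the hidden state of the first ([CLS]) token, with all averaging and max variants disabled. A minimal sketch of that operation (illustrative only, not the library's implementation):

```python
import torch

def cls_pooling(token_embeddings: torch.Tensor) -> torch.Tensor:
    """Keep only the first ([CLS]) token's hidden state per sentence."""
    # token_embeddings: (batch_size, seq_len, 768) from the BERT encoder
    return token_embeddings[:, 0]  # (batch_size, 768)
```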
README.md CHANGED
@@ -1,87 +1,144 @@
  ---
- license: mit
- base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
  tags:
- - generated_from_trainer
- model-index:
- - name: pre-pubmedbert-base-embedding
-   results: []
  ---

- # WhereIsAI/pubmed-angle-base-en

- This model is an example model for the Chinese blog post [title](#) and the [angle tutorial](https://angle.readthedocs.io/en/latest/notes/tutorial.html#tutorial).

- It was fine-tuned with [AnglE Loss](https://arxiv.org/abs/2309.12871) using the official [angle-emb](https://github.com/SeanLee97/AnglE) library.

- Here are the details:

- - Base model: [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)
- - Training Data: [WhereIsAI/medical-triples](https://huggingface.co/datasets/WhereIsAI/medical-triples), processed from [PubMedQA](https://huggingface.co/datasets/qiaojin/PubMedQA).
- - Test Data: [WhereIsAI/pubmedqa-test-angle-format-a](https://huggingface.co/datasets/WhereIsAI/pubmedqa-test-angle-format-a)

- **Performance:**

- | Model                                  | Pooling Strategy | Spearman's Correlation |
- |----------------------------------------|------------------|:----------------------:|
- | tavakolih/all-MiniLM-L6-v2-pubmed-full | avg              |          84.56         |
- | NeuML/pubmedbert-base-embeddings       | avg              |          84.88         |
- | **WhereIsAI/pubmed-angle-base-en**     | cls              |          86.01         |
- | WhereIsAI/pubmed-angle-large-en        | cls              |          86.21         |

  ## Usage

- ### via angle-emb

  ```bash
- python -m pip install -U angle-emb
  ```

- Example:

  ```python
- from angle_emb import AnglE
- from angle_emb.utils import cosine_similarity

- angle = AnglE.from_pretrained('WhereIsAI/pubmed-angle-base-en', pooling_strategy='cls').cuda()

- query = 'How to treat childhood obesity and overweight?'
- docs = [
-     query,
-     'The child is overweight. Parents should relieve their children\'s symptoms through physical activity and healthy eating. First, they can let them do some aerobic exercise, such as jogging, climbing, swimming, etc. In terms of diet, children should eat more cucumbers, carrots, spinach, etc. Parents should also discourage their children from eating fried foods and dried fruits, which are high in calories and fat. Parents should not let their children lie in bed without moving after eating. If their children\'s condition is serious during the treatment of childhood obesity, parents should go to the hospital for treatment under the guidance of a doctor in a timely manner.',
-     'If you want to treat tonsillitis better, you can choose some anti-inflammatory drugs under the guidance of a doctor, or use local drugs, such as washing the tonsil crypts, injecting drugs into the tonsils, etc. If your child has a sore throat, you can also give him or her some pain relievers. If your child has a fever, you can give him or her antipyretics. If the condition is serious, seek medical attention as soon as possible. If the medication does not have a good effect and the symptoms recur, the author suggests surgical treatment. Parents should also make sure to keep their children warm to prevent them from catching a cold and getting tonsillitis again.',
- ]

- embeddings = angle.encode(docs)
- query_emb = embeddings[0]

- for doc, emb in zip(docs[1:], embeddings[1:]):
-     print(cosine_similarity(query_emb, emb))

- # 0.8029839020052982
- # 0.4260630076818197
- ```

- ### via sentence-transformers

- Install sentence-transformers:

- ```bash
- python -m pip install -U sentence-transformers
- ```

  ## Citation

- If you use this model for academic papers, please cite AnglE's paper, as follows:

- ```bibtex
- @article{li2023angle,
-   title={AnglE-optimized Text Embeddings},
-   author={Li, Xianming and Li, Jing},
-   journal={arXiv preprint arXiv:2309.12871},
-   year={2023}
- }
- ```
  ---
+ base_model: WhereIsAI/pubmed-angle-base-en
+ datasets: []
+ language: []
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
  tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ widget: []
  ---

+ # SentenceTransformer based on WhereIsAI/pubmed-angle-base-en

+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [WhereIsAI/pubmed-angle-base-en](https://huggingface.co/WhereIsAI/pubmed-angle-base-en). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

+ ## Model Details

+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [WhereIsAI/pubmed-angle-base-en](https://huggingface.co/WhereIsAI/pubmed-angle-base-en) <!-- at revision d324ec037647870570f04d1d9bd7070194d4f3ff -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->

+ ### Model Sources

+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

+ ### Full Model Architecture

+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```

  ## Usage

+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:

  ```bash
+ pip install -U sentence-transformers
  ```

+ Then you can load this model and run inference.

  ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("WhereIsAI/pubmed-angle-base-en")
+ # Run inference
+ sentences = [
+     'The weather is lovely today.',
+     "It's so sunny outside!",
+     'He drove to the stadium.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```

+ <!--
+ ### Direct Usage (Transformers)

+ <details><summary>Click to see the direct usage in Transformers</summary>

+ </details>
+ -->

+ <!--
+ ### Downstream Usage (Sentence Transformers)

+ You can finetune this model on your own dataset.

+ <details><summary>Click to expand</summary>

+ </details>
+ -->

+ <!--
+ ### Out-of-Scope Use

+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations

+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.42.3
+ - PyTorch: 2.3.0+cu121
+ - Accelerate: 0.30.1
+ - Datasets: 2.19.1
+ - Tokenizers: 0.19.1

  ## Citation

+ ### BibTeX

+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
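The regenerated card leaves its "Direct Usage (Transformers)" section as an empty commented-out stub. For readers who want it, here is a hedged sketch of the plain-`transformers` equivalent, assuming only the standard `AutoModel`/`AutoTokenizer` API plus the CLS pooling declared in `1_Pooling/config.json`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "WhereIsAI/pubmed-angle-base-en"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
model.eval()

sentences = ["The weather is lovely today.", "It's so sunny outside!"]
inputs = tokenizer(sentences, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling, matching 1_Pooling/config.json
embeddings = outputs.last_hidden_state[:, 0]
print(embeddings.shape)  # torch.Size([2, 768])
```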
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
+ "_name_or_path": "WhereIsAI/pubmed-angle-base-en",
  "architectures": [
    "BertModel"
  ],
@@ -17,7 +17,7 @@
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
- "torch_dtype": "float32",
+ "torch_dtype": "float16",
  "transformers_version": "4.42.3",
  "type_vocab_size": 2,
  "use_cache": false,
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.42.3",
+     "pytorch": "2.3.0+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
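With `"similarity_fn_name": null`, the library falls back to its default, cosine similarity, which matches the "Similarity Function: Cosine Similarity" line in the card. In sentence-transformers 3.x the function can be overridden at load time; a sketch, assuming the v3 `SimilarityFunction` API:

```python
from sentence_transformers import SentenceTransformer, SimilarityFunction

# null in the config resolves to cosine; override explicitly if you
# prefer dot-product scoring, for example.
model = SentenceTransformer("WhereIsAI/pubmed-angle-base-en",
                            similarity_fn_name=SimilarityFunction.DOT_PRODUCT)
scores = model.similarity(model.encode(["a"]), model.encode(["a", "b"]))
print(scores.shape)  # torch.Size([1, 2])
```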
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:805188f640cda837fc10ab7da12c74a2c9ea429b26c78357028e47ba3fad78fd
- size 437951328
+ oid sha256:1afbc3337a2dedec780d476a0d013125227167709e8ca0d37ab9866954170d2c
+ size 218986728
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
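`modules.json` is what makes the repo loadable as a SentenceTransformer: module 0 is the BERT backbone at the repo root (`"path": ""`), module 1 the CLS pooling head in `1_Pooling/`. A roughly equivalent manual construction, sketched from the configs in this commit:

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the transformer backbone (path "" = repo root)
word = models.Transformer("WhereIsAI/pubmed-angle-base-en",
                          max_seq_length=512)
# Module 1: CLS pooling over 768-dim token embeddings (path "1_Pooling")
pooling = models.Pooling(word.get_word_embedding_dimension(),
                         pooling_mode="cls")
model = SentenceTransformer(modules=[word, pooling])
```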
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json CHANGED
@@ -1,7 +1,37 @@
  {
- "cls_token": "[CLS]",
- "mask_token": "[MASK]",
- "pad_token": "[PAD]",
- "sep_token": "[SEP]",
- "unk_token": "[UNK]"
+ "cls_token": {
+   "content": "[CLS]",
+   "lstrip": false,
+   "normalized": false,
+   "rstrip": false,
+   "single_word": false
+ },
+ "mask_token": {
+   "content": "[MASK]",
+   "lstrip": false,
+   "normalized": false,
+   "rstrip": false,
+   "single_word": false
+ },
+ "pad_token": {
+   "content": "[PAD]",
+   "lstrip": false,
+   "normalized": false,
+   "rstrip": false,
+   "single_word": false
+ },
+ "sep_token": {
+   "content": "[SEP]",
+   "lstrip": false,
+   "normalized": false,
+   "rstrip": false,
+   "single_word": false
+ },
+ "unk_token": {
+   "content": "[UNK]",
+   "lstrip": false,
+   "normalized": false,
+   "rstrip": false,
+   "single_word": false
+ }
  }
tokenizer_config.json CHANGED
@@ -46,7 +46,7 @@
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
- "model_max_length": 1000000000000000019884624838656,
+ "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",