yurakuratov committed:
readme: update bib entry and links - GENA in NAR
README.md
CHANGED
@@ -17,7 +17,7 @@ Differences between GENA-LM (`gena-lm-bert-large-t2t`) and DNABERT:
 
 Source code and data: https://github.com/AIRI-Institute/GENA_LM
 
-Paper: https://
+Paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523
 
 This repository also contains models that are finetuned on downstream tasks:
 - promoters prediction 300bp (branch [promoters_300_run_1](https://huggingface.co/AIRI-Institute/gena-lm-bert-large-t2t/tree/promoters_300_run_1))
@@ -81,20 +81,23 @@ GENA-LM (`gena-lm-bert-large-t2t`) model is trained in a masked language model (
 We pre-trained `gena-lm-bert-large-t2t` using the latest T2T human genome assembly (https://www.ncbi.nlm.nih.gov/assembly/GCA_009914755.3/). The data was augmented by sampling mutations from 1000-genome SNPs (gnomAD dataset). Pre-training was performed for 1,750,000 iterations with a batch size of 256 and a sequence length of 512 tokens. We modified the Transformer with [Pre-Layer normalization](https://arxiv.org/abs/2002.04745).
 
 ## Evaluation
-For evaluation results, see our paper: https://
+For evaluation results, see our paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523
 
 
 ## Citation
 ```bibtex
 @article{GENA_LM,
-  (nine lines of the previous bib entry; their content is not rendered in this diff view)
+  author = {Fishman, Veniamin and Kuratov, Yuri and Shmelev, Aleksei and Petrov, Maxim and Penzar, Dmitry and Shepelin, Denis and Chekanov, Nikolay and Kardymon, Olga and Burtsev, Mikhail},
+  title = {GENA-LM: a family of open-source foundational DNA language models for long sequences},
+  journal = {Nucleic Acids Research},
+  volume = {53},
+  number = {2},
+  pages = {gkae1310},
+  year = {2025},
+  month = {01},
+  issn = {0305-1048},
+  doi = {10.1093/nar/gkae1310},
+  url = {https://doi.org/10.1093/nar/gkae1310},
+  eprint = {https://academic.oup.com/nar/article-pdf/53/2/gkae1310/61443229/gkae1310.pdf},
 }
 ```
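For readers who want to try the checkpoint this commit documents, here is a minimal usage sketch. It is not part of the commit: it assumes the standard Hugging Face `transformers` auto-classes, that the checkpoint name `AIRI-Institute/gena-lm-bert-large-t2t` resolves on the Hub, and that the model's custom code requires `trust_remote_code=True`. The DNA string is an arbitrary illustrative example.

```python
# Minimal sketch: embed a DNA sequence with gena-lm-bert-large-t2t.
# Assumptions: standard `transformers` API; GENA-LM ships custom model code
# on the Hub, hence trust_remote_code=True. The DNA string below is made up.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("AIRI-Institute/gena-lm-bert-large-t2t")
model = AutoModel.from_pretrained(
    "AIRI-Institute/gena-lm-bert-large-t2t", trust_remote_code=True
)

# BPE tokenization merges several nucleotides per token, which is how the
# 512-token pre-training context covers kilobases of raw sequence.
dna = "ATGCTTCGAAGGTACGTGGGCTAACCTGATCGTAGCTAGCTAGGATCGGATCATTTGGCA"
inputs = tokenizer(dna, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# BERT-style models return token-level hidden states first; guard for either
# a tuple or a standard model-output object.
hidden = outputs[0] if isinstance(outputs, tuple) else outputs.last_hidden_state
print(hidden.shape)  # (1, num_tokens, hidden_size)
```

Note that the 512-token limit from the pre-training paragraph counts tokens, not base pairs, so raw inputs longer than a few kilobases need to be chunked before tokenization.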