---
tags:
- dna
---

# GENA-LM Yeast 🍞 (gena-lm-bert-base-yeast)

GENA-LM is a family of open-source foundational models for long DNA sequences.

`gena-lm-bert-base-yeast` is trained on the baker’s yeast (*Saccharomyces cerevisiae*) genome.

## Model description

The GENA-LM `gena-lm-bert-base-yeast` model is trained with a masked language modeling (MLM) objective, following the data preprocessing pipeline of the BigBird paper and masking 15% of tokens.

The model config for `gena-lm-bert-base-yeast` is similar to `bert-base`:

- 512 maximum sequence length
- 12 layers, 12 attention heads
- 768 hidden size
- 32k vocabulary size

We pre-trained `gena-lm-bert-base-yeast` on data from [O’Donnell et al.](https://doi.org/10.1038/s41588-023-01459-y), which includes telomere-to-telomere assemblies of 142 strains. Specific accessions are available [here](https://github.com/AIRI-Institute/GENA_LM/tree/main/data/yeasts/ENA_PRJEB59413_assmebly_links.tsv).

Pre-training was performed for 3,325,000 iterations with a batch size of 256 and a sequence length of 512 tokens. We modified the Transformer to use [Pre-Layer Normalization](https://arxiv.org/abs/2002.04745). We release the checkpoint with the best loss on the validation set.

Source code and data: https://github.com/AIRI-Institute/GENA_LM

Paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523

## Examples

### How to load pre-trained model for Masked Language Modeling

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base-yeast')
model = AutoModel.from_pretrained('AIRI-Institute/gena-lm-bert-base-yeast', trust_remote_code=True)
```

### How to load pre-trained model to fine-tune it on classification task

Get the model class from the GENA-LM repository:

```bash
git clone https://github.com/AIRI-Institute/GENA_LM.git
```

```python
from GENA_LM.src.gena_lm.modeling_bert import BertForSequenceClassification
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base-yeast')
model = BertForSequenceClassification.from_pretrained('AIRI-Institute/gena-lm-bert-base-yeast')
```

Or you can simply download [modeling_bert.py](https://github.com/AIRI-Institute/GENA_LM/tree/main/src/gena_lm) and put it next to your code.
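Once loaded this way, the classifier can be fine-tuned like any Hugging Face sequence classification model. Below is a minimal sketch, assuming the remote `BertForSequenceClassification` follows the standard Hugging Face interface (returning `loss` and `logits` when `labels` are passed); the DNA sequences and labels are toy placeholders, not real data:

```python
import torch

# Toy DNA sequences with binary labels (placeholders for a real labeled dataset)
sequences = ['ATGCGTACGTTAGCCTAGGCTAACGT', 'TTGACGGCATCGATCGTACGATCCGA']
labels = torch.tensor([0, 1])

# The GENA-LM tokenizer applies BPE over nucleotides, so one token typically spans several base pairs
inputs = tokenizer(sequences, padding=True, return_tensors='pt')

# Passing labels yields the classification loss alongside per-class logits
outputs = model(**inputs, labels=labels)
print(outputs.loss.item(), outputs.logits.shape)

# One optimization step; in practice, use transformers.Trainer or a full training loop
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```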
Alternatively, you can get the model class through the Hugging Face `AutoModel` mechanism:

```python
import importlib

from transformers import AutoModel

model = AutoModel.from_pretrained('AIRI-Institute/gena-lm-bert-base-yeast', trust_remote_code=True)
gena_module_name = model.__class__.__module__
print(gena_module_name)

# Available class names:
# BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,
# BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification,
# BertForQuestionAnswering
# See https://huggingface.co./docs/transformers/model_doc/bert
cls = getattr(importlib.import_module(gena_module_name), 'BertForSequenceClassification')
print(cls)
model = cls.from_pretrained('AIRI-Institute/gena-lm-bert-base-yeast', num_labels=2)
```

## Evaluation

For evaluation results, see our paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523

## Citation

```bibtex
@article{GENA_LM,
  author  = {Fishman, Veniamin and Kuratov, Yuri and Shmelev, Aleksei and Petrov, Maxim and Penzar, Dmitry and Shepelin, Denis and Chekanov, Nikolay and Kardymon, Olga and Burtsev, Mikhail},
  title   = {GENA-LM: a family of open-source foundational DNA language models for long sequences},
  journal = {Nucleic Acids Research},
  volume  = {53},
  number  = {2},
  pages   = {gkae1310},
  year    = {2025},
  month   = {01},
  issn    = {0305-1048},
  doi     = {10.1093/nar/gkae1310},
  url     = {https://doi.org/10.1093/nar/gkae1310},
  eprint  = {https://academic.oup.com/nar/article-pdf/53/2/gkae1310/61443229/gkae1310.pdf},
}
```