---
tags:
- text2text-generation
license: mit
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection_language_split
- MBZUAI/Bactrian-X
language:
- de
pipeline_tag: text2text-generation
---

# Model Card of germanInstructionBERTcased for Bertology

A minimalistic German instruction model built on an already well-analyzed pretrained encoder, dbmdz/bert-base-german-cased. It lets us research [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing), and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).

The training code is released in the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert). We used the Hugging Face API for [warm-starting](https://huggingface.co./blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co./docs/transformers/model_doc/bert-generation) with [Encoder-Decoder Models](https://huggingface.co./docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose; a hedged sketch of this setup is given at the end of this card.

## Training parameters

- base model: "dbmdz/bert-base-german-cased"
- trained for 3 epochs
- batch size of 16
- 40000 warm-up steps
- learning rate of 0.0001

## Purpose of germanInstructionBERTcased

germanInstructionBERTcased is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
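
## Warm-starting sketch

The following is a minimal sketch of the warm-starting recipe referenced above, not the exact training script from the instructionBERT repository. It assumes the documented BertGeneration and EncoderDecoderModel classes and reuses the hyperparameters listed under "Training parameters"; the output directory name is only a placeholder.

```python
from transformers import (
    AutoTokenizer,
    BertGenerationEncoder,
    BertGenerationDecoder,
    EncoderDecoderModel,
    Seq2SeqTrainingArguments,
)

base = "dbmdz/bert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(base)

# Warm-start both halves from the pretrained German BERT checkpoint.
# The decoder additionally gets cross-attention layers and a causal LM head.
encoder = BertGenerationEncoder.from_pretrained(
    base,
    bos_token_id=tokenizer.cls_token_id,
    eos_token_id=tokenizer.sep_token_id,
)
decoder = BertGenerationDecoder.from_pretrained(
    base,
    add_cross_attention=True,
    is_decoder=True,
    bos_token_id=tokenizer.cls_token_id,
    eos_token_id=tokenizer.sep_token_id,
)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# Generation-related special tokens taken from the BERT tokenizer.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Hyperparameters from the "Training parameters" section above.
training_args = Seq2SeqTrainingArguments(
    output_dir="germanInstructionBERTcased",  # placeholder
    num_train_epochs=3,
    per_device_train_batch_size=16,
    warmup_steps=40000,
    learning_rate=1e-4,
    predict_with_generate=True,
)
```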
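
## Example usage

A hedged inference sketch, assuming the fine-tuned checkpoint is published on the Hugging Face Hub and loads as an EncoderDecoderModel; the model id below is a hypothetical placeholder and should be replaced with the repository id of this model card.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Bachstelze/germanInstructionBERTcased"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

prompt = "Erkläre in einem Satz, was ein Encoder-Decoder-Modell ist."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding; adjust generation parameters as needed.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```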