Transformer language model for Croatian and Serbian
Trained for one epoch (3 million steps) on 28 GB of Croatian and Serbian text, combining the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr, and cc100-sr datasets.
| Model | #params | Arch. | Training data |
|---|---|---|---|
| Andrija/SRoBERTa-XL | 80M | Fourth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (28 GB of text) |
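
The card does not include usage code, so below is a minimal sketch of loading the checkpoint for masked-token prediction, assuming it is a standard RoBERTa-style masked language model on the Hugging Face Hub (the example sentence is illustrative).

```python
# Minimal fill-mask sketch for the checkpoint listed in the table above.
# Assumes a RoBERTa-style tokenizer, whose mask token is "<mask>".
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-XL")

# Predict the masked token in a Croatian sentence (illustrative example).
predictions = fill_mask("Zagreb je glavni <mask> Hrvatske.")
for p in predictions:
    print(f"{p['token_str']}\t{p['score']:.4f}")
```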