This model is an mbart-large-50-many-to-many-mmt model fine-tuned on the text portion of the SLURP spoken language understanding dataset.

On the SLURP test set, the model achieves 85.68% intent accuracy and 79.00% SLU-F1.
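A minimal usage sketch with the Hugging Face transformers library is given below. The checkpoint identifier (`model_id`), the example utterance, and the exact format of the generated semantic parse are assumptions for illustration and are not specified by this card.

```python
# Minimal sketch: load the fine-tuned checkpoint and decode a semantic parse
# for a SLURP-style utterance. The model id is a placeholder; replace it with
# the actual repository name of this checkpoint.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "path/to/this-checkpoint"  # placeholder, not the real repo name
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

utterance = "wake me up at seven in the morning"  # example input, assumed
inputs = tokenizer(utterance, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)

# The generated string is expected to encode the intent and slots; the exact
# linearisation depends on how the SLURP targets were serialised for fine-tuning.
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```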
