dataset_info:
features:
- name: src
dtype: string
- name: ref
dtype: string
- name: translation
dtype: string
- name: mqm_norm_score
dtype: string
- name: da_norm_score
dtype: string
- name: error_spans
list:
- name: span_end_offset
dtype: int64
- name: span_no
dtype: int64
- name: span_severity
dtype: string
- name: span_start_offset
dtype: int64
- name: span_text
dtype: string
- name: span_type
dtype: string
- name: language
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 2486339
num_examples: 2380
- name: validation
num_bytes: 1032240
num_examples: 1000
- name: train
num_bytes: 5473569
num_examples: 4997
download_size: 1831234
dataset_size: 8992148
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: train
path: data/train-*
IndicMT-Eval
This repository contains the code for the paper "IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation Metrics for Indian Languages", published at ACL 2023.
Overview
We contribute a Multidimensional Quality Metrics (MQM) dataset for Indian languages, created by taking the outputs of 7 popular MT systems and asking human annotators to judge the quality of the translations using the MQM guidelines. Using this rich set of annotated data, we evaluate the performance of 16 metrics of various types on en-xx translations for 5 Indian languages. We also provide an updated metric, Indic-COMET, which not only shows stronger correlations with human judgement on Indian languages, but is also more robust to perturbations.
Please find more details of this work in our paper (link coming soon).
MQM Dataset
The MQM annotated dataset, collected with the help of language experts for the 5 Indian languages (Hindi, Tamil, Marathi, Malayalam, Gujarati), can be downloaded from here (link coming soon).
An example of an MQM annotation, containing the source, reference, and translated output with error spans demarcated by the annotator, looks like the following:
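The record below is purely illustrative: the field names follow the dataset schema above, but the sentences, scores, and offsets are hypothetical placeholders rather than actual annotations.
{
    "src": "The weather is pleasant today.",
    "ref": "आज मौसम सुहावना है।",
    "translation": "आज मौसम अच्छा है।",
    "mqm_norm_score": "0.93",
    "da_norm_score": "0.88",
    "error_spans": [
        {
            "span_no": 1,
            "span_start_offset": 8,
            "span_end_offset": 13,
            "span_text": "अच्छा",
            "span_severity": "Minor",
            "span_type": "Mistranslation"
        }
    ],
    "language": "hi",
    "split": "test"
}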
More details regarding the instructions provided and the procedures followed for annotations are present in the paper.
How to use
The datasets library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in one call using the load_dataset function.
Before downloading, first complete the following steps:
- Gain access to the dataset and get an HF access token from https://huggingface.co./settings/tokens.
- Install dependencies and log in to HF:
  - Install Python
  - Run pip install librosa soundfile datasets huggingface_hub[cli]
  - Log in with huggingface-cli login and paste the HF access token. Check here for details.
For example:
from datasets import load_dataset
ds = load_dataset("ai4bharat/IndicMTEval")
Using the datasets library, you can also stream the dataset on the fly by adding a streaming=True argument to the load_dataset function call. Loading a dataset in streaming mode loads individual samples one at a time, rather than downloading the entire dataset to disk.
from datasets import load_dataset
ds = load_dataset("ai4bharat/IndicMTEval", streaming=True)
# without a split argument, ds maps split names to streaming datasets
print(next(iter(ds["test"])))
Indic-COMET
We load the pretrained encoder and initialize it with either XLM-RoBERTa, COMET-DA, or COMET-MQM weights. During training, we divide the model parameters into two groups: the encoder parameters, which include the encoder model, and the regressor parameters, which include the parameters of the top feed-forward network. We apply gradual unfreezing and discriminative learning rates: the encoder is frozen for one epoch while the feed-forward regressor is optimized with its own learning rate, and after the first epoch the entire model is fine-tuned with a different learning rate. Since we are fine-tuning on a small dataset, we use early stopping with a patience of 3. The best checkpoint is selected using the overall Kendall tau correlation on the test set. We use the COMET repository for training, and our checkpoints are compatible with their setup.
Download the best checkpoint here.
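Because the checkpoints follow the COMET setup, they can be used for scoring with the unbabel-comet package. The sketch below is a minimal example under that assumption; the checkpoint path is a placeholder for wherever you save the downloaded file, and the sentences are illustrative only.
from comet import load_from_checkpoint

# placeholder path to the downloaded Indic-COMET checkpoint
model = load_from_checkpoint("checkpoints/indic-comet/checkpoint.ckpt")

data = [
    {
        "src": "The weather is pleasant today.",
        "mt": "आज मौसम अच्छा है।",
        "ref": "आज मौसम सुहावना है।",
    }
]

# in COMET >= 2.0, predict returns segment-level scores and a corpus-level system score
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores, output.system_score)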
Other Metrics
We followed the implementation of metrics with the help of the following repositories: For BLEU, METEOR, ROUGE-L, CIDEr, Embedding Averaging, Greedy Matching, and Vector Extrema, we use the implementation provided by Sharma et al. (2017). For chrF++, TER, BERTScore, and BLEURT, we use the repository of Castro Ferreira et al. (2020). For SMS, WMDo, and Mover-Score, we use the implementation provided by Fabbri et al. (2020). For all the remaining task-specific metrics, we use the official codes from the respective papers.
The Python script code/evaluate.py runs all of these metrics on the given dataset.
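Segment-level meta-evaluation of any metric against the human MQM scores can then be done with a rank correlation, for example Kendall tau via scipy. The sketch below uses a toy length-ratio "metric" purely as a stand-in for the scores produced by code/evaluate.py:
from datasets import load_dataset
from scipy.stats import kendalltau

ds = load_dataset("ai4bharat/IndicMTEval", split="test")

# mqm_norm_score is stored as a string; the length ratio below is only a toy metric
pairs = [
    (float(ex["mqm_norm_score"]), len(ex["translation"]) / max(len(ex["ref"]), 1))
    for ex in ds
    if ex["mqm_norm_score"]  # skip rows with missing scores, if any
]
human_scores, metric_scores = zip(*pairs)

tau, p_value = kendalltau(metric_scores, human_scores)
print(f"Kendall tau: {tau:.3f} (p = {p_value:.3g})")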
Citation
If you find IndicMTEval useful in your research or work, please consider citing our paper.
@article{DBLP:journals/corr/abs-2212-10180,
author = {Ananya B. Sai and
Tanay Dixit and
Vignesh Nagarajan and
Anoop Kunchukuttan and
Pratyush Kumar and
Mitesh M. Khapra and
Raj Dabre},
title = {IndicMT Eval: {A} Dataset to Meta-Evaluate Machine Translation metrics
for Indian Languages},
journal = {CoRR},
volume = {abs/2212.10180},
year = {2022}
}
@article{singh2024good,
title={How Good is Zero-Shot MT Evaluation for Low Resource Indian Languages?},
author={Singh, Anushka and Sai, Ananya B and Dabre, Raj and Puduppully, Ratish and Kunchukuttan, Anoop and Khapra, Mitesh M},
journal={arXiv preprint arXiv:2406.03893},
year={2024}
}