model update
README.md (CHANGED)
@@ -46,21 +46,6 @@ model-index:
     - name: MoverScore (Question Generation)
       type: moverscore_question_generation
       value: 64.99
-    - name: BLEU4 (Question & Answer Generation (with Gold Answer))
-      type: bleu4_question_answer_generation_with_gold_answer
-      value: 13.96
-    - name: ROUGE-L (Question & Answer Generation (with Gold Answer))
-      type: rouge_l_question_answer_generation_with_gold_answer
-      value: 42.37
-    - name: METEOR (Question & Answer Generation (with Gold Answer))
-      type: meteor_question_answer_generation_with_gold_answer
-      value: 40.22
-    - name: BERTScore (Question & Answer Generation (with Gold Answer))
-      type: bertscore_question_answer_generation_with_gold_answer
-      value: 94.63
-    - name: MoverScore (Question & Answer Generation (with Gold Answer))
-      type: moverscore_question_answer_generation_with_gold_answer
-      value: 69.72
     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
       type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
       value: 95.54
@@ -79,21 +64,6 @@ model-index:
     - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
       type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
       value: 71.13
-    - name: BLEU4 (Question & Answer Generation)
-      type: bleu4_question_answer_generation
-      value: 3.95
-    - name: ROUGE-L (Question & Answer Generation)
-      type: rouge_l_question_answer_generation
-      value: 25.63
-    - name: METEOR (Question & Answer Generation)
-      type: meteor_question_answer_generation
-      value: 25.64
-    - name: BERTScore (Question & Answer Generation)
-      type: bertscore_question_answer_generation
-      value: 90.91
-    - name: MoverScore (Question & Answer Generation)
-      type: moverscore_question_answer_generation
-      value: 61.98
     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
       type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
       value: 93.23
@@ -399,40 +369,24 @@ output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as

 |                                 |   Score | Type    | Dataset                                                        |
 |:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
-| BERTScore                       |   94.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_1                          |   43.22 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_2                          |   29.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_3                          |   19.77 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_4                          |   13.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| METEOR                          |   40.22 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| MoverScore                      |   69.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedF1Score (BERTScore)    |   95.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedF1Score (MoverScore)   |   70.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedPrecision (BERTScore)  |   95.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedPrecision (MoverScore) |   71.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedRecall (BERTScore)     |   95.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedRecall (MoverScore)    |   70.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| ROUGE_L                         |   42.37 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |


 - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/bart-large-squad-ae`](https://huggingface.co/lmqg/bart-large-squad-ae). [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_bart-large-squad-ae.json)

 |                                 |   Score | Type    | Dataset                                                        |
 |:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
-| BERTScore                       |   90.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_1                          |   26.04 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_2                          |   14.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_3                          |    7.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| Bleu_4                          |    3.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| METEOR                          |   25.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| MoverScore                      |   61.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedF1Score (BERTScore)    |   93.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedF1Score (MoverScore)   |   64.76 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedPrecision (BERTScore)  |   93.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedPrecision (MoverScore) |   64.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedRecall (BERTScore)     |   93.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 | QAAlignedRecall (MoverScore)    |   64.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
-| ROUGE_L                         |   25.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |


 - ***Metrics (Question Generation, Out-of-Domain)***
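For context, the `output = pipe(...)` line in the last hunk's header is the card's own usage snippet: the target answer is wrapped in `<hl>` highlight tokens and the model generates a question whose answer is that span. A minimal sketch of that usage with the `transformers` pipeline (the full Beyonce sentence is assumed from the standard lmqg card example, which the hunk header truncates):

```python
from transformers import pipeline

# Load the question generation model from the Hugging Face hub.
pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qg")

# The answer span ("Beyonce") is marked with <hl> highlight tokens;
# the model generates a question answered by the highlighted span.
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
print(output)
```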
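The pipeline-approach metrics above are computed on question-answer pairs generated end-to-end, with answer spans first extracted by `lmqg/bart-large-squad-ae`. A sketch of that two-stage setup, assuming the `lmqg` package's `TransformersQG` interface and a toy context (`pip install lmqg`):

```python
from lmqg import TransformersQG

# Two-stage pipeline: the AE model extracts answer spans from the context,
# then the QG model generates a question for each extracted answer.
model = TransformersQG(
    model="lmqg/bart-large-squad-qg",    # question generation
    model_ae="lmqg/bart-large-squad-ae"  # answer extraction
)

context = (
    "William Turner was an English painter who specialised in watercolour "
    "landscapes. He is often known as William Turner of Oxford."
)

# Returns a list of (question, answer) pairs for the context.
question_answer = model.generate_qa(context)
print(question_answer)
```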