akoksal committed · Commit 1fc27eb · 1 Parent(s): 6d3f295

Update README.md

Files changed (1): README.md +5 -5

README.md CHANGED
@@ -80,12 +80,12 @@ We provide in-depth evaluation of LongForm models and baselines in the paper. We
  | **Flan-T5** | 10.6 | 20.9* | 3.5 | 7.4 |
  | **Alpaca-LLaMA-7B** | 14.6 | 19.5 | 12.5 | 11.8 |
  | **OPT-30B** | 11.1 | 18.6 | 12.2 | 2.6 |
- | **[LongForm-T5-XL](https://huggingface.co/akoksal/LongForm-T5-XL)** | 16.3 | 20.2 | 18.3 | 10.6 |
- | **[LongForm-OPT-2.7B](https://huggingface.co/akoksal/LongForm-OPT-2.7B)** | 17.8 | 15.5 | 17.9 | **19.9** |
- | **[LongForm-OPT-6.7B](https://huggingface.co/akoksal/LongForm-OPT-6.7B)** | 17.7 | 16.9 | 17.2 | 19.0 |
- | **LongForm-LLaMA-7B**‡ | **19.7** | **21.7** | **18.6** | 18.9 |
+ | [**LongForm-T5-XL**](https://huggingface.co/akoksal/LongForm-T5-XL) | 16.3 | 20.2 | 18.3 | 10.6 |
+ | [**LongForm-OPT-2.7B**](https://huggingface.co/akoksal/LongForm-OPT-2.7B) | 17.8 | 15.5 | 17.9 | **19.9** |
+ | [**LongForm-OPT-6.7B**](https://huggingface.co/akoksal/LongForm-OPT-6.7B) | 17.7 | 16.9 | 17.2 | 19.0 |
+ | [**LongForm-LLaMA-7B**](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff)‡ | **19.7** | **21.7** | **18.6** | 18.9 |
 
- ‡: We cannot release LongForm-LLaMA-7B publicly due to restrictions of LLaMA models.
+ ‡: We can only release the difference between LongForm-LLaMA-7B and the pretrained LLaMA-7B publicly, due to the restrictions of the LLaMA models.
 
  ## Limitations
  The LongForm dataset and models mainly focus on long text generation and have limitations regarding structured prediction tasks in NLP. Additionally, we observe that LongForm models may present hallucination problems similar to those found in LLMs.
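The LLaMA footnote in the diff above refers to a weight-diff release: only the element-wise difference between the fine-tuned and pretrained checkpoints is published, and users who already have the base LLaMA-7B weights add the diff back to recover the fine-tuned model. A minimal sketch of that reconstruction step, using plain Python dicts of flat lists in place of real tensor checkpoints (the actual release format and any conversion script are assumptions, not shown here):

```python
# Sketch: recovering fine-tuned weights from a released weight "diff",
# assuming the diff stores (finetuned - pretrained) values per parameter.
# Checkpoint structure and parameter names below are hypothetical.

def apply_weight_diff(base, diff):
    """Reconstruct fine-tuned weights: finetuned[name] = base[name] + diff[name]."""
    if base.keys() != diff.keys():
        raise ValueError("base and diff checkpoints must share parameter names")
    return {
        name: [b + d for b, d in zip(base[name], diff[name])]
        for name in base
    }

# Toy "checkpoints" with two tiny parameter tensors stored as flat lists.
base = {"w": [0.5, -1.0], "b": [0.0]}
diff = {"w": [0.25, 0.5], "b": [-0.5]}
finetuned = apply_weight_diff(base, diff)
print(finetuned)  # {'w': [0.75, -0.5], 'b': [-0.5]}
```

With real checkpoints the same elementwise addition would be done per tensor (e.g. over a PyTorch `state_dict`), which is why publishing the diff alone conveys nothing usable without the original LLaMA weights.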