🔥 Today's pick in Interpretability & Analysis of LMs: Fine-grained Hallucination Detection and Editing for Language Models by @abhika-m, @akariasai, @vidhisha et al.
The authors introduce a new taxonomy for fine-grained annotation of hallucinations in LM generations and propose Factuality Verification with Augmented Knowledge (FAVA), a retrieval-augmented LM fine-tuned to detect and edit hallucinations in LM outputs. FAVA outperforms ChatGPT and Llama 2 Chat on both the detection and the editing task.
🌐 Website: https://fine-grained-hallucination.github.io
📄 Paper: Fine-grained Hallucination Detection and Editing for Language Models (2401.06855)
🚀 Demo: fava-uw/fava
🤖 Model: fava-uw/fava-model
💡 Dataset: fava-uw/fava-data
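For anyone who wants to try the released checkpoint, here is a minimal sketch of running FAVA with Hugging Face Transformers. The prompt layout, example passage, and generation settings are illustrative assumptions, not the official template; check the fava-uw/fava-model card for the exact input format that interleaves retrieved references with the passage to verify.

```python
# Minimal sketch: querying FAVA to detect and edit hallucinations in a passage.
# Assumption: the prompt below is a placeholder, not the model's official template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fava-uw/fava-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Retrieved evidence and the LM output to be verified (toy example).
evidence = "Marie Curie won Nobel Prizes in Physics (1903) and Chemistry (1911)."
passage = "Marie Curie won three Nobel Prizes, all of them in Physics."

# Hypothetical prompt layout: references first, then the text to check and edit.
prompt = (
    f"Read the following references:\n{evidence}\n"
    f"Identify and edit any factual errors in the text below:\n{passage}\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated tokens (FAVA's detections and edits).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```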