SLIMER-3: Show Less Instruct More Entity Recognition LLaMA3
This LLaMA-3-based model scores +12.5% with respect to the paper's original LLaMA-2-based SLIMER.
⚠️ A more powerful model, which scores +17% and supports extracting multiple NE types in parallel, can be found at:
https://huggingface.co./expertai/SLIMER-PARALLEL-LLaMA3
GitHub repository: https://github.com/andrewzamai/SLIMER
SLIMER is an LLM specifically instructed for zero-shot NER on English texts.
SLIMER for the Italian language can be found at: https://huggingface.co./expertai/LLaMAntino-3-SLIMER-IT
Instruction-tuned on a reduced number of samples, it is designed to tackle never-before-seen Named Entity tags by leveraging a prompt enriched with a DEFINITION and GUIDELINES for the NE type to be extracted.
Instruction Tuning Prompt
<|start_header_id|>user<|end_header_id|>
You are given a text chunk (delimited by triple quotes) and an instruction.
Read the text and answer to the instruction in the end.
"""
{input text}
"""
Instruction: Extract the Named Entities of type DATE from the text chunk you have read.
You are given a DEFINITION and some GUIDELINES.
DEFINITION: DATE refers to specific points in time, including days, months, years, and relative time expressions like 'Week 2'.
GUIDELINES: Avoid labeling non-specific time references like 'recently' or 'soon'. Exercise caution with ambiguous terms like 'May' (month or verb) and 'Wednesday Adams' (person's name which includes a day of the week).
Return a JSON list of instances of this Named Entity type (for example ["text_span_1", "text_span_2"]). Return an empty list [] if no instances are present. Return only the JSON list, no further motivations or introduction to the answer.
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Existing approaches fine-tune on an extensive number of entity classes (around 13k) and assess zero-shot NER capabilities on Out-Of-Distribution (OOD) input domains.
SLIMER performs comparably to these state-of-the-art models on OOD input domains, while being trained on only a reduced number of samples and on a set of NE tags that overlaps to a lesser degree with the test sets.
We extend the standard zero-shot evaluations (CrossNER and MIT) with BUSTER, which is characterized by financial entities that are rather far from the more traditional tags all models observed during training.
An inverse trend emerges, with SLIMER proving the most effective at dealing with these unseen labels, thanks to its lighter instruction-tuning methodology and its use of definitions and guidelines.
| Model | Backbone | #Params | MIT Movie | MIT Restaurant | CrossNER AI | CrossNER Literature | CrossNER Music | CrossNER Politics | CrossNER Science | BUSTER | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | gpt-3.5-turbo | - | 5.3 | 32.8 | 52.4 | 39.8 | 66.6 | 68.5 | 67.0 | - | - |
| InstructUIE | Flan-T5-xxl | 11B | 63.0 | 21.0 | 49.0 | 47.2 | 53.2 | 48.2 | 49.3 | - | - |
| UniNER-type | LLaMA-1 | 7B | 42.4 | 31.7 | 53.5 | 59.4 | 65.0 | 60.8 | 61.1 | 34.8 | 51.1 |
| GoLLIE | Code-LLaMA | 7B | 63.0 | 43.4 | 59.1 | 62.7 | 67.8 | 57.2 | 55.5 | 27.7 | 54.6 |
| GLiNER-L | DeBERTa-v3 | 0.3B | 57.2 | 42.9 | 57.2 | 64.4 | 69.6 | 72.6 | 62.6 | 26.6 | 56.6 |
| GNER-T5 | Flan-T5-xxl | 11B | 62.5 | 51.0 | 68.2 | 68.7 | 81.2 | 75.1 | 76.7 | 27.9 | 63.9 |
| GNER-LLaMA | LLaMA-1 | 7B | 68.6 | 47.5 | 63.1 | 68.2 | 75.7 | 69.4 | 69.9 | 23.6 | 60.8 |
| SLIMER | LLaMA-3.1-Instruct | 8B | 56.4 | 44.8 | 55.6 | 63.3 | 68.8 | 69.2 | 67.3 | 45.9 | 59.0 |
JSON Template
JSON SLIMER prompt
{
"description": "SLIMER prompt",
"prompt_input": "<|start_header_id|>system<|end_header_id|>\n\nYou are an expert in Named Entity Recognition designed to output JSON only.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nYou are given a text chunk (delimited by triple quotes) and an instruction.\nRead the text and answer to the instruction in the end.\n\"\"\"\n{input}\n\"\"\"\nInstruction: Extract the Named Entities of type {NE_name} from the text chunk you have read. You are given a DEFINITION and some GUIDELINES.\nDEFINITION: {definition}\nGUIDELINES: {guidelines}\nReturn a JSON list of instances of this Named Entity type (for example [\"text_span_1\", \"text_span_2\"]. Return an empty list [] if no instances are present. Return only the JSON list, no further motivations or introduction to the answer.<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n"
}
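For illustration, here is a minimal sketch of how the template's placeholders ({input}, {NE_name}, {definition}, {guidelines}) might be filled at inference time; the file name, example sentence, and the definition/guidelines below are assumptions for this sketch, not assets shipped with the model:

```python
import json

# Load the JSON template above (file name is an assumption for this sketch)
with open("slimer_prompt.json") as f:
    template = json.load(f)["prompt_input"]

# Hypothetical NE specification; SLIMER's real definitions and guidelines
# are provided with its training/evaluation data
prompt = template.format(
    input="The merger was announced on 12 March 2021.",
    NE_name="DATE",
    definition="DATE refers to specific points in time, including days, months and years.",
    guidelines="Avoid labeling non-specific time references like 'recently' or 'soon'.",
)
```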
from vllm import LLM, SamplingParams

# Load SLIMER with vLLM for fast batched inference
vllm_model = LLM(model="expertai/SLIMER-LLaMA3")
# Greedy decoding; 128 new tokens are enough for the JSON list of spans
sampling_params = SamplingParams(temperature=0, max_tokens=128)
# `prompter` wraps the JSON prompt template above (see the SLIMER GitHub repository);
# each pair provides the filled instruction and the input text chunk
prompts = [prompter.generate_prompt(instruction, text) for instruction, text in instruction_input_pairs]
responses = vllm_model.generate(prompts, sampling_params)
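Since SLIMER is instructed to answer with a bare JSON list of spans, the generations can be parsed directly. Below is a minimal post-processing sketch (the fallback-to-empty-list behavior on malformed output is an assumption, not part of the released code):

```python
import json

extracted = []
for response in responses:
    text = response.outputs[0].text.strip()
    try:
        # e.g. '["12 March 2021"]' -> ["12 March 2021"]
        extracted.append(json.loads(text))
    except json.JSONDecodeError:
        # assumption: treat a malformed generation as "no entities found"
        extracted.append([])
```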
Citation
If you find SLIMER useful in your research or work, please cite the following paper:
@misc{zamai2024lessinstructmoreenriching,
title={Show Less, Instruct More: Enriching Prompts with Definitions and Guidelines for Zero-Shot NER},
author={Andrew Zamai and Andrea Zugarini and Leonardo Rigutini and Marco Ernandes and Marco Maggini},
year={2024},
eprint={2407.01272},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.01272},
}