Token Classification
Token classification is a natural language understanding task in which a label is assigned to some tokens in a text. Some popular token classification subtasks are Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. NER models can be trained to identify specific entities in a text, such as dates, individuals, and places, while PoS tagging identifies, for example, which words in a text are verbs, nouns, and punctuation marks.
About Token Classification
Use Cases
Information Extraction from Invoices
You can extract entities of interest from invoices automatically using Named Entity Recognition (NER) models. Invoices can be read with Optical Character Recognition (OCR) models, and the resulting text can then be passed to a NER model for inference. In this way, important information such as dates, company names, and other named entities can be extracted.
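A minimal sketch of this flow, assuming the pytesseract OCR library is installed and that invoice.png is a hypothetical invoice image. Note that the default NER model only labels persons, organizations, locations, and miscellaneous entities; extracting dates or amounts would require a model trained with such labels.
from PIL import Image
import pytesseract
from transformers import pipeline

# Read the invoice image with OCR (Tesseract via pytesseract)
text = pytesseract.image_to_string(Image.open("invoice.png"))

# Run a NER pipeline over the extracted text, grouping subword tokens into whole entities
ner = pipeline("ner", aggregation_strategy="simple")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))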
Task Variants
Named Entity Recognition (NER)
NER is the task of recognizing named entities in a text. These entities can be the names of people, locations, or organizations. The task is formulated as labeling each token with a class for each type of named entity and a class named "O" for tokens that are not part of any entity. The input for this task is text and the output is the annotated text with the named entities marked.
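For example, under the widely used IOB2 labeling scheme (where the "O" class comes from), a sentence such as "Omar lives in Zürich." would be tokenized and labeled roughly as follows; the exact label set depends on the dataset and model.
tokens = ["Omar", "lives", "in", "Zürich", "."]
labels = ["B-PER", "O", "O", "B-LOC", "O"]  # B- marks the beginning of an entity, O marks non-entity tokens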
Inference
You can use the 🤗 Transformers library's ner pipeline to run inference with NER models.
from transformers import pipeline
classifier = pipeline("ner")
classifier("Hello I'm Omar and I live in Zürich.")
Part-of-Speech (PoS) Tagging
In PoS tagging, the model recognizes parts of speech, such as nouns, pronouns, adjectives, and verbs, in a given text. The task is formulated as labeling each word with its part of speech.
Inference
You can use the 🤗 Transformers library's token-classification pipeline with a PoS tagging model of your choice. The model will return a JSON output with a PoS tag for each token.
from transformers import pipeline
classifier = pipeline("token-classification", model = "vblagoje/bert-english-uncased-finetuned-pos")
classifier("Hello I'm Omar and I live in Zürich.")
This is not limited to Transformers! You can also use other libraries such as Stanza, spaCy, and Flair to do inference. Here is an example using a pretrained spaCy model.
!pip install https://huggingface.co./spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl
import en_core_web_sm
nlp = en_core_web_sm.load()
doc = nlp("I'm Omar and I live in Zürich.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.ent_type_)
## I PRON nsubj
## 'm AUX ROOT
## Omar PROPN attr PERSON
## ...
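Besides the per-token attributes printed above, spaCy also exposes entity-level spans on the same doc object through doc.ents. A short sketch:
# Entity spans detected by the model, with their labels
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Omar" would typically be labeled PERSON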
Useful Resources
Would you like to learn more about token classification? Great! Here are some curated resources that you may find helpful!
Notebooks
Scripts for training
Documentation
Compatible libraries
Note A robust model for identifying people, locations, organizations, and names of miscellaneous entities.
Note A strong model to identify people, locations, organizations and names in multiple languages.
Note A token classification model specialized in medical entity recognition.
Note Flair models are typically the state of the art in named entity recognition tasks.
Note A widely used dataset for benchmarking named entity recognition models.
Note A multilingual dataset of Wikipedia articles annotated for named entity recognition in over 150 different languages.
Note An application that recognizes entities, extracts noun chunks, and identifies various linguistic features of each token.
- accuracy
- Accuracy is the proportion of correct predictions among the total number of cases processed. It can be computed as Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives.
- recall
- Recall is the fraction of the positive examples that were correctly labeled by the model as positive. It can be computed as Recall = TP / (TP + FN), where TP is the number of true positives and FN the number of false negatives.
- precision
- Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It can be computed as Precision = TP / (TP + FP), where TP is the number of true positives and FP the number of false positives (examples incorrectly labeled as positive).
- f1
- The F1 score is the harmonic mean of precision and recall. It can be computed as F1 = 2 * (precision * recall) / (precision + recall). A small worked example of all four metrics follows this list.
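As a quick sanity check of the formulas above, here is a small worked example with made-up counts:
# Illustrative counts (made up): 8 true positives, 2 false positives, 2 false negatives, 88 true negatives
tp, fp, fn, tn = 8, 2, 2, 88

accuracy = (tp + tn) / (tp + tn + fp + fn)            # (8 + 88) / 100 = 0.96
precision = tp / (tp + fp)                            # 8 / 10 = 0.80
recall = tp / (tp + fn)                               # 8 / 10 = 0.80
f1 = 2 * (precision * recall) / (precision + recall)  # 0.80
print(accuracy, precision, recall, f1)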