# Hugging Face with Bias Data in CoNLL Format
## Introduction
This README provides guidance on how to use the Hugging Face platform with bias-tagged datasets in the CoNLL format. Such datasets are essential for studying and mitigating bias in AI models. This dataset is curated by Shaina Raza. The methods and formatting discussed here are based on the seminal work "Nbias: A natural language processing framework for BIAS identification in text" by Raza et al. (2024) (see citation below).
## Prerequisites
- Install the Hugging Face `transformers` and `datasets` libraries:

```bash
pip install transformers datasets
```
## Data Format
Bias data in CoNLL format can be structured similarly to standard CoNLL, but with labels indicating bias instead of named entities:
```
The O
book O
written B-BIAS
by I-BIAS
egoist I-BIAS
women I-BIAS
is O
good O
. O
```
Here, the `B-` prefix marks the first token of a biased span, `I-` marks tokens inside a biased span, and `O` marks tokens outside any biased span.
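Token-classification models work with integer class ids rather than tag strings, so a label mapping is needed downstream. A minimal sketch for the three tags shown above (the exact label inventory should be verified against the dataset itself):

```python
# Tag inventory implied by the example above (verify against the data).
label_list = ["O", "B-BIAS", "I-BIAS"]
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for i, label in enumerate(label_list)}
```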
## Steps to Use with Hugging Face
### Loading Bias-tagged CoNLL Data with Hugging Face
- If your bias-tagged dataset in CoNLL format is publicly available on the Hugging Face Hub, use:

```python
from datasets import load_dataset

dataset = load_dataset("newsmediabias/BIAS-CONLL")
```
- For custom datasets, ensure they are formatted correctly and pass a local path to load them; see the parsing sketch after this list.
- If the dataset is gated/private, make sure you have run `huggingface-cli login` first.
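For the custom/local case, one option is to parse the two-column CoNLL file yourself and wrap it in a `datasets.Dataset`. A minimal sketch, assuming one token-label pair per line and blank lines between sentences (the file path and the `ner_tags` column name are placeholders):

```python
from datasets import Dataset

def read_conll(path):
    """Parse a two-column CoNLL file (token + label per line, blank line
    between sentences) into parallel token/tag lists."""
    tokens, tags, all_tokens, all_tags = [], [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line = sentence boundary
                if tokens:
                    all_tokens.append(tokens)
                    all_tags.append(tags)
                    tokens, tags = [], []
                continue
            parts = line.split()
            tokens.append(parts[0])
            tags.append(parts[-1])
    if tokens:  # flush the last sentence if the file lacks a trailing blank line
        all_tokens.append(tokens)
        all_tags.append(tags)
    return Dataset.from_dict({"tokens": all_tokens, "ner_tags": all_tags})

local_dataset = read_conll("path/to/your_bias_data.conll")  # hypothetical path
```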
### Preprocessing the Data
- Tokenization:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("YOUR_PREFERRED_MODEL_CHECKPOINT")
# The CoNLL data is already split into words, so tell the tokenizer.
tokenized_input = tokenizer(dataset["train"]["tokens"], is_split_into_words=True)
```
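For fine-tuning, the string tags must also be converted to ids and re-aligned with the subword tokens the tokenizer produces. A common pattern for this (a sketch, assuming a fast tokenizer, a `ner_tags` column of string tags, and the `label2id` mapping from the Data Format section):

```python
def tokenize_and_align_labels(examples):
    tokenized = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True
    )
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        labels = []
        for word_id in word_ids:
            if word_id is None:
                labels.append(-100)  # ignore special tokens in the loss
            else:
                # If the tags are already integer ids, skip the label2id lookup.
                labels.append(label2id[tags[word_id]])
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)
```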
### Training a Model on Bias-tagged CoNLL Data
- Depending on your task, you may fine-tune a model on the bias data using Hugging Face's `Trainer` class or native PyTorch/TensorFlow code, as sketched below.
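A minimal `Trainer` sketch, assuming the `label_list`/`id2label`/`label2id` mappings and the `tokenized_dataset` from the previous steps (the checkpoint name, output directory, and split names are placeholders):

```python
from transformers import (
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForTokenClassification.from_pretrained(
    "YOUR_PREFERRED_MODEL_CHECKPOINT",
    num_labels=len(label_list),
    id2label=id2label,
    label2id=label2id,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bias-ner-model", num_train_epochs=3),
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset.get("validation"),  # if a validation split exists
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```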
### Evaluation
- After training, evaluate the model's ability to recognize and possibly mitigate bias.
- This might involve measuring the model's precision, recall, and F1 score on recognizing bias in text.
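Entity-level precision, recall, and F1 for BIO tags are commonly computed with the `seqeval` package (`pip install seqeval`). A small sketch with hypothetical gold and predicted tag sequences:

```python
from seqeval.metrics import classification_report, f1_score

# Hypothetical gold and predicted tag sequences for two sentences.
y_true = [["O", "O", "B-BIAS", "I-BIAS", "O"], ["O", "B-BIAS", "O"]]
y_pred = [["O", "O", "B-BIAS", "O", "O"], ["O", "B-BIAS", "O"]]

print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```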
### Deployment
- Once satisfied with the model's performance, deploy it for real-world applications, always being mindful of its limitations and potential implications.
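For serving, the fine-tuned checkpoint can be wrapped in a `transformers` pipeline. A sketch, assuming the model was saved to the hypothetical `bias-ner-model` directory from the training step:

```python
from transformers import pipeline

bias_tagger = pipeline(
    "token-classification",
    model="bias-ner-model",         # local path or Hub id (assumed)
    aggregation_strategy="simple",  # merge B-/I- pieces into whole spans
)
print(bias_tagger("The book written by egoist women is good."))
```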
Please cite the following work if you use this dataset.

## Citation
```bibtex
@article{raza2024nbias,
  title={Nbias: A natural language processing framework for BIAS identification in text},
  author={Raza, Shaina and Garg, Muskan and Reji, Deepak John and Bashir, Syed Raza and Ding, Chen},
  journal={Expert Systems with Applications},
  volume={237},
  pages={121542},
  year={2024},
  publisher={Elsevier}
}
```