Dzarashield

Dzarashield is a fine-tuned model based on DzaraBert, specialized in hate speech detection for Algerian Arabic (Darija) text. It was trained on a dataset of 13.5k documents, built from manually labeled documents and various sources, and achieves an F1 score of 0.87 on a held-out test set of 2.5k samples.

Limitations

Note that this model has been fine-tuned solely on Arabic characters, so tokens from other languages have been pruned.
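
If your input may mix scripts, stripping non-Arabic characters before tokenization keeps it consistent with what the model saw during fine-tuning. The snippet below is a minimal sketch; the keep_arabic helper is illustrative and not part of the released code:

import re

# Keep characters from the basic Arabic block (U+0600-U+06FF) and whitespace; drop everything else.
NON_ARABIC_RE = re.compile(r'[^\u0600-\u06FF\s]+')

def keep_arabic(text: str) -> str:
    """Remove non-Arabic characters and collapse extra whitespace."""
    cleaned = NON_ARABIC_RE.sub(' ', text)
    return ' '.join(cleaned.split())

print(keep_arabic('hello واش خويا راك غايا؟ 123'))  # -> 'واش خويا راك غايا؟'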

How to use

Setup:

!git lfs install
!git clone https://huggingface.co./Sifal/dzarashield
%cd dzarashield

from model import BertClassifier
from transformers import PreTrainedTokenizerFast
import torch

# Check if a GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Specify paths
MODEL_PATH = "./model.pth"
TOKENIZER_PATH = "./tokenizer.json"

# Load the model with the appropriate map_location
dzarashield = BertClassifier()
dzarashield.load_state_dict(torch.load(MODEL_PATH, map_location=device))
dzarashield.to(device)   # move the model to the selected device
dzarashield.eval()       # switch to inference mode (disables dropout)

# Load the tokenizer
tokenizer = PreTrainedTokenizerFast(tokenizer_file=TOKENIZER_PATH)

Example:

idx_to_label = {0: 'non-hate', 1: 'hate'}
sentences = ['يا وحد الشموتي، تكول دجاج آآآه', 'واش خويا راك غايا؟']

def predict_label(sentence):
    tokenized = tokenizer(sentence, return_tensors='pt').to(device)  # move inputs to the same device as the model
    with torch.no_grad():
        outputs = dzarashield(**tokenized)
        return idx_to_label[outputs.logits.argmax().item()]

for sentence in sentences:
    label = predict_label(sentence)
    print(f'sentence: {sentence} label: {label}')
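
For scoring many sentences at once, batching them through the tokenizer with padding is usually faster than looping one by one. This is a sketch, not part of the released code; it assumes the vocabulary contains a standard BERT-style [PAD] token and that BertClassifier accepts the same padded inputs as the single-sentence example:

# The tokenizer was loaded from tokenizer.json only, so its pad token may be unset;
# a [PAD] token is assumed to exist in the vocabulary (standard for BERT-style tokenizers).
if tokenizer.pad_token is None:
    tokenizer.pad_token = '[PAD]'

def predict_labels(batch):
    # Pad to the longest sentence in the batch; 512 is assumed from the BERT base architecture
    tokenized = tokenizer(batch, padding=True, truncation=True, max_length=512,
                          return_tensors='pt').to(device)
    with torch.no_grad():
        outputs = dzarashield(**tokenized)
    preds = outputs.logits.argmax(dim=-1).tolist()
    return [idx_to_label[p] for p in preds]

print(predict_labels(sentences))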

Acknowledgments

Dzarashield is built upon the foundations of Dziribert, and I am grateful to its authors for the work that made this project possible.
