
Toxic post classification using DistilBERT

A pretrained DistilBERT fine-tuned as a classifier on the Toxic Comment dataset (https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge). The goal is to classify whether a comment is toxic or not. Note that the labels in the original dataset are more fine-grained (i.e. different types of toxicity); here they are collapsed into a binary toxic / non-toxic label. The model obtains a test accuracy of 95% on a balanced split.
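
A minimal usage sketch with the Transformers library is shown below. The repository ID and the label ordering are assumptions (they are not stated in this card); substitute the actual model ID when loading.

```python
# Sketch only: "your-username/distilbert-toxic-classifier" is a placeholder for this
# model's Hugging Face repo ID, and the 0 = non-toxic / 1 = toxic ordering is assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-username/distilbert-toxic-classifier"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a single comment and run it through the classifier
inputs = tokenizer("You are a wonderful person!", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()  # assumed: 0 = non-toxic, 1 = toxic
print("toxic" if pred == 1 else "non-toxic")
```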
