RedRocket Joint Tagger Project
JTP-1.1: PILOT2
NEW 2024-07-02: This model is an incremental improvement over PILOT. It features a gated model head, and some training hyperparameters were changed. Inference code will need minor changes to account for the architecture change. Both models will be hosted on HF Spaces until more testing between the two can be done.
JTP-1: PILOT
This model is a multi-label classifier designed and trained by RedRocket for use on furry images, using E621 tags.
PILOT is the first model of this series. It is trained on over 9,000 tags, selected as e621 tags with more than 500 occurrences that are not artist or character tags.
Model Details
Model Description
- Developed by: RedRocket
- Compute power provided by: Minotoro and Frosting.ai (thank you)
- Model type: Multi-label classifier
- License: Apache 2.0
- Finetuned from model: ViT-SO400M-14-SigLIP
Model Sources
- Repository: Here!
- Demo (PILOT): https://huggingface.co./spaces/RedRocket/JointTaggerProject-Inference
- Demo (PILOT2): https://huggingface.co./spaces/RedRocket/JointTaggerProject-Inference-Beta
Uses
Direct Use
Use it to tag furry images.
Downstream Use
Use it to train a text-to-image model on synthetic tags. It may be good enough for that by now, though it would be safer to wait for more extensive evaluation. I would suggest supplementing existing tags rather than replacing them.
Out-of-Scope Use
Use it to tag non-furry images. It might not work terribly well, but it might also work surprisingly well! Great entertainment value either way.
Bias, Risks, and Limitations
This model may contain biases. Tags that are poorly tagged in the original data may be weakly predicted by the classifier, for instance. Tags that are very commonly present alongside other tags may be hallucinated.
The model has been known to show a bias toward English as a default, specifically outputting the tag english_text on text that does not belong to any particular language, for example Arabic numerals and onomatopoeia.
Recommendations
Manually check at least a portion of the model's outputs, preferably a diverse sample, to verify their correctness, and apply a different threshold if it seems necessary.
How to Get Started with the Model
Use the included code to launch a Gradio demo for playing with the model. We recommend a threshold of 0.2 to start. Validation stats during training showed a Bookmaker's Informedness of 0.725 at this threshold (meaning the model is that much better at guessing tags than random guessing). Manual evaluation suggests that a large portion of the gap between that value and 1 is likely due to false negatives in the dataset.
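As a minimal illustration of applying that threshold (the names logits and tag_names are stand-ins, not part of the included code, and a sigmoid over the logits is assumed as for PILOT):

```python
import torch

# Hypothetical stand-ins for the demo's real outputs: logits for one
# image and the tag vocabulary (both assumed, not from this repo).
logits = torch.randn(9000)
tag_names = [f"tag_{i}" for i in range(9000)]

# Convert logits to probabilities and keep tags at or above 0.2.
probs = torch.sigmoid(logits)
predicted = [tag_names[i] for i, p in enumerate(probs.tolist()) if p >= 0.2]
print(predicted[:10])
```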
Training Details
Training Data
The model was trained on a roughly 4 million image subset of e621. No dataset filtering was applied.
Loss weighting was informed by a Bayesian prior model trained on a set of tags from non-deleted post tag strings from an e621 database dump.
Training Procedure
Images go in, logits come out. You can't explain that.
Loss objective is F.binary_cross_entropy_with_logits(output, target, torch.maximum(target, 1.0 - prior_output)).
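A minimal sketch of that objective, assuming output, target, and prior_output are (batch, num_tags) tensors holding logits, multi-hot labels, and prior tag probabilities respectively (the function name is an assumption):

```python
import torch
import torch.nn.functional as F

def tagger_loss(output: torch.Tensor, target: torch.Tensor,
                prior_output: torch.Tensor) -> torch.Tensor:
    # Positive labels always get full weight (target == 1 -> weight 1).
    # Negative labels are down-weighted when the prior expects the tag,
    # softening the penalty for likely false negatives in the data.
    weight = torch.maximum(target, 1.0 - prior_output)
    return F.binary_cross_entropy_with_logits(output, target, weight)
```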
Preprocessing
Image preprocessing should be done in the following order (a code sketch follows this list):
- Resize image to longest side 384.
- torchvision.transforms.ToTensor()
- Composite the alpha channel, if present, with 50% gray.
- Normalize to mean 0.5 and std 0.5 (changing the range from (0, 1) to (-1, 1))
- Pad image to 384x384 (torchvision.transforms.CenterCrop((384,384)) will do this)
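A minimal sketch of this pipeline, assuming PIL input; the function name and Lanczos resampling are assumptions, while the step order and constants come from the list above:

```python
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from PIL import Image

def preprocess(path: str) -> torch.Tensor:
    image = Image.open(path)
    image = image.convert("RGBA" if "A" in image.getbands() else "RGB")

    # 1. Resize so the longest side is 384, preserving aspect ratio.
    w, h = image.size
    scale = 384 / max(w, h)
    image = image.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

    # 2. Convert to a float tensor in (0, 1); ToTensor keeps the alpha
    #    channel if one is present.
    x = T.ToTensor()(image)

    # 3. Composite the alpha channel, if present, over 50% gray.
    if x.shape[0] == 4:
        rgb, alpha = x[:3], x[3:]
        x = rgb * alpha + 0.5 * (1.0 - alpha)

    # 4. Normalize to mean 0.5 and std 0.5, mapping (0, 1) to (-1, 1).
    x = TF.normalize(x, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

    # 5. Pad to 384x384; CenterCrop zero-pads smaller inputs, and zero
    #    is mid-gray in the normalized range.
    return T.CenterCrop((384, 384))(x)
```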
Training Hyperparameters
- Training regime: The model was trained for 4 epochs with a batch size of 512, using Schedule-Free Adam.
Evaluation
Testing Data, Factors & Metrics
Testing Data
A validation set of approximately 128,000 images was reserved for testing.
Metrics
Bookmaker's Informedness at thresholds of 0.2, 0.3, and 0.5, as well as loss on the validation set, was monitored throughout training. Training was terminated after 5 epochs, and the checkpoint with the lowest validation loss (end of the 4th epoch) was taken.
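For reference, a minimal sketch of Bookmaker's Informedness (true positive rate plus true negative rate, minus 1), pooled over all tags; the tensor names and the micro-averaged pooling are assumptions, not taken from the training code:

```python
import torch

def informedness(probs: torch.Tensor, targets: torch.Tensor,
                 threshold: float = 0.2) -> float:
    # probs and targets: (num_samples, num_tags); targets are 0/1.
    preds = (probs >= threshold).float()
    tp = (preds * targets).sum()
    tn = ((1 - preds) * (1 - targets)).sum()
    fp = (preds * (1 - targets)).sum()
    fn = ((1 - preds) * targets).sum()
    # Informedness = TPR + TNR - 1: 0 for chance, 1 for a perfect tagger.
    return (tp / (tp + fn) + tn / (tn + fp) - 1).item()
```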
Results
It seems to tag furry images fairly well.