
Dataset Card for Danbooru 2025 Metadata

Latest Post ID: 8,877,698 (as of February 18, 2025)

This repository provides a comprehensive, up-to-date metadata dump for Danbooru. The metadata was freshly scraped starting January 2, 2025, featuring more extensive tag annotations for older posts, fewer errors, and fewer unlabeled AI-generated images compared to previous scrapes.

Dataset Details

Overview

Danbooru is a well-known imageboard focusing on anime-style artwork, hosting millions of user-submitted images with extensive tagging. This dataset offers metadata (in Parquet format) for all posts up to the specified date, including details such as tags, upload timestamps, and file properties.

Key Advantages

  • Consolidated Metadata: All available metadata is contained within this single dataset, eliminating the need to merge multiple partial scrapes.
  • Improved Tag Accuracy: Historical tag renames and additions are accurately reflected, reducing the potential mismatch or redundancy often found in older metadata dumps.
  • Less AI Noise: Compared to many legacy scrapes, the 2025 data incorporates updated annotations and filters out many unlabeled AI-generated images.

Usage

You can load and filter this dataset using the Hugging Face datasets library:

from datasets import load_dataset

danbooru_metadata = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_metadata.to_pandas()

This metadata can be used for research, indexing, or as a foundation for building image-based machine learning pipelines. However, please be mindful of any copyright, content, or platform-specific policies.
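As a sketch of a typical filtering step, the columns from the schema below (such as rating and score) can be used to narrow the metadata with ordinary pandas operations. The toy rows here are hypothetical stand-ins for the real dump:

```python
import pandas as pd

# Toy rows standing in for the real metadata (column names match the dataset schema)
df = pd.DataFrame({
    "id": [1, 2, 3],
    "rating": ["g", "e", "g"],
    "score": [10, 5, -2],
    "tag_string": ["1girl solo", "1boy", "landscape"],
})

# Keep general-rated posts with a positive score
safe = df[(df["rating"] == "g") & (df["score"] > 0)]
```

In practice you would obtain df from danbooru_metadata.to_pandas() as shown above, then apply the same boolean masks.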

Dataset Structure

The metadata schema is closely aligned with Danbooru’s JSON structure, ensuring familiarity for those who have used other Danbooru scrapes. Below are the main columns:

Index([
  'approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',
  'file_ext', 'file_size', 'file_url', 'has_active_children', 'has_children',
  'has_large', 'has_visible_children', 'id', 'image_height', 'image_width',
  'is_banned', 'is_deleted', 'is_flagged', 'is_pending', 'large_file_url',
  'last_comment_bumped_at', 'last_commented_at', 'last_noted_at', 'md5',
  'media_asset_created_at', 'media_asset_duration', 'media_asset_file_ext',
  'media_asset_file_key', 'media_asset_file_size', 'media_asset_id',
  'media_asset_image_height', 'media_asset_image_width',
  'media_asset_is_public', 'media_asset_md5', 'media_asset_pixel_hash',
  'media_asset_status', 'media_asset_updated_at', 'media_asset_variants',
  'parent_id', 'pixiv_id', 'preview_file_url', 'rating', 'score', 'source',
  'tag_count', 'tag_count_artist', 'tag_count_character',
  'tag_count_copyright', 'tag_count_general', 'tag_count_meta', 'tag_string',
  'tag_string_artist', 'tag_string_character', 'tag_string_copyright',
  'tag_string_general', 'tag_string_meta', 'up_score', 'updated_at',
  'uploader_id'
], dtype='object')
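The tag_string* columns store tags as a single space-separated string, following Danbooru's convention. A minimal sketch of expanding them into per-row Python lists:

```python
import pandas as pd

# One example row mimicking the tag_string_general column
df = pd.DataFrame({"tag_string_general": ["1girl long_hair smile"]})

# Split the space-separated tag string into a list of individual tags
df["general_tags"] = df["tag_string_general"].str.split(" ")
```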

Dataset Creation

Scraping Process

  • Post IDs from 1 to the latest ID (8,877,698) were retrieved using a distributed scraping approach.
  • Certain restricted tags (e.g., loli) are inaccessible without special permissions and are therefore absent in this dataset.
  • If you require more comprehensive metadata (including hidden or restricted tags), consider merging this data with older scrapes such as Danbooru2021.
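One possible merging strategy, assuming both dumps share Danbooru's post id column, is to prefer rows from this newer scrape and backfill posts that only exist in the older one. The frames and the keep-newest policy here are illustrative assumptions, not part of the dataset:

```python
import pandas as pd

# Hypothetical miniature frames: this dump and an older scrape (e.g. Danbooru2021)
new_meta = pd.DataFrame({"id": [1, 2], "tag_string": ["a", "b"]})
old_meta = pd.DataFrame({"id": [2, 3], "tag_string": ["b_old", "c"]})

# Concatenate with the newer dump first, then drop duplicate post ids,
# keeping the first (i.e. newer) occurrence of each post
merged = pd.concat([new_meta, old_meta]).drop_duplicates(subset="id", keep="first")
```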

Below is a simplified example of how the raw JSON was converted into a flattened Parquet file:

import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize(nb_workers=4, progress_bar=True)

def flatten_dict(d, parent_key='', sep='_'):
    """Recursively flattens a nested dictionary."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)

def extract_all_illust_info(json_content):
    """Parses and flattens Danbooru JSON into a pandas Series."""
    flattened_data = flatten_dict(json_content)
    return pd.Series(flattened_data)

def dicts_to_dataframe_parallel(dicts):
    """Converts a list of dicts to a flattened DataFrame using pandarallel."""
    df = pd.DataFrame(dicts)
    flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
    return flattened_df
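To make the flattening concrete, here is the same flatten_dict logic applied to a miniature Danbooru-style post (restated so the example is self-contained; the nested media_asset block becomes prefixed columns, matching names like media_asset_file_ext in the schema above):

```python
def flatten_dict(d, parent_key='', sep='_'):
    """Recursively flattens a nested dictionary (same logic as above)."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)

# A miniature post with a nested media_asset object and a list-valued field
post = {"id": 1, "media_asset": {"file_ext": "png", "variants": ["180x180", "360x360"]}}
flat = flatten_dict(post)
# flat == {"id": 1, "media_asset_file_ext": "png", "media_asset_variants": "180x180, 360x360"}
```

Note that lists are joined into comma-separated strings, which is why list-typed fields such as media_asset_variants appear as strings in the Parquet columns.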

Considerations & Recommendations

  • Adult/NSFW Content: Danbooru includes adult imagery and explicit tags. Exercise caution, especially if sharing or using this data in public-facing contexts.
  • Licensing & Copyright: Images referenced by this metadata may be copyrighted. Refer to Danbooru’s Terms of Service and respect artists’ rights.
  • Potential Bias: Tags are community-curated and can reflect the inherent biases of the user base (e.g., under- or over-tagging certain categories).
  • Missing or Restricted Tags: Some tags require special permissions on Danbooru; hence they do not appear in this dataset.

For further integration of historical data, consider merging with previous Danbooru scrapes. If you use this dataset in research or production, please cite appropriately and abide by all relevant terms and conditions.


Last Updated: February 18, 2025
