
Key Features πŸ”‘

  • 1 million+ steps of enhanced digital twins of long-horizon real-world tasks from Agibot World.
  • 500,000+ steps of atomic tasks automatically generated by agents.
  • 180+ classes of objects.
  • 5 classes of scenes.

News 🌍

  • [2025/2/24] AgiBot Digital World released on Huggingface. Download Link

TODO List πŸ“…

  • AgiBot Digital World: more high-quality simulation data, including atomic skill task data and digital twin-enhanced data aligned with tasks in Agibot World (open-sourcing in progress).

Get started πŸ”₯

Download the Dataset

To download the full dataset, you can use the following code. If you encounter any issues, please refer to the official Hugging Face documentation.

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use an access token with write permissions.
# Generate one from your settings: https://huggingface.co./settings/tokens
git clone https://huggingface.co./datasets/agibot-world/AgiBotDigitalWorld

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co./datasets/agibot-world/AgiBotDigitalWorld

If you only want to download a specific task from the AgiBotDigitalWorld dataset, such as digitaltwin_5, follow these steps:

# Ensure Git LFS is installed (https://git-lfs.com)
git lfs install

# Initialize an empty Git repository
git init AgiBotDigitalWorld
cd AgiBotDigitalWorld

# Set the remote repository
git remote add origin https://huggingface.co./datasets/agibot-world/AgiBotDigitalWorld

# Enable sparse-checkout
git sparse-checkout init

# Specify the folders and files you want to download
git sparse-checkout set observations/digitaltwin_5 task_info/digitaltwin_5.json scripts proprio_stats parameters

# Pull the data from the main branch
git pull origin main

To facilitate the inspection of the dataset's internal structure and examples, we also provide a sample dataset. Please refer to sample_dataset.zip.

Dataset Preprocessing

Our project relies solely on the lerobot library (dataset v2.0); please follow their installation instructions. Here, we provide scripts for converting the dataset to the lerobot format.

Requirements

Requires ffmpeg>=7.1 (you can install it with conda install -y -c conda-forge ffmpeg).

export SVT_LOG=0
python scripts/convert_to_lerobot.py --data_dir DATASET_FOLDER --save_dir SAVE_FOLDER --repo_id=agibot/agibotdigital --preprocess_video

## Example
# python scripts/convert_to_lerobot.py --data_dir final_format_data --save_dir ./output --repo_id=agibot/agibotdigital --preprocess_video

Visualization

python scripts/visualize_dataset.py --repo-id='agibot/agibotdigital' --episode-index=0 --dataset-path=SAVE_FOLDER

## Example
# python scripts/visualize_dataset.py --repo-id='agibot/agibotdigital' --episode-index=0 --dataset-path=output

We sincerely thank the developers of lerobot for their exceptional contributions to the open-source community.

Dataset Structure

Folder hierarchy

data
β”œβ”€β”€ observations
β”‚   β”œβ”€β”€ digitaltwin_0 # This represents the task id.
β”‚   β”‚   β”œβ”€β”€ 9b21cf2e-829f-4aad-9b61-9edc5b947163 # This represents the episode uuid.
β”‚   β”‚   β”‚   β”œβ”€β”€ depth # This is a folder containing depth information saved in PNG format.
β”‚   β”‚   β”‚   β”œβ”€β”€ video # This is a folder containing videos from all camera perspectives.
β”‚   β”‚   β”œβ”€β”€ 131e407a-b828-4937-a554-e6706cbc5e2f
β”‚   β”‚   β”‚   └── ...
β”‚   β”‚   └── ...
β”‚   β”œβ”€β”€ digitaltwin_1
β”‚   β”‚   β”œβ”€β”€ 95808182-501f-4dca-984b-7404df844d31
β”‚   β”‚   β”‚   β”œβ”€β”€ depth
β”‚   β”‚   β”‚   β”œβ”€β”€ video
β”‚   β”‚   β”œβ”€β”€ edb0774b-13bb-4a8b-8bb0-71e82fe3ff6a
β”‚   β”‚   β”‚   └── ...
β”‚   └── ...
β”œβ”€β”€ meta_info
β”‚   β”œβ”€β”€ digitaltwin_0 # This represents the task id.
β”‚   β”‚   β”œβ”€β”€ 9b21cf2e-829f-4aad-9b61-9edc5b947163  # This represents the episode uuid.
β”‚   β”‚   β”‚   β”œβ”€β”€ task_info.json # This represents the task information.
β”‚   β”‚   β”‚   β”œβ”€β”€ proprio_meta_info.h5 # This file contains all the robot's proprioceptive information.
β”‚   β”‚   β”‚   β”œβ”€β”€ camera_parameter.json # This contains all the cameras' intrinsic and extrinsic parameters.
β”‚   β”‚   β”œβ”€β”€ 131e407a-b828-4937-a554-e6706cbc5e2f
β”‚   β”‚   β”‚   β”œβ”€β”€ task_info.json 
β”‚   β”‚   β”‚   β”œβ”€β”€ proprio_meta_info.h5
β”‚   β”‚   β”‚   β”œβ”€β”€ camera_parameter.json
β”‚   β”‚   └── ...
β”‚   └── digitaltwin_1
β”‚       β”œβ”€β”€ 95808182-501f-4dca-984b-7404df844d31
β”‚       β”‚   β”œβ”€β”€ task_info.json
β”‚       β”‚   β”œβ”€β”€ proprio_meta_info.h5
β”‚       β”‚   └── camera_parameter.json
β”‚       β”œβ”€β”€ edb0774b-13bb-4a8b-8bb0-71e82fe3ff6a
β”‚       β”‚   β”œβ”€β”€ task_info.json
β”‚       β”‚   β”œβ”€β”€ proprio_meta_info.h5
β”‚       β”‚   └── camera_parameter.json
β”‚       └── ...
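As a quick sanity check after downloading, the hierarchy above can be walked with a few lines of Python. This is a minimal sketch; `list_episodes` is a hypothetical helper, not one of the repo's scripts:

```python
from pathlib import Path

def list_episodes(data_root: str):
    """Enumerate (task_id, episode_uuid) pairs from the folder hierarchy above.

    Assumes the layout shown: <data_root>/observations/<task_id>/<episode_uuid>/...
    """
    root = Path(data_root) / "observations"
    episodes = []
    for task_dir in sorted(root.iterdir()):
        if not task_dir.is_dir():
            continue
        for ep_dir in sorted(task_dir.iterdir()):
            if ep_dir.is_dir():
                episodes.append((task_dir.name, ep_dir.name))
    return episodes
```

Each returned pair can then be joined with the matching meta_info folder to locate task_info.json and the proprioceptive h5 file for that episode.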

json file format

In the task_info.json file, we store the basic information of every episode along with the language instructions. Here, we will further explain several specific keywords.

  • action_config: The content corresponding to this key is a list composed of all action slices from the episode. Each action slice includes a start and end time, the corresponding atomic skill, and the language instruction.
  • key_frame: The content corresponding to this key consists of annotations for keyframes, including the start and end times of the keyframes and detailed descriptions.
{
  "episode_id": "9b21cf2e-829f-4aad-9b61-9edc5b947163",
  "task_id": "digitaltwin_5",
  "task_name": "pick_toys_into_box",
  "init_scene_text": "",
  "label_info": {
    "objects": {
      "extra_objects": [
        {
          "object_id": "omni6DPose_book_000",
          "workspace_id": "book_table_extra"
        }
      ],
      "task_related_objects": [
        {
          "object_id": "omni6DPose_toy_motorcycle_023",
          "workspace_id": "book_table_dual_left"
        },
        {
          "object_id": "omni6DPose_toy_truck_030",
          "workspace_id": "book_table_dual_right"
        },
        {
          "object_id": "genie_storage_box_002",
          "workspace_id": "book_table_dual_middle"
        }
      ]
    },
    "action_config": [
      {
        "start_frame": 0,
        "end_frame": 178,
        "action_text": "",
        "skill": "Pick",
        "active_object": "gripper",
        "passive_object": "omni6DPose_toy_motorcycle_023"
      },
      {
        "start_frame": 179,
        "end_frame": 284,
        "action_text": "",
        "skill": "Place",
        "active_object": "omni6DPose_toy_motorcycle_023",
        "passive_object": "genie_storage_box_002"
      },
      {
        "start_frame": 285,
        "end_frame": 430,
        "action_text": "",
        "skill": "Pick",
        "active_object": "gripper",
        "passive_object": "omni6DPose_toy_truck_030"
      },
      {
        "start_frame": 431,
        "end_frame": 536,
        "action_text": "",
        "skill": "Place",
        "active_object": "omni6DPose_toy_truck_030",
        "passive_object": "genie_storage_box_002"
      }
    ],
    "key_frame": []
  }
}
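Given the format above, the action slices can be pulled out of a task_info.json with the standard json module. A minimal sketch, assuming only the keys shown in the example; `load_action_slices` is a hypothetical helper:

```python
import json

def load_action_slices(task_info_path: str):
    """Read a task_info.json (format shown above) and return its action slices.

    Each slice is (start_frame, end_frame, skill, active_object, passive_object).
    """
    with open(task_info_path) as f:
        info = json.load(f)
    slices = []
    for a in info["label_info"]["action_config"]:
        slices.append((a["start_frame"], a["end_frame"], a["skill"],
                       a["active_object"], a["passive_object"]))
    return slices
```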

h5 file format

In the proprio_stats.h5 file, we store all the robot's proprioceptive data. For more detailed information, please refer to the explanation of proprioceptive state.

|-- timestamp
|-- state
    |-- effector
        |-- force
        |-- index
        |-- position
    |-- end
        |-- angular
        |-- orientation
        |-- position
        |-- velocity
        |-- wrench
    |-- joint
        |-- current_value
        |-- effort
        |-- position
        |-- velocity
    |-- robot
        |-- orientation
        |-- orientation_drift
        |-- position
        |-- position_drift
|-- action
    |-- effector
        |-- force
        |-- index
        |-- position
    |-- end
        |-- angular
        |-- orientation
        |-- position
        |-- velocity
        |-- wrench
    |-- joint
        |-- effort
        |-- index
        |-- position
        |-- velocity
    |-- robot
        |-- index
        |-- orientation
        |-- position
        |-- velocity

Explanation of Proprioceptive State

Terminology

The definitions and data ranges in this section may change with software and hardware version. Stay tuned.

State and action

  1. State: refers to the monitoring information of the different sensors and actuators.
  2. Action: refers to the instructions sent to the hardware abstraction layer, which the controller responds to. Therefore, there can be a difference between the issued instructions and the actually executed state.

Actuators

  1. Effector: refers to the end effector, for example dexterous hands or grippers.
  2. End: refers to the 6DoF end pose on the robot flange.
  3. Joint: refers to the joints of the robot, with 34 degrees of freedom (2 DoF head, 2 DoF waist, 7 DoF each arm, 8 DoF each gripper).
  4. Robot: refers to the robot's pose in its surrounding environment. The orientation and position refer to the robot's relative pose in the odometry coordinate system.

Common fields

  1. Position: Spatial position, encoder position, angle, etc.
  2. Velocity: Speed
  3. Angular: Angular velocity
  4. Effort: Torque of the motor. Not available for now.
  5. Wrench: Six-dimensional force, force in the xyz directions, and torque. Not available for now.

Value shapes and ranges

Group                                Shape      Meaning
/timestamp                           [N]        timestamp in seconds:nanoseconds in simulation time
/state/effector/position (gripper)   [N, 2]     left [:, 0], right [:, 1], gripper open range in mm
/state/end/orientation               [N, 2, 4]  left [:, 0, :], right [:, 1, :], flange quaternion in wxyz
/state/end/position                  [N, 2, 3]  left [:, 0, :], right [:, 1, :], flange xyz in meters
/state/joint/position                [N, 34]    joint position based on joint names
/state/joint/velocity                [N, 34]    joint velocity based on joint names
/state/joint/effort                  [N, 34]    joint effort based on joint names
/state/robot/orientation             [N, 4]     quaternion in wxyz
/state/robot/position                [N, 3]     xyz position in meters, where z is always 0
/action/*/index                      [M]        action indexes mark when the control source is actually sending signals
/action/effector/position (gripper)  [N, 2]     left [:, 0], right [:, 1], gripper open range in mm
/action/end/orientation              [N, 2, 4]  same as /state/end/orientation
/action/end/position                 [N, 2, 3]  same as /state/end/position
/action/end/index                    [M_2]      same as other indexes
/action/joint/position               [N, 14]    same as /state/joint/position
/action/joint/index                  [M_4]      same as other indexes
/action/robot/velocity               [N, 2]     velocity along x axis [:, 0], yaw rate [:, 1]
/action/robot/index                  [M_5]      same as other indexes
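Two details in the table above are easy to trip over: the left/right arms are stacked along axis 1, and quaternions are stored in wxyz order while many libraries (e.g. scipy.spatial.transform) expect xyzw. A minimal sketch with numpy; `split_left_right` and `wxyz_to_xyzw` are hypothetical helpers, not part of the repo's scripts:

```python
import numpy as np

def split_left_right(end_position: np.ndarray):
    """Split an [N, 2, 3] /state/end/position array into left/right flange xyz.

    Per the table above, index 0 along axis 1 is the left arm, index 1 the right.
    """
    assert end_position.shape[1:] == (2, 3)
    return end_position[:, 0, :], end_position[:, 1, :]

def wxyz_to_xyzw(q: np.ndarray) -> np.ndarray:
    """Reorder quaternions from wxyz (as stored here) to xyzw."""
    return np.concatenate([q[..., 1:], q[..., :1]], axis=-1)
```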

License and Citation

All the data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.

@misc{contributors2025agibotdigitalworld,
  title={AgiBot DigitalWorld},
  author={Jiyao Zhang and Mingjie Pan and Baifeng Xie and Yinghao Zhao and Wenlong Gao and Guangte Xiang and Jiawei Zhang and Dong Li and Zhijun Li and Sheng Zhang and Hongwei Fan and Chengyue Zhao and Shukai Yang and Maoqing Yao and Chuanzhe Suo and Hao Dong},
  howpublished={\url{https://agibot-digitalworld.com/}},
  year={2025}
}