ACT-Bench

ACT-Bench is a dedicated framework for quantitatively evaluating the action controllability of world models for autonomous driving. It focuses on measuring how well a world model can generate driving scenes conditioned on specified trajectories.

Overview of the ACT-Bench framework.

For more details, please refer to our paper and code repository.

Data fields

Key Value
sample_id 0
label 'straight_constant_speed/straight_constant_speed_10kmph'
context_frames ['sweeps/CAM_FRONT/n015-2018-08-02-17-16-37+0800__CAM_FRONT__1533201487512460.jpg','sweeps/CAM_FRONT/n015-2018-08-02-17-16-37+0800__CAM_FRONT__1533201487612460.jpg','samples/CAM_FRONT/n015-2018-08-02-17-16-37+0800__CAM_FRONT__1533201487762460.jpg']
instruction_trajs [[[0.0, 0.0, 0.0], [0.1545938742, 0.0015977411, -0.0206596931], [0.3063817081, 0.0019410192, -0.0114837886], [0.4497439065, 0.0027015515, -0.027025583], [0.6050902104, 0.002897561, -0.0230843033], [0.7498937661, 0.0028746845, -0.0432883387], [0.8932801666, 0.0027740429, -0.0449931452], [1.0461783792, 0.0027341864, -0.0609023298], [1.1934560378, 0.0026207325, -0.0626793795], [1.3314688069, 0.002331042, -0.083836065], [1.4882952761, 0.0021888225, -0.0833974201], [1.6343445922, 0.0021784357, -0.1030258874], [1.7774288686, 0.0019280693, -0.0995250479], [1.9282369453, 0.0017355272, -0.1257697433], [2.0736730734, 0.0013283469, -0.1290765928], [2.2130084402, 0.0011821295, -0.1515462308], [2.3587170349, 0.0011464657, -0.1489038888], [2.5127366379, 0.0010401979, -0.1685882206], [2.652663411, 0.0008351443, -0.1706014231], [2.8040034852, 0.0005308638, -0.1906429445], [2.9546874643, 0.0003028058, -0.1814105658], [3.098129893, 0.0001099507, -0.1986876182], [3.2477339776, -9.86779e-05, -0.1938415363], [3.3913945285, -0.0004952867, -0.2175208151], [3.5375306412, -0.0010135945, -0.2182340147], [3.6820731288, -0.001606249, -0.2416164848], [3.8279886236, -0.0021923962, -0.2411775227], [3.969924299, -0.0025448799, -0.2629197723], [4.1173996536, -0.0032625234, -0.263342105], [4.2608852146, -0.00372057, -0.2862758575], [4.3976864233, -0.0043610743, -0.2868744325], [4.5461465324, -0.0048756002, -0.3147401786], [4.6937375295, -0.0055456191, -0.3118187509], [4.8355738212, -0.0058713778, -0.3335816396], [4.9815369191, -0.0058726867, -0.3481201454], [5.1292536114, -0.0065586828, -0.343004249], [5.2652689873, -0.0072471006, -0.3474833218], [5.4155127607, -0.0074426697, -0.3684240186], [5.5638769338, -0.0081954509, -0.3638649342], [5.707405646, -0.0085145329, -0.37765957], [5.8565373943, -0.0093398237, -0.3754173488], [5.9987280205, -0.0099226852, -0.4002108294], [6.1446056388, -0.0107350954, -0.4018748844], [6.2867674027, -0.0115938312, -0.4275775659], [6.4344388492, -0.0125163437, -0.4219962191], [6.576710747, -0.0136388196, -0.4450902563], [6.716435109, -0.0145731886, -0.4416513665], [6.868338088, -0.0157876493, -0.4588417966], [7.0145481629, -0.0169398894, -0.4566243329], [7.1504452338, -0.0183231311, -0.4806745948], [7.3029298241, -0.0194984322, -0.4857886661], [7.4431224522, -0.0208558857, -0.5107711508], [7.5846788069, -0.0219164955, -0.5117771397], [7.7352918213, -0.0229614355, -0.5298967143], [7.8822503429, -0.0238655488, -0.5281344161], [8.0203600833, -0.0247095883, -0.5483177376], [8.1746536442, -0.0259923694, -0.5476485202], [8.3163978205, -0.0268716349, -0.5702512244], [8.4553645875, -0.0278297602, -0.5790391197], [8.5969749414, -0.0289897489, -0.6055032887]], ...
reference_traj [[0.0, 0.0, 0.0], [0.3063817081, 0.0019410192, -0.0114837886], [0.6050902104, 0.002897561, -0.0230843033], [0.8932801666, 0.0027740429, -0.0449931452], [1.1934560378, 0.0026207325, -0.0626793795], [1.4882952761, 0.0021888225, -0.0833974201], [1.7774288686, 0.0019280693, -0.0995250479], [2.0736730734, 0.0013283469, -0.1290765928], [2.3587170349, 0.0011464657, -0.1489038888], [2.652663411, 0.0008351443, -0.1706014231], [2.9546874643, 0.0003028058, -0.1814105658], [3.2477339776, -9.86779e-05, -0.1938415363], [3.5375306412, -0.0010135945, -0.2182340147], [3.8279886236, -0.0021923962, -0.2411775227], [4.1173996536, -0.0032625234, -0.263342105], [4.3976864233, -0.0043610743, -0.2868744325], [4.6937375295, -0.0055456191, -0.3118187509], [4.9815369191, -0.0058726867, -0.3481201454], [5.2652689873, -0.0072471006, -0.3474833218], [5.5638769338, -0.0081954509, -0.3638649342], [5.8565373943, -0.0093398237, -0.3754173488], [6.1446056388, -0.0107350954, -0.4018748844], [6.4344388492, -0.0125163437, -0.4219962191], [6.716435109, -0.0145731886, -0.4416513665], [7.0145481629, -0.0169398894, -0.4566243329], [7.3029298241, -0.0194984322, -0.4857886661], [7.5846788069, -0.0219164955, -0.5117771397], [7.8822503429, -0.0238655488, -0.5281344161], [8.1746536442, -0.0259923694, -0.5476485202], [8.4553645875, -0.0278297602, -0.5790391197], [8.7460786149, -0.0302324411, -0.6148562878], [9.040228578, -0.0320762238, -0.6391508753], [9.3238627154, -0.0334427094, -0.6567384988], [9.6242967538, -0.0349175272, -0.675390711], [9.9159747274, -0.0361669985, -0.7020474284], [10.2029848123, -0.0383206259, -0.7409547588], [10.485217797, -0.0402655886, -0.7784671144], [10.7852857398, -0.0415422365, -0.808403356], [11.0714450976, -0.0426406971, -0.8327939143], [11.3716683909, -0.0438444619, -0.8601736098], [11.6663477515, -0.044536854, -0.8800681964], [11.9537060995, -0.0457889104, -0.9064147281], [12.2546047035, -0.046582522, -0.932251343], [12.5430745076, -0.046996187, -0.961586981], [12.8331523584, -0.0482537294, -0.9948334053], [13.1342964502, -0.0489972987, -1.0360002826], [13.4312132278, -0.0493575167, -1.0671797101], [13.7167782768, -0.0493845646, -1.0965326758], [14.0242449794, -0.0484199283, -1.122908192], [14.3217588305, -0.0482550798, -1.1533763216]]
intrinsic [[1266.4172030466, 0.0, 816.2670197448],[0.0, 1266.4172030466, 491.5070657929],[0.0, 0.0, 1.0]]
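
For reference, here is a minimal sketch of inspecting a single record directly from the raw act_bench.jsonl file (assuming it has been downloaded locally); the expected array shapes follow the field descriptions in this card.

import json
import numpy as np

# Minimal sketch: read the first record from the raw benchmark file.
# Assumes act_bench.jsonl has already been downloaded locally.
with open("act_bench.jsonl") as f:
    record = json.loads(f.readline())

instruction_trajs = np.asarray(record["instruction_trajs"])  # expected shape: (50, 60, 3)
reference_traj = np.asarray(record["reference_traj"])        # expected shape: (50, 3)
intrinsic = np.asarray(record["intrinsic"])                  # expected shape: (3, 3)
print(record["sample_id"], record["label"])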

Labels

The label field is a string that represents one of the following high-level driving actions:

[
    'curving_to_left/curving_to_left_moderate',
    'curving_to_left/curving_to_left_sharp',
    'curving_to_left/curving_to_left_wide',
    'curving_to_right/curving_to_right_moderate',
    'curving_to_right/curving_to_right_sharp',
    'curving_to_right/curving_to_right_wide',
    'shifting_towards_left/shifting_towards_left_short',
    'shifting_towards_right/shifting_towards_right_long',
    'shifting_towards_right/shifting_towards_right_short',
    'starting/starting_20kmph',
    'starting/starting_25kmph',
    'starting/starting_30kmph',
    'stopping/stopping_15kmph',
    'stopping/stopping_20kmph',
    'stopping/stopping_25kmph',
    'stopping/stopping_30kmph',
    'stopping/stopping_35kmph',
    'stopping/stopping_40kmph',
    'stopping/stopping_45kmph',
    'straight_accelerating/straight_accelerating_15kmph',
    'straight_accelerating/straight_accelerating_20kmph',
    'straight_accelerating/straight_accelerating_25kmph',
    'straight_constant_speed/straight_constant_speed_10kmph',
    'straight_constant_speed/straight_constant_speed_15kmph',
    'straight_constant_speed/straight_constant_speed_20kmph',
    'straight_constant_speed/straight_constant_speed_25kmph',
    'straight_constant_speed/straight_constant_speed_30kmph',
    'straight_constant_speed/straight_constant_speed_35kmph',
    'straight_constant_speed/straight_constant_speed_40kmph',
    'straight_constant_speed/straight_constant_speed_45kmph',
    'straight_constant_speed/straight_constant_speed_5kmph',
    'straight_decelerating/straight_decelerating_30kmph',
    'straight_decelerating/straight_decelerating_35kmph',
    'straight_decelerating/straight_decelerating_40kmph'
]

The preprocessing function in ACT-Bench converts the labels above into the nine classes listed under "Command Classes" below, so that they can be compared with the action estimated by the ACT-Estimator.
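
As a rough illustration of this preprocessing, the sketch below maps a label to one of the nine command classes. It is not the official implementation: the 30 km/h split between high and low constant speed and the handling of the shifting_towards_* labels are assumptions; the actual mapping is defined in the ACT-Bench code repository.

# Illustrative sketch only; the real preprocessing function lives in the
# ACT-Bench repository. The speed threshold and the treatment of the
# shifting_towards_* labels are assumptions, not the official mapping.
COMMAND_CLASSES = [
    "curving_to_left", "curving_to_right",
    "straight_constant_high_speed", "straight_constant_low_speed",
    "straight_accelerating", "straight_decelerating",
    "starting", "stopping", "stopped",
]

def label_to_command(label: str, high_speed_threshold_kmph: int = 30) -> int:
    category, variant = label.split("/")
    if category == "straight_constant_speed":
        speed = int(variant.rsplit("_", 1)[-1].removesuffix("kmph"))
        name = ("straight_constant_high_speed" if speed >= high_speed_threshold_kmph
                else "straight_constant_low_speed")
        return COMMAND_CLASSES.index(name)
    if category in COMMAND_CLASSES:
        return COMMAND_CLASSES.index(category)
    raise ValueError(f"No assumed mapping for label category: {category}")

For example, under the assumed threshold, 'straight_constant_speed/straight_constant_speed_10kmph' would map to straight_constant_low_speed (command = 3).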

Context Frames

The context_frames field contains the list of image paths used as context when generating the driving scenes conditioned on the instruction_trajs. These paths are relative to the dataroot directory of the nuScenes dataset, so make sure to download the nuScenes dataset before generating the driving scenes.
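
As a concrete (hypothetical) illustration, the snippet below resolves the relative context_frames paths against a local nuScenes dataroot and loads the images; the /data/nuscenes path is a placeholder for your own installation.

from pathlib import Path
from PIL import Image

# Placeholder path: point this at your local nuScenes dataroot.
DATAROOT = Path("/data/nuscenes")

def load_context_frames(record):
    # context_frames paths are relative to the nuScenes dataroot.
    return [Image.open(DATAROOT / rel_path) for rel_path in record["context_frames"]]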

Command Classes

The command output by the ACT-Estimator represents the predicted high-level driving action. The output is a vector of logits over the following 9 classes:

LABELS = [
    "curving_to_left",              # command = 0
    "curving_to_right",             # command = 1
    "straight_constant_high_speed", # command = 2
    "straight_constant_low_speed",  # command = 3
    "straight_accelerating",        # command = 4
    "straight_decelerating",        # command = 5
    "starting",                     # command = 6
    "stopping",                     # command = 7
    "stopped",                      # command = 8
]
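
This dataset does not ship the estimator itself, but as a minimal sketch of how such a 9-way logit output could be reduced to a predicted command (reusing the LABELS list above, and assuming the logits arrive as a plain array of 9 scores):

import numpy as np

def logits_to_command(logits) -> str:
    # logits: array-like of 9 raw class scores from the ACT-Estimator.
    return LABELS[int(np.argmax(logits))]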

Instruction Trajectory and Reference Trajectory

The instruction_trajs field (shaped as (50, 60, 3)) is created by dividing the reference_traj into segments at each time step, and serves as the input that conditions the driving scene generation. The reference_traj field (shaped as (50, 3)) is the ground-truth trajectory used to evaluate the action controllability of the driving world model: if the generated video accurately follows the instruction_trajs, the trajectory recovered from the generated video should be close to the reference_traj. Although both fields cover 50 time steps, the ACT-Bench evaluation framework only uses the first 44 waypoints when evaluating action controllability.

Each waypoint is represented as a 2D vector (x, y) in a 2D Cartesian coordinate system.

  • The origin (0, 0) is defined as the initial position of the vehicle at the start of the video.
  • The x-axis corresponds to the forward direction of the vehicle, with positive values indicating forward movement.
  • The y-axis corresponds to the lateral direction of the vehicle, with positive values indicating movement to the left.

Note that this coordinate system is different from the one used in the ACT-Estimator's waypoints output. The conversion between the two coordinate systems is automatically performed by the ACT-Bench evaluation framework.
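
For intuition only, the sketch below compares a trajectory recovered from a generated video against the reference_traj over the first 44 waypoints mentioned above, using a simple average displacement error in the x-y plane. This is an illustrative metric and does not include the coordinate conversion or the official evaluation logic, both of which are handled by the ACT-Bench framework.

import numpy as np

def average_displacement_error(estimated_traj, reference_traj, num_waypoints=44):
    # Compare only the first 44 waypoints, in the vehicle-centric
    # coordinate system described above (x forward, y to the left).
    est = np.asarray(estimated_traj)[:num_waypoints, :2]
    ref = np.asarray(reference_traj)[:num_waypoints, :2]
    return float(np.linalg.norm(est - ref, axis=-1).mean())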

Authors

Here are the team members who contributed to the development of ACT-Bench:

  • Hidehisa Arai
  • Keishi Ishihara
  • Tsubasa Takahashi
  • Yu Yamaguchi

How to use

The following code snippet demonstrates how to load the ACT-Bench dataset.

from datasets import load_dataset

benchmark_dataset = load_dataset("turing-motors/ACT-Bench", data_files="act_bench.jsonl", split="train")
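
Once loaded, each row exposes the fields described above, for example:

print(len(benchmark_dataset))               # number of benchmark samples
sample = benchmark_dataset[0]
print(sample["sample_id"], sample["label"])
print(len(sample["context_frames"]))        # number of conditioning frames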

See the code repository for instructions on how to evaluate driving world models using ACT-Bench.

License

ACT-Bench is licensed under the Apache License 2.0.

Citation

If you find our work helpful, please feel free to cite us.

@misc{arai2024actbench,
      title={ACT-Bench: Towards Action Controllable World Models for Autonomous Driving},
      author={Hidehisa Arai and Keishi Ishihara and Tsubasa Takahashi and Yu Yamaguchi},
      year={2024},
      eprint={2412.05337},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.05337},
}