CogACT-Large

CogACT is an advanced VLA (Vision-Language-Action) architecture derived from VLMs. Unlike previous works that directly repurpose a VLM for action prediction through simple action quantization, we propose a componentized VLA architecture with a specialized action module conditioned on the VLM output. CogACT-Large employs a DiT-L model as its action module.

All our code and pretrained model weights are licensed under the MIT license.

Please refer to our project page and paper for more details.

Model Summary

Uses

CogACT takes a language instruction and a single-view RGB image as input and predicts the next 16 normalized robot actions (7-DoF end-effector deltas of the form x, y, z, roll, pitch, yaw, gripper). These actions should be unnormalized and, optionally, integrated by our Adaptive Action Ensemble. Both steps depend on the dataset statistics.
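For reference, unnormalization maps the normalized outputs in [-1, 1] back to the action range of the target dataset using per-dimension statistics. Below is a minimal sketch assuming OpenVLA-style 1st/99th-percentile statistics (the q01/q99 keys are illustrative assumptions); predict_action performs this step internally when given an unnorm_key.

import numpy as np

# Hedged sketch: quantile-based unnormalization. The statistics keys are assumptions;
# the model applies this step internally when an `unnorm_key` is provided.
def unnormalize_actions(norm_actions: np.ndarray, stats: dict) -> np.ndarray:
    """Map normalized actions in [-1, 1] back to the dataset's action range."""
    low = np.asarray(stats["q01"])    # 1st-percentile action values (assumed key)
    high = np.asarray(stats["q99"])   # 99th-percentile action values (assumed key)
    return 0.5 * (norm_actions + 1.0) * (high - low) + low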

CogACT models can be used zero-shot to control robots for setups seen in the Open-X pretraining mixture. They can also be fine-tuned for new tasks and robot setups with a very small number of demonstrations. See our repository for more information.

Here is a simple inference example.

# Please clone our repo and install its dependencies
# Minimal dependencies: `torch`, `transformers`, `timm`, `tokenizers`, ...

from PIL import Image
from vla import load_vla
import torch

model = load_vla(
    'CogACT/CogACT-Large',
    load_for_training=False,
    action_model_type='DiT-L',
    future_action_window_size=15,
)
# Requires about 30 GB of memory in fp32

# (Optional) use "model.vlm = model.vlm.to(torch.bfloat16)" to load the VLM in bf16

model.to('cuda:0').eval()

image: Image.Image = <input_your_image>
prompt = "move sponge near apple"           # input your prompt

# Predict actions (7-DoF; unnormalized with the RT-1 Google robot dataset statistics, i.e. fractal20220817_data)
actions, _ = model.predict_action(
    image,
    prompt,
    unnorm_key='fractal20220817_data',  # unnorm_key of the dataset used for unnormalization
    cfg_scale=1.5,                      # CFG scales from 1.5 to 7 also perform well
    use_ddim=True,                      # use DDIM sampling
    num_ddim_steps=10,                  # number of DDIM sampling steps
)

# `actions` contains 16 steps of 7-DoF actions, with shape [16, 7]
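If you combine predictions across control steps, below is a minimal sketch of a similarity-weighted temporal ensemble in the spirit of our Adaptive Action Ensemble. The exact weighting in our repository may differ; the decay rate alpha and the cosine-similarity weighting here are illustrative assumptions.

import numpy as np

# Hedged sketch of a similarity-weighted temporal ensemble. `alpha` and the
# cosine-similarity weighting are illustrative assumptions, not the exact
# scheme used in the repository.
def ensemble_action(history: list, alpha: float = 0.1) -> np.ndarray:
    """history[k] is the action predicted k steps ago for the *current* timestep."""
    current = history[0]
    weights = []
    for k, past in enumerate(history):
        # Weight older predictions by exponential decay times their similarity
        # to the most recent prediction.
        cos = np.dot(past, current) / (np.linalg.norm(past) * np.linalg.norm(current) + 1e-8)
        weights.append(np.exp(-alpha * k) * max(cos, 0.0))
    weights = np.asarray(weights) / (np.sum(weights) + 1e-8)
    return np.sum(np.stack(history) * weights[:, None], axis=0)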

Citation

@article{li2024cogact,
  title={CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation},
  author={Li, Qixiu and Liang, Yaobo and Wang, Zeyu and Luo, Lin and Chen, Xi and Liao, Mozheng and Wei, Fangyun and Deng, Yu and Xu, Sicheng and Zhang, Yizhong and others},
  journal={arXiv preprint arXiv:2411.19650},
  year={2024}
}