
The model card literally says nothing about how to properly use this model for a person who is less familiar with lerobot

#5
by Jaykumaran17 - opened

Hello,

I'm trying to run inference following the steps in the HF blog:

python lerobot/scripts/eval.py \ --pretrained_policy.path= lerobot/pi0

I'm facing this error:
eval.py: error: unrecognized arguments: --pretrained_policy.path = lerobot/pi0

I referred to the repo but still couldn't figure it out:

python lerobot/scripts/eval.py --policy.path = lerobot/pi0

eval.py: error: unrecognized arguments: --policy.path = lerobot/pi0

Where should I pass the HF model path, lerobot/pi0?

usage: eval.py [-h] [--config_path str] [--env str] [--env.type {aloha,pusht,xarm}] [--env.task str] [--env.fps str] [--env.features str] [--env.features_map str] [--env.episode_length str] [--env.obs_type str] [--env.render_mode str] [--env.visualization_width str] [--env.visualization_height str] [--eval str] [--eval.n_episodes str] [--eval.batch_size str] [--eval.use_async_envs str] [--policy str] [--policy.type {act,diffusion,tdmpc,vqbet}] [--policy.chunk_size str] [--policy.replace_final_stride_with_dilation str] [--policy.pre_norm str] [--policy.dim_model str] [--policy.n_heads str] [--policy.dim_feedforward str] [--policy.feedforward_activation str] [--policy.n_encoder_layers str] [--policy.n_decoder_layers str] [--policy.use_vae str] [--policy.n_vae_encoder_layers str] [--policy.temporal_ensemble_coeff str] [--policy.kl_weight str] [--policy.optimizer_lr_backbone str] [--policy.drop_n_last_frames str] [--policy.use_separate_rgb_encoder_per_camera str] [--policy.down_dims str] [--policy.kernel_size str] [--policy.n_groups str] [--policy.diffusion_step_embed_dim str] [--policy.use_film_scale_modulation str] [--policy.noise_scheduler_type str] [--policy.num_train_timesteps str] [--policy.beta_schedule str] [--policy.beta_start str] [--policy.beta_end str] [--policy.prediction_type str] [--policy.clip_sample str] [--policy.clip_sample_range str] [--policy.num_inference_steps str] [--policy.do_mask_loss_for_padding str] [--policy.scheduler_name str] [--policy.n_action_repeats str] [--policy.horizon str] [--policy.n_action_steps str] [--policy.image_encoder_hidden_dim str] [--policy.state_encoder_hidden_dim str] [--policy.latent_dim str] [--policy.q_ensemble_size str] [--policy.mlp_dim str] [--policy.discount str] [--policy.use_mpc str] [--policy.cem_iterations str] [--policy.max_std str] [--policy.min_std str] [--policy.n_gaussian_samples str] [--policy.n_pi_samples str] [--policy.uncertainty_regularizer_coeff str] [--policy.n_elites str] 
[--policy.elite_weighting_temperature str] [--policy.gaussian_mean_momentum str] [--policy.max_random_shift_ratio str] [--policy.reward_coeff str] [--policy.expectile_weight str] [--policy.value_coeff str] [--policy.consistency_coeff str] [--policy.advantage_scaling str] [--policy.pi_coeff str] [--policy.temporal_decay_coeff str] [--policy.target_model_momentum str] [--policy.n_obs_steps str] [--policy.normalization_mapping str] [--policy.input_features str] [--policy.output_features str] [--policy.n_action_pred_token str] [--policy.action_chunk_size str] [--policy.vision_backbone str] [--policy.crop_shape str] [--policy.crop_is_random str] [--policy.pretrained_backbone_weights str] [--policy.use_group_norm str] [--policy.spatial_softmax_num_keypoints str] [--policy.n_vqvae_training_steps str] [--policy.vqvae_n_embed str] [--policy.vqvae_embedding_dim str] [--policy.vqvae_enc_hidden_dim str] [--policy.gpt_block_size str] [--policy.gpt_input_dim str] [--policy.gpt_output_dim str] [--policy.gpt_n_layer str] [--policy.gpt_n_head str] [--policy.gpt_hidden_dim str] [--policy.dropout str] [--policy.mlp_hidden_dim str] [--policy.offset_loss_weight str] [--policy.primary_code_loss_weight str] [--policy.secondary_code_loss_weight str] [--policy.bet_softmax_temperature str] [--policy.sequentially_select str] [--policy.optimizer_lr str] [--policy.optimizer_betas str] [--policy.optimizer_eps str] [--policy.optimizer_weight_decay str] [--policy.optimizer_vqvae_lr str] [--policy.optimizer_vqvae_weight_decay str] [--policy.scheduler_warmup_steps str] [--output_dir str] [--job_name str] [--device str] [--use_amp str] [--seed str]
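One thing I noticed while debugging: with spaces around `=`, the shell splits the flag into three separate tokens before eval.py ever sees it, which would explain the "unrecognized arguments" message. A quick check with Python's shlex (no lerobot needed) shows what the parser actually receives:

```python
import shlex

# With spaces around '=', the shell hands eval.py three separate tokens,
# so the parser sees '--policy.path', '=', and 'lerobot/pi0' as unrelated args:
print(shlex.split("--policy.path = lerobot/pi0"))
# ['--policy.path', '=', 'lerobot/pi0']

# Without spaces it arrives as a single flag=value token:
print(shlex.split("--policy.path=lerobot/pi0"))
# ['--policy.path=lerobot/pi0']
```

So writing the flag without spaces, `--policy.path=lerobot/pi0`, should at least reach the parser as one argument. I also notice the usage output above only lists {act,diffusion,tdmpc,vqbet} under --policy.type, so maybe my installed lerobot version doesn't know about pi0 yet?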

Can you help with instructions for inference, and with how to see some visuals, as in aloha or pusht?

Also, for fine-tuning, how should the dataset be prepared?
