Play TicTacToe-play-with-bot with GumbelMuZero Policy
Model Description
This implementation applies GumbelMuZero to the TicTacToe-play-with-bot board-game environment (from the LightZero zoo) using LightZero and DI-engine.
LightZero is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details can be found in the paper LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios.
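Gumbel MuZero differs from MuZero mainly at the root of the search: instead of exploring with Dirichlet noise, it samples a small set of candidate root actions without replacement via the Gumbel-Top-k trick and compares them with sequential halving. Below is a minimal, illustrative sketch of just the sampling step (not LightZero's actual implementation); the choice of k=3 mirrors max_num_considered_actions=3 in the configuration further down.
import numpy as np

def gumbel_top_k(logits, k, rng=np.random.default_rng(0)):
    # Adding i.i.d. Gumbel(0, 1) noise to the logits and keeping the k largest
    # entries draws k distinct actions from softmax(logits) without replacement.
    gumbels = rng.gumbel(size=logits.shape)
    return np.argsort(logits + gumbels)[::-1][:k]

# TicTacToe has 9 actions; keep the 3 most promising root candidates.
prior_logits = np.zeros(9)
print(gumbel_top_k(prior_logits, k=3))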
Model Usage
Install the Dependencies
(Click for Details)
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install 'DI-engine[common_env,video]'
pip3 install LightZero
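A quick import check confirms the installation (the module names ding and lzero are the current package entry points; the version attribute name may vary by release, so the sketch falls back gracefully):
# Sanity-check that DI-engine and LightZero are importable.
import ding
import lzero

print("DI-engine:", getattr(ding, "__version__", "installed"))
print("LightZero:", getattr(lzero, "__version__", "installed"))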
Git Clone from Hugging Face and Run the Model
(Click for Details)
# running with trained model
python3 -u run.py
run.py
from lzero.agent import GumbelMuZeroAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Load the checkpoint from the files cloned from the Hugging Face repository
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = GumbelMuZeroAgent(
    env_id="TicTacToe-play-with-bot",
    exp_name="TicTacToe-play-with-bot-GumbelMuZero",
    cfg=cfg.exp_config,
    policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent's performance
agent.deploy(enable_save_replay=True)
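With enable_save_replay=True, deploy runs the evaluation episodes and writes replay files into the experiment directory. Recent LightZero releases also accept a seed and a custom replay path; a hedged variant follows (argument names assumed, verify against the deploy signature in your installed version):
# Assumed keyword arguments; check agent.deploy's signature in your version.
agent.deploy(
    enable_save_replay=True,
    replay_save_path="./replay_videos",  # hypothetical custom output directory
    seed=0,                              # fix the evaluation seed for reproducibility
)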
Run the Model Using huggingface_ding
(Click for Details)
# running with trained model
python3 -u run.py
run.py
from lzero.agent import GumbelMuZeroAgent
from huggingface_ding import pull_model_from_hub
# Pull the model from the Hugging Face Hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero")
# Instantiate the agent
agent = GumbelMuZeroAgent(
    env_id="TicTacToe-play-with-bot",
    exp_name="TicTacToe-play-with-bot-GumbelMuZero",
    cfg=cfg.exp_config,
    policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent's performance
agent.deploy(enable_save_replay=True)
Model Training
Train the Model and Push to the Hugging Face Hub
(Click for Details)
# Train your own agent
python3 -u train.py
train.py
from lzero.agent import GumbelMuZeroAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = GumbelMuZeroAgent(env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-GumbelMuZero")
# Train the agent
return_ = agent.train(step=int(10000000))
# Push the model to the Hugging Face Hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Atari",
task_name="TicTacToe-play-with-bot",
algo_name="GumbelMuZero",
github_repo_url="https://github.com/opendilab/LightZero",
github_doc_model_url=None,
github_doc_env_url=None,
installation_guide='''
pip3 install 'DI-engine[common_env,video]'
pip3 install LightZero
''',
usage_file_by_git_clone="./gumbel_muzero/tictactoe_play_with_bot_gumbel_muzero_deploy.py",
usage_file_by_huggingface_ding="./gumbel_muzero/tictactoe_play_with_bot_gumbel_muzero_download.py",
train_file="./gumbel_muzero/tictactoe_play_with_bot_gumbel_muzero.py",
repo_id="OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero",
platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)",
model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co./papers/2310.08348).",
create_repo=True
)
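Pushing with create_repo=True requires a Hugging Face account with write access to the target repository. One way to authenticate beforehand is the login helper from huggingface_hub (run once; it prompts for an access token with write permission):
from huggingface_hub import login

# Prompts for a Hugging Face access token; run before executing train.py.
login()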
Configuration
(Click for Details)
exp_config = {
'main_config': {
'exp_name': 'TicTacToe-play-with-bot-GumbelMuZero',
'seed': 0,
'env': {
'env_id': 'TicTacToe-play-with-bot',
'battle_mode': 'play_with_bot_mode',
'collector_env_num': 8,
'evaluator_env_num': 5,
'n_evaluator_episode': 5,
'manager': {
'shared_memory': False
}
},
'policy': {
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'model': {
'observation_shape': [3, 3, 3],
'action_space_size': 9,
'image_channel': 3,
'num_res_blocks': 1,
'num_channels': 16,
'fc_reward_layers': [8],
'fc_value_layers': [8],
'fc_policy_layers': [8],
'support_scale': 10,
'reward_support_size': 21,
'value_support_size': 21
},
'use_rnd_model': False,
'sampled_algo': False,
'gumbel_algo': True,
'mcts_ctree': True,
'collector_env_num': 8,
'evaluator_env_num': 5,
'env_type': 'board_games',
'action_type': 'varied_action_space',
'battle_mode': 'play_with_bot_mode',
'monitor_extra_statistics': True,
'game_segment_length': 5,
'transform2string': False,
'gray_scale': False,
'use_augmentation': False,
'augmentation': ['shift', 'intensity'],
'ignore_done': False,
'update_per_collect': 50,
'model_update_ratio': 0.1,
'batch_size': 256,
'optim_type': 'Adam',
'learning_rate': 0.003,
'target_update_freq': 100,
'target_update_freq_for_intrinsic_reward': 1000,
'weight_decay': 0.0001,
'momentum': 0.9,
'grad_clip_value': 0.5,
'n_episode': 8,
'num_simulations': 30,
'discount_factor': 1,
'td_steps': 9,
'num_unroll_steps': 3,
'reward_loss_weight': 1,
'value_loss_weight': 0.25,
'policy_loss_weight': 1,
'policy_entropy_loss_weight': 0,
'ssl_loss_weight': 0,
'lr_piecewise_constant_decay': False,
'threshold_training_steps_for_final_lr': 50000,
'manual_temperature_decay': False,
'threshold_training_steps_for_final_temperature': 100000,
'fixed_temperature_value': 0.25,
'use_ture_chance_label_in_chance_encoder': False,
'use_priority': True,
'priority_prob_alpha': 0.6,
'priority_prob_beta': 0.4,
'root_dirichlet_alpha': 0.3,
'root_noise_weight': 0.25,
'random_collect_episode_num': 0,
'eps': {
'eps_greedy_exploration_in_collect': False,
'type': 'linear',
'start': 1.0,
'end': 0.05,
'decay': 100000
},
'cfg_type': 'GumbelMuZeroPolicyDict',
'max_num_considered_actions': 3,
'reanalyze_ratio': 0.0,
'eval_freq': 2000,
'replay_buffer_size': 10000
},
'wandb_logger': {
'gradient_logger': False,
'video_logger': False,
'plot_logger': False,
'action_logger': False,
'return_logger': False
}
},
'create_config': {
'env': {
'type': 'tictactoe',
'import_names': ['zoo.board_games.tictactoe.envs.tictactoe_env']
},
'env_manager': {
'type': 'subprocess'
},
'policy': {
'type': 'gumbel_muzero',
'import_names': ['lzero.policy.gumbel_muzero']
}
}
}
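This dictionary is exactly what run.py reads from policy_config.py, so individual fields can be overridden before the agent is built. A hedged sketch (key paths taken from the dump above; the custom exp_name is hypothetical):
from ding.config import Config
from easydict import EasyDict
from lzero.agent import GumbelMuZeroAgent

cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Example overrides: search deeper and consider all 9 moves at the root.
cfg.exp_config.main_config.policy.num_simulations = 50
cfg.exp_config.main_config.policy.max_num_considered_actions = 9

agent = GumbelMuZeroAgent(
    env_id="TicTacToe-play-with-bot",
    exp_name="TicTacToe-play-with-bot-GumbelMuZero-custom",
    cfg=cfg.exp_config,
)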
Training Procedure
- Weights & Biases (wandb): monitor link
Model Information
- GitHub Repository: https://github.com/opendilab/LightZero
- Doc: Algorithm link
- Configuration: config link
- Demo: video
- Parameters total size: 91.5 KB
- Last Update Date: 2024-02-01
Environments
- Benchmark: LightZero Board Games
- Task: TicTacToe-play-with-bot
- Gym version: 0.25.1
- DI-engine version: v0.5.0
- PyTorch version: 2.0.1+cu117
- Doc: Environments link
Evaluation results
- mean_reward on TicTacToe-play-with-bot (self-reported): 0.7 +/- 0.46