
Commit f09c12a

[Tutorial] New tutorial on creating a scenario and training it (#151)
* tuto
* tuto
1 parent 6b81c10 commit f09c12a

3 files changed: +2935 −0 lines


README.md

Lines changed: 2 additions & 0 deletions
@@ -8,6 +8,7 @@
[![GitHub license](https://img.shields.io/badge/license-GPLv3.0-blue.svg)](https://github.com/proroklab/VectorizedMultiAgentSimulator/blob/main/LICENSE)
[![arXiv](https://img.shields.io/badge/arXiv-2207.03530-b31b1b.svg)](https://arxiv.org/abs/2207.03530)
[![Discord Shield](https://dcbadge.vercel.app/api/server/dg8txxDW5t?style=flat)](https://discord.gg/dg8txxDW5t)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/proroklab/VectorizedMultiAgentSimulator/blob/main/notebooks/Simulation_and_training_in_VMAS_and_BenchMARL.ipynb)

<p align="center">
<img src="https://github.com/matteobettini/vmas-media/blob/main/media/VMAS_scenarios.gif?raw=true" alt="drawing"/>
@@ -94,6 +95,7 @@ Watch the talk at DARS 2022 about VMAS.
### Notebooks
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/proroklab/VectorizedMultiAgentSimulator/blob/main/notebooks/VMAS_Use_vmas_environment.ipynb) &ensp; **Using a VMAS environment**.
Here is a simple notebook that you can run to create, step and render any scenario in VMAS. It reproduces the `use_vmas_env.py` script in the `examples` folder.
+- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/proroklab/VectorizedMultiAgentSimulator/blob/main/notebooks/Simulation_and_training_in_VMAS_and_BenchMARL.ipynb) &ensp; **Creating a VMAS scenario and training it in [BenchMARL](https://github.com/facebookresearch/BenchMARL)**. We will create a scenario where multiple robots with different embodiments need to navigate to their goals while avoiding each other (as well as obstacles) and train it using MAPPO and MLP/GNN policies.
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/facebookresearch/BenchMARL/blob/main/notebooks/run.ipynb) &ensp; **Training VMAS in BenchMARL (suggested)**. In this notebook, we show how to use VMAS in [BenchMARL](https://github.com/facebookresearch/BenchMARL), TorchRL's MARL training library.
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/rl/blob/gh-pages/_downloads/a977047786179278d12b52546e1c0da8/multiagent_ppo.ipynb) &ensp; **Training VMAS in TorchRL**. In this notebook, [available in the TorchRL docs](https://pytorch.org/rl/stable/tutorials/multiagent_ppo.html#), we show how to use any VMAS scenario in TorchRL. It will guide you through the full pipeline needed to train agents using MAPPO/IPPO.
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/rl/blob/gh-pages/_downloads/d30bb6552cc07dec0f1da33382d3fa02/multiagent_competitive_ddpg.py) &ensp; **Training competitive VMAS MPE in TorchRL**. In this notebook, [available in the TorchRL docs](https://pytorch.org/rl/stable/tutorials/multiagent_competitive_ddpg.html), we show how to solve a Competitive Multi-Agent Reinforcement Learning (MARL) problem using MADDPG/IDDPG.
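
For context, the "Using a VMAS environment" notebook reproduces the create/step/render loop of `use_vmas_env.py`. The sketch below illustrates that kind of loop only; the scenario name, step count, and use of random actions are illustrative assumptions and are not taken from this commit.

```python
# Minimal sketch of a VMAS create/step/render loop (assumptions noted above).
import vmas

env = vmas.make_env(
    scenario="navigation",   # any registered VMAS scenario name
    num_envs=32,             # number of vectorized environments stepped in parallel
    device="cpu",            # "cuda" runs the whole batch on the GPU
    continuous_actions=True,
)
obs = env.reset()
for _ in range(100):
    # One random action per agent, batched over all vectorized environments.
    actions = [env.get_random_action(agent) for agent in env.agents]
    obs, rewards, dones, info = env.step(actions)
    frame = env.render(mode="rgb_array")  # or mode="human" for an interactive window
```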

docs/source/usage/notebooks.rst

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ Notebooks
In the following you can find a list of :colab:`null` Google Colab notebooks to help you learn how to use VMAS:

- :colab:`null` `Using a VMAS environment <https://colab.research.google.com/github/proroklab/VectorizedMultiAgentSimulator/blob/main/notebooks/VMAS_Use_vmas_environment.ipynb>`_. Here is a simple notebook that you can run to create, step and render any scenario in VMAS. It reproduces the ``use_vmas_env.py`` script in the ``examples`` folder
+- :colab:`null` `Creating a VMAS scenario and training it in BenchMARL <https://colab.research.google.com/github/proroklab/VectorizedMultiAgentSimulator/blob/main/notebooks/Simulation_and_training_in_VMAS_and_BenchMARL.ipynb>`_. We will create a scenario where multiple robots with different embodiments need to navigate to their goals while avoiding each other (as well as obstacles) and train it using MAPPO and MLP/GNN policies.
- :colab:`null` `Training VMAS in BenchMARL (suggested) <https://colab.research.google.com/github/facebookresearch/BenchMARL/blob/main/notebooks/run.ipynb>`_. In this notebook, we show how to use VMAS in BenchMARL, TorchRL's MARL training library
- :colab:`null` `Training VMAS in TorchRL <https://colab.research.google.com/github/pytorch/rl/blob/gh-pages/_downloads/a977047786179278d12b52546e1c0da8/multiagent_ppo.ipynb>`_. In this notebook, `available in the TorchRL docs <https://pytorch.org/rl/stable/tutorials/multiagent_ppo.html#>`__, we show how to use any VMAS scenario in TorchRL. It will guide you through the full pipeline needed to train agents using MAPPO/IPPO.
- :colab:`null` `Training competitive VMAS MPE in TorchRL <https://colab.research.google.com/github/pytorch/rl/blob/gh-pages/_downloads/d30bb6552cc07dec0f1da33382d3fa02/multiagent_competitive_ddpg.py>`_. In this notebook, `available in the TorchRL docs <https://pytorch.org/rl/stable/tutorials/multiagent_competitive_ddpg.html>`__, we show how to solve a Competitive Multi-Agent Reinforcement Learning (MARL) problem using MADDPG/IDDPG.
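
For context on the "Creating a VMAS scenario and training it in BenchMARL" entry added above, the sketch below shows BenchMARL's standard ``Experiment`` API with MAPPO and an MLP policy. It is only an illustration: the built-in ``NAVIGATION`` task stands in for the custom scenario built in the notebook, and the seed and YAML defaults are placeholder choices.

```python
# Minimal sketch of training a VMAS task in BenchMARL with MAPPO and an MLP
# policy (the notebook's custom scenario is replaced by a built-in task here).
from benchmarl.algorithms import MappoConfig
from benchmarl.environments import VmasTask
from benchmarl.experiment import Experiment, ExperimentConfig
from benchmarl.models.mlp import MlpConfig

experiment = Experiment(
    task=VmasTask.NAVIGATION.get_from_yaml(),       # a VMAS task wrapped for BenchMARL
    algorithm_config=MappoConfig.get_from_yaml(),   # MAPPO hyperparameters
    model_config=MlpConfig.get_from_yaml(),         # MLP actor (a GNN config could be swapped in)
    critic_model_config=MlpConfig.get_from_yaml(),  # MLP centralized critic
    seed=0,
    config=ExperimentConfig.get_from_yaml(),        # training-loop settings
)
experiment.run()
```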
