# MDP Playground

A python package to inject low-level dimensions of difficulty into RL environments. It provides toy environments to design and debug RL agents, as well as complex environment wrappers for Atari and Mujoco to test robustness to these dimensions in complex environments.
## Getting started

There are 4 parts to the package:

1) **Toy Environments**: The base toy environment in [`mdp_playground/envs/rl_toy_env.py`](mdp_playground/envs/rl_toy_env.py) implements the toy environment functionality, including discrete and continuous environments, and is parameterised by a `config` dict which contains all the information needed to instantiate the required MDP. Please see [`example.py`](example.py) for some simple examples of how to use the MDP environments in the package. For further details, please refer to the documentation in [`mdp_playground/envs/rl_toy_env.py`](mdp_playground/envs/rl_toy_env.py).
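As a sketch of what such a `config` dict might look like (the key names below are illustrative assumptions based on the dimensions described here; the authoritative list is in the documentation inside [`mdp_playground/envs/rl_toy_env.py`](mdp_playground/envs/rl_toy_env.py)):

```python
# Hypothetical config dict for the toy environment; the key names here are
# assumptions for illustration, see rl_toy_env.py for the real parameter names.
config = {
    "state_space_type": "discrete",  # or "continuous"
    "action_space_size": 8,
    "delay": 1,                # delay rewards by 1 timestep
    "sequence_length": 3,      # reward sequences of 3 states
    "reward_density": 0.25,    # fraction of sequences that are rewarding
    "transition_noise": 0.1,   # noise injected into the transition function P
}

# env = RLToyEnv(**config)  # hypothetical instantiation
print(sorted(config.keys()))
```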

2) **Complex Environment Wrappers**: Similar to the toy environment, these wrappers are parameterised by a `config` dict which contains all the information needed to inject the dimensions into Atari or Mujoco environments. Please see [`example.py`](example.py) for some simple examples of how to use these. The Atari wrapper is in [`mdp_playground/envs/gym_env_wrapper.py`](mdp_playground/envs/gym_env_wrapper.py) and the Mujoco wrapper is in [`mdp_playground/envs/mujoco_env_wrapper.py`](mdp_playground/envs/mujoco_env_wrapper.py).
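To make the idea of injecting a dimension concrete, here is a minimal, self-contained sketch of delaying rewards. This is only an illustration of the concept in the spirit of the wrappers above, not the package's actual API:

```python
from collections import deque

class DummyEnv:
    """Stand-in environment that always returns reward 1.0."""
    def step(self, action):
        obs, reward, done = 0, 1.0, False
        return obs, reward, done

class RewardDelayWrapper:
    """Holds back each reward for `delay` steps before releasing it."""
    def __init__(self, env, delay):
        self.env = env
        # Pre-fill with `delay` zero rewards so early steps return 0.0.
        self.buffer = deque([0.0] * delay)

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.buffer.append(reward)           # newest reward goes to the back
        delayed_reward = self.buffer.popleft()  # oldest reward comes out
        return obs, delayed_reward, done

env = RewardDelayWrapper(DummyEnv(), delay=2)
rewards = [env.step(0)[1] for _ in range(4)]
print(rewards)  # -> [0.0, 0.0, 1.0, 1.0]
```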

3) **Experiments**: Experiments are launched using [`run_experiments.py`](run_experiments.py). Config files for experiments are located inside the [`experiments`](experiments) directory. Please read the [instructions](#running-experiments) below for details.

4) **Analysis**: [`plot_experiments.ipynb`](plot_experiments.ipynb) contains code to plot the standard plots from the paper.
## Installation

We recommend using `conda` environments to manage virtual `Python` environments to run the experiments. Unfortunately, you will have to maintain 2 environments: 1 for the "older" **discrete toy** experiments and 1 for the "newer" **continuous and complex** experiments from the paper. As mentioned in Appendix P of the paper, this is because of issues with Ray, the library that we used for our baseline agents.

Please run the following commands to install for the discrete toy experiments:
```
conda create -n py36_toy_rl_disc_toy python=3.6
conda activate py36_toy_rl_disc_toy
cd mdp-playground
pip install -e .[extras_disc]
```

Please run the following commands to install for the continuous and complex experiments:
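The installation commands for the continuous and complex setup are not shown in this excerpt. As a sketch, assuming it mirrors the discrete setup (the environment name `py36_toy_rl_cont` and the extras tag `extras_cont` are assumptions; check `setup.py` for the actual extras name):

```shell
conda create -n py36_toy_rl_cont python=3.6
conda activate py36_toy_rl_cont
cd mdp-playground
pip install -e .[extras_cont]
```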
The `exp_name` is a prefix for the filenames of CSV files where stats for the experiments are saved.

Each of the command line arguments has defaults. Please refer to the documentation inside [`run_experiments.py`](run_experiments.py) for further details on the command line arguments. (Or run it with the `-h` flag to bring up help.)
The config files for experiments from the [paper](https://arxiv.org/abs/1909.07750) are in the experiments directory.<br>
The name of the file corresponding to an experiment is formed as: `<algorithm_name>_<dimension_names>.py`<br>
Some sample `algorithm_name`s are: `dqn`, `rainbow`, `a3c`, `a3c_lstm`, `ddpg`, `td3` and `sac`<br>
Some sample `dimension_name`s are: `seq_del` (for **delay** and **sequence length** varied together), `p_r_noises` (for **P** and **R noises** varied together), etc.<br>
For example, for algorithm **DQN** when varying dimensions **delay** and **sequence length**, the corresponding experiment file is [`dqn_seq_del.py`](experiments/dqn_seq_del.py).
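As an illustration, that experiment might be launched as follows (the `-c`/`-e` flag names are assumptions here; run `run_experiments.py -h` to confirm the actual arguments):

```shell
python run_experiments.py -c experiments/dqn_seq_del.py -e dqn_seq_del
```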
## Running experiments from the main paper

We list here the commands for the experiments from the main paper:

```
# For the spider plots, experiments for all the agents and dimensions will need to be run from the experiments directory, i.e., for discrete: dqn_p_r_noises.py, a3c_p_r_noises, ..., dqn_seq_del, ..., dqn_sparsity, ..., dqn_image_representations, ...
# and then follow the instructions in plot_experiments.ipynb
# For the bsuite debugging experiment, please run the bsuite sonnet dqn agent on our toy environment while varying reward density. Commit https://github.com/deepmind/bsuite/commit/5116216b62ce0005100a6036fb5397e358652530 should work fine.
```
The CSV stats files will be saved to the current directory and can be analysed in [`plot_experiments.ipynb`](plot_experiments.ipynb).