SplatGym

Open Source Neural Simulator for Robot Learning.


Introduction


SplatGym is a simulator for training free-space navigation policies with reinforcement learning in Gaussian splat environments.

It has the following main features:

  • Novel View Synthesis - Generate photo-realistic images of the scene from arbitrary camera poses.
  • Collision Detection - Detect collisions between the camera and the underlying scene objects.


Installation

Docker Container

To run a pre-built image:

docker run --gpus all \
            -u $(id -u):$(id -g) \
            -v /folder/of/your/data:/workspace/ \
            -v /home/$USER/.cache/:/home/user/.cache/ \
            -p 7007:7007 \
            --rm \
            -it \
            --shm-size=12gb \
            ghcr.io/splatlearn/splatgym:main

To build image from scratch, run the following command at the root of this repository:

docker build .

Native Installation

First, install nerfstudio and its dependencies by following this guide. This project uses nerfstudio 1.1.3. Then run the following command:

> sudo apt-get install swig
> pip install -r src/requirements.txt

Refer to collision_detector to build the collision detector for your architecture. In your build folder, you should now have a pybind_collision_detector.cpython-<version>-<arch>-<os>.so file.

Now set PYTHONPATH so that Python can find all SplatGym modules:

> export PYTHONPATH="<path to collision_detector>/build:<path to SplatGym>/src":$PYTHONPATH
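To sanity-check the setup, you can ask Python whether it can resolve both modules. This is a generic sketch: `check_modules` is a hypothetical helper, and the module names are taken from the steps above.

```python
import importlib.util


def check_modules(names):
    """Return a dict mapping each module name to whether Python can find it."""
    found = {}
    for name in names:
        try:
            found[name] = importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:
            # Raised when a parent package (e.g. nerfgym) is missing entirely.
            found[name] = False
    return found


if __name__ == "__main__":
    print(check_modules(["pybind_collision_detector", "nerfgym.NeRFEnv"]))
```

If either entry reports False, re-check the PYTHONPATH export above.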

Usage

Refer to examples/poster_env for an end-to-end example using demo data.

To use the gym environment, a scene must first be trained using nerfstudio:

> ns-train splatfacto --data <data_folder>

A point cloud must also be obtained for the scene:

> ns-train nerfacto --data <data_folder> --pipeline.model.predict-normals=True
> ns-export pointcloud --load-config <nerfacto config> --output-dir <output_dir> 

Then the gym environment can be constructed from python:

from pathlib import Path

from nerfgym.NeRFEnv import NeRFEnv

config_path = Path("...")
pcd_path = Path("...")
nerf_env = NeRFEnv(config_path, pcd_path)

The environment follows the Gymnasium Env API and can be used seamlessly with existing reinforcement learning libraries.

A training script is provided that uses PPO to solve the free-space navigation problem. It can be invoked as follows:

> python3 src/training.py train --num_training_steps 300000 --env_id free

Citation

You can find a paper writeup of the framework on arXiv.

If you use this library or find the documentation useful for your research, please consider citing:

@INPROCEEDINGS{10817965,
  author={Zhou, Liyou and Sinavski, Oleg and Polydoros, Athanasios},
  booktitle={2024 Eighth IEEE International Conference on Robotic Computing (IRC)}, 
  title={Robotic Learning in your Backyard: A Neural Simulator from Open Source Components}, 
  year={2024},
  volume={},
  number={},
  pages={131-138},
  keywords={Training;Visualization;Solid modeling;Three-dimensional displays;Navigation;Virtual environments;Reinforcement learning;Collision avoidance;Software tools;Robots;Visual Navigation},
  doi={10.1109/IRC63610.2024.00031}}
