Hello,
Do you think it is possible to run the rllib integration in Docker? My idea of the process so far has been to pass the X11 display through to the container:
-e SDL_VIDEODRIVER=x11 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
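Roughly something like this (just a sketch; the image name "rllib-integration:latest" is a placeholder, not something the repo ships):

```bash
# Rough sketch: forward the host X11 display into the container.
# "rllib-integration:latest" is a placeholder image name.
docker run -it --rm \
  --gpus all \
  -e SDL_VIDEODRIVER=x11 \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  rllib-integration:latest
```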
Does anyone have experience with a Docker setup?
Kind regards and thanks for sharing the rllib-integration.
Hi @ll7! I think this is totally doable. Maybe the only missing step is adding the CARLA Python API to the PYTHONPATH.
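Inside the container that would look roughly like this (the install path and egg file name are assumptions and depend on the CARLA version baked into your image):

```bash
# Assumed CARLA install location and egg name; adjust to your container image.
export PYTHONPATH=$PYTHONPATH:/home/carla/PythonAPI/carla/dist/carla-0.9.13-py3.7-linux-x86_64.egg
export PYTHONPATH=$PYTHONPATH:/home/carla/PythonAPI/carla
```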
Enabling a GUI? Is this necessary, or does everything run headless?
Actually, there is a parameter for that in https://github.com/carla-simulator/rllib-integration/blob/main/dqn_example/dqn_config.yaml#L50. For testing purposes, it may be useful to enable the GUI.
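If you want the CARLA server itself to stay headless inside the container, one common approach (separate from that yaml parameter, and depending on the CARLA version you use) is to launch it without a window:

```bash
# CARLA >= 0.9.12: render off-screen, no display needed.
./CarlaUE4.sh -RenderOffScreen

# Older versions: an empty DISPLAY with -opengl was the usual headless workaround.
DISPLAY= ./CarlaUE4.sh -opengl
```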
Do I need to start the container with a port mapping for TensorBoard? Does the PyTorch DQN example use TensorBoard?
Yes, the dqn_train.py example automatically opens a TensorBoard server on port 6006.
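So publishing that port when starting the container should be enough to reach it from the host, for example (again, the image name is just a placeholder):

```bash
# Publish TensorBoard's port so it is reachable at http://localhost:6006 on the host.
docker run -it --rm -p 6006:6006 rllib-integration:latest
```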