Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning [Official]
This repository contains the official implementation of the paper *Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning*.

Before getting started, ensure that you have Python 3.6+ installed. Start by cloning this repository with all submodules:
```shell
$ git clone --recurse-submodules https://github.com/jkulhanek/robot-visual-navigation.git
```
For training in the simulator environment, you have to install our fork of DeepMind Lab. Please follow the instructions in `./dmlab-vn/python/pip_package/README.md`.
Install the `deeprl` package:

```shell
$ pip install -e deep-rl-pytorch
```
Training scripts are located in the `python` directory. To verify that your environment is configured correctly, run `./test-train.py` and `./test-env.py`.
Start the training by running `./train.py <trainer>`, where `<trainer>` is the name of the experiment you want to run. The available experiments are listed in the `trainer.py` file.
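To give a rough idea of how experiments are typically selected by name, here is a minimal, hypothetical sketch of a trainer registry. It is **not** the actual contents of `trainer.py` in this repository; the names `TRAINERS`, `register_trainer`, and `build_trainer` are illustrative only, so consult `trainer.py` for the real experiment definitions.

```python
# Hypothetical sketch: a name-to-trainer registry, as commonly used to back
# a command like `./train.py <trainer>`. All names here are illustrative.

TRAINERS = {}

def register_trainer(name):
    """Decorator that registers a trainer class under an experiment name."""
    def wrapper(cls):
        TRAINERS[name] = cls
        return cls
    return wrapper

@register_trainer("example-experiment")
class ExampleTrainer:
    def run(self):
        # A real trainer would build the environment and run the training loop.
        return "running example-experiment"

def build_trainer(name):
    """Look up an experiment by name, failing with the list of valid names."""
    if name not in TRAINERS:
        raise ValueError(
            f"Unknown trainer '{name}'; available: {sorted(TRAINERS)}"
        )
    return TRAINERS[name]()
```

With a registry like this, `./train.py <trainer>` would simply call `build_trainer(sys.argv[1]).run()`, and an unknown name would print the list of registered experiments.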