
Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning [Official]

This repository contains the official implementation of the paper "Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning".

Getting started

Before getting started, ensure that you have Python 3.6+ installed. Start by cloning this repository with all of its submodules.

$ git clone --recurse-submodules https://github.com/jkulhanek/robot-visual-navigation.git

For training in the simulated environment, you need to install our fork of DeepMind Lab. Please follow the instructions in ./dmlab-vn/python/pip_package/README.md.
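As a rough, untested sketch of that process, the commands below follow the upstream DeepMind Lab pip-package workflow; the Bazel target and wheel name are assumptions and may differ in our fork, so the README linked above remains the authoritative reference.

$ cd dmlab-vn
$ bazel build -c opt //python/pip_package:build_pip_package   # build the pip package (target name assumed from upstream DeepMind Lab)
$ ./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg   # assemble the wheel into a temporary directory
$ pip install /tmp/dmlab_pkg/deepmind_lab-*.whl   # install the built wheel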

Install the deeprl package.

pip install -e deep-rl-pytorch

Training scripts are in the python directory. To test if your environment is correctly configured, run ./test-train.py and ./test-env.py.
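For instance, assuming the scripts are executed from inside the python directory:

$ cd python        # the test and training scripts live here
$ ./test-train.py  # quick check that training starts correctly
$ ./test-env.py    # quick check that the environment loads correctly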

Start the training by running ./train.py <trainer>, where <trainer> is the name of the experiment you want to run. The available experiments are listed in the trainer.py file.
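A full invocation would then look like the following (run from the python directory; <trainer> is left as a placeholder for one of the experiment names defined in trainer.py):

$ cd python
$ ./train.py <trainer>   # substitute an experiment name from trainer.py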
