This is the official repository of

**PlanScope: Learning to Plan Within Decision Scope Does Matter**

Ren Xin, Jie Cheng, and Jun Ma
Set up the nuPlan dataset following the official documentation.
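The devkit locates the dataset through environment variables; a typical setup looks like the following (the paths are placeholders for your local layout):

```bash
# Assumed layout; point these at wherever you extracted nuPlan.
export NUPLAN_DATA_ROOT="$HOME/nuplan/dataset"
export NUPLAN_MAPS_ROOT="$HOME/nuplan/dataset/maps"
export NUPLAN_EXP_ROOT="$HOME/nuplan/exp"
```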
```bash
conda create -n planscope python=3.9
conda activate planscope

# install nuplan-devkit
git clone https://github.com/motional/nuplan-devkit.git && cd nuplan-devkit
pip install -e .
pip install -r ./requirements.txt

# setup planscope
cd ..
git clone https://github.com/Rex-sys-hk/PlanScope && cd planscope
sh ./script/setup_env.sh
```
Preprocess the dataset to accelerate training. It is recommended to first run a small sanity check to make sure everything is correctly set up.
```bash
python run_training.py \
  py_func=cache +training=train_pluto \
  scenario_builder=nuplan_mini \
  cache.cache_path=/nuplan/exp/sanity_check \
  cache.cleanup_cache=true \
  scenario_filter=training_scenarios_tiny \
  worker=sequential
```
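If the sanity check succeeds, the cache directory should now contain feature files. A quick way to confirm, using the cache path configured above:

```bash
# Count the files written to the sanity-check cache.
find /nuplan/exp/sanity_check -type f | wc -l
```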
Then preprocess the whole nuPlan training set (this will take some time). You may need to change `cache.cache_path` to suit your setup.
```bash
export PYTHONPATH=$PYTHONPATH:$(pwd)

python run_training.py \
  py_func=cache +training=train_pluto \
  scenario_builder=nuplan \
  cache.cache_path=/nuplan/exp/cache_pluto_1M \
  cache.cleanup_cache=true \
  scenario_filter=training_scenarios_1M \
  worker.threads_per_node=40
```
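The full cache is large, so it is worth checking afterwards that the run wrote what you expect and that the target volume has enough space:

```bash
# Report the on-disk footprint of the preprocessed cache.
du -sh /nuplan/exp/cache_pluto_1M
```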
Run the training:

```bash
sh train_scope.sh
```
- You can remove the wandb-related configurations if you prefer TensorBoard.
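For orientation, `train_scope.sh` wraps a `run_training.py` call similar to the caching commands above. A minimal sketch of such an invocation follows; the batch size, epoch count, and cache path below are assumptions, and the flags in the actual script are authoritative:

```bash
# Illustrative only; see train_scope.sh for the real configuration.
CUDA_VISIBLE_DEVICES=0 python run_training.py \
  py_func=train +training=train_pluto \
  scenario_builder=nuplan \
  cache.cache_path=/nuplan/exp/cache_pluto_1M \
  cache.use_cache_without_dataset=true \
  data_loader.params.batch_size=32 \
  lightning.trainer.params.max_epochs=25
```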
Copy the checkpoint path into `sim_scope.sh` or `sim_pluto.sh` and replace the value of `CKPT_N` to run the evaluation.
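For example, after downloading a checkpoint you might set (the filename here is hypothetical; match it to the file you actually downloaded or trained):

```bash
# In sim_scope.sh or sim_pluto.sh:
CKPT_N=checkpoints/planscope-h10-m6.ckpt
```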
| Model | Download |
| --- | --- |
| Pluto-aux-nocil-m6-baseline | OneDrive |
| PlanScope-h10-m6 | OneDrive |
| PlanScope-h20-m6 | OneDrive |
Run a simulation for a random scenario in the nuPlan-mini split:

```bash
sh ./sim_scope.sh
```
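Simulation results can then be inspected with nuBoard from the nuplan-devkit. A sketch, assuming the devkit checkout from the setup step above and a placeholder output directory:

```bash
# The simulation_path value is a placeholder for your run's output directory.
python ../nuplan-devkit/nuplan/planning/script/run_nuboard.py \
  scenario_builder=nuplan_mini \
  simulation_path="[$NUPLAN_EXP_ROOT/exp/simulation/<your_run>]"
```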
The code is being cleaned up and will be released gradually.
- improve docs
- training code
- visualization
- Scope-planner & checkpoint
- feature builder & model
- initial repo & paper
If you find this repo useful, please consider giving us a star 🌟 and citing our related paper.
```bibtex
@misc{planscope,
      title={{PlanScope:} Learning to Plan Within Decision Scope Does Matter},
      author={Ren Xin and Jie Cheng and Jun Ma},
      year={2024},
      eprint={2411.00476},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2411.00476},
}
```
This work investigates a technique for enhancing the performance of planning models in a pure learning framework. As stated in our paper, we deliberately omitted the rule-based pre- and post-processing modules of the baseline approach to mitigate the impact of hand-crafted rules. An unauthorized publication has inaccurately depicted this method's state-of-the-art (SOTA) capabilities; we clarify this here to prevent misunderstanding.
Nevertheless, the method introduced in our paper is worth trying and could serve as an add-on to improve the performance of the models you are developing, especially when the dataset is small. We are open to sharing and discussing evaluation results to foster collaborative exchange.