
Multi-Agent Reinforcement Learning via Distributed MPC as a Function Approximator


This repository contains the source code used to produce the results reported in Multi-Agent Reinforcement Learning via Distributed MPC as a Function Approximator, submitted to Automatica.

In this work, we propose the use of a distributed model predictive control (MPC) scheme as a function approximator for multi-agent reinforcement learning, considering networks of linear dynamical systems.
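To make the setting concrete, below is a minimal sketch of such a network: two linear agents whose dynamics are coupled through each other's states. This is not code from this repository, and all matrices are illustrative placeholders.

import numpy as np

# Two coupled linear agents:
# x_i(k+1) = A_ii x_i(k) + B_i u_i(k) + sum over neighbours j of A_ij x_j(k)
A_ii = np.array([[1.0, 0.1], [0.0, 1.0]])    # local dynamics (illustrative)
A_ij = 0.05 * np.eye(2)                      # coupling to the neighbour's state
B_i = np.array([[0.0], [0.1]])               # local input matrix

x = [np.zeros((2, 1)), np.ones((2, 1))]      # current agent states
u = [np.array([[0.5]]), np.array([[-0.5]])]  # local inputs (e.g., from each agent's MPC)

# One step of the coupled network dynamics
x_next = [
    A_ii @ x[0] + B_i @ u[0] + A_ij @ x[1],
    A_ii @ x[1] + B_i @ u[1] + A_ij @ x[0],
]

In the proposed scheme, each agent solves a local MPC problem, and the parametrization of the distributed MPC scheme serves as the function approximation that the reinforcement learning algorithm tunes.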

If you find the paper or this repository helpful in your research, please consider citing the paper:

@article{mallick2023multi,
  title = {Multi-Agent Reinforcement Learning via Distributed MPC as a Function Approximator},
  author = {Mallick, Samuel and Airaldi, Filippo and De Schutter, Bart and Dabiri, Azita},
  journal = {arXiv preprint arXiv:2312.05166},
  year = {2023},
  url = {https://arxiv.org/abs/2312.05166}
}

Installation

The code was developed with Python 3.9. To use it, first clone the repository

git clone https://github.com/SamuelMallick/dmpcrl-concept.git
cd dmpcrl-concept

and then install the required packages, for example by running

pip install -r requirements.txt
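Optionally, to keep dependencies isolated, you can first create and activate a virtual environment using standard Python tooling (this is not a requirement of the repository):

python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate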

Structure

The repository is structured as follows:

  • data contains the .pkl data files generated for the paper.
  • plotting contains the scripts used to generate the figures in the paper.
  • power_system contains all files relating to the power system example in the paper; q_learning_power.py runs the MARL training algorithm for this example.
  • academic_example.py runs the MARL training algorithm for the academic example in the paper. Example invocations of both training scripts are shown below.
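The training scripts can be run directly as Python scripts. These are plausible invocations assuming default settings; consult each script for its actual options and any configuration it expects:

python academic_example.py

for the academic example, and

cd power_system
python q_learning_power.py

for the power system example.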

License

The repository is provided under the GNU General Public License. See the LICENSE file included with this repository.


Author

Samuel Mallick, PhD Candidate [[email protected] | [email protected]]

Delft Center for Systems and Control, Delft University of Technology

This research is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 101018826 - CLariNet).

Copyright (c) 2023 Samuel Mallick.

Copyright notice: Technische Universiteit Delft hereby disclaims all copyright interest in the program “dmpcrl-concept” (Multi-Agent Reinforcement Learning via Distributed MPC as a Function Approximator) written by the Author(s). Prof. Dr. Ir. Fred van Keulen, Dean of 3mE.
