Commit 9ec2fe9
Merge branch 'main' into support_evaluator_env
AngelFP committed Sep 14, 2023
1 parent b56ebf4 commit 9ec2fe9
Showing 21 changed files with 175 additions and 301 deletions.
9 changes: 7 additions & 2 deletions .github/workflows/unix.yml
@@ -1,13 +1,18 @@
 name: Unix
 
-on: [push, pull_request]
+on:
+  push:
+  pull_request:
+  # Run daily at midnight (UTC).
+  schedule:
+    - cron: '0 0 * * *'
 
 jobs:
   test:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [3.8]
+        python-version: [3.8, 3.9, '3.10', 3.11]
 
     steps:
       - uses: actions/checkout@v2
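The schedule entry added to the workflow uses standard 5-field cron syntax. As a quick illustration (plain Python, nothing optimas-specific), the fields of that expression break down as:

```python
# Decode the workflow's cron expression field by field.
# Standard 5-field cron order: minute, hour, day-of-month, month, day-of-week.
CRON = "0 0 * * *"
FIELD_NAMES = ["minute", "hour", "day_of_month", "month", "day_of_week"]
fields = dict(zip(FIELD_NAMES, CRON.split()))
# minute=0 and hour=0 with wildcards elsewhere means: every day at 00:00 UTC.
print(fields)
```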
98 changes: 0 additions & 98 deletions Analyze_optimization_results.ipynb

This file was deleted.

80 changes: 74 additions & 6 deletions README.md
@@ -1,13 +1,81 @@
-<p align="center">
-  <img width="450" src="https://user-images.githubusercontent.com/20479420/219680583-34ac9525-7715-4e2a-b4fe-74848e9f59b2.png" alt="optimas logo"/>
-</p>
-<!-- <hr/> -->
-
-# Optimization at scale, powered by [libEnsemble](https://libensemble.readthedocs.io/)
[![PyPI](https://img.shields.io/pypi/v/optimas)](https://pypi.org/project/optimas/)
[![tests badge](https://github.com/optimas-org/optimas/actions/workflows/unix.yml/badge.svg)](https://github.com/optimas-org/optimas/actions)
[![Documentation Status](https://readthedocs.org/projects/optimas/badge/?version=latest)](https://optimas.readthedocs.io/en/latest/?badge=latest)
[![DOI](https://zenodo.org/badge/287560975.svg)](https://zenodo.org/badge/latestdoi/287560975)
[![License](https://img.shields.io/pypi/l/optimas.svg)](license.txt)

<!-- PROJECT LOGO -->
<br />
<div align="center">
<a href="https://github.com/othneildrew/Best-README-Template">
<img src="https://user-images.githubusercontent.com/20479420/219680583-34ac9525-7715-4e2a-b4fe-74848e9f59b2.png" alt="optimas logo" width="350">
</a>

<h3 align="center">
Optimization at scale, powered by
<a href="https://libensemble.readthedocs.io/"><strong>libEnsemble</strong></a>
</h3>

<p align="center">
<a href="https://optimas.readthedocs.io/"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://optimas.readthedocs.io/en/latest/examples/index.html">View Examples</a>
·
<a href="https://optimas-group.slack.com/">Support</a>
·
<a href="https://optimas.readthedocs.io/en/latest/api/index.html">API Reference</a>
</p>
</div>

Optimas is a Python library for scalable optimization on massively-parallel supercomputers. See the [documentation](https://optimas.readthedocs.io/) for installation instructions, tutorials, and more information.

## Installation
From PyPI
```sh
pip install optimas
```
From GitHub
```sh
pip install git+https://github.com/optimas-org/optimas.git
```
Make sure `mpi4py` is available in your environment prior to installing optimas (see [here](https://optimas.readthedocs.io/en/latest/user_guide/installation_local.html) for more details).
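A quick way to verify that requirement is to probe for `mpi4py` before installing. A minimal sketch (the helper name is ours, not part of optimas):

```python
import importlib.util

def mpi4py_available() -> bool:
    """Return True if mpi4py can be imported in the current environment."""
    return importlib.util.find_spec("mpi4py") is not None

if not mpi4py_available():
    print("mpi4py not found: install it (e.g. `conda install -c conda-forge mpi4py`) "
          "before running `pip install optimas`")
```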

Optimas is regularly used and tested on large distributed HPC systems.
We have prepared installation instructions for
[JUWELS (JSC)](https://optimas.readthedocs.io/en/latest/user_guide/installation_juwels.html),
[Maxwell (DESY)](https://optimas.readthedocs.io/en/latest/user_guide/installation_maxwell.html) and
[Perlmutter (NERSC)](https://optimas.readthedocs.io/en/latest/user_guide/installation_perlmutter.html).

## Citing optimas
If your usage of `optimas` leads to a scientific publication, please consider citing the original [paper](https://link.aps.org/doi/10.1103/PhysRevAccelBeams.26.084601):
```bibtex
@article{PhysRevAccelBeams.26.084601,
  title = {Bayesian optimization of laser-plasma accelerators assisted by reduced physical models},
  author = {Ferran Pousa, A. and Jalas, S. and Kirchen, M. and Martinez de la Ossa, A. and Th\'evenet, M. and Hudson, S. and Larson, J. and Huebl, A. and Vay, J.-L. and Lehe, R.},
  journal = {Phys. Rev. Accel. Beams},
  volume = {26},
  issue = {8},
  pages = {084601},
  numpages = {9},
  year = {2023},
  month = {Aug},
  publisher = {American Physical Society},
  doi = {10.1103/PhysRevAccelBeams.26.084601},
  url = {https://link.aps.org/doi/10.1103/PhysRevAccelBeams.26.084601}
}
```
and libEnsemble:
```bibtex
@article{Hudson2022,
  title = {{libEnsemble}: A Library to Coordinate the Concurrent
           Evaluation of Dynamic Ensembles of Calculations},
  author = {Stephen Hudson and Jeffrey Larson and John-Luke Navarro and Stefan M. Wild},
  journal = {{IEEE} Transactions on Parallel and Distributed Systems},
  volume = {33},
  number = {4},
  pages = {977--988},
  year = {2022},
  doi = {10.1109/tpds.2021.3082815}
}
```
2 changes: 1 addition & 1 deletion doc/source/conf.py
@@ -80,7 +80,7 @@
     },
     {
         "name": "Slack",
-        "url": "https://optimas.slack.com/",
+        "url": "https://optimas-group.slack.com/",
         "icon": "fa-brands fa-slack",
     },
 ],
2 changes: 1 addition & 1 deletion doc/source/index.rst
@@ -12,7 +12,7 @@ parallel optimization, from a typical laptop to exascale HPC systems. It is
 built on top of
 `libEnsemble <https://libensemble.readthedocs.io/>`_.
 
-.. grid:: 3
+.. grid:: 1 1 3 3
    :gutter: 2
 
    .. grid-item-card:: User guide
4 changes: 2 additions & 2 deletions doc/source/user_guide/basic_usage/analyze_output.rst
@@ -14,7 +14,7 @@ In every run, the following log files are generated:
 
 - ``libE-stats.txt``: log indicating the worker, start time, end time, etc. of
   each evaluation.
-- ``ensemble.log``: log of ``libEnsemble`` containg the main events of
+- ``ensemble.log``: log of ``libEnsemble`` containing the main events of
   the run. This includes the commands with which each evaluation is launched.
 - ``libE_history_for_run_starting_<start_time>_after_sim_<last_simulation_number>.npy``:
   numpy file that contains the
@@ -57,7 +57,7 @@ generated. This is the case, for example, of the
 ``AxClient`` with the surrogate model used for Bayesian optimization.
 
 Generators that have this capability can also save the internal model
-model to file with a certain periodicity (set by the ``model_save_period``
+to file with a certain periodicity (set by the ``model_save_period``
 attribute). By default, these models will be saved in an
 ``exploration/model_history`` directory.

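The periodicity logic described above can be pictured with a few lines of code; `GeneratorSketch` and its methods are illustrative stand-ins, not the actual optimas generator API:

```python
class GeneratorSketch:
    """Toy generator that records a model save every `model_save_period` evaluations."""

    def __init__(self, model_save_period: int,
                 save_dir: str = "exploration/model_history"):
        self.model_save_period = model_save_period
        self.save_dir = save_dir  # where a real generator would write model files
        self.n_evaluations = 0
        self.saved_at = []  # evaluation counts at which a save was triggered

    def tell(self, result) -> None:
        """Incorporate one finished evaluation; save the model when due."""
        self.n_evaluations += 1
        if self.n_evaluations % self.model_save_period == 0:
            self.save_model()

    def save_model(self) -> None:
        # A real generator would serialize its surrogate model to self.save_dir here.
        self.saved_at.append(self.n_evaluations)

gen = GeneratorSketch(model_save_period=3)
for result in range(7):
    gen.tell(result)
print(gen.saved_at)  # → [3, 6]
```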
38 changes: 38 additions & 0 deletions doc/source/user_guide/citation.rst
@@ -0,0 +1,38 @@
Citing optimas
==============

If your usage of ``optimas`` leads to a scientific publication, please consider
citing the original `paper <https://link.aps.org/doi/10.1103/PhysRevAccelBeams.26.084601>`_:

.. code-block:: bibtex

   @article{PhysRevAccelBeams.26.084601,
     title = {Bayesian optimization of laser-plasma accelerators assisted by reduced physical models},
     author = {Ferran Pousa, A. and Jalas, S. and Kirchen, M. and Martinez de la Ossa, A. and Th\'evenet, M. and Hudson, S. and Larson, J. and Huebl, A. and Vay, J.-L. and Lehe, R.},
     journal = {Phys. Rev. Accel. Beams},
     volume = {26},
     issue = {8},
     pages = {084601},
     numpages = {9},
     year = {2023},
     month = {Aug},
     publisher = {American Physical Society},
     doi = {10.1103/PhysRevAccelBeams.26.084601},
     url = {https://link.aps.org/doi/10.1103/PhysRevAccelBeams.26.084601}
   }

and libEnsemble:

.. code-block:: bibtex

   @article{Hudson2022,
     title = {{libEnsemble}: A Library to Coordinate the Concurrent
              Evaluation of Dynamic Ensembles of Calculations},
     author = {Stephen Hudson and Jeffrey Larson and John-Luke Navarro and Stefan M. Wild},
     journal = {{IEEE} Transactions on Parallel and Distributed Systems},
     volume = {33},
     number = {4},
     pages = {977--988},
     year = {2022},
     doi = {10.1109/tpds.2021.3082815}
   }
9 changes: 9 additions & 0 deletions doc/source/user_guide/dependencies.rst
@@ -0,0 +1,9 @@
Dependencies
============

Optimas relies on the following packages:

* `mpi4py <https://pypi.org/project/mpi4py/>`_ - Python bindings for MPI. Required for launching parallel simulations.
* `libEnsemble <https://pypi.org/project/libensemble/>`_ - The backbone of optimas, orchestrates the concurrent evaluation of simulations, the resource detection and allocation, and the communication between simulations and manager.
* `jinja2 <https://pypi.org/project/jinja2/>`_ - Needed to generate simulation scripts from templates.
* `Ax <https://pypi.org/project/ax-platform/>`_ - Algorithms for Bayesian optimization.
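To make the jinja2 role concrete: optimas fills a template simulation script with each trial's parameter values. A stdlib-only sketch of the same fill-in-the-placeholders idea (using `string.Template` instead of jinja2; the parameter names are illustrative, not from optimas):

```python
from string import Template

# A template simulation script with placeholders for the varying parameters.
sim_template = Template(
    "laser_power = $laser_power\n"
    "plasma_density = $plasma_density\n"
)

def render_sim_script(params: dict) -> str:
    """Fill the template with one trial's parameter values."""
    return sim_template.substitute(params)

script = render_sim_script({"laser_power": 1.5, "plasma_density": 1e18})
print(script)
# laser_power = 1.5
# plasma_density = 1e+18
```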
16 changes: 13 additions & 3 deletions doc/source/user_guide/index.rst
@@ -4,15 +4,25 @@ User guide
 ==========
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
+   :caption: Installation
 
-   installation
+   dependencies
+   installation_local
+   installation_maxwell
+   installation_juwels
+   installation_perlmutter
 
 .. toctree::
    :maxdepth: 2
    :caption: Basic usage
 
    basic_usage/basic_setup
    basic_usage/running_with_simulations
-   basic_usage/analyze_output
+   basic_usage/analyze_output
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Citation
+
+   citation
30 changes: 0 additions & 30 deletions doc/source/user_guide/installation.rst

This file was deleted.

24 changes: 0 additions & 24 deletions doc/source/user_guide/installation_local.rst
@@ -3,30 +3,6 @@ Installation on a local computer
 
 The recommended approach is to install optimas in a ``conda`` environment.
 
-Install basic dependencies
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. code::
-
-   conda install numpy pandas
-
-Install PyTorch
-~~~~~~~~~~~~~~~
-
-If your computer does not feature a CUDA-capable GPU, install PyTorch for CPU:
-
-.. code::
-
-   conda install pytorch cpuonly -c pytorch
-
-If you have a CUDA-capable GPU and want to make it available to optimas,
-install PyTorch with:
-
-.. code::
-
-   conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
-
 Install ``mpi4py``
 ~~~~~~~~~~~~~~~~~~
 If your system already has an MPI implementation installed, install ``mpi4py``