This repository demonstrates several MLflow capabilities, from experiment tracking to the model registry. Two demonstrations are included.
The first demonstration is adapted from the original MLflow example and shows autologging with pre-defined configuration files. To run it, refer to the README.md in the `mnist_recognizer` sub-directory.
The second demonstration sets up a local MLflow tracking server to log a PyTorch Lightning autoencoder, with some hyperparameter tuning capabilities.
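At its core, the autolog example boils down to roughly the sketch below. This is not the repository's actual code: the model, the random data, the experiment name, and the hard-coded port are placeholders, and the Lightning package name may differ depending on the installed version. The point is that autologging is enabled before training, so everything recorded during `trainer.fit()` lands on the local tracking server without explicit `log_param`/`log_metric` calls.

```python
import mlflow
import mlflow.pytorch
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl  # newer installs may use `import lightning.pytorch as pl`


class TinyAutoencoder(pl.LightningModule):
    """A stand-in autoencoder; the real example defines its own model."""

    def __init__(self, dim=28 * 28, hidden=64):
        super().__init__()
        self.save_hyperparameters()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        loss = nn.functional.mse_loss(self.decoder(self.encoder(x)), x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Random stand-in data; the real example presumably loads an actual dataset.
loader = DataLoader(TensorDataset(torch.rand(256, 28 * 28)), batch_size=32)

# Point the MLflow client at the local tracking server (port 8080 by default, see .env).
mlflow.set_tracking_uri("http://localhost:8080")
mlflow.set_experiment("pytorch-autoencoder-demo")  # experiment name is an assumption

# Autologging records hyperparameters, per-epoch metrics, and the trained model.
mlflow.pytorch.autolog()

trainer = pl.Trainer(max_epochs=3)
trainer.fit(TinyAutoencoder(), loader)
```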
To run:

- Create a conda environment and install the dependencies from `requirements.txt`:

  ```
  pip install -r requirements.txt
  ```

- In a terminal, set up the tracking server using the Makefile command:

  ```
  make init_server
  ```

- Navigate to the `pytorch_autoencoder` directory. For the autolog example, run:

  ```
  python autolog_example.py
  ```

  For the manual logging example, which also demonstrates a model schema (see the first sketch after this list), run:

  ```
  python manual_log_example.py
  ```

- To browse the results, open a web browser and navigate to the URI provided during server setup. The default port for this example is `8080` and can be changed in the `.env` file (a sketch of reading it from a script follows this list). A corresponding URI would be, for example, `localhost:8080`.
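The manual logging example referenced above centers on attaching an explicit schema (signature) to the logged model. A minimal sketch follows; the stand-in model, parameter, metric value, and artifact path are illustrative, not the repository's actual code.

```python
import mlflow
import mlflow.pytorch
import torch
from torch import nn
from mlflow.models import infer_signature

mlflow.set_tracking_uri("http://localhost:8080")    # assumes the default port from .env
mlflow.set_experiment("pytorch-autoencoder-demo")   # experiment name is an assumption

# A stand-in model; the real example uses its own autoencoder.
model = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

with mlflow.start_run():
    mlflow.log_param("hidden_size", 64)      # parameters are logged explicitly
    mlflow.log_metric("train_loss", 0.123)   # placeholder metric value

    # The signature captures the model's input/output schema so it is visible in the
    # MLflow UI and enforced when the model is served.
    sample_input = torch.rand(4, 28 * 28)
    signature = infer_signature(
        sample_input.numpy(), model(sample_input).detach().numpy()
    )
    mlflow.pytorch.log_model(model, "model", signature=signature)
```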
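As a rough sketch of how a script could pick up the port configured in `.env` rather than hard-coding it, the snippet below assumes the `python-dotenv` package and a hypothetical `MLFLOW_PORT` variable name; the repository may read the file differently.

```python
import os

import mlflow
from dotenv import load_dotenv  # python-dotenv; an assumption, not necessarily what the repo uses

load_dotenv()                            # loads KEY=value pairs from .env into the environment
port = os.getenv("MLFLOW_PORT", "8080")  # variable name is an assumption
mlflow.set_tracking_uri(f"http://localhost:{port}")
```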