# MLOS Autotuning Template Repo

This repo is a barebones template for a developer environment, with some basic scripts and configs to help with autotuning a new target using MLOS.

## Getting Started

1. Fork this repository, or "Use Template" (preferred).

2. Open the new repository in VSCode.

3. Reopen in a devcontainer.

   For additional dev environment details, see the devcontainer README.md.

4. Reopen the workspace.

5. Add some configs and scripts to the `config/` tree.

6. Activate the conda environment in the integrated terminal:

   ```sh
   conda activate mlos
   ```

7. Log in to the Azure CLI:

   ```sh
   az login
   ```

8. Stash some relevant auth info (e.g., subscription ID, resource group, etc.):

   ```sh
   ./MLOS/scripts/generate-azure-credentials-config.sh > global_azure_config.json
   ```
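   The exact contents and key names of `global_azure_config.json` are whatever the upstream script emits for your Azure account; purely as an illustration, expect a small JSON object with fields along these lines (the keys shown here are hypothetical, not the script's exact schema):

   ```jsonc
   {
     // Illustrative only: the real key names come from the output of
     // generate-azure-credentials-config.sh for your own account.
     "subscription": "00000000-0000-0000-0000-000000000000",
     "resourceGroup": "my-autotuning-rg",
     "tenant": "11111111-1111-1111-1111-111111111111"
   }
   ```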
9. Run the `mlos_bench` tool.

   For instance, to run the Redis example from the upstream MLOS repo (which is pulled locally automatically by the devcontainer startup scripts):

   ```sh
   mlos_bench --config "./MLOS/mlos_bench/mlos_bench/config/cli/azure-redis-opt.jsonc" --globals "./MLOS/mlos_bench/mlos_bench/config/experiments/experiment_RedisBench.jsonc" --max_iterations 10
   ```

   This should take a few minutes to run and does the following:

   - Loads the CLI config.

     - The "experiment" config specified by the `--globals` parameter further customizes that config with experiment-specific parameters (e.g., which tunable parameters to use for the experiment, the experiment name, etc.).

       Alternatively, other config files from the `config/experiments/` directory can be referenced with the `--globals` argument as well in order to customize the experiment while keeping the other core configs the same (see the globals sketch at the end of this section).

   - The CLI config also references and loads the root Environment config for Redis (see the sketch following this list).

     - In that config, the `setup` section lists commands used to:

       1. Prepare a config for the Redis instance based on the tunable parameters specified in the experiment config.

     - Next, the `run` section lists commands used to:

       1. Run a basic `redis-benchmark` to exercise the instance.
       2. Assemble the results into a file that is read via the `read_results_file` config section in order to store them in the `mlos_bench` results database.

     - Finally, since there is an optimizer specified, this process repeats 10 times to sample several different config points.
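   As a rough illustration of that structure, a local environment config for `mlos_bench` generally looks something like the following. This is a minimal sketch, not the upstream Redis config: the `class` path, the environment name, and the helper scripts are illustrative assumptions, while `setup`, `run`, and `read_results_file` are the sections described above. Check the actual azure-redis configs in the MLOS repo for the exact schema.

   ```jsonc
   {
     // Illustrative environment config sketch (not the upstream Redis config).
     "class": "mlos_bench.environments.local.LocalEnv",  // assumed class path
     "name": "redis-local-sketch",
     "config": {
       // Commands run once to prepare the target using the tunable values.
       "setup": [
         "./scripts/prepare-redis-config.sh"              // hypothetical helper script
       ],
       // Commands run for each trial to exercise the target and collect results.
       "run": [
         "redis-benchmark -q --csv > results.csv",
         "./scripts/assemble-results.sh results.csv"      // hypothetical helper script
       ],
       // File whose contents mlos_bench reads back to store trial results.
       "read_results_file": "results.csv"
     }
   }
   ```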

   The overall process looks like this:

   *(Figure: optimization loop. Source: LlamaTune, VLDB 2022.)*
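When adding your own configs under the `config/` tree (step 5 above), the experiment "globals" file referenced via `--globals` is typically a small, flat JSONC file. The snippet below is only a hypothetical sketch of that idea: the `experiment_id` key is typical of `mlos_bench` globals files, but the remaining names and values are made up for illustration, so refer to the upstream `experiment_RedisBench.jsonc` for the real schema.

```jsonc
{
  // Hypothetical experiment globals sketch; see the upstream
  // experiment_RedisBench.jsonc for the actual parameter names.
  "experiment_id": "MyRedisExperiment",
  // Values here fill in or override parameters referenced by the other configs,
  // e.g. which region to deploy to or which VM size to benchmark on.
  "location": "westus2",
  "vmSize": "Standard_B2s"
}
```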

## See Also

- Additional Examples
- Data Science APIs