
# Published Benchmark

Many Deephaven benchmark results are published to the deephaven-benchmark GCloud bucket, which is accessible only through Google's storage API. Benchmarks are organized in the bucket in much the same way as the results generated from the command line.
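For instance, any object in the bucket can be fetched over plain HTTPS by building a `storage.googleapis.com` URL. A minimal sketch (the object path below is hypothetical, chosen only to show the URL layout):

```python
# Build the public HTTPS URL for an object in the deephaven-benchmark bucket.
# The relative path below is illustrative, not a guaranteed object name.
BUCKET_ROOT = "https://storage.googleapis.com/deephaven-benchmark"

def object_url(path: str) -> str:
    """Return the public URL for a bucket object given its relative path."""
    return f"{BUCKET_ROOT}/{path.lstrip('/')}"

url = object_url("release/benchmark-results.csv")  # hypothetical object
print(url)
```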

## Main Query Snippet

The easiest way to access the published benchmark results is by running the following Deephaven query code snippet in an instance of the Deephaven Engine.

```python
from urllib.request import urlopen; import os

root = 'file:///nfs' if os.path.exists('/nfs/deephaven-benchmark') else 'https://storage.googleapis.com'
with urlopen(root + '/deephaven-benchmark/benchmark_tables.dh.py') as r:
    benchmark_storage_uri_arg = root + '/deephaven-benchmark'
    benchmark_category_arg = 'release'  # release | nightly
    benchmark_max_runs_arg = 2  # Latest X runs to include
    exec(r.read().decode(), globals(), locals())
```

This will process the available benchmarks for the given benchmark category (release or nightly), merge test runs together, and generate several Deephaven tables for exploring the benchmarks. This is the main query snippet that the other query snippets below rely on.

Requirements:

### Tables Generated by the Main Query Snippet

- `bench_results`: A merge of all available benchmark runs for a category
- `bench_platforms`: A merge of the JVM configuration and hardware details for the benchmark runs
- `bench_metrics`: A merge of the JVM metrics taken before and after each benchmark run
- `bench_metrics_diff`: The difference of the before and after metrics for each benchmark
- `bench_results_diff`: Bench results with some metric diffs added as columns
- `bench_results_change`: Bench results with analysis of variability and rate change compared to past runs
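To make the rate-change idea concrete, here is a plain-Python sketch of the kind of comparison `bench_results_change` presents: the latest rate for each benchmark against the mean of its earlier runs. The benchmark names and rates are hypothetical, and this is not the actual logic in `benchmark_tables.dh.py`:

```python
# Compare each benchmark's latest rate (rows/sec) against the mean of
# its earlier runs, as a percent change. Data below is made up.
from statistics import mean

runs = {  # hypothetical rates per run, oldest first
    "AvgBy- 2 Group Keys": [1_200_000, 1_180_000, 1_150_000],
    "WhereIn- 1 Filter Col": [2_500_000, 2_520_000, 2_700_000],
}

def rate_change(rates: list) -> float:
    """Percent change of the latest rate vs. the mean of prior runs."""
    prior = mean(rates[:-1])
    return (rates[-1] - prior) / prior * 100

changes = {name: round(rate_change(r), 1) for name, r in runs.items()}
print(changes)
```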

## Adhoc Query Snippet

The adhoc query snippet is for use by developers who have executed on-demand benchmark runs. These usually appear during an investigation of a subset of benchmarks. Only Deephaven developers can make these runs, but the results are still publicly available in the deephaven-benchmark/adhoc GCloud bucket. (Note: These benchmark sets are often deleted after an investigation is over.)

```python
from urllib.request import urlopen; import os

root = 'file:///nfs' if os.path.exists('/nfs/deephaven-benchmark') else 'https://storage.googleapis.com'
with urlopen(root + '/deephaven-benchmark/adhoc_tables.dh.py') as r:
    benchmark_sets_arg = ['user1/setname1','user1/setname2']  # The set names to run including user (ex. ['user1/myset1','user1/myset2'])
    benchmark_set_runs_arg = 5  # Maximum number of runs to include per set
    exec(r.read().decode(), globals(), locals())
```

This will process two or more adhoc benchmark sets and produce a table that compares the rates in each. Loading and caching of the benchmark runs is done automatically.
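The comparison can be sketched in plain Python: for each benchmark, take the mean rate within each set and form a ratio across sets. The set names, benchmark name, and rates below are hypothetical, and this is only an illustration of the idea behind `adhoc_set_compare`, not its implementation:

```python
# Compare mean rates for one benchmark across two adhoc sets (made-up data).
from statistics import mean

sets = {
    "user1/setname1": {"Where- 1 Filter Col": [2.0e6, 2.1e6]},
    "user1/setname2": {"Where- 1 Filter Col": [2.4e6, 2.5e6]},
}

def set_rate(set_name: str, benchmark: str) -> float:
    """Mean rate for a benchmark within one adhoc set."""
    return mean(sets[set_name][benchmark])

ratio = set_rate("user1/setname2", "Where- 1 Filter Col") / set_rate(
    "user1/setname1", "Where- 1 Filter Col"
)
print(round(ratio, 2))  # > 1.0 means setname2 ran faster
```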

Requirements:

### Tables Generated by the Adhoc Query Snippet

- `adhoc_set_compare`: Shows benchmark, rate, and variability columns for each benchmark set