Merge branch 'develop' into bugfix/process_radar
EdwardSnyder-NOAA authored Aug 8, 2023
2 parents cce9dd2 + 26b3ac8 commit 900f013
Showing 55 changed files with 2,358 additions and 106 deletions.
.cicd/Jenkinsfile (13 changes: 11 additions & 2 deletions)
@@ -127,12 +127,13 @@ pipeline {
}

// Run the unittest functional tests that require an HPC platform
-stage('Functional Tests') {
+stage('Functional UnitTests') {
steps {
echo "Running unittest on retrieve_data.py"
sh 'bash --login "${WORKSPACE}/.cicd/scripts/srw_unittest.sh"'
}
}

// Run the unified build script; if successful create a tarball of the build and upload to S3
stage('Build') {
steps {
@@ -148,6 +149,14 @@
}
}

+// Try a few Workflow Task scripts to make sure E2E tests can be launched in a follow-on 'Test' stage
+stage('Functional WorkflowTaskTests') {
+steps {
+echo "Running simple workflow script task tests on ${env.SRW_PLATFORM} (using ${env.WORKSPACE})"
+sh 'bash --login "${WORKSPACE}/.cicd/scripts/srw_ftest.sh"'
+}
+}

// Run the unified test script
stage('Test') {
environment {
@@ -187,7 +196,7 @@
sh 'cd "${SRW_WE2E_EXPERIMENT_BASE_DIR}" && tar --create --gzip --verbose --dereference --file "${WORKSPACE}/we2e_test_logs-${SRW_PLATFORM}-${SRW_COMPILER}.tgz" */log.generate_FV3LAM_wflow */log/* ${WORKSPACE}/tests/WE2E/WE2E_tests_*yaml WE2E_summary*txt ${WORKSPACE}/tests/WE2E/log.*'
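// (the tar above bundles the WE2E experiment logs into one tarball so the S3 upload below can ship them as a single artifact)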
// Remove the data sets from the experiments directory to conserve disk space
sh 'find "${SRW_WE2E_EXPERIMENT_BASE_DIR}" -regextype posix-extended -regex "^.*(orog|[0-9]{10})$" -type d | xargs rm -rf'
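// (the find regex matches directories whose names end in "orog" or a 10-digit cycle timestamp such as 2019061518, and deletes them)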
-s3Upload consoleLogLevel: 'INFO', dontSetBuildResultOnFailure: false, dontWaitForConcurrentBuildCompletion: false, entries: [[bucket: 'noaa-epic-prod-jenkins-artifacts', excludedFile: '', flatten: false, gzipFiles: false, keepForever: false, managedArtifacts: true, noUploadOnFailure: false, selectedRegion: 'us-east-1', showDirectlyInBrowser: false, sourceFile: 'we2e_test_results-*-*.txt', storageClass: 'STANDARD', uploadFromSlave: false, useServerSideEncryption: false], [bucket: 'noaa-epic-prod-jenkins-artifacts', excludedFile: '', flatten: false, gzipFiles: false, keepForever: false, managedArtifacts: true, noUploadOnFailure: false, selectedRegion: 'us-east-1', showDirectlyInBrowser: false, sourceFile: 'we2e_test_logs-*-*.tgz', storageClass: 'STANDARD', uploadFromSlave: false, useServerSideEncryption: false]], pluginFailureResultConstraint: 'FAILURE', profileName: 'main', userMetadata: []
+s3Upload consoleLogLevel: 'INFO', dontSetBuildResultOnFailure: false, dontWaitForConcurrentBuildCompletion: false, entries: [[bucket: 'noaa-epic-prod-jenkins-artifacts', excludedFile: '', flatten: false, gzipFiles: false, keepForever: false, managedArtifacts: true, noUploadOnFailure: false, selectedRegion: 'us-east-1', showDirectlyInBrowser: false, sourceFile: '*_test_results-*-*.txt', storageClass: 'STANDARD', uploadFromSlave: false, useServerSideEncryption: false], [bucket: 'noaa-epic-prod-jenkins-artifacts', excludedFile: '', flatten: false, gzipFiles: false, keepForever: false, managedArtifacts: true, noUploadOnFailure: false, selectedRegion: 'us-east-1', showDirectlyInBrowser: false, sourceFile: 'we2e_test_logs-*-*.tgz', storageClass: 'STANDARD', uploadFromSlave: false, useServerSideEncryption: false]], pluginFailureResultConstraint: 'FAILURE', profileName: 'main', userMetadata: []
}
}
}
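The new Functional WorkflowTaskTests stage drives .cicd/scripts/srw_ftest.sh (diffed next). For debugging outside Jenkins, the same entry point can be exercised by hand; a minimal sketch, where the exported values are assumptions standing in for what the pipeline normally injects:

# Hypothetical local run of the workflow-task gate (values are assumptions)
export SRW_PLATFORM=hera     # target machine name, as Jenkins would set it
export SRW_COMPILER=intel    # compiler label used by the build
export WORKSPACE=$PWD        # repository checkout root
bash --login "${WORKSPACE}/.cicd/scripts/srw_ftest.sh"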
.cicd/scripts/srw_ftest.sh (28 changes: 22 additions & 6 deletions)
@@ -7,7 +7,7 @@
# We want this following a normal SRW build in an attempt to exercise environment setup, modules,
# data sets, and workflow scripts, without using too much time or account resources.
# We hope to catch any snags that might prevent WE2E fundamental testing, which follows this gate.
-# NOTE: At this time, this script is a placeholder for functional test framework.
+# NOTE:
# At this time, we are leaving the exercise of graphical plotting for a later stage, perhaps the WE2E stage.
#
# Required:
@@ -71,19 +71,27 @@ echo "DATA_LOCATION=${DATA_LOCATION}"
sed "s|^task_get_extrn_ics:|task_get_extrn_ics:\n EXTRN_MDL_SOURCE_BASEDIR_ICS: ${DATA_LOCATION}/FV3GFS/grib2/2019061518|1" -i ush/config.yaml
sed "s|^task_get_extrn_lbcs:|task_get_extrn_lbcs:\n EXTRN_MDL_SOURCE_BASEDIR_LBCS: ${DATA_LOCATION}/FV3GFS/grib2/2019061518|1" -i ush/config.yaml

+# Use staged data for HPSS supported machines
+sed 's|^platform:|platform:\n EXTRN_MDL_DATA_STORES: disk|g' -i ush/config.yaml

# Activate the workflow environment ...
source etc/lmod-setup.sh ${platform,,}
module use modulefiles
module load build_${platform,,}_${SRW_COMPILER}
module load wflow_${platform,,}
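# (${platform,,} is bash parameter expansion that lowercases the platform name to match the modulefile names)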

+[[ ${FORGIVE_CONDA} == true ]] && set +e +u # Some platforms have incomplete python3 or conda support, but wouldn't necessarily block workflow tests
-conda activate regional_workflow
+conda activate workflow_tools
set -e -u

+export PYTHONPATH=${workspace}/ush/python_utils/workflow-tools:${workspace}/ush/python_utils/workflow-tools/src

+# Adjust for strict limitation of stack size
+sed "s|ulimit -s unlimited;|ulimit -S -s unlimited;|" -i ${workspace}/ush/machine/hera.yaml
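# (-S restricts the change to the soft limit, which can still be adjusted where the hard stack limit cannot be raised)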

cd ${workspace}/ush
# Consistency check ...
-./config_utils.py -c ./config.yaml -v ./config_defaults.yaml -k "(\!rocoto\b)"
+#./config_utils.py -c ./config.yaml -v ./config_defaults.yaml -k "(\!rocoto\b)"
# Generate workflow files ...
./generate_FV3LAM_wflow.py
cd ${workspace}
@@ -115,16 +123,24 @@ export OMP_NUM_THREADS=1
)
set +x

-echo "# Try the first few simple SRW tasks ..."
+results_file=${workspace}/functional_test_results_${SRW_PLATFORM}_${SRW_COMPILER}.txt
+rm -f ${results_file}

status=0
-for task in ${TASKS[@]:0:${TASK_DEPTH}} ; do

+# Limit to machines that are fully ready
+deny_machines=( gaea )
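# (in the test below, ${deny_machines[@]} expands to one space-joined string and ${platform,,} is matched against it as a regex, i.e. a substring check)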
+if [[ ${deny_machines[@]} =~ ${platform,,} ]] ; then
+echo "# Deny ${platform} - incomplete configuration." | tee -a ${results_file}
+else
+echo "# Try ${platform} with the first few simple SRW tasks ..." | tee -a ${results_file}
+for task in ${TASKS[@]:0:${TASK_DEPTH}} ; do
echo -n "./$task.sh ... "
./$task.sh > $task-log.txt 2>&1 && echo "COMPLETE" || echo "FAIL rc=$(( status+=$? ))"
# stop at the first sign of trouble ...
[[ 0 != ${status} ]] && echo "$task: FAIL" >> ${results_file} && break || echo "$task: COMPLETE" >> ${results_file}
-done
+done
+fi

# Set exit code to number of failures
set +e
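The sed splices near the top of srw_ftest.sh rewrite ush/config.yaml in place. A quick way to sanity-check the result; a sketch, not part of the commit, assuming the key names shown above:

# Expected fragments after the edits (the ICS/LBCS paths depend on DATA_LOCATION):
#   platform:
#     EXTRN_MDL_DATA_STORES: disk
#   task_get_extrn_ics:
#     EXTRN_MDL_SOURCE_BASEDIR_ICS: .../FV3GFS/grib2/2019061518
grep -A 1 -E '^(platform|task_get_extrn_ics|task_get_extrn_lbcs):' ush/config.yaml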
.cicd/scripts/srw_metric_example.sh (12 changes: 6 additions & 6 deletions)
@@ -50,15 +50,15 @@ conda activate regional_workflow
set -e -u

# build srw
-cd ${WORKSPACE}/tests
+cd ${workspace}/tests
./build.sh ${platform,,} ${SRW_COMPILER}
-cd ${WORKSPACE}
+cd ${workspace}

# run test
[[ -d ${we2e_experiment_base_dir} ]] && rm -rf ${we2e_experiment_base_dir}
-cd ${WORKSPACE}/tests/WE2E
+cd ${workspace}/tests/WE2E
./run_WE2E_tests.py -t ${we2e_test_name} -m ${platform,,} -a ${SRW_PROJECT} --expt_basedir "metric_test" --exec_subdir=install_intel/exec -q
-cd ${WORKSPACE}
+cd ${workspace}

# run skill-score check
# first load MET env variables
@@ -70,8 +70,8 @@ source ${we2e_experiment_base_dir}/${we2e_test_name}/var_defns.sh
# It is computed by aggregating the output from earlier runs of the Point-Stat and/or Grid-Stat tools over one or more cases.
# In this example, skill score index is a weighted average of 16 skill scores of RMSE statistics for wind speed, dew point temperature,
temperature, and pressure at the lowest level in the atmosphere over a 48-hour lead time.
-cp ${we2e_experiment_base_dir}/${we2e_test_name}/2019061500/mem000/metprd/PointStat/*.stat ${WORKSPACE}/Indy-Severe-Weather/metprd/point_stat/
-${MET_INSTALL_DIR}/${MET_BIN_EXEC}/stat_analysis -config parm/metplus/STATAnalysisConfig_skill_score -lookin ${WORKSPACE}/Indy-Severe-Weather/metprd/point_stat -v 2 -out skill-score.out
+cp ${we2e_experiment_base_dir}/${we2e_test_name}/2019061500/metprd/PointStat/*.stat ${workspace}/Indy-Severe-Weather/metprd/point_stat/
+${MET_INSTALL_DIR}/${MET_BIN_EXEC}/stat_analysis -config parm/metplus/STATAnalysisConfig_skill_score -lookin ${workspace}/Indy-Severe-Weather/metprd/point_stat -v 2 -out skill-score.out

# check skill-score.out
cat skill-score.out
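The skill score index written to skill-score.out could later back a pass/fail gate. A hedged sketch (the SS_INDEX label and the 0.7 floor are assumptions, not part of this commit):

# Hypothetical gate: fail if the aggregate skill score index falls below a chosen floor
score=$(awk '/SS_INDEX/ {print $NF}' skill-score.out)   # assumption: stat_analysis labels the index SS_INDEX
awk -v s="$score" 'BEGIN { exit !(s >= 0.7) }' || { echo "skill score ${score:-unknown} below 0.7"; exit 1; }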
docs/UsersGuide/source/ConfigWorkflow.rst (16 changes: 12 additions & 4 deletions)
@@ -151,23 +151,31 @@ METplus Parameters
* ``SS`` refers to the two-digit valid seconds of the hour

``CCPA_OBS_DIR``: (Default: "")
-User-specified location of top-level directory where CCPA hourly precipitation files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``GET_OBS_CCPA`` task. (This task is activated in the workflow by using the taskgroup file ``parm/wflow/verify.yaml``).
+User-specified location of top-level directory where CCPA hourly precipitation files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_ccpa`` task. (This task is activated in the workflow by using the taskgroup file ``parm/wflow/verify_pre.yaml``).

METplus configuration files require the use of a predetermined directory structure and file names. If the CCPA files are user-provided, they need to follow the anticipated naming structure: ``{YYYYMMDD}/ccpa.t{HH}z.01h.hrap.conus.gb2``, where YYYYMMDD and HH are as described in the note :ref:`above <METParamNote>`. When pulling observations from NOAA HPSS, the data retrieved will be placed in the ``CCPA_OBS_DIR`` directory. This path must be defined as ``/<full-path-to-obs>/ccpa/proc``. METplus is configured to verify 01-, 03-, 06-, and 24-h accumulated precipitation using hourly CCPA files.

.. note::
There is a problem with the valid time in the metadata for files valid from 19 to 00 UTC (i.e., files under the "00" directory). The script to pull the CCPA data from the NOAA HPSS (``scripts/exregional_get_obs_ccpa.sh``) has an example of how to account for this and organize the data into a more intuitive format. When a fix is provided, it will be accounted for in the ``exregional_get_obs_ccpa.sh`` script.

+``NOHRSC_OBS_DIR``: (Default: "")
+User-specified location of top-level directory where NOHRSC 06- and 24-hour snowfall accumulation files (available every 6 and 12 hours respectively) used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_nohrsc`` task. (This task is activated in the workflow by using the taskgroup file ``parm/wflow/verify_pre.yaml``).

+METplus configuration files require the use of a predetermined directory structure and file names. If the NOHRSC files are user-provided, they need to follow the anticipated naming structure: ``{YYYYMMDD}/sfav2_CONUS_{AA}h_{YYYYMMDD}{HH}_grid184.grb2``, where AA is the 2-digit accumulation duration, and YYYYMMDD and HH are as described in the note :ref:`above <METParamNote>`. When pulling observations from NOAA HPSS, the data retrieved will be placed in the ``NOHRSC_OBS_DIR`` directory. This path must be defined as ``/<full-path-to-obs>/nohrsc/proc``. METplus is configured to verify 06- and 24-h accumulated snowfall using NOHRSC files.

+.. note::
+Due to limited availability of NOHRSC observation data on NOAA HPSS, and the likelihood that snowfall accumulation verification will not be desired outside of winter cases, this verification option is currently not present in the workflow by default. In order to use it, the verification environment variable ``VX_FIELDS`` should be updated to include ``ASNOW``. This will allow the related workflow tasks to be run.

``MRMS_OBS_DIR``: (Default: "")
-User-specified location of top-level directory where MRMS composite reflectivity files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``GET_OBS_MRMS`` task (activated in the workflow automatically when using the taskgroup file ``parm/wflow/verify.yaml``). When pulling observations directly from NOAA HPSS, the data retrieved will be placed in this directory. Please note, this path must be defined as ``/<full-path-to-obs>/mrms/proc``.
+User-specified location of top-level directory where MRMS composite reflectivity files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_mrms`` task (activated in the workflow automatically when using the taskgroup file ``parm/wflow/verify_pre.yaml``). When pulling observations directly from NOAA HPSS, the data retrieved will be placed in this directory. Please note, this path must be defined as ``/<full-path-to-obs>/mrms/proc``.

METplus configuration files require the use of a predetermined directory structure and file names. Therefore, if the MRMS files are user-provided, they need to follow the anticipated naming structure: ``{YYYYMMDD}/MergedReflectivityQCComposite_00.50_{YYYYMMDD}-{HH}{mm}{SS}.grib2``, where YYYYMMDD and {HH}{mm}{SS} are as described in the note :ref:`above <METParamNote>`.

.. note::
METplus is configured to look for a MRMS composite reflectivity file for the valid time of the forecast being verified; since MRMS composite reflectivity files do not always exactly match the valid time, a script (within the main script that retrieves MRMS data from the NOAA HPSS) is used to identify and rename the MRMS composite reflectivity file to match the valid time of the forecast. The script to pull the MRMS data from the NOAA HPSS has an example of the expected file-naming structure: ``scripts/exregional_get_obs_mrms.sh``. This script calls the script used to identify the MRMS file closest to the valid time: ``ush/mrms_pull_topofhour.py``.

``NDAS_OBS_DIR``: (Default: "")
-User-specified location of the top-level directory where NDAS prepbufr files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``GET_OBS_NDAS`` task (activated in the workflow automatically when using the taskgroup file ``parm/wflow/verify.yaml``). When pulling observations directly from NOAA HPSS, the data retrieved will be placed in this directory. Please note, this path must be defined as ``/<full-path-to-obs>/ndas/proc``. METplus is configured to verify near-surface variables hourly and upper-air variables at 00 and 12 UTC with NDAS prepbufr files.
+User-specified location of the top-level directory where NDAS prepbufr files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_ndas`` task (activated in the workflow automatically when using the taskgroup file ``parm/wflow/verify_pre.yaml``). When pulling observations directly from NOAA HPSS, the data retrieved will be placed in this directory. Please note, this path must be defined as ``/<full-path-to-obs>/ndas/proc``. METplus is configured to verify near-surface variables hourly and upper-air variables at 00 and 12 UTC with NDAS prepbufr files.

METplus configuration files require the use of predetermined file names. Therefore, if the NDAS files are user-provided, they need to follow the anticipated naming structure: ``prepbufr.ndas.{YYYYMMDDHH}``, where YYYYMMDDHH is as described in the note :ref:`above <METParamNote>`. The script to pull the NDAS data from the NOAA HPSS (``scripts/exregional_get_obs_ndas.sh``) has an example of how to rename the NDAS data into a more intuitive format with the valid time listed in the file name.
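Taken together, the ``*_OBS_DIR`` parameters above imply a staging layout like the following for user-provided observations (paths and dates are illustrative only):

/path/to/obs/ccpa/proc/20190615/ccpa.t18z.01h.hrap.conus.gb2
/path/to/obs/nohrsc/proc/20190615/sfav2_CONUS_06h_2019061518_grid184.grb2
/path/to/obs/mrms/proc/20190615/MergedReflectivityQCComposite_00.50_20190615-180000.grib2
/path/to/obs/ndas/proc/prepbufr.ndas.2019061518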

@@ -380,7 +388,7 @@ Verification Parameters
---------------------------

``GET_OBS``: (Default: "get_obs")
-Set the name of the Rocoto workflow task used to load proper module files for ``GET_OBS_*`` tasks. Users typically do not need to change this value.
+Set the name of the Rocoto workflow task used to load proper module files for ``get_obs_*`` tasks. Users typically do not need to change this value.


.. _NCOModeParms:
docs/UsersGuide/source/Glossary.rst (3 changes: 3 additions & 0 deletions)
@@ -188,6 +188,9 @@ Glossary
netCDF
NetCDF (`Network Common Data Form <https://www.unidata.ucar.edu/software/netcdf/>`__) is a file format and community standard for storing multidimensional scientific data. It includes a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

+NOHRSC
+The National Operational Hydrologic Remote Sensing Center, which provides the National Snowfall Analysis, an operational, observation-based, gridded estimate of recent snowfall.

NSSL
The `National Severe Storms Laboratory <https://www.nssl.noaa.gov/>`__.

(diffs for the remaining 50 changed files are not shown)
