
Adding GLM Flash Extent Density to GSI Observer #588

Closed
daviddowellNOAA opened this issue Jul 10, 2023 · 17 comments · Fixed by #590

Comments

@daviddowellNOAA (Collaborator)

Preparing for GSI-EnKF assimilation of GLM flash extent density in Rapid Refresh Forecast System (RRFSv1). First, FED observation handling and observation operator will be added to GSI.

daviddowellNOAA mentioned this issue Jul 13, 2023
@RussTreadon-NOAA (Contributor)

WCOSS2 ctests

develop at 9e5aa09 and daviddowellNOAA:GSI_FED at e3fb49b were installed on Cactus. The standard suite of ctests was run with the following results:

russ.treadon@clogin03:/lfs/h2/emc/da/noscrub/russ.treadon/git/gsi/pr590/build> ctest -j 9
Test project /lfs/h2/emc/da/noscrub/russ.treadon/git/gsi/pr590/build
    Start 1: global_3dvar
    Start 2: global_4dvar
    Start 3: global_4denvar
    Start 4: hwrf_nmm_d2
    Start 5: hwrf_nmm_d3
    Start 6: rtma
    Start 7: rrfs_3denvar_glbens
    Start 8: netcdf_fv3_regional
    Start 9: global_enkf
1/9 Test #8: netcdf_fv3_regional ..............***Failed  2223.40 sec
2/9 Test #7: rrfs_3denvar_glbens ..............   Passed  2225.54 sec
3/9 Test #9: global_enkf ......................   Passed  2778.38 sec
4/9 Test #2: global_4dvar .....................   Passed  3208.50 sec
5/9 Test #5: hwrf_nmm_d3 ......................***Failed  3577.72 sec
6/9 Test #4: hwrf_nmm_d2 ......................***Failed  3641.85 sec
7/9 Test #3: global_4denvar ...................***Failed  3696.52 sec
8/9 Test #1: global_3dvar .....................***Failed  3747.50 sec
9/9 Test #6: rtma .............................   Passed  3876.20 sec

44% tests passed, 5 tests failed out of 9

Total Test time (real) = 3876.23 sec

The following tests FAILED:
          1 - global_3dvar (Failed)
          3 - global_4denvar (Failed)
          4 - hwrf_nmm_d2 (Failed)
          5 - hwrf_nmm_d3 (Failed)
          8 - netcdf_fv3_regional (Failed)
Errors while running CTest

The following failures are due to non-reproducible analysis results between the update (daviddowellNOAA:GSI_FED) and the control (develop):

  • global_3dvar - first iteration step size differs on first outer loop
  • global_4denvar - first iteration step size differs on first outer loop
  • hwrf_nmm_d2 - first iteration step size differs on first outer loop
  • netcdf_fv3_regional - initial gradients differ on first outer loop

The hwrf_nmm_d3 test failed due to:

The case has Failed the scalability test.
The slope for the update (.275427 seconds per node) is less than that for the control (1.471644 seconds per node).

A check of the hwrf_nmm_d3 wall times

tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_hiproc_contrl/stdout:The total amount of wall time                        = 57.989026
tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_hiproc_updat/stdout:The total amount of wall time                        = 59.051491
tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_loproc_contrl/stdout:The total amount of wall time                        = 59.460670
tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_loproc_updat/stdout:The total amount of wall time                        = 59.235109

does not reveal anomalous behavior. This is a non-fatal fail. The timing scalability test should be examined and either improved or removed to avoid misleading ctest results.
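For reference, the "seconds per node" slope in these messages appears to be simply the loproc wall time minus the hiproc wall time; this formula is inferred from the logs, not taken from the regression scripts. It can be checked against the Cactus hwrf_nmm_d3 control wall times quoted above:

```shell
# Inferred slope formula (an assumption, not read from the regression
# scripts): loproc wall time minus hiproc wall time, in seconds per node.
# Uses the Cactus hwrf_nmm_d3 control wall times quoted above.
awk 'BEGIN { printf "%.6f\n", 59.460670 - 57.989026 }'
```

The output, 1.471644, matches the control slope in the failure message exactly. Note that the test fails when the update slope is smaller than the control slope, i.e. when the update spends fewer extra seconds per node, which is part of why the "failure" can be misleading.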

daviddowellNOAA:GSI_FED cannot be merged into develop until the above results are examined and explained. It's possible develop is at fault, but this needs to be demonstrated.

@RussTreadon-NOAA (Contributor)

Orion ctests
The same packages were installed on Orion as on Cactus. ctests were run with the following results:

Orion-login-2:/work2/noaa/da/rtreadon/git/gsi/pr590/build$ ctest -j 9
Test project /work2/noaa/da/rtreadon/git/gsi/pr590/build
    Start 1: global_3dvar
    Start 2: global_4dvar
    Start 3: global_4denvar
    Start 4: hwrf_nmm_d2
    Start 5: hwrf_nmm_d3
    Start 6: rtma
    Start 7: rrfs_3denvar_glbens
    Start 8: netcdf_fv3_regional
    Start 9: global_enkf
1/9 Test #7: rrfs_3denvar_glbens ..............***Failed  725.88 sec
2/9 Test #8: netcdf_fv3_regional ..............   Passed  782.56 sec
3/9 Test #4: hwrf_nmm_d2 ......................   Passed  786.83 sec
4/9 Test #9: global_enkf ......................   Passed  787.87 sec
5/9 Test #5: hwrf_nmm_d3 ......................***Failed  854.22 sec
6/9 Test #6: rtma .............................   Passed  1270.42 sec
7/9 Test #3: global_4denvar ...................   Passed  1802.64 sec
8/9 Test #2: global_4dvar .....................   Passed  1983.69 sec
9/9 Test #1: global_3dvar .....................   Passed  2102.27 sec

78% tests passed, 2 tests failed out of 9

Total Test time (real) = 2102.27 sec

The following tests FAILED:
          5 - hwrf_nmm_d3 (Failed)
          7 - rrfs_3denvar_glbens (Failed)
Errors while running CTest
Output from these tests are in: /work2/noaa/da/rtreadon/git/gsi/pr590/build/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.

The rrfs_3denvar_glbens failure is due to

The case has Failed the scalability test.
The slope for the update (41.368810 seconds per node) is less than that for the control (55.490523 seconds per node).

The loproc control wall time is noticeably larger than the update. The hiproc wall times are comparable.

tmpreg_rrfs_3denvar_glbens/rrfs_3denvar_glbens_hiproc_contrl/stdout:The total amount of wall time                        = 89.561341
tmpreg_rrfs_3denvar_glbens/rrfs_3denvar_glbens_hiproc_updat/stdout:The total amount of wall time                        = 89.326938
tmpreg_rrfs_3denvar_glbens/rrfs_3denvar_glbens_loproc_contrl/stdout:The total amount of wall time                        = 145.051864
tmpreg_rrfs_3denvar_glbens/rrfs_3denvar_glbens_loproc_updat/stdout:The total amount of wall time                        = 122.421986

This is not a fatal fail.

The hwrf_nmm_d3 failure was also due to timing scalability

The case has Failed the scalability test.
The slope for the update (39.772575 seconds per node) is less than that for the control (257.860855 seconds per node).

Comparison of wall times shows a mixed bag

tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_hiproc_contrl/stdout:The total amount of wall time                        = 68.993479
tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_hiproc_updat/stdout:The total amount of wall time                        = 80.108135
tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_loproc_contrl/stdout:The total amount of wall time                        = 326.854334
tmpreg_hwrf_nmm_d3/hwrf_nmm_d3_loproc_updat/stdout:The total amount of wall time                        = 106.623185

The hiproc update wall time is about 11 seconds higher than the control, but the loproc control is 3x greater than the update. There is considerable variability in the wall times for the control and update. This is not a fatal fail.

The non-reproducible behavior between update and control observed on WCOSS2 is not observed on Orion.

@hu5970 (Collaborator)

hu5970 commented Aug 27, 2023

I ran the regression tests for this PR on WCOSS2:

[ming.hu@clogin01 build] ctest -j9
Test project /lfs/h2/emc/ptmp/Ming.Hu/test/GSI/build
    Start 1: global_3dvar
    Start 2: global_4dvar
    Start 3: global_4denvar
    Start 4: hwrf_nmm_d2
    Start 5: hwrf_nmm_d3
    Start 6: rtma
    Start 7: rrfs_3denvar_glbens
    Start 8: netcdf_fv3_regional
    Start 9: global_enkf
1/9 Test #8: netcdf_fv3_regional ..............   Passed  484.98 sec
2/9 Test #5: hwrf_nmm_d3 ......................   Passed  495.88 sec
3/9 Test #9: global_enkf ......................   Passed  614.50 sec
4/9 Test #4: hwrf_nmm_d2 ......................   Passed  666.96 sec
5/9 Test #7: rrfs_3denvar_glbens ..............   Passed  724.93 sec
6/9 Test #6: rtma .............................***Failed  1226.90 sec
7/9 Test #3: global_4denvar ...................   Passed  1443.18 sec
8/9 Test #1: global_3dvar .....................   Passed  1564.93 sec
9/9 Test #2: global_4dvar .....................   Passed  1689.94 sec

89% tests passed, 1 tests failed out of 9

Total Test time (real) = 1689.96 sec

The following tests FAILED:
	  6 - rtma (Failed)
Errors while running CTest

The reason for the rtma failure is:
The case has Failed the scalability test.
This is not a fatal failure.

The test results on Cactus:
/lfs/h2/emc/ptmp/Ming.Hu/test/run

@RussTreadon-NOAA (Contributor)

Thank you very much, @hu5970, for running this test. Your results make sense. Mine do not. Let me attempt to replicate your results.

@RussTreadon-NOAA (Contributor)

RussTreadon-NOAA commented Aug 27, 2023

I ran netcdf_fv3_regional with the following results

  • pass using Ming's develop and my build of daviddowellNOAA:GSI_FED
  • fail using an 8/27 recompile of Ming's develop and my build of daviddowellNOAA:GSI_FED
  • fail using my 8/25 build of develop and my build of daviddowellNOAA:GSI_FED
  • fail using an 8/27 recompile of develop and my build of daviddowellNOAA:GSI_FED

Ming's pass result is expected given the nature of the netcdf_fv3_regional test and the changes in daviddowellNOAA:GSI_FED. I can only reproduce his results when I use his build of develop.

Is something wrong with my WCOSS2 (Cactus) environment? I'll continue this investigation tomorrow. Sorry for delaying PR #590. Hopefully the reason for the odd results above can be quickly sorted out tomorrow.

@hu5970 (Collaborator)

hu5970 commented Aug 28, 2023

I recompiled develop twice with Russ's .bashrc and recompiled the FED branch. Then I reran netcdf_fv3_regional twice. Both runs passed. I still cannot figure out why the executable from Russ produced different results.

When using Russ's develop executable, netcdf_fv3_regional fails. The radiance cost function values differ:
CONTROL: radiance 1.8612244671704459E+04
FED: radiance 1.8612244671704462E+04
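These two values differ only in the 16th significant digit, i.e. at double-precision round-off level, which a quick check confirms:

```shell
# The two radiance cost values above differ by less than one part in 1e15,
# i.e. at double-precision round-off level.
awk 'BEGIN {
  a = 1.8612244671704459E+04
  b = 1.8612244671704462E+04
  rel = (b - a) / a
  if (rel < 0) rel = -rel
  s = (rel < 1.0e-15) ? "round-off level difference" : "significant difference"
  print s
}'
```

A difference this small is characteristic of a changed floating-point operation order (e.g. a different compilation or summation order) rather than an outright coding error, though it still breaks bitwise reproducibility.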

@RussTreadon-NOAA (Contributor)

Interesting result, @hu5970. I also cannot explain the observed behavior.

I built gsi.x for PRs #605, #608, #609, and #614. For each I ran ctest netcdf_fv3_regional. The updat (PR specific) and contrl (develop) gsi.x generate identical analyses. So far, only gsi.x from PR #590 yields a different analysis for netcdf_fv3_regional.

Who else on the RRFS team has WCOSS2 access? They could repeat our tests and see what happens.

I don't want to delay PR #590, but we need an explanation of the observed behavior.

@hu5970 (Collaborator)

hu5970 commented Aug 29, 2023

@RussTreadon-NOAA I used the ems.lam account to build the regression test suite and got the same results as with my own account.
I will build and test #605 to see if I get the same results. I will document my test process and let you check whether it matches your test steps.

@RussTreadon-NOAA (Contributor)

Excellent idea, @hu5970 . Let me try both the emc.global and emc.da accounts.

@hu5970 (Collaborator)

hu5970 commented Aug 29, 2023

I used PR #605 to test the regression suite issue we have for this PR. All tests are under:
/lfs/h2/emc/ptmp/Ming.Hu/pr605/

Build the PR #605 executable:

git clone https://github.com/shoyokota/GSI.git
cd GSI
git checkout feature/PR_NOAA-EMC_EnVar-DBZ2
git submodule init
git submodule update
cd ush
./build.sh

Setup regression:

  cd /lfs/h2/emc/ptmp/Ming.Hu/pr605/GSI/regression
  edit regression_var.sh to add three lines at the top of the file:
    ptmp="/lfs/h2/emc/ptmp/Ming.Hu/pr605/run"
    group="emc"
    accnt="RRFS-DEV"

Run regression:

cd /lfs/h2/emc/ptmp/Ming.Hu/pr605/GSI/build
ctest -j9

[ming.hu@clogin01 build] ctest -j9
Test project /lfs/h2/emc/ptmp/Ming.Hu/pr605/GSI/build
   Start 1: global_3dvar
   Start 2: global_4dvar
   Start 3: global_4denvar
   Start 4: hwrf_nmm_d2
   Start 5: hwrf_nmm_d3
   Start 6: rtma
   Start 7: rrfs_3denvar_glbens
   Start 8: netcdf_fv3_regional
   Start 9: global_enkf

The results of the test:

1/9 Test #9: global_enkf ......................***Failed  480.72 sec
2/9 Test #8: netcdf_fv3_regional ..............***Failed  542.65 sec
3/9 Test #7: rrfs_3denvar_glbens ..............   Passed  604.69 sec
4/9 Test #4: hwrf_nmm_d2 ......................   Passed  1385.65 sec
5/9 Test #5: hwrf_nmm_d3 ......................   Passed  1452.14 sec
6/9 Test #2: global_4dvar .....................   Passed  1681.80 sec
7/9 Test #6: rtma .............................   Passed  1688.70 sec
8/9 Test #3: global_4denvar ...................   Passed  1804.49 sec
9/9 Test #1: global_3dvar .....................   Passed  1984.80 sec

Repeat netcdf_fv3_regional case:

[ming.hu@clogin01 build] ctest -R netcdf_fv3_regional
Test project /lfs/h2/emc/ptmp/Ming.Hu/pr605/GSI/build
    Start 8: netcdf_fv3_regional
1/1 Test #8: netcdf_fv3_regional ..............   Passed  669.20 sec

100% tests passed, 0 tests failed out of 1

Total Test time (real) = 669.26 sec

The global_enkf test fails because of a namelist issue; PR #605 needs to be synced with develop.

Checking the first iteration of netcdf_fv3_regional:

[ming.hu@clogin04 tmpreg_netcdf_fv3_regional] grep 'cost,grad,step,b,step? =   1   0' */stdout
netcdf_fv3_regional_hiproc_contrl/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_hiproc_updat/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_loproc_contrl/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_loproc_updat/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 

These are the same values as in the PR #590 tests.

@RussTreadon-NOAA (Contributor)

@hu5970 , I can reproduce your PR #590 results if my working copy of develop has an empty fix directory. This happens when we clone NOAA-EMC/GSI without the --recursive option.

Cactus /lfs/h2/emc/ptmp/russ.treadon/pr590_recursive/tmpreg_netcdf_fv3_regional contains ctest results from a setup in which develop was cloned with --recursive. This yields the non-reproducible result I reported above:

russ.treadon@clogin09:/lfs/h2/emc/ptmp/russ.treadon/pr590_recursive/tmpreg_netcdf_fv3_regional> grep "cost,grad,step,b,step? =   2  50" net*/stdout
netcdf_fv3_regional_hiproc_contrl/stdout:cost,grad,step,b,step? =   2  50  2.512419725616740179E+05  6.046083394810837319E+01  8.290106289009062657E-01  1.269187958189923915E+00  good 
netcdf_fv3_regional_hiproc_updat/stdout:cost,grad,step,b,step? =   2  50  2.512410327833037009E+05  5.755566913696180364E+01  1.649679668494160367E+00  9.097536688456433485E-01  good 
netcdf_fv3_regional_loproc_contrl/stdout:cost,grad,step,b,step? =   2  50  2.512419725616740179E+05  6.046083394810837319E+01  8.290106289009062657E-01  1.269187958189923915E+00  good 
netcdf_fv3_regional_loproc_updat/stdout:cost,grad,step,b,step? =   2  50  2.512410327833037009E+05  5.755566913696180364E+01  1.649679668494160367E+00  9.097536688456433485E-01  good 

Cactus /lfs/h2/emc/ptmp/russ.treadon/pr590/tmpreg_netcdf_fv3_regional contains ctest results from a setup in which develop was cloned without --recursive. This yields the reproducible result you report above:

russ.treadon@clogin09:/lfs/h2/emc/ptmp/russ.treadon/pr590/tmpreg_netcdf_fv3_regional> grep "cost,grad,step,b,step? =   2  50" net*/stdout
netcdf_fv3_regional_hiproc_contrl/stdout:cost,grad,step,b,step? =   2  50  2.512410327833037009E+05  5.755566913696180364E+01  1.649679668494160367E+00  9.097536688456433485E-01  good 
netcdf_fv3_regional_hiproc_updat/stdout:cost,grad,step,b,step? =   2  50  2.512410327833037009E+05  5.755566913696180364E+01  1.649679668494160367E+00  9.097536688456433485E-01  good 
netcdf_fv3_regional_loproc_contrl/stdout:cost,grad,step,b,step? =   2  50  2.512410327833037009E+05  5.755566913696180364E+01  1.649679668494160367E+00  9.097536688456433485E-01  good 
netcdf_fv3_regional_loproc_updat/stdout:cost,grad,step,b,step? =   2  50  2.512410327833037009E+05  5.755566913696180364E+01  1.649679668494160367E+00  9.097536688456433485E-01  good 

The updat (PR #590) results are identical between both setups. The contrl (develop branch) results differ. Why should the presence or absence of fix impact gsi.x results?

The ctests copy their fix from the working copy of daviddowellNOAA:GSI_FED. CRTM coefficients come from the same location, /apps/ops/prod/libs/intel/19.1.3.304/crtm/2.4.0/fix, for all runs.

This is odd. I cannot explain what I see.

@hu5970 (Collaborator)

hu5970 commented Aug 29, 2023

I used the same steps to run the regression tests for PR #614:

1/9 Test #8: netcdf_fv3_regional ..............***Failed  482.82 sec
2/9 Test #9: global_enkf ......................   Passed  488.60 sec
3/9 Test #5: hwrf_nmm_d3 ......................***Failed  492.35 sec
4/9 Test #7: rrfs_3denvar_glbens ..............   Passed  605.05 sec
5/9 Test #4: hwrf_nmm_d2 ......................   Passed  605.84 sec
6/9 Test #6: rtma .............................   Passed  1088.96 sec
7/9 Test #3: global_4denvar ...................   Passed  1326.12 sec
8/9 Test #1: global_3dvar .....................   Passed  1442.74 sec
9/9 Test #2: global_4dvar .....................   Passed  1502.08 sec

78% tests passed, 2 tests failed out of 9

Total Test time (real) = 1502.13 sec

The following tests FAILED:
	  5 - hwrf_nmm_d3 (Failed)
	  8 - netcdf_fv3_regional (Failed)

Both netcdf_fv3_regional and hwrf_nmm_d3 reproduce the control results. There are no critical failures in the tests.

The check of the first iteration for netcdf_fv3_regional:

[ming.hu@clogin04 tmpreg_netcdf_fv3_regional] grep 'cost,grad,step,b,step? =   1   0' */stdout
netcdf_fv3_regional_hiproc_contrl/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_hiproc_updat/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_loproc_contrl/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_loproc_updat/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 

@hu5970 (Collaborator)

hu5970 commented Aug 29, 2023

@RussTreadon-NOAA

I thought the fix directory under develop is not needed for the regression tests, which is why I usually run them without populating fix under develop.

But I ran a new test with fix populated via
git submodule init
git submodule update

I assumed this is the same as cloning with --recursive, since fix is the only submodule.

I am now cloning develop with --recursive and will rerun the regression tests. If that produces different results from develop without --recursive, I will compare the two develop copies to see if there are differences we should investigate.
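The claimed equivalence of the two workflows can be checked locally with throwaway repositories (a sketch only; the repository and file names below are invented, and the fix submodule here is a stand-in for the real one):

```shell
# Check locally that `git clone --recursive` and `git submodule init` +
# `git submodule update` populate a submodule identically.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Build a tiny "fix"-like submodule and a superproject that uses it.
git init -q sub
( cd sub && echo data > fixfile && git add fixfile &&
  git -c user.name=t -c user.email=t@t commit -qm add )
git init -q super
( cd super &&
  git -c protocol.file.allow=always submodule add -q ../sub fix &&
  git -c user.name=t -c user.email=t@t commit -qm sub )

# Clone once with --recursive, once plain followed by submodule init/update.
git -c protocol.file.allow=always clone -q --recursive super clone_rec
git clone -q super clone_plain
( cd clone_plain &&
  git -c protocol.file.allow=always submodule update -q --init )

# The submodule contents match either way.
diff clone_rec/fix/fixfile clone_plain/fix/fixfile && echo identical
```

(The `protocol.file.allow=always` setting is only needed because recent git versions block file-protocol submodules by default; real clones over https are unaffected.)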

@hu5970 (Collaborator)

hu5970 commented Aug 29, 2023

@RussTreadon-NOAA I cloned develop:

git clone https://github.com/NOAA-EMC/GSI.git --recursive
mv GSI/ develop
cd develop/ush
./build.sh

The netcdf_fv3_regional test still gives the same results:

[ming.hu@clogin04 tmpreg_netcdf_fv3_regional] grep 'cost,grad,step,b,step? =   1   0' */stdout
netcdf_fv3_regional_hiproc_contrl/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_hiproc_updat/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_loproc_contrl/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 
netcdf_fv3_regional_loproc_updat/stdout:cost,grad,step,b,step? =   1   0  6.384146898293155245E+05  5.374321880615575537E+03  9.679495026521620638E-01  0.000000000000000000E+00  good 

@RussTreadon-NOAA (Contributor)

@hu5970, you've tried very hard to reproduce what I see. You can't do so from either your WCOSS2 account or ems.lam. I see differences between develop and GSI_FED depending on whether the develop fix directory is empty or populated. This does not make sense, but the results are on disk.

I am not the handling reviewer for PR #590, so I cannot approve or merge it. This does not prevent others from approving and merging PR #590.

@hu5970 (Collaborator)

hu5970 commented Aug 30, 2023

@RussTreadon-NOAA Thanks for your hard work ensuring the robustness of the GSI system. I will create a fresh branch and merge David's code into it, in case there is some unknown history in his branch. I will then rerun the tests on the new branch, and perhaps ask an EMC colleague to run the single netcdf_fv3_regional case to confirm. If the new branch and tests look good, I will merge the code.

@RussTreadon-NOAA (Contributor)

Thank you @hu5970

hu5970 added a commit that referenced this issue Sep 8, 2023

**Description**

Initialization of the operational RRFSv1 will include assimilation of
flash-extent density (FED) observations from the GOES Geostationary
Lightning Mapper (GLM). The current PR is the first of at least 3 that
will be needed to introduce the capability of FED assimilation into the
code and regional workflow. The new capabilities that are added to GSI
are:

* reading NetCDF FED observations
* applying an observation operator that maps the model state to FED.

Much of the code was originally developed by Rong Kong at OU-CAPS (Kong
et al. 2020, Wang et al. 2021, Kong et al. 2022;
https://doi.org/10.1175/MWR-D-19-0192.1,
https://doi.org/10.1175/MWR-D-20-0406.1,
https://doi.org/10.1175/MWR-D-21-0326.1). Recently, the observation
operator has been modified by Amanda Back and Ashley Sebok based on
tests with regional, convection-allowing FV3 forecasts. The new
observation operator includes a cap of 8 flashes / minute for both the
observed and simulated FED.

The observation operator is specific to the 3-km regional FV3
application in RRFS. Development of a more general observation operator
is left to future work.
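The capping step described above can be sketched as follows (an illustration only; the actual operator is implemented in the GSI Fortran code, and these sample FED values are invented):

```shell
# Apply the 8 flashes/min cap described above to some invented FED values;
# values above the cap are truncated to 8.0.
printf '%s\n' 3.2 9.5 8.0 12.1 |
  awk '{ v = ($1 > 8.0) ? 8.0 : $1; printf "%.1f\n", v }'
```

This prints 3.2, 8.0, 8.0, 8.0: values at or below the cap pass through unchanged, while larger values are clipped. In GSI the same cap is applied to both the observed and the simulated FED.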

Fixes #588 


**Type of change**


- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update

**How Has This Been Tested?**

Initial tests were with NOAA-EMC GSI-EnKF code obtained in April 2023
and modified to include the assimilation of FED observations. A
prototype of RRFSv1 was cycled hourly for 2.5 days, and the EnKF
assimilation included FED data assimilation.

For the current PR, only the GSI observer with FED (and radar
reflectivity) observations was tested. It produces identical results to
those obtained in April 2023.

  
**Checklist**

- [ ] My code follows the style guidelines of this project
- [X] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] New and existing tests pass with my changes
- [ ] Any dependent changes have been merged and published

**DUE DATE for this PR is 8/24/2023.** If this PR is not merged into
`develop` by this date, the PR will be closed and returned to the
developer.

---------

Co-authored-by: Ming Hu <[email protected]>