Short Range Weather App Tutorial:
Simulating the August 10, 2020 Derecho

Prepared by Rowin Smith

Overview

This tutorial provides a structured walkthrough for simulating the August 10, 2020 derecho using the Unified Forecast System (UFS) Short Range Weather Application (SRW App) across multiple physics suites and grid scales. The test case chosen for the development of this tutorial is based on Gallus Jr. and Harrold (2023). After completing this tutorial, users will be able to:
    • Install and build the SRW App
    • Configure SRW App test cases
    • Use user-downloaded data in the SRW App
    • Run the SRW App workflow across multiple test cases
    • Plot and evaluate model output

The Unified Forecast System

The UFS is a suite of community-based earth system models created with the goal of simplifying and improving NOAA’s operational weather models. The common framework shared between the models allows for easier coupling and the ability to streamline the Research to Operations pipeline. Operational successes of the UFS include the Hurricane Analysis and Forecast System (HAFS), NOAA’s operational hurricane forecasting model.

The SRW App

The SRW App is a weather model within the UFS that is meant for making short-range forecasts (hours to a few days) at regional scales (CONUS or smaller). Its temporal and spatial range makes it particularly well suited to forecasting convective events such as severe thunderstorms and heavy rainfall. In the latest version, the SRW App also gained the ability to make smoke and air quality forecasts.

Derechos

A derecho is “a widespread, severe windstorm characterized by a family of destructive downbursts containing multiple 75+ mph gusts associated with an extratropical, cold-pool driven Mesoscale Convective System” (Corfidi et al., 2025). Progressive derechos are storms that form along stationary fronts in weakly-forced environments and progress through storm propagation (Corfidi et al., 2025). They start with the formation of a pool of cold air created by a cluster of storms. This cold pool causes warm air on its leading edge to rise, which in turn produces more storms and further intensifies the cold pool. This positive feedback loop can produce exceptionally intense storms. The August 10, 2020 storm is a classic example of this type of derecho.
Fig1. A radar loop of the August 10, 2020 derecho moving through Iowa
The August 10, 2020 derecho was a storm of incredible strength, with a maximum estimated wind speed of 140 mph that caused extreme structural damage to an apartment complex in Cedar Rapids, Iowa. These wind speeds are comparable to those of a Category 4 hurricane. In addition to the widespread damage to homes and infrastructure, the storm also destroyed immense swaths of corn and soybean fields. In total, the storm caused $11.2 billion in damage, making it the costliest severe thunderstorm event on record (Smith, 2020). What makes derechos particularly interesting to study is their extreme unpredictability. For example, the Storm Prediction Center’s 0600Z outlook in the early morning hours of August 10 indicated only a marginal (1/5) risk of severe weather across the most affected areas of central and eastern Iowa. This was upgraded to a moderate (4/5) risk by the 1630Z outlook, issued while the storm was already ongoing. While the model configuration in this tutorial simulates the event with reasonable accuracy, higher resolution models (typically favored for short-term weather prediction) failed to accurately forecast the event. This inaccuracy is evident from the 3 km model run shown below, which depicts the storm’s greatest intensity over central Illinois and Missouri rather than Iowa.
Fig2. The RRFS_v1beta forecast loop erroneously shows an MCS with peak intensity over central Illinois and Missouri
For more information on how and why different models resolved this event the way they did, users are advised to review the research article this tutorial is based on, Gallus Jr. and Harrold (2023), referenced above.

Relevant Documentation:

Part 1: Prerequisites

1.1: Expected Knowledge

This tutorial expects users to have familiarity with Unix/Linux commands. Users should also have at least basic knowledge of meteorology so they can interpret model output.

1.2: Appropriate HPC Systems

This tutorial presumes users have an account with Mississippi State University High Performance Computing (MSU-HPC) to access the Hercules and Orion systems. Users who do not have access to these systems should contact support.epic@noaa.gov to get started. Either Orion or Hercules will work for this tutorial, as they share common working directories.

1.3: Necessary Software

To access the HPC, users will need a Linux/Unix terminal and a way to transfer files. Solutions are described below by operating system:

On Windows: Install Windows Subsystem for Linux (WSL) to set up a Linux distribution as an app. PuTTY serves a similar purpose and may be a better option for NOAA users.
Users are advised to download and install WinSCP, which makes file transfer more user-friendly than command-line tools.

On a Mac or Linux device:
macOS’s terminal is Unix-based, so users can use their device’s built-in terminal to access the HPC network without downloading additional software.

1.4: Connecting to the MSU-HPC Network

Login to the MSU-HPC network using the following command:

				
					ssh [username]@[machinename]-login-1.hpc.msstate.edu
				
			

Note: bracketed segments throughout the tutorial indicate user-specific information and should not be typed literally. [machinename] can be either orion or hercules. Below is an example of the above login command using sample information:

				
					ssh alicep@hercules-login-1.hpc.msstate.edu
				
			

The above command connects to the login-1 node, which allows the user to use cron to automate the submission of workflow tasks to the compute nodes.

From here, follow the prompts to enter your password and complete two-factor authentication.

Part 2: Building the UFS SRW Application

2.1: Preparing for Installation

Before starting, check that there is enough storage available to install and run the UFS SRW App. Run the following command to check the status of the EPIC storage allocation:

				
					/apps/bin/reportFSUsage -f /work2/noaa -p epic
				
			

Users need at least 150GB of free space to install the UFS SRW application. If the allocation is full, wait a day or two and space will likely free up. If storage issues do not resolve themselves within a few days, contact support.epic@noaa.gov for more information.

Next, create a directory to store the UFS SRW App. Create a directory under your username within the /work2/noaa/epic directory, then create a directory underneath that for the SRW App.
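For example, a minimal sketch of this step (the directory name srw is only a suggestion, and [username] should be replaced with your own username):

mkdir -p /work2/noaa/epic/[username]/srw
cd /work2/noaa/epic/[username]/srw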

2.2: Aliasing and Exporting

2.2.1: Aliasing

Aliasing is an incredibly useful quality-of-life feature that allows users to create shorthand for frequently used commands. Use of aliasing is highly recommended, although it is optional for this tutorial. To create aliases, navigate to your home directory by using cd with no arguments (just enter cd), then open the .bashrc file. The format for aliasing commands is:
				
					alias [alias name]='[command]'
				
			

As a first step, create an alias that shortcuts navigating to the working directory created in the previous step, as in the example below. After adding a new alias, run source .bashrc from the home directory to make it usable in the current session. For future login sessions, the update will take effect automatically.
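For example, an alias along the following lines could be added to .bashrc (the alias name and path are only illustrative):

alias work='cd /work2/noaa/epic/[username]/srw'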

It is recommended to create aliases to navigate to frequently accessed directories throughout the tutorial. It is additionally recommended to alias commands used repeatedly, such as rocoto commands.

2.2.2 Exporting

Exporting is another useful tool for Linux/Unix users. This feature allows users to create environment variables, which are useful for representing file paths in a shorter form. This tutorial will use environment variables to represent paths in a more concise manner, although they are not absolutely necessary.

Unlike aliases, environment variables only last for the current terminal session and will have to be recreated when the terminal is closed.

To create an environment variable, use the following command:

				
					export [VARIABLE_NAME]=[value]
				
			

Users should start by exporting the path to the directory created in step 2.1 to an environment variable. Call the variable WORK. This variable will be used to represent that directory throughout this tutorial. To test that the environment variable works, use the following command:

				
					echo $WORK
				
			

This should print the path to the directory.
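For reference, the full sequence might look like the following, assuming the directory suggested in step 2.1 (substitute the path you actually created):

export WORK=/work2/noaa/epic/[username]/srw
echo $WORK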

2.3: Building

Once you have created your work directory and navigated to it, it’s time to clone the UFS SRW App source code from the UFS community GitHub repository. Run the following command from $WORK:

				
					git clone -b release/public-v3.0.0 https://github.com/ufs-community/ufs-srweather-app.git
				
			

This will clone the latest public release version, SRW App v3.0.0, which this tutorial assumes you are using. Navigate to the newly created $WORK/ufs-srweather-app directory.

Next, check out the external dependencies, which are necessary utilities to run the UFS SRW App, including post-processing and plotting. Run the following command from the $WORK/ufs-srweather-app directory:

				
					./manage_externals/checkout_externals
				
			

This will pull in all of the external dependencies and ensure that your working copy matches the versions specified in $WORK/ufs-srweather-app/Externals.cfg, which also contains the complete list of externals. This step will likely take a few minutes to complete, during which the command prompt will be unavailable.

To build the executables, run the following command from the $WORK/ufs-srweather-app directory:

				
					./devbuild.sh --platform=[orion OR hercules]


				
			
Building the SRW App will take a few minutes. Once the build completes, you will see ‘[100%] Built target ufs-weather-model’ in the terminal and will be returned to the command prompt.

Part 3: Preparing to Run the SRW

3.1: Setting up the Environment

First, enter the Conda environment. This must be done whenever you start a new login session. To do so, use the following commands:

				
					#sets the module source (machine-dependent)
source [path to ufs-srweather-app]/etc/lmod-setup.sh [orion OR hercules]
#selects the set of modulefiles from the ufs app
module use [path to ufs-srweather-app]/modulefiles
#loads the necessary module files for the workflow (machine-dependent)
module load wflow_[orion or hercules]
#activates a conda path with the (srw_app) prefix 
conda activate srw_app

				
			

3.2: Setting Default Configs

The main config file for the UFS SRW App is config.yaml, located under $WORK/ufs-srweather-app/ush/. To set up the config file, navigate to the $WORK/ufs-srweather-app/ush directory and copy the community config to config.yaml:

				
					cp config.community.yaml config.yaml
				
			

Part 4: Running the Test Cases

4.1: Overview

This tutorial uses two distinct physics suites, the GFSv16 and the RRFS_v1beta, at two different grid scales, 13km and 25km, which makes a total of four test cases. The GFSv16 suite uses the physics of NOAA’s operational global forecasting model, the Global Forecast System (GFS). The RRFS_v1beta uses the physics of the Rapid Refresh Forecast System (RRFS), NOAA’s upcoming operational short-range model, which is set to replace the High Resolution Rapid Refresh (HRRR), Rapid Refresh (RAP), and North American Mesoscale (NAM) models once it enters operational usage.

Test cases:

  • GFSv16 – 25km grid scale
  • GFSv16 – 13km grid scale
  • RRFS_v1beta – 25km grid scale
  • RRFS_v1beta – 13km grid scale

Each of the above cases should take 30-90 minutes to complete. The first case will require continuous monitoring and manual task submission with rocoto, but the remaining cases will use cron and can be left to run on their own.

4.2: GFSv16 – 25km Grid Scale

4.2.1: Configuring the Case

Open ufs-srweather-app/ush/config.yaml.

Modify config.yaml as shown below to apply the new configuration settings. Only settings that should be changed or added are listed. Settings not already present in the configuration file must be added under the appropriate section header, as listed below. The rocoto: tasks: section will also require creating a new taskgroups header, as shown below.

				
user:
  #specifies the type of machine the SRW is running on
  MACHINE: [ORION or HERCULES]
  #specifies the account tasks will be assigned to
  ACCOUNT: epic

				
			
				
workflow:
  #name of the experiment sub-directory in expt_dirs
  EXPT_SUBDIR: GFS_v16_25km
  #the physics suite used for the forecast
  CCPP_PHYS_SUITE: FV3_GFS_v16
  #name of the grid the model uses for the forecast
  #the CONUScompact is used as it is the correct scale for HRRR input data
  PREDEF_GRID_NAME: RRFS_CONUScompact_25km
  #dates of the first and last days the forecast is run for
  #format is: '[YYYY][MM][DD][HH]'
  DATE_FIRST_CYCL: '2020081000'
  DATE_LAST_CYCL: '2020081000'
  #total forecast runtime
  FCST_LEN_HRS: 24
				
			
				
task_get_extrn_ics:
  #name of the model providing initial conditions
  EXTRN_MDL_NAME_ICS: HRRR
  #flag for using user-provided ics
  USE_USER_STAGED_EXTRN_FILES: true
  #path to directory where ics are stored
  EXTRN_MDL_SOURCE_BASEDIR_ICS: /work2/noaa/epic/rowins-input
  #name format for ics, copy exactly as written
  EXTRN_MDL_FILES_ICS: '{yy}{jjj}{hh}00{fcst_hr:02d}00'
				
			
				
task_get_extrn_lbcs:
  #name of the model providing boundary conditions
  EXTRN_MDL_NAME_LBCS: HRRR
  #flag for using user-provided lbcs
  USE_USER_STAGED_EXTRN_FILES: true
  #path to directory where lbcs are stored
  EXTRN_MDL_SOURCE_BASEDIR_LBCS: /work2/noaa/epic/rowins-input
  #name format for lbcs, copy exactly as written
  EXTRN_MDL_FILES_LBCS: '{yy}{jjj}{hh}00{fcst_hr:02d}00'
				
			
				
task_plot_allvars:
  #increment of time to plot (in hrs)
  PLOT_FCST_INC: 1
  #forecast start time (in hrs)
  #default is zero, which just plots initial conditions, so set it to 1
  PLOT_FCST_START: 1
  #domain to plot the data over (regional or conus)
  PLOT_DOMAINS: ["conus"]
				
			
				
rocoto:
  tasks:
    #specifies rocoto tasks to run
    taskgroups: '{{ ["parm/wflow/prep.yaml", "parm/wflow/coldstart.yaml", "parm/wflow/post.yaml", "parm/wflow/plot.yaml"]|include }}'
				
			

If you’ve done everything correctly, the config.yaml file should look like:

				
metadata:
  description: >-
    Sample community config
user:
  RUN_ENVIR: community
  MACHINE: HERCULES
  ACCOUNT: epic
workflow:
  USE_CRON_TO_RELAUNCH: false
  EXPT_SUBDIR: "GFS_v16_25km"
  CCPP_PHYS_SUITE: FV3_GFS_v16
  PREDEF_GRID_NAME: RRFS_CONUScompact_25km
  DATE_FIRST_CYCL: '2020081000'
  DATE_LAST_CYCL: '2020081000'
  FCST_LEN_HRS: 24
  PREEXISTING_DIR_METHOD: rename
  VERBOSE: true
  COMPILER: intel
task_get_extrn_ics:
  EXTRN_MDL_NAME_ICS: HRRR
  FV3GFS_FILE_FMT_ICS: grib2
  USE_USER_STAGED_EXTRN_FILES: true
  EXTRN_MDL_SOURCE_BASEDIR_ICS: /work2/noaa/epic/rowins/derecho_tutorial_test/input_data
  EXTRN_MDL_FILES_ICS: '{yy}{jjj}{hh}00{fcst_hr:02d}00'
task_get_extrn_lbcs:
  EXTRN_MDL_NAME_LBCS: HRRR
  LBC_SPEC_INTVL_HRS: 6
  FV3GFS_FILE_FMT_LBCS: grib2
  USE_USER_STAGED_EXTRN_FILES: true
  EXTRN_MDL_SOURCE_BASEDIR_LBCS: /work2/noaa/epic/rowins/derecho_tutorial_test/input_data
  EXTRN_MDL_FILES_LBCS: '{yy}{jjj}{hh}00{fcst_hr:02d}00'
task_run_fcst:
  QUILTING: true
task_plot_allvars:
  COMOUT_REF: ""
  PLOT_FCST_INC: 1
  PLOT_FCST_START: 1
  PLOT_DOMAINS: ["conus"]
global:
  DO_ENSEMBLE: false
  NUM_ENS_MEMBERS: 2
rocoto:
  tasks:
    metatask_run_ensemble:
      task_run_fcst_mem#mem#:
        walltime: 02:00:00
    taskgroups: '{{ ["parm/wflow/prep.yaml", "parm/wflow/coldstart.yaml", "parm/wflow/post.yaml", "parm/wflow/plot.yaml"]|include }}'

				
			

As an additional step, enable reflectivity plotting in this GFSv16 test case, as composite reflectivity is a useful output for comparing the model runs to each other and to observations. To do this, navigate to ufs-srweather-app/parm and open the FV3.input.yml file. Change line #508, under the FV3_GFS_v15p2 header, from lradar: null to lradar: true.

4.2.2: Running the Test Case

Now that the model configuration is set, users can begin running the model. First, generate the workflow using the following command:

				
					./generate_FV3LAM_wflow.py
				
			

This will initialize the weather model and create an experiment directory at $WORK/expt_dirs/GFS_v16_25km, at the same level as ufs-srweather-app. Navigate to the experiment directory.

The experiment workflow consists of the following task groups:

  • make_grid
    • Creates the grid used by the model
  • make_orog
    • Creates the orography file (height map) for the model
  • make_sfc_climo
    • Generates surface climatology files
  • get_extrn_ics
    • Retrieves the initial conditions from the input files
  • get_extrn_lbcs
    • Retrieves the boundary conditions from the input files
  • make_ics
    • Creates the initial conditions from the retrieved data
  • make_lbcs
    • Creates the boundary conditions from the retrieved data
  • run_fcst
    • The main model run which performs physics simulations to generate output
  • run_post
    • Post-processes the raw model output into GRIB2 files using the Unified Post Processor (UPP)
  • plot_allvars
    • Plots data from the GRIB2 output files to create output images
    • Split into several individual tasks, one for each forecast hour

 

The rocotorun command described below is used to run the tasks in the experiment workflow. This step will be automated in later test cases, but for the first case, users will submit the tasks manually to get a feel for how the workflow operates.

				
					# explanation of tags:
# -w: workflow xml file, created by generate_FV3LAM_wflow.py with test-case information
# -d: workflow database file, created by rocoto to store runtime information
# -v: level of detail in output information - max is 10
# -t: specifies a single task to run; otherwise, rocotorun will run all available tasks (optional)

#runs all steps in the workflow that have satisfied prerequisites
rocotorun -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10 -t [taskname]
				
			

Users should use rocotorun to run the tasks in the following order, using the -t option to run them one at a time; an example invocation for the first step is shown after the list. Tasks under the same step can be run simultaneously, and tasks from later steps require all tasks in previous steps to be completed. For the run_post and plot_allvars steps, use rocotorun without the -t option so that all of the individual forecast hour tasks run at the same time.

  1. make_grid, get_extrn_ics, get_extrn_lbcs
  2. make_orog
  3. make_sfc_climo
  4. make_ics, make_lbcs
  5. run_fcst
  6. run_post
  7. plot_allvars
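For example, the first step might be submitted as follows (run each command from the experiment directory):

rocotorun -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10 -t make_grid
rocotorun -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10 -t get_extrn_ics
rocotorun -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10 -t get_extrn_lbcs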

 

To see task progress, users should use the rocotostat command described below:

				
					# explanation of tags:
# -w: workflow xml file, created by generate_FV3LAM_wflow.py with test-case information
# -d: workflow database file, created by rocoto to store runtime information
# -v: level of detail in output information - max is 10

#outputs the status of all workflow steps
#if the status seems to not be updating, use rocotorun to update it
rocotostat -w FV3LAM_wflow.xml -d FV3LAM_wflow.db -v 10
				
			
This command will show all tasks, with the available status messages being: SUBMITTING, QUEUED, RUNNING, COMPLETE, and DEAD. SUBMITTING and QUEUED indicate that the task is being sent to the compute nodes. Occasionally tasks can appear to be stuck at this step, but using rocotorun resolves the issue. RUNNING indicates the task is running on the compute nodes. COMPLETE indicates the task has finished running. DEAD indicates the task has crashed.

Use the rocotorun and rocotostat commands to run and monitor the workflow until all tasks are completed. The bulk of the time will be taken up by the make_lbcs, run_fcst, and plot_allvars tasks. The total run time varies, but it generally ranges from 60 to 90 minutes. The workflow will be preserved even after a logout. As a reminder, you must re-run the environment setup commands from section 3.1 each time you log back in. Once all workflow tasks are complete, the output will be ready to retrieve.

4.2.3: Retrieving Output

The output files for the experiment are located at $WORK/expt_dirs/[EXPT_SUBDIR]/2020081000/postprd. The MSU-HPC website shares information about how to transfer data from Linux machines to your local computer. To summarize, either WinSCP or scp on the command line can be used to transfer data; a sample scp command is shown after the figure below. Users are advised to transfer only the “*.png” files to their local device. Advanced users may want to plot their own data in order to examine output fields not plotted by default in the SRW App; these users should download the GRIB2 files and use a tool such as Panoply or pygrib to plot the data manually. As an example, below is the composite reflectivity output for forecast hour 20 from this physics suite and grid scale:
Fig3. Composite reflectivity output for forecast hour 20 generated using the GFS version 16 physics suite at a 25km grid scale. The simulated reflectivity shows intense storms over eastern Iowa.
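For users transferring files with scp, a command along the following lines (run from the local machine, with [path to WORK] replaced by the path to the working directory on the HPC) will copy the plot images into the current local directory:

scp "[username]@hercules-login-1.hpc.msstate.edu:[path to WORK]/expt_dirs/GFS_v16_25km/2020081000/postprd/*.png" .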

4.3: RRFS_v1beta – 25km Grid Scale

4.3.1: Configuring the Case

Once again, navigate to $WORK/ufs-srweather-app/ush and open config.yaml. For this case, create a new EXPT_SUBDIR name, change the physics suite to RRFS_v1beta, and enable automated submission of workflow tasks with cron. Changes are listed below by section.
				
workflow:
  #flag that determines whether cron will be used to automate task submission
  USE_CRON_TO_RELAUNCH: true
  #interval between cron attempts to launch new tasks (in minutes)
  CRON_RELAUNCH_INTVL_MNTS: 3
  EXPT_SUBDIR: RRFS_v1beta_25km
  CCPP_PHYS_SUITE: FV3_RRFS_v1beta

				
			

4.3.2: Running the Case

Use ./generate_FV3LAM_wflow.py from $WORK/ufs-srweather-app/ush to generate the workflow. With cron enabled, the SRW App is effectively set-and-forget: the rocotorun command is no longer needed to submit additional tasks. It is still good practice to use the rocotostat command periodically to confirm the workflow is running as it should. Users are safe to log out at this point; cron will continue to run the workflow automatically while they are offline.
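Because USE_CRON_TO_RELAUNCH is enabled, the workflow generation step should have added a relaunch entry to your crontab. To confirm it is there, list your cron entries:

crontab -l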

4.3.3: Retrieving Output

Use the same method (WinSCP/scp) as in the previous case to retrieve output. Now that you have output from multiple test cases, it’s a great time to start comparing results. The radar data will be especially helpful here, as comparing the composite reflectivity output to the observed radar imagery gives a decent idea of how well the model performs. Other fields, like 10 m winds, QPF, and temperature, can also be used to understand how well the model forecasts this event. Compare the different model runs and evaluate which one you think performs best. Below is composite reflectivity output for forecast hour 20 generated using the same input data and grid scale:
Fig4. Composite reflectivity output for forecast hour 20 generated using the RRFS version 1 beta physics suite at a 25km grid scale. The simulated reflectivity shows intense storms over eastern Iowa.

4.4: GFSv16 – 13km Grid Scale

4.4.1: Configuring the Case

Make the following changes to config.yaml:
				
EXPT_SUBDIR: GFS_v16_13km
CCPP_PHYS_SUITE: FV3_GFS_v16
PREDEF_GRID_NAME: RRFS_CONUScompact_13km
				
			

The case is now set up to run using GFS physics, this time at a higher resolution, meaning it will require more processing power. While you can run this case with the settings above, there’s a more efficient way to do it, especially considering the limited area of interest.

To offset the processing power increase, create a smaller regional grid centered on Iowa and use that instead of the CONUScompact grid listed above.

4.4.2: Creating a Regional Grid

First, add the grid to the list of valid parameters. To do this, open $WORK/ufs-srweather-app/ush/valid_param_vals.yaml. Under the section valid_vals_PREDEF_GRID_NAME, add SUBCONUS_IA_13km to the comma-separated list.
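After the edit, the tail of that list might look something like the following (existing entries are abbreviated here with an ellipsis, and their exact order may differ):

valid_vals_PREDEF_GRID_NAME: [ ..., "RRFS_CONUScompact_13km", "RRFS_CONUScompact_25km", "SUBCONUS_IA_13km" ]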

Next, create the configuration for the grid. Open $WORK/ufs-srweather-app/ush/predef_grid_params.yaml. At the bottom of the file, create a new header with the grid name:
				
					"SUBCONUS_IA_13km":


				
			

Under the header, add the configuration settings for the grid. First, specify the grid generation method:

				
					"SUBCONUS_IA_13km":
        GRID_GEN_METHOD: "ESGgrid"
				
			

The Extended Schmidt Gnomonic grid (ESGgrid) is the default grid type for limited-area Finite-Volume Cubed-Sphere (FV3) based models like the SRW App.

Next, define the grid size and shape:

				
					"SUBCONUS_IA_13km":
        GRID_GEN_METHOD: "ESGgrid"
        ESGgrid_LON_CTR: -93.5
        ESGgrid_LAT_CTR: 42.0
        ESGgrid_DELX: 13000
        ESGgrid_DELY: 13000
        ESGgrid_NX: 160
        ESGgrid_NY: 120
        ESGgrid_PAZI: 0.0
        ESGgrid_WIDE_HALO_WIDTH: 6
				
			

ESGgrid_[LON/LAT]_CTR defines the center location of the grid in latitude and longitude coordinates, which in this case is a point in central Iowa.

ESGgrid_DEL[X/Y] defines the size of each grid cell in meters, which is 13km for this grid.

ESGgrid_N[X/Y] defines the size of the grid by the number of grid cells. This grid is 2080km east-west (13km/cell * 160 cells) and 1560km north-south (13km/cell * 120 cells).

ESGgrid_PAZI defines the rotational offset of the grid. This grid has no rotational offset.

ESGgrid_WIDE_HALO_WIDTH defines the area around the grid (in cells) used for boundary condition input. This grid uses the default halo width, which is 6.

The next step is defining some of the computational parameters:

				
					"SUBCONUS_IA_13km":
	    GRID_GEN_METHOD: "ESGgrid"
	    ESGgrid_LON_CTR: -93.5
	    ESGgrid_LAT_CTR: 42.0
	    ESGgrid_DELX: 13000
	    ESGgrid_DELY: 13000
	    ESGgrid_NX: 160
	    ESGgrid_NY: 120
	    ESGgrid_PAZI: 0.0
	    ESGgrid_WIDE_HALO_WIDTH: 6
	    DT_ATMOS: 50
	    LAYOUT_X: 4
	    LAYOUT_Y: 2
	    BLOCKSIZE: 40
				
			

DT_ATMOS defines the main time step of the model in seconds. This value needs to be set correctly based on the grid resolution, with higher resolution grids requiring shorter time steps. For this grid, the time step should be 50 seconds.

LAYOUT_[X/Y] defines the number of processes the grid is divided into in each direction. Values that are too high can cause errors where the model attempts to sub-divide grid cells, and values that are too low mean the model runs inefficiently. The values above work well for this grid size.

BLOCKSIZE is a machine-dependent parameter that defines the amount of data that is passed into the cache at a time. For Hercules, the correct value is 40 [add Orion value]. For other machines, check what other grids of similar resolution use, and copy that value.

The Unified Post Processor (UPP) uses a Lambert Conformal grid which slightly differs from the grid produced by the UFS. The last step is defining the parameters for this second grid under the “QUILTING”: heading:

				
					"SUBCONUS_IA_13km":
        GRID_GEN_METHOD: "ESGgrid"
        ESGgrid_LON_CTR: -93.5
        ESGgrid_LAT_CTR: 42.0
        ESGgrid_DELX: 13000
        ESGgrid_DELY: 13000
        ESGgrid_NX: 160
        ESGgrid_NY: 120
        ESGgrid_PAZI: 0.0
        ESGgrid_WIDE_HALO_WIDTH: 6
        DT_ATMOS: 50
        LAYOUT_X: 4
        LAYOUT_Y: 2
        BLOCKSIZE: 40
        QUILTING:
            WRTCMP_write_groups: 1
            WRTCMP_write_tasks_per_group: 6
            WRTCMP_output_grid: "lambert_conformal"
            WRTCMP_cen_lon: -93.5
            WRTCMP_cen_lat: 42.0
            WRTCMP_stdlat1: 42.0
            WRTCMP_stdlat2: 42.0
            WRTCMP_nx: 154
            WRTCMP_ny: 114
            WRTCMP_dx: 13000
            WRTCMP_dy: 13000
            WRTCMP_lon_lwr_left: -104.54
            WRTCMP_lat_lwr_left: 35.33
				
			

WRTCMP_write_groups defines the number of process groups to use for the write component. Setting this to 1 works for most model configurations.

WRTCMP_write_tasks_per_group defines the number of processes per process group. As with LAYOUT_[X/Y], increasing this value can improve computational efficiency, but setting it too high can lead to errors when the model attempts to divide individual grid cells.

WRTCMP_output_grid defines the grid to interpolate to, which as stated above, needs to be Lambert Conformal for UPP.

WRTCMP_cen_[lon/lat] define the center of the new grid, which should be the same as your original grid.

WRTCMP_stdlat[1/2] define the standard latitudes for the new grid, values that affect the map projection. Setting both to the central latitude works fine.

WRTCMP_n[x/y] defines the grid dimension in grid cells. Set this to ~95% of the value you used for the original grid.

WRTCMP_d[x/y] defines the grid cell size; set this to the same value as your original grid.

WRTCMP_[lon/lat]_lwr_left define the lower left corner of the new grid and need to be calculated for each grid. For smaller and higher resolution grids, it’s necessary to define the corner more precisely, but the formulas below generate a good-enough approximation for this grid scale.
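One way to write these formulas (an approximation consistent with the corner values used for this grid) is:

WRTCMP_lat_lwr_left = WRTCMP_cen_lat - (WRTCMP_ny / 2 * WRTCMP_dy) * (1 / 111111)
WRTCMP_lon_lwr_left = WRTCMP_cen_lon - (WRTCMP_nx / 2 * WRTCMP_dx) * (1 / (111111 * cos(WRTCMP_lat_lwr_left)))

Plugging in the values above: 42.0 - (57 * 13000) / 111111 ≈ 35.33 and -93.5 - (77 * 13000) / (111111 * cos(35.33°)) ≈ -104.54, matching the corner coordinates in the configuration.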

These formulas essentially find the distance from the center to the lower left corner of the grid in each direction in meters, then convert those distances to degrees to find the latitude and longitude coordinates.

1/111111 is used as a rough conversion factor for degrees of latitude per meter. To calculate degrees of longitude per meter, it’s necessary to incorporate the cosine of the lower left latitude (in degrees), as the ratio depends on latitude.

With that, you’re done setting up the grid. All you need to do before running the case is go to $WORK/ufs-srweather-app/ush/config.yaml and change a couple of items:

				
PREDEF_GRID_NAME: "SUBCONUS_IA_13km"
PLOT_DOMAINS: ["regional"]
				
			

4.4.3: Running the Case

Running the case works the same as the previous cases. This case may take more time to run due to the higher grid resolution. It still should not take much longer than 60-90 minutes.

4.4.4: Retrieving Output

Use your preferred method to retrieve the output, and compare to previous cases and observed data.

4.5: RRFS_v1beta – 13km Grid Scale

Change the physics suite to FV3_RRFS_v1beta in config.yaml, change the EXPT_SUBDIR name, and rerun the test case, again on the user-created grid, as sketched below. Let the model run, then retrieve and compare output.
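For example, the relevant config.yaml entries might look like the following (the EXPT_SUBDIR name is only a suggestion that follows the earlier naming pattern):

EXPT_SUBDIR: RRFS_v1beta_13km
CCPP_PHYS_SUITE: FV3_RRFS_v1beta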

Part 5: Objectives Addressed

You’ve completed the tutorial! Take your time evaluating the results, and consider looking at the paper referenced at the top of the document to compare your results to theirs. When you’re finished with your work, please complete this short survey to give us your feedback! Your answers will not just improve this tutorial; they’ll also inform future UFS development and outreach efforts. In this tutorial, you’ve learned to:
  • Install and build the UFS SRW application
  • Configure UFS SRW application test cases
  • Use user-downloaded data in the UFS SRW application
  • Run the UFS SRW application workflow across multiple test cases
  • Plot and evaluate model run results
With these skills, you should be able to run your own test cases in the UFS SRW application. Congratulations, and have fun forecasting!

Acknowledgement

This tutorial was developed under the mentorship of Alison Gregory, John Ten Hoeve, and Jose Henrique-Alves, who were an invaluable source of support. Thanks to Priya Pillai and Adam Clark for providing expertise from technical and meteorological perspectives, respectively. Thanks as well to Jen Vogt, Natalie Boston, and Carolyn Emerson for reviewing the tutorial and providing feedback. This test case is based on research performed in Gallus Jr. and Harrold (2023). Thanks to the authors for providing the HRRR input data for this test case.