This page details exceptions to the step-by-step grid setup instructions when using the finite volume (FV) physics grid (a.k.a. "physgrid", pg2, pg3). When generating a new model grid the workflow presented in the step-by-step guide on the parent page is still relevant, but there are several deviations from those steps, which are documented here. If the physgrid becomes the default configuration for E3SM, these notes will likely be merged into the parent page.

Physgrid Description

The FV physics grid (physgrid) is constructed by dividing each quadrilateral element of the cubed-sphere grid into equal subdivisions, typically 2x2 (pg2) or 3x3 (pg3). This differs from the spectral element method of the GLL grid, in which points within the cubed-sphere elements represent nodes of continuous polynomial basis functions. When running the model with the physgrid, the dynamics calculations (e.g. advection) are still solved on the GLL grid, but the state is mapped to the physgrid for the physics calculations (e.g. clouds and radiation), and the physics tendencies are then mapped back to the GLL grid. Model output is mainly on the physgrid, but certain quantities can also be output on the GLL grid for specialized diagnostics.
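In practice, a physgrid run is selected through the grid alias used when creating a case. A minimal sketch, assuming an E3SM checkout where an "ne30pg2_ne30pg2" alias and the chosen compset are available (both names are illustrative and may differ in your checkout; check config_grids.xml for the aliases in your E3SM version):

${e3sm_root}/cime/scripts/create_newcase --case test_ne30pg2 --res ne30pg2_ne30pg2 --compset F2010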

Generating New Grid Files

TempestRemap is still used to generate the new grid "exodus" file, but an extra step is needed when using the physgrid. For a typical GLL grid the exodus mesh file defines only the corners of the elements, without any information about the internal node structure defined by the basis functions. For example, if we generate an exodus file for the "ne30" grid we get a cubed-sphere grid where each cube face has been divided into 30x30 elements. If we wished to use this file for mapping between a GLL grid and some other grid with TempestRemap, we would need to indicate the number of nodes that define the basis functions within each element (typically 4x4, or "np4").

To adapt this to the physgrid, we need an additional command to subdivide these elements into the desired number of FV cells. Similarly, when mapping between grids we need a special flag to indicate to TempestRemap that these "elements" are actually finite volume cells (see the mapping section below).

So if we want an ne30pg3 file, and we initially created an ne30 exodus file with this command:

${tempest_root}/bin/GenerateCSMesh --alt --res 30 --file ${output_root}/ne30.g

Then we would simply use the following command to evenly subdivide each element into 3x3=9 cells:

${tempest_root}/bin/GenerateVolumetricMesh --in ${output_root}/ne30.g --out ${output_root}/ne30pg3.g --np 3 --uniform
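The same pattern applies for a 2x2 subdivision; assuming we instead want an ne30pg2 grid, only the --np argument and the output file name change:

${tempest_root}/bin/GenerateVolumetricMesh --in ${output_root}/ne30.g --out ${output_root}/ne30pg2.g --np 2 --uniform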


Generating a Scrip File

For mapping GLL grids without TempestRemap (or for plotting GLL grid data) we need to "re-interpret" the representation of data on the GLL grid as a finite volume grid. Step #2 in the step-by-step guide discusses how to do this with the "dual grid" approach. It's worth noting that this is not a visually accurate representation of the data, but the areas of the FV cells produced by the dual grid are consistent with the GLL weights, so spatial sums and averages can be computed in an intuitive way. Another caveat of the dual grid method is that generating the scrip grid description file requires an iterative process that can take a very long time for large grids.

An advantage of the physgrid is that we don't have to worry about any of this, because the data is naturally represented by finite volumes. A simple TempestRemap command quickly converts from the exodus to the scrip file type:

${tempest_root}/bin/ConvertExodusToSCRIP --in ne30pg3.g --out ne30pg3_scrip.nc

The resulting scrip file can be used for mapping with ESMF or plotting physgrid data on the native grid. 
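As a usage sketch, the scrip file could be fed to ESMF's offline weight generation tool to build a conservative map (the destination grid file and output map name here are placeholders):

ESMF_RegridWeightGen --source ne30pg3_scrip.nc --destination ${dst_scrip_file} --method conserve --weight map_ne30pg3_to_dst.nc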


Mapping Files

First, generate an overlap mesh between the source grid and the physgrid:

${tempest_root}/bin/GenerateOverlapMesh --a ${source_mesh_file} --b ${output_root}/ne30pg3.g --out overlap_mesh.nc

Then generate the mapping weights. Because physgrid cells are true finite volumes, the physgrid is specified with type "fv" and np 1 (this is the special flag mentioned above); in this example the source grid is also treated as finite volume:

${tempest_root}/bin/GenerateOfflineMap --in_mesh ${source_mesh_file} --out_mesh ${output_root}/ne30pg3.g --ov_mesh overlap_mesh.nc --out_map ${output_map_file} --in_type fv --in_np 1 --out_type fv --out_np 1 --out_double --mono --volumetric

The resulting map file can be used for generating domain files (see below) and for remapping data between the grids.
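For example, the map can be applied to a data file with ncremap, mirroring the interpolation example at the bottom of this page (file names are placeholders):

ncremap -m ${output_map_file} ${input_file_name} ${output_file_name}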

Domain Files

There is no change in the procedure for generating domain files, except that the physgrid grid and mapping files described above must be used. This is because the model communicates with the coupler on the physics grid, and the coupler is primarily where the domain files are needed.
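As a rough sketch, the CIME domain generation tool would be pointed at the physgrid-based map file (the tool path is typical of an E3SM checkout; the map file and grid names are placeholders):

${e3sm_root}/cime/tools/mapping/gen_domain_files/gen_domain -m ${output_map_file} -o ${ocn_grid_name} -l ne30pg3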


Topography

???


Regional Refinement

Regionally refined grids require no special steps when using the physgrid, because the regional refinement happens at the element level, and the physgrid only changes the arrangement of physics columns within each element.
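For instance, the same GenerateVolumetricMesh command shown above can be applied unchanged to a regionally refined exodus file (the RRM file name here is just an illustrative placeholder):

${tempest_root}/bin/GenerateVolumetricMesh --in ${output_root}/conusx4v1.g --out ${output_root}/conusx4v1pg2.g --np 2 --uniform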


Steps to Interpolate Between Grids

There's nothing special about these steps; this is just a basic example in case someone needs to convert data between grids for comparison purposes.

In this particular example I'm interpolating from ne30np4 to ne30pg3. 

${tempest_root}/bin/GenerateOverlapMesh --a ${grid_data_root}/ne30.g --b ${grid_data_root}/ne30pg3.g --out ${grid_data_root}/tmp_overlap_mesh.nc

${tempest_root}/bin/GenerateOfflineMap --in_mesh ${grid_data_root}/ne30.g --out_mesh ${grid_data_root}/ne30pg3.g --ov_mesh ${grid_data_root}/tmp_overlap_mesh.nc --in_type cgll --in_np 4 --out_type fv --out_np 1 --out_double --mono --out_map ${grid_data_root}/tmp_mapping_weights.nc

ncremap -4 -m ${grid_data_root}/tmp_mapping_weights.nc ${input_file_name} ${output_file_name}
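Going the other direction (physgrid back to the GLL grid) should follow the same pattern with the grid types swapped; a minimal sketch, assuming TempestRemap's "cgll" output type is appropriate for the target data:

${tempest_root}/bin/GenerateOverlapMesh --a ${grid_data_root}/ne30pg3.g --b ${grid_data_root}/ne30.g --out ${grid_data_root}/tmp_overlap_mesh2.nc

${tempest_root}/bin/GenerateOfflineMap --in_mesh ${grid_data_root}/ne30pg3.g --out_mesh ${grid_data_root}/ne30.g --ov_mesh ${grid_data_root}/tmp_overlap_mesh2.nc --in_type fv --in_np 1 --out_type cgll --out_np 4 --out_double --mono --out_map ${grid_data_root}/tmp_mapping_weights2.nc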

