As part of the efforts in the CMDV project, interfaces to integrate the MOAB unstructured mesh library with the TempestRemap remapping tool have been developed. Detailed information on the algorithmic and implementation aspects of this effort has been written up in a manuscript submitted to Geoscientific Model Development [1]. This work has led to the development of a new offline remapping tool called mbtempest, which exposes the functionality to compute the supermesh, or intersection mesh, between two unstructured source and target component grids, and to use this supermesh for computing the remapping weights needed to project solutions between the grids. This functionality is part of the critical workflow within E3SM, where the remapping weights generated in the offline step are consumed by MCT at runtime to seamlessly transfer solution data between components (atm↔ocn, atm↔lnd, etc.).

...

Partitioning meshes with the "inferred" strategy for better performance

A more recent update to the mbpart tool introduces the concept of inferred partitions, in which the geometric locality of the source and target grids is preserved as much as possible so that communication during the intersection mesh computation at runtime is minimized. This strategy has been shown to provide considerable speedup in the intersection mesh computation and is now our preferred partitioning strategy in offline workflows, especially when one of the grids has topological holes (e.g., the OCN mesh). To generate the inferred partitions, we usually choose the target mesh as the primary partition and the source mesh as the secondary partition; the source mesh partitions are then "inferred" from the target mesh partition RCB tree. The commands to generate the inferred source partitions are shown below.
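
As an illustration, a minimal sketch of the inferred partitioning command is given here; the flags mirror those used in step 6 of the walkthrough below, and the mesh file names are placeholders.

Code Block
# Partition the target (primary) mesh into 128 parts with Zoltan RCB, and infer matching
# partitions for the source (secondary) mesh from the target partition RCB tree.
$MOAB_DIR/bin/mbpart 128 -z RCB -b --scale_sphere -p 2 ocn_target.h5m ocn_target_p128.h5m --inferred atm_source.h5m
# Per step 6 of the walkthrough below, the inferred source partitions are written with an
# "_inferred" suffix (here: atm_source_inferred.h5m) and can be renamed as desired.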

...

The first -l option specifies the source grid to load, and the second -l option specifies the target grid to load. mbtempest can load Exodus, .nc, and h5m files in serial, while parallel I/O is currently optimized only for the native HDF5 format (h5m); we therefore recommend converting meshes to this format and pre-partitioning them using the instructions above. The -i option specifies the output file for the intersection mesh, which is computed in MOAB in parallel using the advancing-front algorithm. Due to the distributed nature of the mesh in parallel runs, the h5m file cannot be used directly with TempestRemap's GenerateOfflineMap tool. However, whether mbtempest is run in serial (first case above) or in parallel, the intersection mesh can also be written out in the Exodus mesh format through TempestRemap by performing an MPI_Gather onto the root process. This step does not scale, since the entire mesh and its associated metadata must be aggregated on a single process, so we recommend using the fully parallel mbtempest workflow directly whenever possible.
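
For reference, a minimal sketch of invoking mbtempest to compute only the intersection mesh, in serial and in parallel, is shown below. The file names are placeholders, and this assumes, based on the weight-generation command in the walkthrough below, that -t 5 selects the overlap computation and that omitting -w skips the weight generation.

Code Block
# Serial run: load the source and target grids and write out the intersection mesh
$MOAB_DIR/bin/mbtempest -t 5 -l source_grid.h5m -l target_grid.h5m -i intx_mesh.h5m
# Parallel run on 64 processes with pre-partitioned h5m inputs
srun -n 64 $MOAB_DIR/bin/mbtempest -t 5 -l source_grid_p128.h5m -l target_grid_p128.h5m -i intx_mesh.h5m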

...

The environment settings for running mbtempest on Compy are listed below and are stored in the file /compyfs/software/mbtempest.envs.sh for reference.
These settings correspond to MOAB version 5e41106dc9a2 (>5.3.0) and TempestRemap version 72df14282a2e9 (>2.1.0).

Code Block
module load cmake/3.11.4 intel/19.0.3 mvapich2/2.3.1 pnetcdf/1.9.0 mkl/2019u3 metis/5.1.0
export MPI_DIR=/share/apps/mvapich2/2.3.1/intel/19.0.3
export METIS_DIR=/share/apps/metis/5.1.0
export EIGEN3_DIR=/share/apps/eigen3/3.3.7/include/eigen3
export HDF5_DIR=/share/apps/netcdf-MPI/intel/19.0.5/mvapich2/2.3.2
export NETCDF_DIR=/share/apps/netcdf-MPI/intel/19.0.5/mvapich2/2.3.2
export PNETCDF_DIR=/share/apps/pnetcdf/1.9.0/intel/19.0.3/mvapich2/2.3.1
export ZOLTAN_DIR=/compyfs/software/zoltan/3.83/intel/19.0.3
export TEMPESTREMAP_DIR=/compyfs/software/tempestremap/intel/19.0.3
export MOAB_DIR=/compyfs/software/moab/intel/19.0.3

...

  1. For the NE30 case, let us generate the CS mesh of the required resolution using mbtempest.
    1. Command: $MOAB_DIR/bin/mbtempest -t 0 -r 30 -f outCSMesh30.nc
    2. Here, type = 0 (-t 0) specifies that we want to generate a CS grid, with an element resolution of 30x30x6 (using -r 30).
  2. Next, convert the NetCDF (nc) file to the MOAB h5m format, and in the process add metadata for the DoF numbering of the SE grid.
    1. Command: $MOAB_DIR/bin/mbconvert  -B -i GLOBAL_DOFS -r 4 outCSMesh30.nc outCSMesh30.h5m
    2. Here, GLOBAL_DOFS is the tag that stores the DoF numbering for an SE grid of order 4 (-r 4). The input "*.nc" mesh and output "*.h5m" mesh are specified as arguments for the format conversion.
  3. The next step is to pre-partition the h5m file so that the map generation can be performed in parallel. In this particular example, we will use the Zoltan partitioner to generate 128 parts.
    1. Command: $MOAB_DIR/bin/mbpart 128 -z RCB outCSMesh30.h5m outCSMesh30_p128.h5m
  4. Now that the ATM grid has been generated, let us perform a similar conversion for the OCN MPAS file. The MPAS nc file already exists, and we will convert this input file to a MOAB h5m file. During this process, unwanted edges and variables are not converted, since the mbtempest mapping workflow only requires the actual mesh to compute the overlap.
    1. Command: $MOAB_DIR/bin/mbconvert -O "variable=" -O "no_edges"  -O "NO_MIXED_ELEMENTS"  oEC60to30v3_60layer.170905.nc oEC60to30v3_60layer.170905.h5m
  5. As in the CS grid case, let us now pre-partition the grid into 128 parts using the Zoltan Recursive Coordinate Bisection (RCB) algorithm.
    1. Command: $MOAB_DIR/bin/mbpart 128 -z RCB oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_p128.h5m
  6. As mentioned above, better performance can be achieved by using the "inferred" partitioning strategy with Zoltan.
    1. Command: $MOAB_DIR/bin/mbpart 128 -z RCB -b --scale_sphere -p 2 oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_p128.h5m --inferred outCSMesh30.h5m
    2. Rename outCSMesh30_inferred.h5m to outCSMesh30_p128.h5m to keep the file names consistent with the remaining steps.
    3. The above command will generate the oEC60to30v3_60layer.170905_p128.h5m and outCSMesh30_p128.h5m meshes, which are optimized in terms of geometric locality for parallel runs on 128 processes.
  7. We now have fully partitioned MOAB meshes for the CS and MPAS grids (either from steps (3)+(5) or from step (6)), and all required inputs for mbtempest are available. Invoke the mbtempest command in parallel to generate the remapping weights, specifying the source and target grids along with their discretization details.
    1. Command: srun -n 64 $MOAB_DIR/bin/mbtempest -t 5 -w -l outCSMesh30_p128.h5m -l oEC60to30v3_60layer.170905_p128.h5m -m cgll -o 4 -g GLOBAL_DOFS -m fv -o 1 -g GLOBAL_ID -i intx_ne30_oEC60to30v3.h5m -f mapSEFV-NE30.nc
    2. The particular example above runs on 64 processes and takes the pre-partitioned input grids outCSMesh30_p128.h5m and oEC60to30v3_60layer.170905_p128.h5m for the CS and MPAS grids, respectively.
    3. We also specify that the source discretization is Spectral Element (SE) with a continuous representation of DoFs on the element interfaces (cgll), and that the target discretization on the MPAS grid is Finite Volume (fv). This option is specified using the -m input parameter, whose default is fv.
    4. The order of the discretization is then specified using the -o options for input and output models. In the above case, we have SE order = 4 and FV order = 1.
    5. Next, we also need to specify the tags in the mesh (using the -g option) that contain the source and target global DoF numbers stored on their corresponding elements. This dictates the ordering of the mapping weight matrix that is written out to file.
    6. The final argument (-f) specifies that the output map file is to be written out to mapSEFV-NE30.nc for the NE30 case in parallel.
  8. At the end of this workflow, we now have a SCRIP file (mapSEFV-NE30.nc) containing the weights to compute a solution projection from an input CS NE30 grid with SE(4) discretization to an output MPAS grid with FV(1) discretization.
  9. Exercise: rerun step (7) with the source discretization specification `-m fv -o 1 -g GLOBAL_ID`. This results in an FV-FV map between the NE30 grid and the OCN MPAS grid. You can also switch the order of the -l arguments to generate the weights in the reverse direction, i.e., swap the source and target grids/discretization specifications for mbtempest. A consolidated sketch of the full command sequence is shown after this list.
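
For convenience, the commands from the steps above are consolidated into a single sketch below, using the inferred partitioning variant from step (6); adjust the process counts and file names as needed for your case.

Code Block
# Steps (1)-(2): generate the NE30 cubed-sphere mesh and convert it to h5m with SE(4) DoF metadata
$MOAB_DIR/bin/mbtempest -t 0 -r 30 -f outCSMesh30.nc
$MOAB_DIR/bin/mbconvert -B -i GLOBAL_DOFS -r 4 outCSMesh30.nc outCSMesh30.h5m
# Step (4): convert the MPAS OCN mesh to h5m, skipping unneeded edges and variables
$MOAB_DIR/bin/mbconvert -O "variable=" -O "no_edges" -O "NO_MIXED_ELEMENTS" oEC60to30v3_60layer.170905.nc oEC60to30v3_60layer.170905.h5m
# Step (6): partition the OCN (target) mesh into 128 parts and infer matching ATM (source) partitions
$MOAB_DIR/bin/mbpart 128 -z RCB -b --scale_sphere -p 2 oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_p128.h5m --inferred outCSMesh30.h5m
mv outCSMesh30_inferred.h5m outCSMesh30_p128.h5m
# Step (7): compute the SE(4)-to-FV(1) remapping weights in parallel on 64 processes
srun -n 64 $MOAB_DIR/bin/mbtempest -t 5 -w -l outCSMesh30_p128.h5m -l oEC60to30v3_60layer.170905_p128.h5m -m cgll -o 4 -g GLOBAL_DOFS -m fv -o 1 -g GLOBAL_ID -i intx_ne30_oEC60to30v3.h5m -f mapSEFV-NE30.nc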

...

If steps (a)-(g) pass successfully, the MOAB libraries and tools, along with the interfaces for TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, will also be installed during this process and can then be used standalone to generate the weight files as needed.
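
As a quick sanity check after the installation (a sketch; it assumes mbtempest prints its usage message when invoked with the -h flag):

Code Block
# Confirm that the remapping tool was installed and responds to a usage query
$INSTALL_PREFIX/moab/bin/mbtempest -h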

References

[1] Mahadevan, V. S., Grindeanu, I., Jacob, R., and Sarich, J.: Improving climate model coupling through a complete mesh representation: a case study with E3SM (v1) and MOAB (v5.x), Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2018-280, in review, 2018.