Offline remapping workflow with mbtempest

As part of the CMDV project, interfaces to integrate the MOAB unstructured mesh library with the TempestRemap remapping tool have been developed. Detailed information on the algorithmic and implementation aspects of this effort is available in a manuscript submitted to Geoscientific Model Development [1]. This work has led to a new offline remapping tool called mbtempest, which exposes the functionality to compute the supermesh or intersection mesh between unstructured source and target component grids, and to use this supermesh to compute the remapping weights for projecting solutions between the grids. This functionality is part of the critical workflow in E3SM, where the remapping weights generated in the offline step are consumed by MCT at runtime to seamlessly transfer solution data between components (atm↔ocn, atm↔lnd, etc.).

Using the mbtempest tool

The mbtempest tool exposes the algorithms and interfaces to invoke TempestRemap through MOAB to generate remapping weights between combinations of discretizations defined on unstructured source and target grids. Most of the options supported by the TempestRemap tools are available in this unified interface, with one key difference: the entire workflow makes use of MPI parallelism. This means the overlap mesh computation that would normally use the GenerateOverlapMesh tool in TempestRemap is replaced by MOAB's parallel implementation of an advancing-front intersection computation. This intersection mesh can subsequently be written to file as an intermediate step, or used by mbtempest to generate the remapping weights needed to project a solution field from the source to the target component grid. Options to preserve conservation of scalar/flux data and to impose monotonicity constraints can also be passed to the mbtempest tool.

[bash]> tools/mbtempest -h
Usage: mbtempest --help | [options] 
Options: 
  -h [--help]       : Show full help text
  -t [--type] <int> : Type of mesh (default=CS; Choose from [CS=0, RLL=1, ICO=2, OVERLAP_FILES=3, OVERLAP_MEMORY=4, OVERLAP_MOAB=5])
  -r [--res] <int>  : Resolution of the mesh (default=5)
  -d [--dual]       : Output the dual of the mesh (generally relevant only for ICO mesh)
  -w [--weights]    : Compute and output the weights using the overlap mesh (generally relevant only for OVERLAP mesh)
  -c [--noconserve] : Do not apply conservation to the resultant weights (relevant only when computing weights)
  -v [--volumetric] : Apply a volumetric projection to compute the weights (relevant only when computing weights)
  -n [--monotonic] <int>: Ensure monotonicity in the weight generation
  -l [--load] <arg> : Input mesh filenames (a source and target mesh)
  -o [--order] <int>: Discretization orders for the source and target solution fields
  -m [--method] <arg>: Discretization method for the source and target solution fields
  -g [--global_id] <arg>: Tag name that contains the global DoF IDs for source and target solution fields
  -f [--file] <arg> : Output remapping weights filename
  -i [--intx] <arg> : Output TempestRemap intersection mesh filename

Generating component grids with mbtempest

The mbtempest tool can be used to generate CS (cubed-sphere), RLL (lat/lon), ICO (triangular), and polygonal (MPAS-like) meshes through appropriate invocation of the -t [--type] argument in combination with the -r [--res] option. Note that these mesh generation workflows run only in serial; internally, a handoff to TempestRemap is made through its public API. Some examples of these runs are provided below.

Cubed-Sphere meshes

[bash]> tools/mbtempest -t 0 -r 120 -f outputCSMesh.exo
Creating TempestRemap Mesh object ...
=========================================================
..Generating mesh with resolution [120]
..Writing mesh to file [outputCSMesh.exo] 
Nodes per element
..Block 1 (4 nodes): 86400
..Mesh generator exited successfully
=========================================================
[LOG] Time taken to create Tempest mesh: max = 0.0571213, avg = 0.0571213
[bash]> tools/mbsize outputCSMesh.exo
File outputCSMesh.exo:
   type  count   total                            minimum                            average                                rms                            maximum                           std.dev.
------- ------ ------- ---------------------------------- ---------------------------------- ---------------------------------- ---------------------------------- ----------------------------------
   Quad  86400      13                         0.00012195                         0.00014544                         0.00014602                         0.00017133                         1.3029e-05
1D Side 345600 4.2e+03                          0.0092562                            0.01218                           0.012217                            0.01309                         0.00094931
 Vertex  86402

RLL meshes

[bash]> tools/mbtempest -t 1 -r 50 -f outputRLLMesh.exo
Creating TempestRemap Mesh object ...
longitude_edges = [0, 3.6, 7.2, 10.8, 14.4, 18, 21.6, 25.2, 28.8, 32.4, 36, 39.6, 43.2, 46.8, 50.4, 54, 57.6, 61.2, 64.8, 68.4, 72, 75.6, 79.2, 82.8, 86.4, 90, 93.6, 97.2, 100.8, 104.4, 108, 111.6, 115.2, 118.8, 122.4, 126, 129.6, 133.2, 136.8, 140.4, 144, 147.6, 151.2, 154.8, 158.4, 162, 165.6, 169.2, 172.8, 176.4, 180, 183.6, 187.2, 190.8, 194.4, 198, 201.6, 205.2, 208.8, 212.4, 216, 219.6, 223.2, 226.8, 230.4, 234, 237.6, 241.2, 244.8, 248.4, 252, 255.6, 259.2, 262.8, 266.4, 270, 273.6, 277.2, 280.8, 284.4, 288, 291.6, 295.2, 298.8, 302.4, 306, 309.6, 313.2, 316.8, 320.4, 324, 327.6, 331.2, 334.8, 338.4, 342, 345.6, 349.2, 352.8, 356.4, 360]
latitude_edges = [-90, -86.4, -82.8, -79.2, -75.6, -72, -68.4, -64.8, -61.2, -57.6, -54, -50.4, -46.8, -43.2, -39.6, -36, -32.4, -28.8, -25.2, -21.6, -18, -14.4, -10.8, -7.2, -3.6, 0, 3.6, 7.2, 10.8, 14.4, 18, 21.6, 25.2, 28.8, 32.4, 36, 39.6, 43.2, 46.8, 50.4, 54, 57.6, 61.2, 64.8, 68.4, 72, 75.6, 79.2, 82.8, 86.4, 90]
..Generating mesh with resolution [100, 50]
..Longitudes in range [0, 360]
..Latitudes in range [-90, 90]

..Writing mesh to file [outputRLLMesh.exo] 
Nodes per element
..Block 1 (4 nodes): 5000
..Mesh generator exited successfully
=========================================================
[LOG] Time taken to create Tempest mesh: max = 0.016014, avg = 0.016014
[bash]> tools/mbsize outputRLLMesh.exo 
File outputRLLMesh.exo:
   type count   total                            minimum                            average                                rms                            maximum                           std.dev.
------- ----- ------- ---------------------------------- ---------------------------------- ---------------------------------- ---------------------------------- ----------------------------------
   Quad  5000      13                         0.00012384                          0.0025112                          0.0027889                          0.0039426                          0.0012132
1D Side 20000   1e+03                                  0                           0.051401                           0.054405                           0.062822                           0.017829
 Vertex  4902

ICO meshes

[bash]> tools/mbtempest -t 2 -r 50 -f outputICOMesh.exo
Creating TempestRemap Mesh object ...
------------------------------------------------------------
Generating Mesh.. Done
Writing Mesh to file
..Mesh size: Nodes [25002] Elements [50000]
..Nodes per element
....Block 1 (3 nodes): 50000
..Done
[LOG] Time taken to create Tempest mesh: max = 0.0272096, avg = 0.0272096
[bash]> tools/mbsize outputICOMesh.exo 
File outputICOMesh.exo:
   type  count   total                            minimum                            average                                rms                            maximum                           std.dev.
------- ------ ------- ---------------------------------- ---------------------------------- ---------------------------------- ---------------------------------- ----------------------------------
    Tri  50000      13                         0.00023314                          0.0002513                         0.00025155                         0.00027497                         1.1294e-05
1D Side 150000 3.6e+03                           0.022143                           0.024179                           0.024216                           0.027274                          0.0013384
 Vertex  25002

ICO-Dual (polygonal) meshes

[bash]> tools/mbtempest -t 2 -r 50 -d -f outputICODMesh.exo
Creating TempestRemap Mesh object ...
------------------------------------------------------------
Generating Mesh.. Done
Writing Mesh to file
..Mesh size: Nodes [50000] Elements [25002]
..Nodes per element
....Block 1 (6 nodes): 25002
..Done
[LOG] Time taken to create Tempest mesh: max = 0.0774914, avg = 0.0774914

Note on converting Exodus meshes to parallel h5m format

The meshes generated with TempestRemap are output in the Exodus format, which is not the natively parallel (optimized) I/O format used by MOAB. For high-resolution meshes (NE120 or greater, i.e., NE256/NE512/NE1024) that are to be used for computing remapping weights, it is therefore important to perform a preprocessing step that converts and partitions the mesh in the h5m format, in order to minimize the mbtempest execution time. Such a workflow can be implemented as below.

Convert Exodus mesh to MOAB HDF5 (h5m) format

mpiexec -n 4 tools/mbconvert -g -o "PARALLEL=WRITE_PART" -O "PARALLEL=BCAST_DELETE" -O "PARTITION=TRIVIAL" -O "PARALLEL_RESOLVE_SHARED_ENTS" outputCSMesh.exo outputCSMesh.h5m
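
For reference, the read (-O) and write (-o) options in the command above are annotated below; the descriptions reflect our understanding of the standard MOAB parallel I/O options.

# -O "PARALLEL=BCAST_DELETE"        : the root rank reads the Exodus file, broadcasts it, and every rank deletes the elements it does not own
# -O "PARTITION=TRIVIAL"            : elements are assigned to ranks in contiguous blocks, since the Exodus file carries no partition tag
# -O "PARALLEL_RESOLVE_SHARED_ENTS" : vertices shared across rank boundaries are identified and resolved after the read
# -o "PARALLEL=WRITE_PART"          : each rank writes its own part into the single output h5m file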

Partitioning meshes for parallel runs

MOAB supports generating partitions for the mesh using either Zoltan or Metis. The mbpart tool can be used to generate the PARALLEL_PARTITION tag in MOAB, which enables parallel runs and considerably reduces the time to compute the intersection mesh and remapping weights. Note that for the MOAB mbtempest workflows we prefer Zoltan as the primary partitioner, since there is an ongoing NGD effort with Dr. Karen Devine, the PI of the Zoltan partitioning library. For simplicity, however, the Metis partitioner works as well, even if its computational scaling is slightly sub-optimal.

To partition MOAB h5m meshes for running on, say, 1024 processes, the following arguments can be used with the mbpart tool.

METIS (RCB)  : tools/mbpart 1024 -m ML_RB -i 1.005 input_mesh.h5m output_mesh_p1024_mRB.h5m
METIS (KWay) : tools/mbpart 1024 -m ML_KWAY input_mesh.h5m output_mesh_p1024_mKWAY.h5m
Zoltan (RCB) : tools/mbpart 1024 -z RCB -i 1.002 input_mesh.h5m output_mesh_p1024_zRCB.h5m
Zoltan (PHG) : tools/mbpart 1024 -z PHG input_mesh.h5m output_mesh_p1024_zPHG.h5m

The partitioned mesh files (output_mesh_p1024_*.h5m) can now be used in parallel as input for either a source or a target grid. The same procedure can be applied to other Exodus meshes to make them parallel-aware for use with mbtempest.
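
Putting the conversion and partitioning steps together, a minimal sketch of the preprocessing pipeline for the ICO mesh generated earlier could look as follows (the part count of 1024 follows the examples above and should match your target run).

# Convert the serial Exodus mesh to the parallel h5m format, then partition it into 1024 parts
mpiexec -n 4 tools/mbconvert -g -o "PARALLEL=WRITE_PART" -O "PARALLEL=BCAST_DELETE" -O "PARTITION=TRIVIAL" -O "PARALLEL_RESOLVE_SHARED_ENTS" outputICOMesh.exo outputICOMesh.h5m
tools/mbpart 1024 -z RCB outputICOMesh.h5m outputICOMesh_p1024_zRCB.h5m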

Partitioning meshes with the "inferred" strategy for better performance

A more recent addition to the mbpart tool is the concept of inferred partitions, in which the geometric locality of the source and target grids is preserved as much as possible to minimize communication during the intersection mesh computation at runtime. This strategy has been shown to provide considerable speedup in the intersection mesh computation, and it is now our preferred partitioning strategy in offline workflows, especially when one of the grids has topological holes (e.g., the OCN mesh). To generate the inferred partitions, we usually choose the target mesh as the primary partition and the source mesh as the secondary partition. The source mesh partitions are then "inferred" from the target mesh partition RCB tree. The commands to generate the inferred source partitions are shown below.

Inferred Partitioner Usage
Usage: tools/mbpart #parts -z RCB -b --scale_sphere -p 2 target_input_mesh.h5m target_output_mesh.h5m --inferred source_input_mesh.h5m
Example: tools/mbpart 1024 -z RCB -b --scale_sphere -p 2 ocean.oEC60to30v3.h5m ocean.oEC60to30v3.p1024.h5m --inferred NE1024.h5m

The inferred source partition mesh is written out to source_input_mesh_inferred.h5m. In the example above, the NE1024.h5m mesh is partitioned using the inferred strategy and written out to NE1024_inferred.h5m.

Note that only the Zoltan interface in mbpart supports this strategy; Zoltan therefore becomes a required dependency for MOAB in order to enable better partitioning for remapping.

Generating intersection meshes

Once we have the source and target grids to be used for computing the remapping weights, mbtempest can be used in a similar fashion to GenerateOverlapMesh/GenerateOfflineMap in TempestRemap or the ESMF_RegridWeightGen tool in ESMF to create the field projection weights. Note that, unlike ESMF, the TempestRemap workflow does not require creating a dual of the higher-order Spectral Element (SE) mesh. Some examples with different options are provided below.

1. mpiexec -n 1 tools/mbtempest -t 5 -l outputCSMesh.h5m -l outputICOMesh.h5m -i moab_intx_cs_ico.h5m
2. mpiexec -n 16 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_p16.h5m -l outputICOMesh_p16.h5m  -i moab_intx_atm_ico.h5m
3. mpiexec -n 128 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_p128.h5m -l $MOAB_SRC_DIR/MeshFiles/unittest/recMeshOcn_p128.h5m  -i moab_intx_atm_ocn.h5m 

The first -l option specifies the source grid to load, and the second -l option specifies the target grid to load. mbtempest can load Exodus, .nc, and h5m files in serial, while parallel I/O is currently optimized only for the native HDF5-based format (h5m). We therefore recommend converting meshes to this format and pre-partitioning them using the instructions above. The -i option specifies the output file for the intersection mesh, which is computed in MOAB using the advancing-front algorithm in parallel. Due to the distributed nature of the mesh in parallel runs, the resulting h5m file cannot be used directly with TempestRemap's GenerateOfflineMap tool. However, whether mbtempest is run in serial (first case above) or in parallel, the intersection mesh can also be written out in the Exodus format by TempestRemap underneath, after performing an MPI_Gather onto the root process. This step does not scale, since the entire mesh and its associated metadata are aggregated onto a single process, so we recommend using the fully parallel mbtempest workflow directly whenever possible.
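
To illustrate, the two invocations below (reusing the mesh names from the examples above; the _p64 partitioned files are assumed to have been generated with mbpart) contrast the serial case, where the intersection mesh is gathered and written in Exodus format through TempestRemap, with the fully parallel case, where the distributed intersection mesh is written in the h5m format.

# Serial: intersection mesh written in Exodus format via TempestRemap
mpiexec -n 1 tools/mbtempest -t 5 -l outputCSMesh.h5m -l outputICOMesh.h5m -i moab_intx_cs_ico.exo
# Parallel: distributed intersection mesh written in the native h5m format
mpiexec -n 64 tools/mbtempest -t 5 -l outputCSMesh_p64.h5m -l outputICOMesh_p64.h5m -i moab_intx_cs_ico.h5m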

Generating remapping weights in parallel

As with the intersection mesh computation, mbtempest can be used in a similar fashion to GenerateOverlapMesh/GenerateOfflineMap in TempestRemap to create the field projection weights. The remapping weights are generated after computing the intersection mesh by additionally specifying the -w flag.

1. Serial (default: FV-FV): mpiexec -n 1 tools/mbtempest -t 5 -l outputCSMesh.h5m -l outputICOMesh.h5m -f moab_mbtempest_remap_csico_fvfv.nc -i moab_intx_file2.exo -w 
2. Parallel (explicit: FV-FV): mpiexec -n 64 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_T.h5m -l $MOAB_SRC_DIR/MeshFiles/unittest/recMeshOcn.h5m  -f moab_mbtempest_remap_fvfv.nc -i moab_intx_file2.h5m -w -m fv -m fv 
3. Parallel (SE-FV): mpiexec -n 128 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_T.h5m -l $MOAB_SRC_DIR/MeshFiles/unittest/recMeshOcn.h5m  -f moab_mbtempest_remap_sefv.nc -i moab_intx_file2.h5m -w -m cgll -o 4 -g GLOBAL_DOFS -m fv -o 1 -g GLOBAL_ID

These commands generate the remapping weights by first computing the intersection mesh through MOAB's advancing-front intersection algorithm and then using TempestRemap to assemble the weights in parallel. The computed map weights are then written out in parallel to the file specified through the -f option. The user can also specify the field discretization type and order using the -m and -o options. Currently, -m accepts fv, cgll, and dgll as valid options, and -o takes a positive integer representing the order of the spectral or FV discretization for climate problems.

Additionally, the GLOBAL_ID tag is used as the identifier that uniquely represents degrees of freedom (DoFs) for element-based discretizations such as FV, or for vertex-based lower-order discretizations such as FD. These tag names are specified with the -g options of mbtempest. More complex discretizations should provide their unique DoF numbering in dedicated element-based MOAB tags. In example (3) above, the GLOBAL_DOFS tag holds the SE DoF IDs for the source mesh and the GLOBAL_ID tag holds the DoF IDs for the target mesh, since we are computing remapping weights from SE to FV.

Note on DoF IDs for SE meshes

Consistently generating the global DoF IDs for a spectral mesh in parallel through MOAB is non-trivial, since it requires information about all shared edges and a fixed traversal path that is independent of the parallel partitioning scheme. While the actual ID assignment for DoFs is easy in serial, the assignment in parallel depends on how the elements are ordered across partitions and may not match the numbering in HOMME. Hence, a permutation may need to be applied to the generated mapping weights before passing them back to HOMME within E3SM.

However, for standard cubed-sphere SE meshes, users can rely on an implementation in MOAB that uses TempestRemap to generate a CS mesh of the given resolution and apply a numbering scheme consistent with HOMME to obtain the global DoF IDs. This temporary CS mesh is generated redundantly on every process solely to compute the global IDs, from which the relevant DoF numbering for the SE GLL points is stored in a user-specified MOAB tag. Once this is done, the rest of the workflow works seamlessly. This eliminates the need for the user to take any additional steps to compute remapping weights between CS-SE meshes and other lower-order discretizations.

For RRM grids or non-standard SE grids, the recommended workflow is to run HOMME at the resolution required for the mapping file and to store the DoF IDs in a GLOBAL_DOFS tag in MOAB before the mesh is written to disk. This removes the need to reconstruct a consistent numbering when generating the weights in the MOAB mbtempest offline workflow, since the tag in the file is loaded in parallel along with the mesh during the setup phase of the tool.

Update: An extension to the mbconvert tool was implemented to add this metadata while converting an Exodus mesh to the MOAB h5m format. An example execution with options to write the DoF IDs into a "GLOBAL_DOFS" tag is shown below.

tools/mbconvert -B -i GLOBAL_DOFS -r 4 outOCTGridHOMME.g outOCTGridHOMME.h5m

Here, the -B option indicates that TempestRemap is used to load the input mesh, the -i option gives the global DoF tag name, and -r specifies the SE order of the field discretization. Using the resulting outOCTGridHOMME.h5m file, mbtempest can now be launched without any other workflow changes.
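
For instance, a sketch of such a launch is shown below, assuming the converted h5m files have also been pre-partitioned with mbpart as described earlier; the partitioned file names, the target ocean mesh, the process count, and the output names are placeholders.

# SE(4) source on the HOMME grid, FV(1) target on a placeholder ocean grid
mpiexec -n 64 tools/mbtempest -t 5 -w -l outOCTGridHOMME_p64.h5m -l target_ocn_p64.h5m -m cgll -o 4 -g GLOBAL_DOFS -m fv -o 1 -g GLOBAL_ID -i intx_homme_ocn.h5m -f map_homme_ocn.nc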

Converting parallel h5m file to SCRIP file for E3SM

NOTE: In older versions of MOAB (< 5.2.1), you had to write out the map file in parallel in the h5m format and then use the h5mtoscrip tool to convert it to a .nc file. In MOAB v5.2.1 and later, the remap weights can be written directly to the .nc file format, in parallel.

Machines with mbtempest tool pre-installed

Cori: Updated sources of TempestRemap and MOAB have been pre-installed along with all required dependencies on some of the standard machines. On Cori, the MOAB installation is available within the E3SM project space at /project/projectdirs/e3sm/software/moab, and the corresponding TempestRemap installation is at /project/projectdirs/e3sm/software/tempestremap. The workflow for generating the offline maps using these installed tools is described below.

Note that for the mbtempest stack to run cleanly on Cori, you may have to set the following environment variable:

csh:

setenv HDF5_USE_FILE_LOCKING FALSE

bash:

export HDF5_USE_FILE_LOCKING=FALSE

Anvil:

Prepare environment for Intel-v18 compiler:

source /lcrc/soft/climate/moab/anvil/intel18/anvil_env.sh 
MOAB_DIR=/lcrc/soft/climate/moab/anvil/intel18
TEMPESTREMAP_DIR=/lcrc/soft/climate/tempestremap/anvil/intel18

Chrysalis: 

Prepare environment for Intel compiler:

source /lcrc/soft/climate/moab/chrysalis/intel/chrys_intel.env 
MOAB_DIR=/lcrc/soft/climate/moab/chrysalis/intel
TEMPESTREMAP_DIR=/lcrc/soft/climate/tempestremap/chrysalis/intel

Note: on a login node, the serial executables need to be launched with mpiexec -np 1; otherwise you may encounter an error like the one shown below.

> mbpart -h
Abort(1091087) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(136): 
MPID_Init(950).......: 
MPIR_pmi_init(168)...: PMI2_Job_GetId returned 14 )

A recommended best practice is to use an interactive session on a compute node, which can be requested using the following command.

    srun -N 1 -t 10 --pty bash

Compy:

After several iterations with the Compy sysadmins to get the dependency stack correct with respect to compatible parallel HDF5 and NetCDF installations, the remaining required dependencies were installed successfully and verified to run without issues. The mbtempest tool is now accessible from $MOAB_DIR (see below).

The environment settings for running mbtempest on Compy are listed below and are stored in the file /compyfs/software/mbtempest.envs.sh for reference.
The installation corresponds to MOAB version 5e41106dc9a2 (>5.3.0) and TempestRemap version 72df14282a2e9 (>2.1.0).

module load cmake/3.11.4 intel/19.0.3 mvapich2/2.3.1 pnetcdf/1.9.0 mkl/2019u3 metis/5.1.0
export MPI_DIR=/share/apps/mvapich2/2.3.1/intel/19.0.3
export METIS_DIR=/share/apps/metis/5.1.0
export EIGEN3_DIR=/share/apps/eigen3/3.3.7/include/eigen3
export HDF5_DIR=/share/apps/netcdf-MPI/intel/19.0.5/mvapich2/2.3.2
export NETCDF_DIR=/share/apps/netcdf-MPI/intel/19.0.5/mvapich2/2.3.2
export PNETCDF_DIR=/share/apps/pnetcdf/1.9.0/intel/19.0.3/mvapich2/2.3.1
export ZOLTAN_DIR=/compyfs/software/zoltan/3.83/intel/19.0.3
export TEMPESTREMAP_DIR=/compyfs/software/tempestremap/intel/19.0.3
export MOAB_DIR=/compyfs/software/moab/intel/19.0.3

Example: mbtempest workflow for generating ne30 offline maps

This section shows the workflow for generating offline maps for the ne30 case. It can be used as a template to generate maps between ATM and OCN for any resolution combination of CS and ICOD meshes. First define the MOAB_DIR environment variable based on the installation in your local folder, or use a pre-installed version, e.g., on Cori at MOAB_DIR=/cfs/cdirs/e3sm/software/moab. A consolidated script collecting the commands from the steps below is sketched after the list.

  1. For the NE30 case, let us generate the CS mesh of required resolution using mbtempest.
    1. Command: $MOAB_DIR/bin/mbtempest -t 0 -r 30 -f outCSMesh30.nc
    2. Here, type = 0 (-t 0) specifies that we want to generate a CS grid with an element resolution of 30x30x6 (using -r 30).
  2. Next, convert the NetCDF nc file format to a MOAB format, and in the process, also add some metadata for DoF numbering for the SE grid.
    1. Command: $MOAB_DIR/bin/mbconvert  -B -i GLOBAL_DOFS -r 4 outCSMesh30.nc outCSMesh30.h5m
    2. Here, GLOBAL_DOFS is the tag that stores the DoF numbering for the SE grid of order 4. The input "*.nc" mesh and the output "*.h5m" mesh are specified as arguments for the format conversion.
  3. The next step is to pre-partition the h5m file so that the map generation can be computed in parallel. In this particular example, we use the Zoltan partitioner to generate 128 parts.
    1. Command: $MOAB_DIR/bin/mbpart 128 -z RCB outCSMesh30.h5m outCSMesh30_p128.h5m
  4. Now that we have the ATM grid generated, let us perform a similar conversion on the OCN MPAS file. The MPAS nc file already exists; we use it as the input and convert it to a MOAB h5m file. During this process, unneeded edges and variables are not converted, since the mbtempest mapping workflow only requires the actual mesh for computing the overlap.
    1. Command: $MOAB_DIR/bin/mbconvert -O "variable=" -O "no_edges"  -O "NO_MIXED_ELEMENTS"  oEC60to30v3_60layer.170905.nc oEC60to30v3_60layer.170905.h5m
  5. Similar to the CS grid case, let us now pre-partition the grid to 128 parts using the Zoltan Recursive-Bisection algorithm.
    1. Command: $MOAB_DIR/bin/mbpart 128 -z RCB oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_p128.h5m
  6. As mentioned above, better performance can be achieved by using the "inferred" partitioning strategy with Zoltan.
    1. Command: $MOAB_DIR/bin/mbpart 128 -z RCB -b --scale_sphere -p 2 oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_p128.h5m --inferred outCSMesh30.h5m
    2. Rename outCSMesh30_inferred.h5m to outCSMesh30_p128.h5m to keep the naming consistent with the steps below.
    3. The above command generates the oEC60to30v3_60layer.170905_p128.h5m and outCSMesh30_p128.h5m meshes, which are optimized in terms of geometric locality for parallel runs on 128 processes.
  7. We now have fully partitioned MOAB meshes for the CS and MPAS grids (either from steps (3)+(5) or from (6)), and all required inputs for mbtempest are available. Invoke the mbtempest command in parallel to generate the remapping weights, specifying the source and target grids along with their discretization details.
    1. Command: srun -n 64 $MOAB_DIR/bin/mbtempest -t 5 -w -l outCSMesh30_p128.h5m -l oEC60to30v3_60layer.170905_p128.h5m -m cgll -o 4 -g GLOBAL_DOFS -m fv -o 1 -g GLOBAL_ID -i intx_ne30_oEC60to30v3.h5m -f mapSEFV-NE30.nc
    2. The particular example above runs on 64 processes, and takes the pre-partitioned input grids outCSMesh30_p128.h5m and oEC60to30v3_60layer.170905_p128.h5m for CS and MPAS respectively
    3. We also specify that the source discretization method is Spectral Element (SE) with a continuous representation of DoFs on the element interfaces (cgll), and that the target discretization on the MPAS grid is finite volume (fv). This is specified using the -m input parameter, whose default is fv.
    4. The order of the discretization is then specified using the -o options for the source and target models. In the above case, we have SE order = 4 and FV order = 1.
    5. Next, we also need to specify the tags in the mesh that contain the source and target global DoF numbers stored on their corresponding elements. These dictate the ordering of the mapping weight matrix that is written out to file.
    6. The final argument specifies that the output map file is to be written, in parallel, to mapSEFV-NE30.nc for the NE30 case.
  8. At the end of this workflow, we now have a SCRIP file (mapSEFV-NE30.nc) containing the weights to compute a solution projection from an input CS NE30 grid with SE(4) discretization to an output MPAS grid with FV(1) discretization.
  9. Exercise: rerun step (7) with the source discretization specification `-m fv -o 1 -g GLOBAL_ID`. This results in an FV-FV map between the NE30 grid and the OCN MPAS grid. You can also switch the order of the -l arguments to generate the weights in the reverse direction, i.e., swap the source and target grids/discretization specifications for mbtempest.
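
The sketch below collects the commands from steps (1)-(7) into a single script, following the inferred partitioning path of step (6); the mv command corresponds to the rename in step (6b). Paths, process counts, and the MPAS input file name are taken from the example above and should be adapted to your environment.

# Generate the NE30 cubed-sphere mesh and tag it with the SE DoF numbering
$MOAB_DIR/bin/mbtempest -t 0 -r 30 -f outCSMesh30.nc
$MOAB_DIR/bin/mbconvert -B -i GLOBAL_DOFS -r 4 outCSMesh30.nc outCSMesh30.h5m

# Convert the MPAS ocean mesh, dropping edges and variables not needed for remapping
$MOAB_DIR/bin/mbconvert -O "variable=" -O "no_edges" -O "NO_MIXED_ELEMENTS" oEC60to30v3_60layer.170905.nc oEC60to30v3_60layer.170905.h5m

# Partition the target (ocean) mesh and infer the source (CS) partition from it
$MOAB_DIR/bin/mbpart 128 -z RCB -b --scale_sphere -p 2 oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_p128.h5m --inferred outCSMesh30.h5m
mv outCSMesh30_inferred.h5m outCSMesh30_p128.h5m

# Compute the SE(4) -> FV(1) map on 64 processes
srun -n 64 $MOAB_DIR/bin/mbtempest -t 5 -w -l outCSMesh30_p128.h5m -l oEC60to30v3_60layer.170905_p128.h5m -m cgll -o 4 -g GLOBAL_DOFS -m fv -o 1 -g GLOBAL_ID -i intx_ne30_oEC60to30v3.h5m -f mapSEFV-NE30.nc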

Building your own version of the mbtempest tool locally

To build the MOAB-TempestRemap stack with parallel MPI support, we suggest the following list of commands. First define an installation prefix directory where the libraries, includes, and tools will be installed; we refer to it below through the $INSTALL_PREFIX environment variable.

Dependencies and pre-requisites

Before getting started, on your architecture of choice (whether a laptop or an LCF machine), make sure the following compatible environment settings are available to build the stack; a sketch of such an environment setup follows the list.

  1. MPI-enabled C, C++, and Fortran compiler wrappers that are exported in the local environment as $CC, $CXX, and $FC.
  2. Next, verify installations of dependent libraries such as $HDF5_DIR and $NETCDF_DIR that have been compiled with MPI support using the $CC, $CXX, $FC compilers.
  3. Get the Eigen3 package from its webpage and untar it into the $INSTALL_PREFIX/eigen3 directory with the following commands
    1. Download: wget https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz  OR  curl https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz -O
    2. Untar: tar -xzf 3.3.7.tar.gz
    3. Move: mv eigen-eigen* $INSTALL_PREFIX/eigen3
    4. Export: export EIGEN3_DIR=$INSTALL_PREFIX/eigen3
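
A minimal sketch of such an environment is shown below; the compiler wrapper names and library paths are placeholders and should be replaced with the ones on your machine.

export CC=mpicc CXX=mpicxx FC=mpif90       # MPI compiler wrappers
export HDF5_DIR=/path/to/parallel/hdf5     # HDF5 built with MPI support using the compilers above
export NETCDF_DIR=/path/to/parallel/netcdf # NetCDF built against the parallel HDF5
export INSTALL_PREFIX=$HOME/software       # where the MOAB/TempestRemap stack will be installed
export EIGEN3_DIR=$INSTALL_PREFIX/eigen3   # from step 3 above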

Dependencies and pre-requisites from an existing E3SM use case

It is recommended to use the same dependent libraries as a regular E3SM case.
E3SM cases save the environment in files such as .env_mach_specific.sh in the case folder; that is a good environment to start from when building your TempestRemap, MOAB, or Zoltan dependencies (or they may already be built on your machine). That environment is created from the config_machines.xml or config_compiler.xml files, which change all the time as new releases, use cases, and tests become available. Problems can appear for MOAB's mbtempest if the HDF5 library underneath netcdf4 does not have good MPI support, or if (gasp!) HDF5 is built in serial. In that case you are limited to building mbtempest without parallel support, which means you are better off just running TempestRemap in serial; do not bother building MOAB.
On Compy, the NetCDF used for E3SM is built with serial HDF5, so it cannot be used for MOAB. This is why Compy has a separate NetCDF built with parallel HDF5.
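
A quick, hedged way to check whether a case environment provides parallel-capable libraries is sketched below; it assumes the HDF5 compiler wrapper (h5pcc or h5cc) and nc-config are on the PATH after sourcing the case environment, and the available flags may vary between installations.

source ./.env_mach_specific.sh            # run from the E3SM case folder
h5pcc -showconfig | grep "Parallel HDF5"  # use h5cc if h5pcc is absent; a parallel build reports "yes"
nc-config --has-parallel                  # "yes" means the NetCDF build supports parallel I/O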

Build

To get the entire MOAB-TempestRemap stack working correctly, we need parallel-enabled installations of the HDF5 and NetCDF dependencies, built with MPI support for the current architecture.

  1. TempestRemap

    1. Clone repository: `git clone https://github.com/ClimateGlobalChange/tempestremap.git tempestremap`
    2. Create build dir: cd tempestremap && mkdir build
    3. Generate configure script: autoreconf -fi
    4. Go to build dir: cd build
    5. Configure: ../configure --prefix=$INSTALL_PREFIX/tempestremap --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC CXX=$CXX CXXFLAGS="-g -O2"
    6. Build and install: make all && make install

      At the end of this series of steps, the TempestRemap libraries and tools (GenerateCSMesh, GenerateICOMesh, GenerateOverlapMesh, GenerateOfflineMap among others) will be installed in $INSTALL_PREFIX/tempestremap directory.
  2. MOAB

    1. Clone repository: `git clone https://bitbucket.org/fathomteam/moab.git moab`
    2. Checkout the desired branch (here, master): cd moab && git checkout master
    3. Create build dir: mkdir build
    4. Generate configure script: autoreconf -fi
    5. Go to build dir: cd build
    6. Configure: ../configure --prefix=$INSTALL_PREFIX/moab --with-mpi --with-tempestremap=$INSTALL_PREFIX/tempestremap --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC FC=$FC F77=$FC CXX=$CXX CXXFLAGS="-g -O2" --with-eigen3=$EIGEN3_DIR
    7. Build and install: make all && make install

      If steps (1)-(7) pass successfully, the MOAB libraries and tools, along with the interfaces to TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, is also installed during this process and can then be used standalone to generate the weight files as needed.

A simpler, consolidated build process

Alternatively, both builds can be combined: we recommend a consolidated configuration process for MOAB that handles the TempestRemap configuration as part of the MOAB configuration. Notice the --download-tempestremap=master option in the configure line below, which instructs MOAB to clone the master branch of TempestRemap and build the dependency with the $HDF5_DIR and $NETCDF_DIR specified by the user, along with consistent compiler options.

MOAB and TempestRemap

a. Clone repository: `git clone https://bitbucket.org/fathomteam/moab.git moab` 
b. Checkout the desired branch (here, master): cd moab && git checkout master
c. Create build dir: mkdir build
d. Generate configure script: autoreconf -fi
e. Go to build dir: cd build
f. Configure: ../configure --prefix=$INSTALL_PREFIX/moab --with-mpi --download-tempestremap=master --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC FC=$FC F77=$FC CXX=$CXX CXXFLAGS="-g -O2" --with-eigen3=$EIGEN3_DIR
g. Build and install: make all && make install

If steps (a)-(g) pass successfully, the MOAB libraries and tools, along with the interfaces to TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, is also installed during this process and can then be used standalone to generate the weight files as needed.
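
As a quick sanity check after either build path, verify that the installed tools respond to the help flag (a minimal sketch; on machines where executables must be launched through MPI, prefix each command with mpiexec -np 1 as noted earlier). If the build tree is still available, running make check in the build directory additionally exercises the MOAB test suite.

# Each tool should print its usage/help text
$INSTALL_PREFIX/moab/bin/mbtempest -h
$INSTALL_PREFIX/moab/bin/mbconvert -h
$INSTALL_PREFIX/moab/bin/mbpart -h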

References

[1] Mahadevan, V. S., Grindeanu, I., Jacob, R., and Sarich, J.: Improving climate model coupling through a complete mesh representation: a case study with E3SM (v1) and MOAB (v5.x), Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2018-280, in review, 2018.