...

These commands will generate the remapping weights by computing the intersection mesh through the advancing front intersection algorithm in MOAB, and then using TempestRemap to generate the weights in parallel. The computed weight matrix is then written out in parallel in the h5m format (specified through the -f option).

Note on converting Exodus meshes to parallel h5m format

The meshes generated with TempestRemap are output in the Exodus format, which is not the natively parallel (optimized) I/O format in MOAB. So for cases with high-resolution meshes (such as NE512 or NE1024) that need to be used for computing remapping weights, it is important to perform a preprocessing step that converts and partitions the mesh in the h5m format, in order to minimize the mbtempest execution time. Such a workflow can be implemented as shown below.

First, convert the mesh from Exodus to HDF5 (h5m)

Code Block
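# Read the Exodus file on the root process and broadcast it (PARALLEL=BCAST_DELETE
# with a TRIVIAL partition), resolve the entities shared between processes, and
# write one part per process (PARALLEL=WRITE_PART) into the h5m file.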
mpiexec -n 4 tools/mbconvert -g -o "PARALLEL=WRITE_PART" -O "PARALLEL=BCAST_DELETE" -O "PARTITION=TRIVIAL" -O "PARALLEL_RESOLVE_SHARED_ENTS" outputCSMesh.exo outputCSMesh.h5m

Next, partition the mesh with Zoltan or METIS

Code Block
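# 1024 is the number of parts to generate (typically the number of MPI ranks that
# will later read the mesh); -m selects a METIS method, -z a Zoltan method, and
# -i sets the imbalance tolerance.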
METIS (RB)   : tools/mbpart 1024 -m ML_RB -i 1.005 outputCSMesh.h5m outputCSMesh_p1024_mRB.h5m
METIS (KWay) : tools/mbpart 1024 -m ML_KWAY outputCSMesh.h5m outputCSMesh_p1024_mKWAY.h5m
Zoltan (RCB) : tools/mbpart 1024 -z RCB -i 1.005 outputCSMesh.h5m outputCSMesh_p1024_zRCB.h5m
Zoltan (PHG) : tools/mbpart 1024 -z PHG outputCSMesh.h5m outputCSMesh_p1024_zPHG.h5m

Now the mesh files outputCSMesh_p1024_*.h5m can be used in parallel as the input for either the source or the target grid. A similar procedure can also be performed on other Exodus meshes to make them parallel-aware so that they can be used with mbtempest.
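As an illustration, a run that consumes these partitioned meshes could look like the sketch below. The target mesh name is only a placeholder, and the option names (-t 5 for a MOAB overlap computation, -l to load the source and target meshes, -i for the intersection mesh output, -w to compute the weights) are assumptions meant to mirror the mbtempest commands shown earlier on this page; only -f, the output weight file, is referenced explicitly above.

Code Block
mpiexec -n 1024 tools/mbtempest -t 5 -l outputCSMesh_p1024_zRCB.h5m -l targetMesh_p1024_zRCB.h5m -i moab_intx_src_tgt.h5m -w -f map_src_tgt.h5m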

Note on DoF IDs for SE meshes

Currently, there is no consistent way to generate the global DoF IDs for a spectral element mesh in parallel through MOAB. While the actual ID assignment for DoFs is straightforward in serial, the assignment in parallel can depend on the partitioning and may not match the numbering used in HOMME. Hence, a different permutation may need to be applied to the generated mapping weights before they are passed back to HOMME within E3SM. The recommended workflow for this case is therefore to run HOMME at the grid resolution required for the mapping file, and to store the DoF IDs as a GLOBAL_DOFS tag in MOAB before the mesh is written to disk. This eliminates the need to reproduce a consistent numbering when generating the weights in the offline MOAB mbtempest workflow.
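Because the h5m output is a plain HDF5 file, a quick sanity check before generating weights is to confirm that the GLOBAL_DOFS tag was actually stored on the mesh written by the HOMME run (the file name below is a hypothetical placeholder):

Code Block
h5dump -n outputSEMesh_np4.h5m | grep -i GLOBAL_DOFS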

...