As part of the CMDV project, interfaces to integrate the MOAB unstructured mesh library with the TempestRemap remapping tool have been developed. Detailed information on the algorithmic and implementation aspects of this effort has been documented in a manuscript submitted to Geoscientific Model Development [1]. This work has led to a new offline remapping tool called mbtempest, which exposes functionality to compute the supermesh, or intersection mesh, between two unstructured source and target component grids, and to use this supermesh to compute the remapping weights for projecting solutions between the grids. This functionality is part of the critical workflow in E3SM, where the remapping weights generated in the offline step are consumed by MCT at runtime to seamlessly transfer solution data between components (atm↔ocn, atm↔lnd, etc.).

...

Partitioning meshes with the "inferred" strategy for better performance

A more recent update to the mbpart tool introduces the concept of inferred partitions, which preserves the geometric locality of the source and target grids as much as possible to minimize communication during intersection mesh computation at runtime. This strategy has been shown to provide considerable speedup in the intersection mesh computation and is now our preferred partitioning strategy in offline workflows, especially when one of the grids has topological holes (e.g., the OCN mesh). To generate inferred partitions, we usually choose the target mesh as the primary partition and the source mesh as the secondary partition; the source mesh partitions are then "inferred" from the target mesh partition's RCB tree. The commands to generate the inferred source partitions are shown below.
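As a concrete illustration, a hedged sketch of this two-step partitioning is shown here. The mesh file names and partition count are placeholders, and the exact option spellings (in particular the Zoltan RCB selection and the inferred-partition flag) should be verified against `mbpart -h` for your MOAB build:

```shell
# Step 1: partition the target (primary) mesh with Zoltan RCB,
# e.g. the OCN mesh with topological holes (file names are placeholders)
mbpart 1024 -z RCB ocean.h5m ocean_p1024.h5m

# Step 2: infer the source (secondary) mesh partitions from the target
# partition's RCB tree (verify the flag name with `mbpart -h`)
mbpart 1024 -z RCB --inferred ocean_p1024.h5m atm.h5m atm_p1024.h5m
```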

...

The environment settings for running mbtempest on compy are listed below and are stored in the file /compyfs/software/mbtempest.envs.sh for reference.
These correspond to MOAB commit 5e41106dc9a2 (newer than release 5.3.0) and TempestRemap commit 72df14282a2e9 (newer than release 2.1.0).

module load cmake/3.11.4 intel/19.0.3 mvapich2/2.3.1 pnetcdf/1.9.0 mkl/2019u3 metis/5.1.0
export MPI_DIR=/share/apps/mvapich2/2.3.1/intel/19.0.3
export METIS_DIR=/share/apps/metis/5.1.0
export EIGEN3_DIR=/share/apps/eigen3/3.3.7/include/eigen3
export HDF5_DIR=/share/apps/netcdf-MPI/intel/19.0.5/mvapich2/2.3.2
export NETCDF_DIR=/share/apps/netcdf-MPI/intel/19.0.5/mvapich2/2.3.2
export PNETCDF_DIR=/share/apps/pnetcdf/1.9.0/intel/19.0.3/mvapich2/2.3.1
export ZOLTAN_DIR=/compyfs/software/zoltan/3.83/intel/19.0.3
export TEMPESTREMAP_DIR=/compyfs/software/tempestremap/intel/19.0.3
export MOAB_DIR=/compyfs/software/moab/intel/19.0.3

...

  1. Ensure MPI-enabled C, C++, and Fortran compiler wrappers are exported in the local environment as $CC, $CXX, and $FC.
  2. Next, verify that the dependent library installations pointed to by $HDF5_DIR and $NETCDF_DIR have been compiled with MPI support using the $CC, $CXX, and $FC compilers.
  3. Get the Eigen3 package from the webpage and untar it into the $INSTALL_PREFIX/eigen3 directory with the following commands:
    1. Download: wget https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz  OR  curl -O https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz
    2. Extract: tar -xzf 3.3.7.tar.gz
    3. Move: mv eigen-eigen* $INSTALL_PREFIX/eigen3
    4. export EIGEN3_DIR=$INSTALL_PREFIX/eigen3
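The compiler checks in steps 1-2 can be sketched as follows. This assumes MPICH-style wrappers (mvapich2 is MPICH-derived), where the `-show` option prints the underlying compiler invocation; the wrapper names are typical defaults and may differ on your system:

```shell
# Export the MPI compiler wrappers (names are typical for MPICH/mvapich2 stacks)
export CC=mpicc CXX=mpicxx FC=mpif90

# MPICH-style wrappers accept -show to print the underlying compile line,
# confirming which serial compiler and which MPI libraries they wrap
$CC -show
$CXX -show
$FC -show
```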

Dependencies and pre-requisites from an existing E3SM use case

It is recommended to use the same dependent libraries as a regular E3SM case.
E3SM cases save their environment in files such as .env_mach_specific.sh in the case folder; that is a good environment from which to start building your TempestRemap, MOAB, or Zoltan dependencies, if they are not already built on your machine. That environment is created from the config_machines.xml and config_compilers.xml files, which change frequently as new releases, use cases, and tests become available. Problems can appear for MOAB's mbtempest if the HDF5 library that netCDF4 is built on lacks proper MPI support, or if HDF5 is built serially. In that case you are limited to building mbtempest without parallel support, which means you are better off simply running TempestRemap in serial and not building MOAB at all.
On compy, the netCDF used for E3SM is built with serial HDF5, so it cannot be used for MOAB; this is why compy has a separate netCDF installation built with parallel HDF5.
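Before committing to an existing case environment, you can source it and test whether its HDF5/NetCDF stack is parallel-enabled. A minimal hedged sketch (the case path is a placeholder, and the checks assume `nc-config` is on the PATH after sourcing):

```shell
# Load the environment captured by an existing E3SM case (path is a placeholder)
source /path/to/case/.env_mach_specific.sh

# netCDF reports whether it was built for parallel I/O
nc-config --has-parallel
nc-config --has-pnetcdf

# A parallel HDF5 build ships the h5pcc wrapper; a serial build only has h5cc
command -v h5pcc && echo "parallel HDF5 found" || echo "serial HDF5 only"
```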

Build

To get the entire (MOAB-TempestRemap) stack working correctly, we need to find parallel-enabled installations of the HDF5 and NetCDF dependencies that are built with MPI library support for the current architecture.

...

If steps (a)-(g) pass successfully, the MOAB libraries and tools, along with the interfaces for TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, is also installed during this process and can then be used standalone to generate weight files as needed.
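As a quick usage illustration, a hedged sketch of a standalone mbtempest invocation is shown below. The file names, process count, and discretization choices are placeholders, and the option spellings should be checked against `mbtempest -h` for your build:

```shell
# Compute the intersection (overlap) mesh between partitioned source and
# target grids, then generate FV-to-FV remapping weights
# (file names are placeholders; verify option names with `mbtempest -h`)
mpiexec -n 16 mbtempest -t 5 -l source_p16.h5m -l target_p16.h5m \
        -i intersection.h5m -w -m fv -m fv -f mapFV.nc
```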

References

[1] Mahadevan, V. S., Grindeanu, I., Jacob, R., and Sarich, J.: Improving climate model coupling through a complete mesh representation: a case study with E3SM (v1) and MOAB (v5.x), Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2018-280, in review, 2018.