As part of the efforts in the CMDV project, interfaces to integrate the MOAB unstructured mesh library with the TempestRemap remapping tool have been developed. Detailed information on the algorithmic and implementation aspects of this effort has been written up in a manuscript submitted to Geoscientific Model Development [1]. This work has led to the development of a new offline remapping tool called mbtempest, which exposes the functionality to compute the supermesh (intersection mesh) between two unstructured source and target component grids, and to use this supermesh to compute the remapping weights that project solutions between the grids. This functionality is part of the critical workflow in E3SM, where the remapping weights generated in the offline step are consumed by MCT at runtime to seamlessly transfer solution data between components (atm↔ocn, atm↔lnd, etc.).


Cori: Updated sources of TempestRemap and MOAB, along with all required dependencies, have been pre-installed on some of the standard machines. On Cori, the MOAB installation is available within the E3SM project space at `/project/projectdirs/acme/software/moab`, and the corresponding TempestRemap installation is at `/project/projectdirs/acme/software/tempestremap`. The workflow for generating the offline maps using these installed tools is described below.

Anvil: Coming soon.



mbtempest workflow for generating offline maps

In this section, the workflow for generating offline maps for the ne30 case is shown. This can be used as a template to generate maps between ATM-OCN for any resolution combination of CS and ICOD meshes. First, define the environment variable MOAB_DIR to point to the installation in your local folder, or use the pre-installed version available on Cori: export MOAB_DIR=/project/projectdirs/acme/software/moab.

  1. For the NE30 case, let us generate the CS mesh of required resolution using mbtempest.
    1. $MOAB_DIR/bin/mbtempest -t 0 -r 30 -f outCSMesh30.nc
    2. Here, type=0 (-t 0) specifies that we want a CS grid, with element resolution = 30x30x6 (using -r 30).
  2. Next, convert the NetCDF nc file format to a MOAB format, and in the process, also add some metadata for DoF numbering for the SE grid.
    1. $MOAB_DIR/bin/mbconvert  -B -i GLOBAL_DOFS -r 4 outCSMesh30.nc outCSMesh30.h5m
    2. Here, GLOBAL_DOFS is the tag that stores the DoF numbering for an SE grid of order 4. The input "*.nc" mesh and output "*.h5m" mesh are specified as arguments for the format conversion.
  3. The next step is to pre-partition the h5m file so that the map generation can be performed in parallel. In this particular example, we will use the Metis partitioner to generate 1024 parts.
    1. $MOAB_DIR/bin/mbpart 1024 -m ML_RB outCSMesh30.h5m outCSMesh30_1024.h5m
  4. Now that we have the ATM grid generated, let us perform a similar conversion to the OCN MPAS file. The MPAS nc file already exists and we will use this input file and convert it to a MOAB h5m file. During this process, unwanted edges and variables are not converted since the mbtempest mapping workflow only requires the actual mesh for computation of the overlap.
    1. $MOAB_DIR/bin/mbconvert -O "variable=" -O "no_edges" oEC60to30v3_60layer.170905.nc oEC60to30v3_60layer.170905.h5m
  5. Similar to the CS grid case, let us now pre-partition the grid to 1024 parts using the Metis Recursive-Bisection algorithm.
    1. $MOAB_DIR/bin/mbpart 1024 -m ML_RB oEC60to30v3_60layer.170905.h5m oEC60to30v3_60layer.170905_1024.h5m
  6. We now have fully partitioned MOAB meshes for the CS and MPAS grids, and all required inputs for mbtempest are available. Invoke the mbtempest command in parallel to generate the remapping weights, specifying the source and target grids along with their discretization details.
    1. mpiexec -n 16 $MOAB_DIR/bin/mbtempest -t 5 -w -l outCSMesh30_1024.h5m -l oEC60to30v3_60layer.170905_1024.h5m -m cgll -o 4 -m fv -o 1 -g GLOBAL_DOFS -g GLOBAL_ID -f mapSEFV-NE30.h5m
    2. The particular example above runs on 16 processes and takes the pre-partitioned input grids outCSMesh30_1024.h5m and oEC60to30v3_60layer.170905_1024.h5m for the CS and MPAS meshes, respectively.
    3. We also specify that the source discretization method is Spectral Element (SE) with continuous representation of DoFs on the element interfaces and the target discretization on MPAS grid is Finite Volume (fv). This option is specified using the -m input parameter, whose default is fv.
    4. The order of the discretization is then specified using the -o options for input and output models. In the above case, we have SE order = 4 and FV order = 1.
    5. Next, we also need to specify the tags in the mesh that contain the source and target global DoF numbers that are stored on their corresponding elements. This will dictate the ordering of the mapping weight matrix that is written out to file.
    6. The final argument specifies that the output map file is to be written out to mapSEFV-NE30.h5m for the NE30 case in parallel.
  7. Now that we have the parallel remapping weights generated in h5m format, we need to convert them back to a SCRIP nc file so that they can be consumed by E3SM. For this purpose, we use a special "serial" tool to convert the h5m file to SCRIP.
    1. $MOAB_DIR/bin/h5mtoscrip -d 2 -w mapSEFV-NE30.h5m -s mapSEFV-NE30.nc
    2. The -w argument takes the input map file in h5m format and the -s argument takes the output SCRIP filename.
  8. At the end of this workflow, we now have a SCRIP file containing the weights to compute a solution projection from an input CS NE30 grid with SE(4) discretization to an output MPAS grid with FV(1) discretization.

Building your own version of the mbtempest tool locally

In order to build the MOAB-TempestRemap stack with parallel MPI launch support, we suggest the following list of commands. First, define an installation prefix directory where the stack of libraries, includes, and tools will be installed. Let us call this the $INSTALL_PREFIX environment variable.

Dependencies and pre-requisites

Before getting started, for your architecture of choice, whether that is your laptop or an LCF machine, define the following compatible environment variables that can be used to build the stack.

  1. MPI-enabled C, C++, and Fortran compiler wrappers that are exported in the local environment as $CC, $CXX, and $FC.
  2. Next, verify installations of dependent libraries such as $HDF5_DIR and $NETCDF_DIR that have been compiled with MPI support using the $CC, $CXX, $FC compilers.
  3. Get the Eigen3 package from the project webpage and untar it to the $INSTALL_PREFIX/eigen3 directory with the following commands
    1. Download: wget https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz  OR  curl https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz -O
    2. Extract: tar -xzf 3.3.7.tar.gz
    3. Move: mv eigen-eigen* $INSTALL_PREFIX/eigen3
    4. export EIGEN3_DIR=$INSTALL_PREFIX/eigen3
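The prerequisites above can be staged in one pass. The snippet below is a sketch: the compiler wrapper names and the HDF5/NetCDF paths are illustrative placeholders, and must be replaced with the values for your own system or module environment.

```shell
# Example environment setup (illustrative values; substitute the compiler
# wrappers and library paths for your own machine).
export INSTALL_PREFIX=$HOME/software          # chosen installation prefix
export CC=mpicc CXX=mpicxx FC=mpif90          # MPI compiler wrappers
export HDF5_DIR=/usr/local/hdf5-parallel      # MPI-enabled HDF5 installation
export NETCDF_DIR=/usr/local/netcdf-parallel  # NetCDF built against that HDF5

# Fetch and stage Eigen3 (header-only; no build step required)
wget https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz
tar -xzf 3.3.7.tar.gz
mv eigen-eigen* $INSTALL_PREFIX/eigen3
export EIGEN3_DIR=$INSTALL_PREFIX/eigen3
```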

Build

To get the entire (MOAB-TempestRemap) stack working correctly, we need to find parallel-enabled dependency installations of HDF5 and NetCDF that are built with MPI library support for the current architecture.

  1. TempestRemap

    1. Clone repository: `git clone https://github.com/ClimateGlobalChange/tempestremap.git tempestremap`
    2. Create build dir: cd tempestremap && mkdir build
    3. Generate configure script: autoreconf -fi
    4. Go to build dir: cd build
    5. Configure: ../configure --prefix=$INSTALL_PREFIX/tempestremap --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC CXX=$CXX CXXFLAGS="-g -O2"
    6. Build and install: make all && make install

      At the end of this series of steps, the TempestRemap libraries and tools (GenerateCSMesh, GenerateICOMesh, GenerateOverlapMesh, and GenerateOfflineMap, among others) will be installed in the $INSTALL_PREFIX/tempestremap directory.
  2. MOAB

    1. Clone repository: `git clone https://bitbucket.org/fathomteam/moab.git moab`
    2. Checkout the master branch: git checkout master
    3. Create build dir: cd moab && mkdir build
    4. Generate configure script: autoreconf -fi
    5. Go to build dir: cd build
    6. Configure: ../configure --prefix=$INSTALL_PREFIX/moab --with-mpi --with-tempestremap=$INSTALL_PREFIX/tempestremap --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC FC=$FC F77=$FC CXX=$CXX CXXFLAGS="-g -O2" --with-eigen3=$EIGEN3_DIR
    7. Build and install: make all && make install

      If steps (1)-(7) pass successfully, the MOAB libraries and tools, along with the interfaces for TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, will also be installed during this process and can then be used standalone to generate the weight files as needed.

A simpler, consolidated build process

Alternatively, to combine both builds, we recommend a consolidated configuration process for MOAB, which incorporates the configuration of TempestRemap as part of the MOAB configuration. Notice the --download-tempestremap=master option in the configure line below, which instructs MOAB to clone the master branch of TempestRemap and build the dependency with the $HDF5_DIR and $NETCDF_DIR specified by the user, along with consistent compiler options.

MOAB and TempestRemap

a. Clone repository: `git clone https://bitbucket.org/fathomteam/moab.git moab` 
b. Checkout the master branch: git checkout master
c. Create build dir: cd moab && mkdir build
d. Generate configure script: autoreconf -fi
e. Go to build dir: cd build
f. Configure: ../configure --prefix=$INSTALL_PREFIX/moab --with-mpi --download-tempestremap=master --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC FC=$FC F77=$FC CXX=$CXX CXXFLAGS="-g -O2" --with-eigen3=$EIGEN3_DIR
g. Build and install: make all && make install

If steps (a)-(g) pass successfully, the MOAB libraries and tools, along with the interfaces for TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, will also be installed during this process and can then be used standalone to generate the weight files as needed.

References

[1] Mahadevan, V. S., Grindeanu, I., Jacob, R., and Sarich, J.: Improving climate model coupling through a complete mesh representation: a case study with E3SM (v1) and MOAB (v5.x), Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2018-280, in review, 2018.