
In order to build the MOAB-TempestRemap stack with parallel MPI support, we suggest the following sequence of commands. First, define an installation prefix directory where the stack of libraries, includes, and tools will be installed; we refer to it below as the $INSTALL_PREFIX environment variable.

Dependencies and pre-requisites

Before getting started, for your architecture of choice (whether that is your laptop or an LCF machine), prepare the following set of compatible environment variables that will be used to build the stack.

  1. MPI-enabled C, C++, and Fortran compiler wrappers that are exported in the local environment as $CC, $CXX, and $FC.
  2. Next, verify that the dependent HDF5 and NetCDF libraries have been compiled with MPI support using the $CC, $CXX, and $FC compilers, and export their installation locations as $HDF5_DIR and $NETCDF_DIR.
  3. Get the Eigen3 package from its webpage and untar it into the $INSTALL_PREFIX/eigen3 directory with the following commands:
      a. Download: wget https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz  OR  curl https://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz -O
      b. Untar: tar -xzf 3.3.7.tar.gz
      c. Move: mv eigen-eigen* $INSTALL_PREFIX/eigen3
      d. Export: export EIGEN3_DIR=$INSTALL_PREFIX/eigen3
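
As a concrete illustration, the environment for a typical Linux cluster with MPICH-style wrappers might be prepared as follows; the compiler names and install paths here are assumptions to be replaced with the values for your machine:

```shell
# Illustrative environment setup -- adjust the wrapper names and paths
# for your own system; these are assumptions, not requirements.
export CC=mpicc                                    # MPI C compiler wrapper
export CXX=mpicxx                                  # MPI C++ compiler wrapper
export FC=mpif90                                   # MPI Fortran compiler wrapper
export INSTALL_PREFIX=$HOME/software/moab-tempest  # stack install prefix
export HDF5_DIR=$INSTALL_PREFIX/hdf5               # MPI-enabled HDF5 (assumed path)
export NETCDF_DIR=$INSTALL_PREFIX/netcdf           # MPI-enabled NetCDF (assumed path)
export EIGEN3_DIR=$INSTALL_PREFIX/eigen3           # from step 3 above
```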

    Build

    To get the entire MOAB-TempestRemap stack working correctly, we need parallel-enabled installations of the HDF5 and NetCDF dependencies, built with MPI library support for the current architecture.

    1. TempestRemap


      a. Clone repository: `git clone https://github.com/ClimateGlobalChange/tempestremap.git tempestremap`
      b. Create build dir: cd tempestremap && mkdir build
      c. Generate configure script: autoreconf -fi
      d. Go to build dir: cd build
      e. Configure: ../configure --prefix=$INSTALL_PREFIX/tempestremap --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC CXX=$CXX CXXFLAGS="-g -O2"
      f. Build and install: make all && make install

      At the end of this series of steps, the TempestRemap libraries and tools (GenerateCSMesh, GenerateICOMesh, GenerateOverlapMesh, and GenerateOfflineMap, among others) will be installed in the $INSTALL_PREFIX/tempestremap directory.
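
      As a quick sanity check that the installation completed, the expected executables can be verified under the prefix. The helper below is a minimal sketch, assuming the default bin/ layout of the TempestRemap install:

```shell
# Sanity-check a TempestRemap install prefix: report which of the
# expected tools are present and executable under $prefix/bin.
check_tempestremap_install() {
  prefix="$1"
  status=0
  for tool in GenerateCSMesh GenerateICOMesh GenerateOverlapMesh GenerateOfflineMap; do
    if [ -x "$prefix/bin/$tool" ]; then
      echo "found: $tool"
    else
      echo "MISSING: $tool"
      status=1
    fi
  done
  return $status
}

# Example: check_tempestremap_install "$INSTALL_PREFIX/tempestremap"
```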

    2. MOAB

      a. Clone repository: `git clone https://bitbucket.org/fathomteam/moab.git moab`
      b. Checkout feature branch: cd moab && git checkout vijaysm/tempest-master-API
      c. Create build dir: mkdir build
      d. Generate configure script: autoreconf -fi
      e. Go to build dir: cd build
      f. Configure: ../configure --prefix=$INSTALL_PREFIX/moab --with-mpi --with-tempestremap=$INSTALL_PREFIX/tempestremap --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC FC=$FC F77=$FC CXX=$CXX CXXFLAGS="-g -O2" --with-eigen3=$EIGEN3_DIR
      g. Build and install: make all && make install

      If steps (a)-(g) pass successfully, the MOAB libraries and tools, along with the interfaces for TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, will also be installed during this process and can then be used standalone to generate the weight files as needed.


    A simpler, consolidated build process

    Alternatively, to combine both builds, we recommend a consolidated configuration process for MOAB that folds the TempestRemap configuration into the MOAB configuration. Note the --download-tempestremap=master option in the configure line below, which instructs MOAB to clone the master branch of TempestRemap and build the dependency with the user-specified $HDF5_DIR and $NETCDF_DIR, along with consistent compiler options.

    MOAB and TempestRemap

    a. Clone repository: `git clone https://bitbucket.org/fathomteam/moab.git moab`
    b. Checkout feature branch: cd moab && git checkout vijaysm/tempest-master-API
    c. Create build dir: mkdir build
    d. Generate configure script: autoreconf -fi
    e. Go to build dir: cd build
    f. Configure: ../configure --prefix=$INSTALL_PREFIX/moab --with-mpi --download-tempestremap=master --with-netcdf=$NETCDF_DIR --with-hdf5=$HDF5_DIR CC=$CC FC=$FC F77=$FC CXX=$CXX CXXFLAGS="-g -O2" --with-eigen3=$EIGEN3_DIR
    g. Build and install: make all && make install

    If steps (a)-(g) pass successfully, the MOAB libraries and tools, along with the interfaces for TempestRemap, will be installed in the $INSTALL_PREFIX/moab directory. The offline remapping weight computation tool, mbtempest, will also be installed during this process and can then be used standalone to generate the weight files as needed.

    Using the mbtempest tool

    The mbtempest tool exposes the algorithms and interfaces to invoke TempestRemap through MOAB in order to generate remapping weights for combinations of discretizations on unstructured source and target grids. Most of the options supported by the TempestRemap tools are provided in this unified interface, with one key difference: the entire workflow makes use of MPI parallelism. This implies that the overlap mesh computation that would normally use the GenerateOverlapMesh tool in TempestRemap is replaced by MOAB's parallel implementation of an advancing front intersection computation. This intersection mesh can subsequently be written to file as an intermediate step, or used by mbtempest to generate the remapping weights needed to project a solution field from the source to the target component grid. Prescriptions to preserve conservation of scalar/flux data and to impose monotonicity constraints are also available as options to mbtempest.

    Code Block
    [bash]> tools/mbtempest -h
    Usage: mbtempest --help | [options] 
    Options: 
      -h [--help]       : Show full help text
      -t [--type] <int> : Type of mesh (default=CS; Choose from [CS=0, RLL=1, ICO=2, OVERLAP_FILES=3, OVERLAP_MEMORY=4, OVERLAP_MOAB=5])
      -r [--res] <int>  : Resolution of the mesh (default=5)
      -d [--dual]       : Output the dual of the mesh (generally relevant only for ICO mesh)
      -w [--weights]    : Compute and output the weights using the overlap mesh (generally relevant only for OVERLAP mesh)
      -c [--noconserve] : Do not apply conservation to the resultant weights (relevant only when computing weights)
      -v [--volumetric] : Apply a volumetric projection to compute the weights (relevant only when computing weights)
      -n [--monotonic] <int>: Ensure monotonicity in the weight generation
      -l [--load] <arg> : Input mesh filenames (a source and target mesh)
      -o [--order] <int>: Discretization orders for the source and target solution fields
      -m [--method] <arg>: Discretization method for the source and target solution fields
      -g [--global_id] <arg>: Tag name that contains the global DoF IDs for source and target solution fields
      -f [--file] <arg> : Output remapping weights filename
      -i [--intx] <arg> : Output TempestRemap intersection mesh filename
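
    As an illustration of how these options combine, the snippet below assembles a weight-generation invocation as a dry run; the mesh file names, rank count, and output names are assumptions, and the command is only echoed, not executed:

```shell
# Assemble an mbtempest weight-generation command from the options above.
# Dry run only: the command is echoed, not executed; names are illustrative.
SRC_MESH=outputCSMesh.exo    # source grid (first -l)
TGT_MESH=outputICOMesh.exo   # target grid (second -l)
NPROCS=4                     # MPI rank count
CMD="mpiexec -n $NPROCS tools/mbtempest -t 5"   # OVERLAP_MOAB mode
CMD="$CMD -l $SRC_MESH -l $TGT_MESH"            # source, then target
CMD="$CMD -i moab_intx.exo"                     # intermediate intersection mesh
CMD="$CMD -w -f remap_weights.h5m"              # compute and write weights
echo "$CMD"
```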

    ...

    Code Block
    [bash]> tools/mbtempest -t 2 -r 50 -d -f outputICODMesh.exo
    Creating TempestRemap Mesh object ...
    ------------------------------------------------------------
    Generating Mesh.. Done
    Writing Mesh to file
    ..Mesh size: Nodes [50000] Elements [25002]
    ..Nodes per element
    ....Block 1 (6 nodes): 25002
    ..Done
    [LOG] Time taken to create Tempest mesh: max = 0.0774914, avg = 0.0774914
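
    As a consistency check, and assuming -r specifies the icosahedral subdivision level, the reported sizes match the standard counts for a subdivided icosahedron: 20r^2 triangular faces and 10r^2+2 vertices, so the dual mesh has 10r^2+2 polygonal cells on 20r^2 nodes:

```shell
# Consistency check for the ICO dual mesh sizes reported above
# (assumes -r 50 is the icosahedral subdivision level).
r=50
dual_nodes=$((20 * r * r))       # triangular faces of the ICO mesh -> dual nodes
dual_cells=$((10 * r * r + 2))   # vertices of the ICO mesh -> dual cells
echo "dual mesh: Nodes [$dual_nodes] Elements [$dual_cells]"
```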

    Generating intersection meshes

    Once we have a source and target grid that is to be used to compute the remapping weights, mbtempest can be used in a similar fashion to GenerateOverlapMesh/GenerateOfflineMap in TempestRemap or the ESMF_RegridWeightGen tool with ESMF to create the field projection weights. Note that unlike ESMF, there is no need to create a dual of the higher-order Spectral Element (SE) mesh using the TempestRemap workflow. Some examples with different options are provided below.

    Code Block
    1. mpiexec -n 1 tools/mbtempest -t 5 -l outputCSMesh.exo -l outputICOMesh.exo -i moab_intx_cs_ico.exo
    2. mpiexec -n 16 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_T.h5m -l outputICOMesh.exo  -i moab_intx_atm_ico.h5m
    3. mpiexec -n 128 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_T.h5m -l $MOAB_SRC_DIR/MeshFiles/unittest/recMeshOcn.h5m  -i moab_intx_atm_ocn.h5m 

    ...


    Note on converting Exodus meshes to parallel h5m format

    The meshes generated with TempestRemap are written in the Exodus format, which is not MOAB's natively parallel (optimized) I/O format. So for high-resolution meshes (such as NE512 or NE1024) that are to be used for computing remapping weights, it is important to perform a preprocessing step that partitions the mesh in the h5m format, in order to minimize mbtempest execution time. Such a workflow can be implemented as below.

    First convert the mesh from Exodus to HDF5 (h5m)

    Code Block
    mpiexec -n 4 tools/mbconvert -g -o "PARALLEL=WRITE_PART" -O "PARALLEL=BCAST_DELETE" -O "PARTITION=TRIVIAL" -O "PARALLEL_RESOLVE_SHARED_ENTS" outputCSMesh.exo outputCSMesh.h5m

    Next partition the mesh with Zoltan or Metis

    Code Block
    METIS (RCB)  : tools/mbpart 1024 -m ML_RB -i 1.005 outputCSMesh.h5m outputCSMesh_p1024_mRB.h5m
    METIS (KWay) : tools/mbpart 1024 -m ML_KWAY outputCSMesh.h5m outputCSMesh_p1024_mKWAY.h5m
    Zoltan (RCB) : tools/mbpart 1024 -z RCB -i 1.005 outputCSMesh.h5m outputCSMesh_p1024_zRCB.h5m
    Zoltan (PHG) : tools/mbpart 1024 -z PHG outputCSMesh.h5m outputCSMesh_p1024_zPHG.h5m

    Now the partitioned mesh files outputCSMesh_p1024_*.h5m can be used in parallel as input for either a source or a target grid. A similar procedure can be performed on other Exodus meshes to make them parallel-aware for use with mbtempest.
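
    The two-step convert-and-partition workflow above can be wrapped in a small helper; the sketch below only prints the mbconvert/mbpart commands it would run (here with the Zoltan RCB partitioner), so nothing is executed:

```shell
# Dry-run helper wrapping the mbconvert + mbpart steps shown above;
# it only prints the commands (Zoltan RCB variant), executing nothing.
prepartition_mesh() {
  exo="$1"               # input Exodus mesh, e.g. outputCSMesh.exo
  nparts="$2"            # number of partitions, e.g. 1024
  base="${exo%.exo}"
  echo "mpiexec -n 4 tools/mbconvert -g -o PARALLEL=WRITE_PART" \
       "-O PARALLEL=BCAST_DELETE -O PARTITION=TRIVIAL" \
       "-O PARALLEL_RESOLVE_SHARED_ENTS $exo $base.h5m"
  echo "tools/mbpart $nparts -z RCB -i 1.005 $base.h5m ${base}_p${nparts}_zRCB.h5m"
}

# Example: prepartition_mesh outputCSMesh.exo 1024
```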



    ...


    The first -l option specifies the source grid to load, and the second -l option specifies the target grid. mbtempest can load Exodus, .nc, and h5m files in serial, while parallel I/O is generally optimized only for the native HDF5 format (h5m). We therefore recommend converting meshes to this format and pre-partitioning them using the instructions above.

    Generating remapping weights in parallel

    Once we have a source and target grid that is to be used to compute the remapping weights, mbtempest can be used in a similar fashion to GenerateOverlapMesh/GenerateOfflineMap in TempestRemap or the ESMF_RegridWeightGen tool with ESMF to create the field projection weights. Note that unlike ESMF, there is no need to create a dual of the higher-order Spectral Element (SE) mesh using the TempestRemap workflow. Some examples with different options are provided below.

    Code Block
    1. mpiexec -n 1 tools/mbtempest -t 5 -l outputCSMesh.exo -l outputICOMesh.exo -f moab_mbtempest_remap_csico_fvfv.h5m -i moab_intx_file2.exo -w 
    2. mpiexec -n 64 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_T.h5m -l $MOAB_SRC_DIR/MeshFiles/unittest/recMeshOcn.h5m  -f moab_mbtempest_remap_fvfv.h5m -i moab_intx_file2.exo -w -m fv -m fv 
    3. mpiexec -n 128 tools/mbtempest -t 5 -l $MOAB_SRC_DIR/MeshFiles/unittest/wholeATM_T.h5m -l $MOAB_SRC_DIR/MeshFiles/unittest/recMeshOcn.h5m  -f moab_mbtempest_remap_sefv.h5m -i moab_intx_file2.exo -w -m cgll -m fv -o 4 -o 1 -g GLOBAL_DOFS -g GLOBAL_ID

    These commands generate the remapping weights by computing the intersection mesh through MOAB's advancing front intersection algorithm, and then using TempestRemap to generate the weights in parallel. The computed weight matrix is then written out in parallel in the h5m format (specified through the -f option).

    Note on DoF IDs for SE meshes

    ...