Singularity container
The E3SM model can be run within the E3SM Singularity container. This page describes how to install Singularity on your Linux laptop/workstation, build or download the E3SM container, and run the E3SM model within the container.
Note: the container is limited to the resources of the single node or workstation it runs on, so keep your case sizes in mind.
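For example, once a case exists (see the Anvil walkthrough below), you can check and cap its PE layout from the case directory so it fits on a single node. This is a minimal sketch; the value 16 mirrors MAX_TASKS_PER_NODE in the machine entry shown later on this page:
./xmlquery TOTALPES    # how many MPI tasks the current layout requests
./xmlchange NTASKS=16  # cap every component at one node's worth of tasks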
If you have a Mac or Windows machine, use the Docker container instead.
Install Singularity
Linux: Install Singularity as described at Singularity Installation.
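For reference, building a 3.x release from source typically looks like the following. The version number is illustrative and the release URL may have moved, so follow the installation guide linked above for current instructions (you will need Go and the usual build tools):
export VERSION=3.5.2
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz
tar -xzf singularity-${VERSION}.tar.gz
cd singularity
./mconfig && make -C builddir && sudo make -C builddir install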
Singularity on supported machines
Singularity is already available on some of E3SM’s supported machines.
Anvil: module load singularity/3.5.2 (earlier versions also available).
Theta: see https://www.alcf.anl.gov/support-center/theta/singularity-theta
Cooley (also at ALCF): see https://www.alcf.anl.gov/support-center/cooley/singularity-cooley
Download the E3SM container
The latest version of the E3SM container can be downloaded from e3sm.sif.
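From the command line, the download looks like this; the URL below is a placeholder for the actual address behind the e3sm.sif link, not a real location:
wget -O e3sm.sif https://<address-of-e3sm.sif-link>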
Example: Run the container on Anvil
Assume you are in your home directory and have already cloned the code into a directory called E3SM.
Also create the input-data directory: mkdir -p $HOME/projects/e3sm/cesm-inputdata
This directory is assumed to exist by the singularity machine entry (see the config_machines.xml entry later on this page). Input data will be downloaded into it on demand and can grow large; the container itself does not ship with any input data.
[lukasz@blueslogin4 ~]$ module load singularity
[lukasz@blueslogin4 ~]$ srun --pty -p acme-small -t 01:00:00 /bin/bash
(you are now on an interactive compute node)
[lukasz@b566]$ singularity shell --hostname singularity e3sm.sif
(you are now inside the container; the --hostname flag makes the container match the NODENAME_REGEX in the singularity machine entry, so CIME can identify the machine)
Singularity> cd E3SM/cime/scripts/
Singularity> ./create_newcase --case singularity.A_WCYCL1850.ne4_oQU240.baseline --compset A_WCYCL1850 --res ne4_oQU240
Singularity> cd singularity.A_WCYCL1850.ne4_oQU240.baseline/
Singularity> ./case.setup
Singularity> ./case.build
Singularity> ./case.submit
There is no batch system inside the container, so ./case.submit launches the run directly: you watch the output from mpirun and have to wait until the model finishes. Instead, launch it in the background like this:
Singularity> ./case.submit >& case.out &
You can then cd to the run directory and look at the log files as they are written, or do other things inside the container. Note that if you exit the container, the job will not finish.
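For example, using the RUNDIR layout from the machine entry later on this page (log file names vary by component and run):
Singularity> cd $HOME/projects/e3sm/scratch/singularity.A_WCYCL1850.ne4_oQU240.baseline/run
Singularity> tail -f *.log.*   # follow the component logs as the model writes them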
Advanced: Rebuilding the E3SM container
If you would like to make any changes to the container, you can build it from scratch. The E3SM Singularity definition file is available at e3sm.def. After editing the definition file, build a new container:
sudo singularity build e3sm.sif e3sm.def
It may take up to an hour to create a new container.
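For orientation, a Singularity definition file has the following general shape. This is a hypothetical skeleton, not the contents of the real e3sm.def:
Bootstrap: docker
From: ubuntu:18.04

%post
    # build-time commands, e.g. installing compilers and libraries
    apt-get update && apt-get install -y gfortran cmake

%environment
    # variables set every time the container runs
    export LC_ALL=C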
Advanced: Add Singularity support to older code
You can run older code with Singularity by adding the right machine configuration entries. First, add a machine element to cime_config/machines/config_machines.xml:
<machine MACH="singularity">
  <DESC>Singularity container</DESC>
  <NODENAME_REGEX>singularity</NODENAME_REGEX>
  <OS>LINUX</OS>
  <COMPILERS>gnu</COMPILERS>
  <MPILIBS>mpich</MPILIBS>
  <CIME_OUTPUT_ROOT>$ENV{HOME}/projects/e3sm/scratch</CIME_OUTPUT_ROOT>
  <DIN_LOC_ROOT>$ENV{HOME}/projects/e3sm/cesm-inputdata</DIN_LOC_ROOT>
  <DIN_LOC_ROOT_CLMFORC>$ENV{HOME}/projects/e3sm/ptclm-data</DIN_LOC_ROOT_CLMFORC>
  <DOUT_S_ROOT>$ENV{HOME}/projects/e3sm/scratch/archive/$CASE</DOUT_S_ROOT>
  <BASELINE_ROOT>$ENV{HOME}/projects/e3sm/baselines/$COMPILER</BASELINE_ROOT>
  <CCSM_CPRNC>$CCSMROOT/tools/cprnc/build/cprnc</CCSM_CPRNC>
  <GMAKE>make</GMAKE>
  <GMAKE_J>16</GMAKE_J>
  <TESTS>e3sm_developer</TESTS>
  <BATCH_SYSTEM>none</BATCH_SYSTEM>
  <SUPPORTED_BY>lukasz at uchicago dot edu</SUPPORTED_BY>
  <MAX_TASKS_PER_NODE>16</MAX_TASKS_PER_NODE>
  <MAX_MPITASKS_PER_NODE>16</MAX_MPITASKS_PER_NODE>
  <mpirun mpilib="default">
    <executable>mpirun</executable>
    <arguments>
      <arg name="num_tasks"> -launcher fork -hosts localhost -np {{ total_tasks }}</arg>
    </arguments>
  </mpirun>
  <module_system type="none"/>
  <RUNDIR>$ENV{HOME}/projects/e3sm/scratch/$CASE/run</RUNDIR>
  <EXEROOT>$ENV{HOME}/projects/e3sm/scratch/$CASE/bld</EXEROOT>
  <environment_variables>
    <env name="E3SM_SRCROOT">$SRCROOT</env>
  </environment_variables>
  <environment_variables mpilib="mpi-serial">
    <env name="NETCDF_PATH">/usr/local/packages/netcdf-serial</env>
    <env name="PATH">/usr/local/packages/cmake/bin:/usr/local/packages/hdf5-serial/bin:/usr/local/packages/netcdf-serial/bin:$ENV{PATH}</env>
    <env name="LD_LIBRARY_PATH">/usr/local/packages/szip/lib:/usr/local/packages/hdf5-serial/lib:/usr/local/packages/netcdf-serial/lib</env>
  </environment_variables>
  <environment_variables mpilib="!mpi-serial">
    <env name="NETCDF_PATH">/usr/local/packages/netcdf-parallel</env>
    <env name="PNETCDF_PATH">/usr/local/packages/pnetcdf</env>
    <env name="HDF5_PATH">/usr/local/packages/hdf5-parallel</env>
    <env name="PATH">/usr/local/packages/cmake/bin:/usr/local/packages/mpich/bin:/usr/local/packages/hdf5-parallel/bin:/usr/local/packages/netcdf-parallel/bin:/usr/local/packages/pnetcdf/bin:$ENV{PATH}</env>
    <env name="LD_LIBRARY_PATH">/usr/local/packages/mpich/lib:/usr/local/packages/szip/lib:/usr/local/packages/hdf5-parallel/lib:/usr/local/packages/netcdf-parallel/lib:/usr/local/packages/pnetcdf/lib</env>
  </environment_variables>
</machine>
Then add a corresponding compiler element to cime_config/machines/config_compilers.xml:
<compiler COMPILER="gnu" MACH="singularity">
  <HDF5_PATH> $ENV{HDF5_PATH}</HDF5_PATH>
  <NETCDF_PATH> $(NETCDF_PATH)</NETCDF_PATH>
  <PNETCDF_PATH> $(PNETCDF_PATH)</PNETCDF_PATH>
  <ADD_SLIBS> $(shell $(NETCDF_PATH)/bin/nf-config --flibs) -lblas -llapack</ADD_SLIBS>
</compiler>
Alternatively, you can create entries for the container in separate configuration files in the $HOME/.cime directory. See http://esmci.github.io/cime/versions/master/html/users_guide/cime-config.html#cime-user-config-directory
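For example, instead of editing the repository copies, you could create $HOME/.cime/config_machines.xml holding the machine entry above wrapped in the standard root element (a sketch; check the schema version against your CIME checkout):
<?xml version="1.0"?>
<config_machines version="2.0">
  <machine MACH="singularity">
    <!-- the machine entry shown above -->
  </machine>
</config_machines>
A config_compilers.xml in the same directory can hold the compiler entry.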
At this point you can run the container.
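If CIME does not detect the machine automatically from the container hostname, you can name it explicitly when creating a case (a sketch reusing the case from the Anvil example):
Singularity> ./create_newcase --case singularity.A_WCYCL1850.ne4_oQU240.baseline --compset A_WCYCL1850 --res ne4_oQU240 --machine singularity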
For more details on how to create a new case and run it, please refer to the E3SM Quick Start.