...
As in previous versions, you can access the environment as usual by sourcing an activation script:
Acme1:
source /p/user_pub/e3sm_unified/envs/load_latest_e3sm_unified_acme1.sh
Andes:
source /ccs/proj/cli115/software/e3sm-unified/load_latest_e3sm_unified_andes.sh
...
source /share/apps/E3SM/conda_envs/load_latest_e3sm_unified_compy.sh
Dane:
Frontier:
source /ccs/proj/cli115/software/e3sm-unified/load_latest_e3sm_unified_frontier.sh
...
source /lus/grand/projects/E3SMinput/soft/e3sm-unified/load_latest_e3sm_unified_polaris.sh
Ruby:
Details
The new version has been deployed on all supported machines: Acme1, Andes, Anvil, Chicoma, Chrysalis, Compy, Dane, Frontier, Perlmutter, ALCF Polaris (not to be confused with the E3SM Polaris software) and Ruby.
Note: We encourage users at OLCF to use Andes, rather than Frontier, for processing and analysis.
On 8 machines (Anvil, Chicoma, Chrysalis, Compy, Dane, Frontier, Perlmutter and Ruby), there are 6 packages of interest -- ESMF, ILAMB, MOAB, NCO, TempestExtremes and TempestRemap -- that have been built with Spack using system compilers and MPI libraries. When you load E3SM-Unified on a compute node, you will have access to these versions, which can be run in parallel and will typically run more efficiently than their conda counterparts.
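If you want to confirm which build of a tool you are getting after activation, one way is to compare its resolved path against the conda environment prefix: a Spack (system) build typically lives outside the conda prefix. The sketch below is illustrative only -- the paths are hypothetical placeholders, not the actual layout on any of these machines:

```shell
#!/bin/sh
# Hypothetical sketch: classify a tool as a conda build or a system (Spack)
# build by checking whether its resolved path sits under $CONDA_PREFIX.
# On a real machine you would use:  tool_path=$(command -v ncks)
tool_path="/soft/spack/nco/bin/ncks"           # placeholder resolved path
conda_prefix="/path/to/e3sm-unified/envs/base" # placeholder $CONDA_PREFIX

case "$tool_path" in
  "$conda_prefix"/*) echo "conda build" ;;
  *)                 echo "system (Spack) build" ;;
esac
```

Running this with a path outside the conda prefix prints "system (Spack) build"; on a login node, where the Spack builds are not used, the same check on a real `$CONDA_PREFIX` would typically report the conda build instead.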
...