This page documents how to run SCREAMv1 on supported machines.
Notes:
The cori-intel optimized build hits an internal compiler error (in shoc_assumed_pdf.cpp?). Avoid this by building in debug mode, or use the gnu compiler on cori if you want an optimized build.
The perlmutter optimized build yields corrupted answers (a really hot planet). Avoid this by building in debug mode.
ne120 currently fails with what look like out-of-memory errors when SPA is active. Delete that process from namelist_scream.xml.
For perlmutter: you need to use the gnugpu compiler and set
--gpu-bind=none
in env_batch.xml:
<directives compiler="gnugpu">
  <directive> --gpus-per-task=1</directive>
  <directive> --gpu-bind=none</directive>
</directives>
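If you want to double check that env_batch.xml picked up these directives, one quick (purely optional) way is to grep for the gnugpu block from the case directory:
grep -A 3 'compiler="gnugpu"' env_batch.xml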
Step 1: Environment
The SCREAMv1 build requires a handful of python libraries. It also requires a ton of other dependencies, but CIME handles these automatically. Here’s what you need to do on each machine:
NERSC (cori-knl, perlmutter): nothing to do - everything is available by default
Summit: just
module load python
LLNL machines (quartz, syrah): Create a conda environment with the needed packages:
conda create -n scream_v1_build pyyaml pylint psutil
Once this is done (one time for a given machine) you can activate the environment:
conda activate scream_v1_build
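Optionally, you can sanity check that the packages are visible from the activated environment (not required; just a quick import test):
python -c "import yaml, pylint, psutil"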
Step 2: Define convenience variables
So that the commands below work for everyone on all supported machines, we define some user-specific variables. Change these as needed:
export CODE_ROOT=~/gitwork/scream/   # or wherever you cloned the scream repo
export COMPSET=F2010-SCREAMv1        # or whatever compset you want
export RES=ne4_ne4                   # or whatever resolution you want
export PECOUNT=16x1                  # number of MPI tasks x number of threads; should be divisible by the node size
export CASE_NAME=${COMPSET}.${RES}.${PECOUNT}.test1   # name for your simulation
export QUEUE=pdebug                  # whatever the name of your debug or batch queue is
export COMPILER=intel                # which compiler to use (can be omitted on some machines)
Resolution options:
Resolution | Grid name (aka $RES)
---|---
ne4 | ne4_ne4
ne30 | ne30_ne30
ne120 | ne120_r0125_oRRS18to6v3
ne256 |
ne512 | ne512_r0125_oRRS18to6v3
ne1024 |
Suggested PECOUNTs (not necessarily performant, just something to get started). Note that EAMv1 currently uses something like 0.04 to 0.07 GB/element, so make sure you don't put more elements per node than you have memory for (a rough worked example follows the table).
Machine | ne4 (max = 96) | ne30 (max = 5,400) | ne120 (max = 86,400) | ne256 (max = 393,216) | ne512 (max = 1,572,864) | ne1024 (max = 6,291,456)
---|---|---|---|---|---|---
cori-knl (68 cores/node; 96+16 GB/node) | 16x1 | 675x1 | 4096x1 | | |
perlmutter (64 cores/node; 4 GPUs/node; 256 GB/node) | 12x1 | | | | |
syrah (16 cores/node; 64 GB/node) | 32x1 | 160x1 | 1600x1 | | |
quartz (36 cores/node; 128 GB/node) | 72x1 | 180x1 | 1800x1 | | |
summit (8 cores/node?; 6 GPUs/node; 512+96 GB/node) | 256x1 | 4096x1 | | | |
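As a rough worked example of the memory guidance above: ne30 has 6 x 30^2 = 5,400 elements, so at ~0.04 to 0.07 GB/element the run needs roughly 220 to 380 GB in total; on quartz (128 GB/node) that means spreading across at least 2 to 3 nodes no matter how few MPI tasks you actually need.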
Available compilers are listed in $CODE_ROOT/cime_config/machines/config_compilers.xml. Options for the various machines are listed below. CIME also puts simulations in a different scratch directory on each machine. Figuring out where output will go can be confusing, so defaults are listed below.
Machine | Location where CIME puts the run | Available compilers
---|---|---
cori-knl | /global/cscratch1/sd/${USER}/e3sm_scratch/cori-knl/ | intel, gnu
perlmutter | /pscratch/sd/<first-letter-of-username>/${USER}/e3sm_scratch/perlmutter/ | gnugpu, nvidiagpu, gnu, nvidia
syrah | /p/lustre2/${USER}/e3sm_scratch/syrah/ | intel
quartz | /p/lustre2/${USER}/e3sm_scratch/quartz/ | intel
summit | /autofs/nccs-svm1_home1/$USER/ for $CASE and /gpfs/alpine/cli115/proj-shared/${USER}/e3sm_scratch/ for run output | gnugpu, ibmgpu, pgigpu, gnu, ibm, pgi
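If you're still unsure where things will land, once the case exists you can ask CIME directly from the case directory:
./xmlquery RUNDIR     # where model output will go
./xmlquery CASEROOT   # where the case scripts live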
Step 3: Create the Case
From the location you want to build+run the model, issue:
${CODE_ROOT}/cime/scripts/create_newcase --case ${CASE_NAME} --compset ${COMPSET} --res ${RES} --pecount ${PECOUNT} --compiler ${COMPILER} --walltime 00:30:00 --queue ${QUEUE}
** For perlmutter: also specify --compiler gnugpu and --project e3sm_g
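Putting the perlmutter flags together, the full command would look something like this (same flags as above, just combined):
${CODE_ROOT}/cime/scripts/create_newcase --case ${CASE_NAME} --compset ${COMPSET} --res ${RES} --pecount ${PECOUNT} --compiler gnugpu --project e3sm_g --walltime 00:30:00 --queue ${QUEUE}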
Then cd ${CASE_NAME}
Step 4: Change CIME settings (if desired)
You shouldn’t need to change anything to run, but some things you may want to change are:
./xmlchange ATM_NCPL=288
./xmlchange DEBUG=TRUE                     # debug rather than optimized build
./xmlchange JOB_QUEUE=pdebug               # debug queue if on cori or perlmutter
./xmlchange JOB_WALLCLOCK_TIME=0:30:00
./xmlchange STOP_OPTION=ndays              # units for how long to run
./xmlchange STOP_N=1
./xmlchange HIST_OPTION=ndays              # units for how often to write cpl.hi files
./xmlchange HIST_N=1
./xmlchange NTASKS=675                     # change how many MPI tasks to use
./xmlchange PIO_NETCDF_FORMAT="64bit_data"
The purpose of these changes is (respectively):
change the atm timestep to 288 steps per day (300 sec). This needs to be done via ATM_NCPL or else the land model will get confused about how frequently it is coupling with the atmosphere
compile in debug mode. The model will run ~10x slower but give better error messages, and on some machines it currently doesn't run in any other mode
change the default queue and wallclock from the standard queue with a 1 hr walltime to the debug queue and its max of 30 min walltime, to get through the queue faster. Note that the format for summit wallclock limits is hh:mm instead of the hh:mm:ss used on other machines
change the default length of the run from just a few steps to 1 day (or whatever you choose). This change is made in both env_run.xml and env_test.xml because case.submit seems to grab one or the other file according to confusing rules; it's easier to just change both
HIST_OPTION and HIST_N set the frequency of coupler snapshot (cpl.hi) files, which are useful for figuring out whether SCREAM is getting or giving bad data from/to the surface models
NTASKS is the number of MPI tasks to use (which sets the number of nodes the job asks for). You can also set this via the --pecount argument to create_newcase
changing PIO_NETCDF_FORMAT to 64bit_data is needed at very high resolutions to avoid exceeding NetCDF's maximum variable size limits
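You can confirm any of these settings afterward with xmlquery from the case directory, e.g.:
./xmlquery ATM_NCPL
./xmlquery STOP_OPTION
./xmlquery JOB_WALLCLOCK_TIME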
Step 5: Configure the case
You need to issue
./case.setup
to create namelist_scream.xml, where most SCREAM settings are set.
Step 6: Change SCREAM settings
As of this writing, this is done by modifying namelist_scream.xml either by hand or by using the atmchange
script, which now comes bundled in the case directory when you create a case. Explore namelist_scream.xml for variables you might want to change (you shouldn't have to change anything just to run).
If you want to run with the non-hydrostatic dycore:
Change to tstep_type=9 (or run ./atmchange tstep_type=9 in the case directory)
Change to theta_hydrostatic_mode=False (or run ./atmchange theta_hydrostatic_mode=False)
To modify what output gets written, edit the ./data/scream_output.yaml file under the run/data/ directory.
Some bugs are affected by the chunk length used for vectorization, which is handled by the "pack" size in v1. Pack size can be tweaked by editing the cmake machine file for the current machine (components/scream/cmake/machine-files/$machine.cmake).
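For example, to see whether and how your machine file sets a pack size, a quick (non-authoritative) check is to grep the machine files from your checkout:
grep -i pack ${CODE_ROOT}/components/scream/cmake/machine-files/*.cmake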
Step 7: Compile/Run
Now build and run. (You can set up and build before doing most or all of the above customization.)
./case.build
./case.submit
You can check the job's progress via squeue -u <username> on LLNL and NERSC systems, or via jobstat -u <username> or bjobs on Summit. Model output will be in the run subdirectory.
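If you want to watch the run as it progresses, one common approach (log file names here are typical E3SM names, not guaranteed) is to tail the atm log in the run directory:
cd $(./xmlquery --value RUNDIR)   # from the case directory
tail -f atm.log.*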
** Bonus Content: How to run at an unsupported resolution! **
Make the following ./xmlchange change:
./xmlchange ATM_NCPL=<atm steps per day>. Note that ATM_NCPL is the number of atm timesteps per day, so the model timestep (commonly known as dtime) is 86400 sec / ATM_NCPL. See the dycore settings link at the end of this section for guidance on choosing dtime.
In namelist_scream.xml, make the following changes:
Change Vertical__Coordinate__Filename to use the initial condition file for your new resolution
Change Filename under the Initial__Conditions → Physics__GLL subsection to also use that new initial condition file
Change SPA__Remap__File to use one appropriate for mapping ne30 to your new resolution
Change se_ne as appropriate
Change se_tstep and nu_top (recommended defaults for these and dtime are given in the table on EAM's HOMME Dycore Recommended Settings (THETA) page)
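Since all of these settings live in namelist_scream.xml, you can also apply them with atmchange from the case directory. This is only a sketch; the angle-bracket values are placeholders you must supply for your resolution, and if a parameter name is ambiguous you may need to edit the XML by hand as described above:
./atmchange Vertical__Coordinate__Filename=<new initial condition file>
./atmchange SPA__Remap__File=<ne30-to-new-resolution map file>
./atmchange se_ne=<new ne value>
./atmchange se_tstep=<dycore timestep>
./atmchange nu_top=<recommended nu_top>
# the Initial__Conditions -> Physics__GLL Filename may be easiest to change by hand, since "Filename" appears in more than one place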