Generate, Regrid, and Split Climatologies (climo files) with ncclimo

Overview

During the ACME era (c. 2015-16), we conducted extensive evaluation of AMWG, UV-CDAT, and NCO codes for generating and regridding climatology files (see here), and determined that NCO provides the most correct answers, has the best metadata, and is fastest. (These conclusions did not consider any Xarray or cloud-based methodologies, and so perhaps should be revisited.) The NCO climatology and regridding tools, ncclimo and ncremap respectively, have evolved while maintaining backwards compatibility, and now support all component models of E3SM (and CESM), up to and including EAMxx and MALI.

In climatology generation mode, the NCO operator ncclimo ingests "raw" data consisting of interannual sets of files, each containing sub-daily (diurnal), daily, monthly, or yearly averages, and from these produces climatological daily, monthly, seasonal, and/or annual means. Alternatively, in timeseries reshaping (aka “splitter”) mode, ncclimo will subset and temporally split the input timeseries into per-variable files spanning the entire period. ncclimo will optionally regrid (by calling ncremap) all output files in either mode. The primary ncremap documentation is here, and delves into regridding options in much greater depth than the instructions below for climatology generation.

Prerequisites

ncclimo comes with NCO version 4.6.0 and later. Since early 2018, the preferred way to obtain NCO for E3SM analysis is with the E3SM-Unified Conda package, which installs numerous analysis packages in a platform-independent manner and, as importantly, allows you to skip reading the rest of this paragraph. Those who need only NCO, or who wish to avoid Conda, should read on. The newest versions of NCO are installed on all major DOE supercomputers in C. Zender’s home directory (usually ~zender/[bin,lib]), and semi-recent versions are sometimes available as machine modules (e.g., module load nco). This is site-specific and not under my (CZ's) control. Follow these directions on the NCO homepage to install on your own machines/directories. It can be as easy as apt-get install nco, dnf install nco, or conda install -c conda-forge nco, or you can build/install from scratch with configure; make; make install.

Timeseries Reshaping mode, aka Splitting

ncclimo will reshape input files that are a series of snapshots of all model variables into outputs that are continuous timeseries of each individual variable taken from all input files. Timeseries to be reshaped (split) often come with hard-to-predict names, e.g., because the number of days or months in a file, or the timesteps per day or month, may all vary. Thus ncclimo in splitter mode requires the user to supply the input filenames; it will not construct input filenames itself (unlike in monthly or annual climo generation mode). ncclimo employs timeseries reshaping mode when it receives the --split switch (recommended for clarity) or the --ypf_max option described below. In addition, it must receive the input file list in one of five ways: redirect a file containing the filenames to stdin, pipe the filenames to stdin, place the filenames as positional arguments (after the last command-line option), specify caseid with monthly temporal resolution so that ncclimo generates the filenames automatically (identically to climatology mode), or, if none of the above is done and no caseid is specified, let ncclimo assume that all *.nc files in drc_in constitute the input file list. These examples invoke reshaping mode in the five possible ways (choose your poison):

# Sample Abbreviations
drc_in=~zender/data/ne30/raw
drc_out=~zender/data/ne30/split
map_fl=${DATA}/maps/map_ne30pg2_to_cmip6_180x360_nco.20200901.nc
# Splitter Input Mode #1: Read input filename list from file
ls $drc_in/*.elm.h0.201[34]-*.nc > input_list
ncclimo -P elm --split --yr_srt=2013 --yr_end=2014 --var=TBOT,FNIR --map=$map_fl --drc_out=$drc_out < input_list
# Splitter Input Mode #2: Pipe input filenames to stdin
cd $drc_in
ls *.eam.h0.201[34]-*.nc | ncclimo -P eam --split --yr_srt=2013 --yr_end=2014 --var=FSNT,AODVIS --map=$map_fl --drc_out=$drc_out
# Splitter Input Mode #3: Append filenames positional arguments
ncclimo -P eam --split --var=FSNT,AODVIS --yr_srt=2013 --yr_end=2014 --map=$map_fl --drc_out=$drc_out $drc_in/*.eam.h0.201[34]-*.nc
# Splitter Input Mode #4: Automatically generate monthly input filenames
ncclimo -P mpaso --split --var=timeMonthly_avg_activeTracers_temperature --yr_srt=2013 --yr_end=2014 --drc_in=$drc_in --map=$map_fl --drc_out=$drc_out

# Splitter Input Mode #5: Ingest entire directory (be sure the directory contains only files to be split!)
ncclimo -P eam --split --var=T,Q,RH --yr_srt=2013 --yr_end=2014 --drc_in=$drc_in --map=$map_fl --drc_out=$drc_out

The output is a collection of per-variable timeseries such as FSNT_YYYYMM_YYYYMM.nc, AODVIS_YYYYMM_YYYYMM.nc, etc. The output is split into segments each containing no more than ypf_max (default 50) years-per-file, e.g., FSNT_000101_005012.nc, FSNT_005101_009912.nc, FSNT_010001_014912.nc, etc. Change the maximum number of years-per-output-file with the --ypf_max=ypf_max option. 

ncclimo can (as of NCO 4.9.4) reshape timeseries with temporal resolution shorter than one month, aka high-frequency timeseries. For E3SM, this typically means timeseries with daily or finer (e.g., hourly) resolution, such as is often output in EAM/ELM h1-h4 datasets. EAM/ELM output these datasets with a fixed number of records (i.e., timesteps) per file, for example, fifteen daily timesteps or 24 hourly timesteps per file. A primary difficulty in processing such datasets is that their boundaries often do not coincide with the desired analysis interval, which might start and end on even boundaries of a month or year. Aligning timeseries to even month or year boundaries requires extra processing logic, which users must invoke by setting the climatology mode option to high-frequency splitting (hfs), i.e., --clm_md=hfs:

cd $drc_in;ls *.eam.h1.000[123]-*.nc > ~/input_list
ncclimo --clm_md=hfs --var=PRECT --ypf=1 --yr_srt=1 --yr_end=3 --map=map.nc --drc_out=${drc_out} < ~/input_list

The output of the above would be three files, each containing the values of PRECT for exactly one year, no matter the time resolution or boundaries of the input. Omitting the --clm_md=hfs option for high-frequency timeseries would result in output segments not evenly aligned on year boundaries.

Climatology generation mode (produce monthly, seasonal, and annual climatologies from monthly-mean input data)

A common task for ncclimo is to produce climatological monthly, seasonal, and annual-means from an interannual series of monthly-mean input files with commands like these:

ncclimo -P eam -s $yr_srt -e $yr_end -c $caseid -i $drc_in -o $drc_out # EAM/CAM/CAM-SE
ncclimo -P eam -v FSNT -s $yr_srt -e $yr_end -c $caseid -i $drc_in -o $drc_out # EAM subset
ncclimo -P elm -s $yr_srt -e $yr_end -c $caseid -i $drc_in -o $drc_out # ELM/ALM/CLM

Each option can be accessed by a handful of long-option synonyms to suit users' tastes. With long options the first example above may be rewritten as

ncclimo --prc_typ=eam --start=$yr_srt --end=$yr_end --case=$caseid --input=$drc_in --output=$drc_out

Note that -P eam above is not necessary since the default processing type is EAM. However, it is a good habit to specify the component model (if any) to ncclimo, since ncremap may require this information in the regridding step. When invoked without options, ncclimo outputs a handy table of all available options, their long-option synonyms, and some examples. NCO documentation here describes the full meaning of all options. The most common options are:

-a, --dec_md: The “December mode” option determines the start and end months of the climatology and the type of NH winter seasonal average. Valid arguments are sdd (default, or synonyms jfd and JFD) and scd (or synonyms djf and DJF). scd stands for seasonally continuous December: the first month used will be Dec of the year before the start year specified with -s, and the last month will be Nov of the end year specified with -e. In SCD mode the Northern Hemisphere winter seasonal climatology is computed from sets of three consecutive months, December, January, and February (DJF), where the calendar year of the December is always one less than the calendar year of the January and February. sdd stands for seasonally discontinuous December: the first month used will be Jan of the specified start year, and the last month will be Dec of the specified end year. In SDD mode the Northern Hemisphere winter seasonal climatology is computed from sets of three non-consecutive months, January, February, and December (JFD), from each calendar year. (Prior to NCO 4.9.4, released in August 2020, the default was scd, not sdd.)
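The month ranges implied by each December mode can be sketched in plain shell (an illustration, not part of ncclimo):

```shell
#!/bin/sh
# Illustrative only: first/last months used for a 1980-2000 climatology
# under each December mode (-a sdd vs -a scd).
yr_srt=1980
yr_end=2000
echo "sdd: Jan ${yr_srt} through Dec ${yr_end} (JFD winters)"
echo "scd: Dec $((yr_srt - 1)) through Nov ${yr_end} (DJF winters)"
```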

-C, --clm_md: Climatology mode. Either mth (default, indicates monthly-mean input files), hfc for high-frequency-climo (i.e., diurnal cycles from sub-daily input files), hfs for high-frequency-splitter (i.e., concatenation of sub-daily input files), or ann (for annual-mean input files). 

-c, --caseid: The case ID or simulation name for automatically generating input filenames (in monthly climo mode). For input files like famipc5_ne30_v0.3_00001.cam.h0.1980-01.nc, specify -c famipc5_ne30_v0.3_00001. The .cam. and .h0. bits are added to the filenames internally by default, and can be modified via the -m mdl_nm and -h hst_nm switches if needed. In high-frequency mode the --caseid option is optional (since the user provides all the input filenames). If provided, it is used to rename the output filenames (much the same as the --fml_nm option).
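The monthly input filenames that ncclimo assembles from these pieces follow the pattern caseid.mdl_nm.hst_nm.YYYY-MM.nc, which can be sketched in plain shell (names here match the example above):

```shell
#!/bin/sh
# Sketch of ncclimo's automatic monthly input-filename construction:
# caseid.mdl_nm.hst_nm.YYYY-MM.nc
caseid=famipc5_ne30_v0.3_00001
mdl_nm=cam  # -m (these history files carry "cam", not "eam")
hst_nm=h0   # -h (default)
yr=1980
for mth in 01 02 03; do
  echo "${caseid}.${mdl_nm}.${hst_nm}.${yr}-${mth}.nc"
done
# -> famipc5_ne30_v0.3_00001.cam.h0.1980-01.nc, ...-02.nc, ...-03.nc
```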

-e, --yr_end: End year (example: 2000). Unless the optional flag -a scd is specified, the last month used will be Dec of the specified end year. If -a scd is specified, the last month will be Nov of the specified end year.

-h, --hst_nm: History file volume that separates the model name from the date in the input file name. Default is h0. Other common values are h1 and h.

-i, --drc_in: Directory containing all netCDF files to be used for input.

-m, --mdl_nm: Model name. Default is eam. This is similar to, though subtly different from, the -P option (which is preferred to -m). This is literally the string name used in the history tape output files. Other options are cam, clm2, cism, cice, mpasi, mpaso, pop. The best example of when to use -m is for ELM, where -m clm2 informs ncclimo that the ELM input files have "clm2" (not "elm") in their names. Similarly, use -m cam when EAM files have "cam" (not "eam") in their filenames.

-o, --drc_out: Directory where computed native grid climo files will be placed. Regridded climos will also be placed here unless a separate directory for them is specified with -O (NB: capital "O") 

-O, --drc_rgr: Directory where regridded climo files will be placed.

-P, --prc_typ: Processing type. As of ~2020, use -P to indicate the component model that produced the data, and -m solely to modify the model-name string in input filenames. The default processing type is EAM, so -P eam is a no-op. However, processing EAMxx files requires -P eamxx. Other processing types that should always be indicated are clm, cpl (for coupler files), elm, mali, mpasseaice, and mpasocean.

-s, --yr_srt: Start year (example: 1980). By default, the first month used will be Jan of this start year. If -a scd is specified, the first month used will be Dec of the year before this specified start year (example Dec 1979 to allow for temporally contiguous DJF climos).

-v, --var: Variable list (comma-separated) to subset, e.g., FSNT,AODVIS,PREC.? (yes, regular expressions work so this expands to PRECC,PRECL,PRECSC,PRECSL)
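Since --var entries are matched as regular expressions against the dataset's variable names, the PREC.? example above behaves like an (unanchored) pattern match. A hypothetical variable list illustrates the idea with grep:

```shell
#!/bin/sh
# Illustrative only: --var patterns act like regular expressions against
# variable names, so PREC.? selects all four PREC* variables in this list.
printf '%s\n' FSNT AODVIS PRECC PRECL PRECSC PRECSL TS | grep -E 'PREC.?'
# -> PRECC PRECL PRECSC PRECSL (one per line)
```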

MPAS considerations

MPAS ocean and ice models have their own (non-CESM'ish) naming convention that guarantees output files have the same names for all simulations. By default ncclimo analyzes the timeSeriesStatsMonthly analysis member (AM) output (tell @Charlie Zender if you want options for other AM output). ncclimo recognizes input files as being MPAS-style when invoked with -P mpaso or -P mpasseaice (or synonyms) like this:

ncclimo -P mpasocean -s 1980 -e 1983 -i $drc_in -o $drc_out
ncclimo -P mpasseaice -s 1980 -e 1983 -i $drc_in -o $drc_out

Some data are best evaluated with custom-defined seasons, e.g., JFM instead of DJF, or two-month seasons such as FM or ON. ncclimo supports up to eleven (and counting) seasons, although by default it only computes MAM, JJA, SON, and DJF. As of NCO 4.6.8, use the --seasons (or --csn) option to specify additional or alternate seasons:

ncclimo -P mpasseaice --seasons=jfm,jas,ann -s 1980 -e 1983 -i $drc_in -o $drc_out

The climatological annual mean, ANN, is also computed automatically when MAM, JJA, SON, and DJF are all requested (which is the default, so ANN is always computed by default). Use --seasons=none to completely turn off seasonal and annual-mean climatologies.

MPAS climos are unaware of missing values until/unless the input files are "fixed". We recommend that the person who produces the simulation annotate all floating point variables with the appropriate _FillValue prior to invoking ncclimo. Run something like this once in the history file directory:

for fl in hist.* ; do
  ncatted -O -t -a _FillValue,,o,d,-9.99999979021476795361e+33 ${fl}
done

If/when MPAS generates the _FillValue attributes itself, this step can and should be skipped (MPAS developers: please let @Charlie Zender know when MPAS “fixes” this “feature”). All other ncclimo features like regridding (below) are invoked identically for MPAS as for EAM/ELM users although under-the-hood ncremap (if invoked) specially pre-processes (dimension permutation, metadata annotation) MPAS data.

High Frequency climos (produce monthly, seasonal, and annual climatological means of diurnal cycles from diurnally resolved input data)

As of NCO 4.9.4 (September, 2020), ncclimo can produce climatologies that retain the diurnal cycle resolution provided by the input data. These “high frequency climos” are useful for characterizing the diurnal cycle of processes typically retained in EAM/ELM h1-h4 history files, high-frequency observational analyses (e.g., MERRA2, ERA5), and similar data. In all respects except two, high-frequency climo features are invoked and controlled by the same options as traditional climo generation from monthly mean input. The most significant difference is that the user must supply the filenames of the high-frequency input data. These dataset names are too complex for ncclimo to automatically generate (as it does for monthly-mean input), so one must supply the names via standard input, positional arguments, filename globbing, or directory location, exactly as for splitter mode described above. The second difference is that the user must supply the --clm_md=hfc option to tell ncclimo to operate in climo-generation rather than splitter mode:

ncclimo -P eam --clm_md=hfc --yr_srt=1 --yr_end=250 --var=FSNT,AODVIS --map=$map_fl --drc_out=$drc_out < input_list
ls cam.h0.0[012]?? | ncclimo -P eam --clm_md=hfc --yr_srt=1 --yr_end=250 --var=FSNT,AODVIS --map=$map_fl --drc_out=$drc_out
ncclimo -P eam --clm_md=hfc --var=FSNT,AODVIS --yr_srt=1 --yr_end=250 --map=$map_fl --drc_out=$drc_out $drc_in/eam.h4.0[012]??.nc
ncclimo -P eam --clm_md=hfc --var=T,Q,RH --yr_srt=1 --yr_end=250 --drc_in=$drc_in --map=$map_fl --drc_out=$drc_out

In high-frequency mode, ncclimo automatically determines the number of timesteps per day (which must be an integer >= 1). In high-frequency mode the --caseid option is optional since the user provides all the input filenames. If provided, caseid is used to rename the output filenames (similar to the --fml_nm option).

Annual climos (produce climatological means from annual-mean input data)

Not all model or observed history files are created as monthly means. To create a climatological annual mean from a series of annual mean inputs (such as from a land ice model), select ncclimo's annual climatology mode with the -C ann (or --clm_md=ann) option:

ncclimo -P cism --clm_md=ann -h h -c caseid -s 1851 -e 1900 -i $drc_in -o $drc_out

The options -m mdl_nm and -h hst_nm (that default to "eam" and "h0", respectively) tell ncclimo how to construct the input filenames. The above formula names the files caseid.cism.h.1851-01-01-00000.nc, caseid.cism.h.1852-01-01-00000.nc, and so on. Annual climatology mode produces a single output file (or two if regridding is selected), and in all other respects behaves the same as monthly climatology mode.

Daily climos (interannual day-of-year statistics from multi-year daily-to-diurnally-resolved input data)

High frequency timeseries are often available as daily means. To create a climatological daily mean from a series of daily mean inputs, select daily mode with the --clm_md=dly option. What is computed? In daily mode, ncclimo produces 365 output files, each consisting of the interannual average of the given day-of-year. If the input data are at sub-daily resolution (e.g., 8 timesteps per day for three hourly data), then by default all timesteps in a day are averaged into the daily output.

Daily output timeseries are often divided amongst files with hard-to-predict names, e.g., because the number of days in a file, days in a month, and timesteps in a day may all vary. Thus ncclimo in daily mode requires the user to supply the input filenames. ncclimo will not construct input filenames itself in daily mode (unlike monthly or annual mode). Use the same filename input methods that the splitter mode (above) accepts. Putting it all together, produce a daily climatology with something like

cd ${DATA}/ne30/raw;ls *.cam.h1.*.nc | ncclimo -P eam --clm_md=dly --job_nbr=8 --caseid=famipc5_ne30_v0.3_00007 --yr_srt=2001 --yr_end=2009 --var=PRECT,TREFHT --drc_out=${DATA}/ne30/clm

The --job_nbr option tells ncclimo how many days to independently compute at one time on a given node. See the NCO manual for more details.

Regridding (climos and other files)

Regridding is a standalone operation carried out by ncremap. See the full ncremap documentation for examples of standalone operation (including MPAS!). When given the --map (or -r) option, ncclimo calls ncremap during climatology generation to produce climatology files on both the native and desired analysis grids. Only the ncremap features most relevant to ncclimo are described here. Regridding while producing climos is virtually free, because it is performed on idle nodes/cores after the monthly climatologies have been computed and while the seasonal climatologies are being computed. (This “load-balancing” can save half an hour on ne120 datasets.) To regrid, simply pass the desired mapfile name with, e.g., --map=${DATA}/maps/map_ne120np4_to_fv257x512_aave.20150901.nc

Specifying -O $drc_rgr (NB: uppercase letter "O") causes ncclimo to place the regridded files in the directory ${drc_rgr}. These files have the same names as the native grid climos from which they were derived; there is no namespace conflict because they are in separate directories. Until ~2020, symbolic links to their AMWG filenames were also created by default; this can be manually enabled with --amwg_lnk. If -O $drc_rgr is not specified, ncclimo places all regridded files in the native grid climo output directory specified by -o $drc_out (NB: lowercase letter "o"). To avoid namespace conflicts when both climos are stored in the same directory, the names of the regridded files are affixed with the destination geometry string derived from the mapfile, e.g., _climo_fv257x512_bilin.nc.

ncclimo -P eam -c famipc5_ne30_v0.3_00003 -s 1980 -e 1983 -i $drc_in -o $drc_out
ncclimo -P eam -c famipc5_ne30_v0.3_00003 -s 1980 -e 1983 -i $drc_in -o $drc_out -r $map_file
ncclimo -P eam -c famipc5_ne30_v0.3_00003 -s 1980 -e 1983 -i $drc_in -o $drc_out -r $map_file -O $drc_rgr

The above commands perform a climatology without regridding, then with regridding (all climos stored in ${drc_out}), then with regridding and storing regridded files separately (in ${drc_rgr}). Paths specified by $drc_in, $drc_out, and $drc_rgr may be relative or absolute. An alternative to regridding during climatology generation is to manually regrid afterwards with ncremap, which has more specialized features built-in for regridding. To use ncremap to regrid a climatology in $drc_out and place the results in $drc_rgr, use something like

ncremap --map=map.nc -I $drc_out -O $drc_rgr
ls $drc_out/*climo* | ncremap --map=map.nc -O $drc_rgr

ncremap supports sub-gridscale (SGS) regridding. Though designed for ELM and MPAS-Seaice, this feature is configurable for other SGS datasets as well. In sub-grid mode, ncremap ensures that regridding conserves fields that may represent only a fraction of the entire gridcell. The sub-gridscale fraction represented by each field is contained in a separate variable (settable with --sgs_frc, default landfrac). SGS mode eases regridding of datasets (e.g., from ELM, CLM, CICE, MPAS-Seaice) that output data normalized to a gridcell fraction rather than to its entire extent. SGS mode automatically derives new binary masks (--sgs_msk, default landmask) and allows for additional normalization (--sgs_nrm). Specific flavors of SGS can be selected (with -P elm, -P clm, -P cice, or -P mpasseaice). These ensure regridded datasets recreate the idiosyncratic units (e.g., %, km2) employed by raw ELM, CLM, CICE, and MPAS-Seaice model output.

ncremap -P elm --map=map_ne30pg2_to_cmip6_180x360_nco.20200901.nc elm_in.nc output.nc
ncremap -P elm -s src_grd.nc -d 1x1.nc elm_data.nc output.nc
ncremap -P mpasseaice --map=map_oEC60to30v3_to_cmip6_180x360_aave.20181001.nc mpasseaice_in.nc output.nc
ncremap -P mpasseaice -s src_grd.nc -d 1x1.nc mpasseaice_data.nc output.nc

Full documentation on SGS mode is here: http://nco.sf.net/nco.html#sgs.
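The essence of SGS regridding is that destination values are weighted by the sub-grid fraction as well as by the map weights. A toy calculation (illustrative numbers, not real model output or the literal ncremap implementation) shows the normalization:

```shell
#!/bin/sh
# Toy SGS average: two source cells with map weights w, land fractions frc,
# and values x; the destination value is sum(w*frc*x)/sum(w*frc).
awk 'BEGIN {
  w1 = 0.5; frc1 = 1.0; x1 = 280.0  # fully land cell
  w2 = 0.5; frc2 = 0.2; x2 = 300.0  # 20% land cell
  printf "%.4f\n", (w1*frc1*x1 + w2*frc2*x2) / (w1*frc1 + w2*frc2)
}'
# -> 283.3333 (dominated by the mostly-land cell)
```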

Coupled Simulations

ncclimo works on all E3SM component models, including the coupler. It can simultaneously generate climatologies for a coupled run, where climatologies mean both native and regridded monthly, seasonal, and annual averages. Here are template commands to fully climatologize and regrid a coupled simulation:

caseid=v2.LR.historical_0101
drc_in=/lcrc/group/e3sm/ac.forsyth2/E3SMv2/v2.LR.historical_0101/archive
map_atm=${DATA}/maps/map_ne30pg2_to_cmip6_180x360_nco.20200901.nc
map_lnd=$map_atm
map_ocn=${DATA}/maps/map_EC30to60E2r2_to_cmip6_180x360_aave.20220301.nc
map_ice=$map_ocn

ncclimo -P eam -p mpi -c $caseid -s 2 -e 5 -i $drc_in/atm/hist -r $map_atm -o ${DATA}/e3sm/atm
ncclimo -P elm -c $caseid -s 2 -e 5 -i $drc_in/lnd/hist -r $map_lnd -o ${DATA}/e3sm/lnd
ncclimo -P mpasocean -p mpi -s 2 -e 5 -i $drc_in/ocn/hist -r $map_ocn -o ${DATA}/e3sm/ocn
ncclimo -P mpasseaice -s 2 -e 5 -i $drc_in/ice/hist -r $map_ice -o ${DATA}/e3sm/ice

The atmosphere and ocean model output is significantly larger than the land and ice model output. The commands above account for this with different parallelization strategies, which may be required depending on the RAM fatness of the analysis nodes, as explained below. MPAS models do not utilize the $caseid option; they use their own naming convention. By default, ncclimo processes the MPAS hist.am.timeSeriesStatsMonthly analysis members.

Extended climos

ncclimo can re-use previous work and produce extended (i.e., longer duration) climatologies by combining two previously computed climatologies (this is called the binary method), or by computing a new climatology from raw monthly model output and then combining that with a previously computed climatology (this is called the incremental method). Producing an extended climatology by the incremental method requires specifying (with -S and -s, respectively) the start years of the previously computed and current climo, and (with -e) the end year of the current climo. Producing an extended climatology by the binary method requires specifying both the start years (with -S and -s) and end years (with -E and -e) of both pre-computed climatologies. The presence of the -E option signifies to ncclimo to employ the binary (not incremental) method.
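Conceptually, the binary method is a duration-weighted average of the two input climatologies. A toy calculation (illustrative numbers) makes this concrete:

```shell
#!/bin/sh
# Toy binary-method combination: an 11-year climo (years 10-20) and a
# 30-year climo (years 21-50) combine as a duration-weighted average.
awk 'BEGIN {
  n1 = 11; c1 = 288.0  # length (yr) and mean of first pre-computed climo
  n2 = 30; c2 = 289.0  # length (yr) and mean of second pre-computed climo
  printf "%.4f\n", (n1*c1 + n2*c2) / (n1 + n2)
}'
# -> 288.7317
```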

Following are two examples of computing extended climatologies using the binary method (i.e., both input climatologies are already computed using the normal methods above). If both input climatologies are in the same directory in which the output (extended) climatology is to be stored, then the number of required options is minimal:

caseid=somelongname
drc_in=/scratch1/scratchdirs/zender/e3sm/${caseid}/atm
ncclimo -P eam -c ${caseid} -S 10 -E 20 -s 21 -e 50 -i ${drc_in}

When no output directory is specified, ncclimo’s internal logic automatically places the extended climo in the input climo directory. Files are not overwritten because the extended climos have different filenames than the input climos. The next example utilizes the directory structure and options of coupled E3SM simulations. The extra options (compared to the idealized example above) supply important information. The input climos were generated in seasonally discontinuous December (sdd) mode, so the extended climatology must also be generated with the -a sdd option (or else ncclimo will not find the pre-computed input files). The input directory for the first pre-computed input climatology is specified with -x. The second pre-computed input climatology is specified with the usual -i option. A new output directory for the extended climos is specified with -X. (NB: all of these single-letter options have long-option synonyms as described above.)

caseid=20161117.beta0.A_WCYCL1850S.ne30_oEC_ICG.edison
drc_ntv=/scratch2/scratchdirs/golaz/ACME_simulations/20161117.beta0.A_WCYCL1850S.ne30_oEC_ICG.edison/pp/clim # Native
drc_rgr=/scratch2/scratchdirs/golaz/ACME_simulations/20161117.beta0.A_WCYCL1850S.ne30_oEC_ICG.edison/pp/clim_rgr # Regridded
ncclimo -P eam -a sdd -c ${caseid} -S 41 -E 50 -x ${drc_ntv}/0041-0050 -s 51 -e 60 -i ${drc_ntv}/0051-0060 -X ${drc_ntv}/0041-0060
ncclimo -P eam -a sdd -c ${caseid} -S 41 -E 50 -x ${drc_rgr}/0041-0050 -s 51 -e 60 -i ${drc_rgr}/0051-0060 -X ${drc_rgr}/0041-0060

The extended native and regridded climatologies are produced with virtually the same command (only the input and output directories differ). No mapping file or regridding option is necessary to produce an extended climatology from two input regridded climatologies; ncclimo does not care whether the input climos are native-grid or already regridded. So long as the regridded climatologies are already available, it makes more sense to re-use them rather than to perform a second regridding. While ncclimo can generate and regrid an extended climatology from native-grid inputs in one command, doing so involves more command-line options and it is generally simpler to follow the above procedure. Ask @Charlie Zender if you would like help customizing ncclimo for other such workflows.

Producing extended climatologies via the binary method consumes much less memory than producing normal or incremental climatologies. The binary method simply computes weighted averages of each input variable, hence the maximum RAM required is approximately only three times the size of the largest input variable. This is trivial compared to the total input file size, so extended climos may be computed with background parallelism, the default in ncclimo. The -p mpi option is never necessary for producing extended climos using the binary method. As you might imagine, the combination of low memory overhead and re-use of previously regridded climos means that producing extended regridded climos via the binary method is extremely fast compared to computing normal climos.

Memory Considerations

It is important to employ the optimal ncclimo parallelization strategy for your computer hardware resources. Select from the three available choices with the -p par_typ option. The options are serial mode (-p nil or -p serial), background mode parallelism (-p bck), and MPI parallelism (-p mpi). The default is background mode parallelism, which is appropriate for lower resolution (e.g., ne30L72) simulations on most nodes at high-performance computer centers. Use (or at least start with) serial mode on personal laptops/workstations. Serial mode requires twelve times less RAM than the parallel modes, and is much less likely to deadlock or cause OOM (out-of-memory) conditions on your personal computer. If the available RAM (+swap) is < 12*4*sizeof(monthly input file), then try serial mode first (12 is the optimal number of parallel processes for monthly climos; the computational overhead is a factor of four).

EAMv1 ne30np4L30 output is ~1 GB per month, so each month requires about 4 GB of RAM. EAMv1 ne30np4L72 output (with LINOZ) is ~10 GB/month, so each month requires ~40 GB of RAM. EAMv1 ne120np4L72 output is ~12 GB/month, so each month requires ~48 GB of RAM. The computer does not actually use all this memory at one time, and many kernels compress RAM usage to below what top reports, so the actual physical usage is hard to pin down, but may be a factor of 2.5-3.0 (rather than a factor of four) times the size of the input file. For instance, a 16 GB MacBook Pro will successfully run an ne30L30 climatology (that requests 48 GB of RAM) in background mode, but the laptop will be slow and unresponsive for other uses until it finishes the climos. Experiment a bit and choose the parallelization option that works best for you.
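The rule-of-thumb RAM estimate above can be computed directly; for example, for the ~10 GB/month LINOZ output mentioned in the text:

```shell
#!/bin/sh
# Background-mode RAM rule of thumb: 12 simultaneous monthly climos,
# each needing roughly 4x the monthly file size.
sz_gb=10  # approximate GB per monthly file (EAMv1 ne30np4L72 with LINOZ)
echo "$((12 * 4 * sz_gb)) GB"
# -> 480 GB
```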

Serial mode, as its name implies, uses one core at a time for climos, computing sequentially the monthly, then seasonal, then annual climatologies. Climos themselves are performed serially, but regridding will employ OMP threading on platforms that support it, and use up to 16 cores. By design each month and each season is independent of the others, so all months can be computed in parallel, then each season can be computed in parallel (using the monthly climatologies), then the annual average can be computed. Background parallelization mode exploits this parallelism and executes the climos in parallel as background processes on a single node, so that twelve cores are simultaneously employed for monthly climatologies, four for seasonal, and one for annual. The optional regridding will employ up to two cores per process. MPI parallelism executes the climatologies on different nodes, so that up to (optimally) twelve nodes are employed performing monthly climos. The full memory of each node is available for each individual climo. The optional regridding will employ up to eight cores per node. MPI mode, or Background mode on a big-memory queue, must be used to process ne120L72 climos on some, but not all, DOE computers. For example, attempting an ne120np4L72 climo in background mode on a 96 GB compute node will fail due to OOM. (OOM errors do not produce useful return codes, so if your climo processes die without printing useful information, the cause may be OOM.) However, the same climo will succeed if executed on a single 512 GB node. Alternatively, MPI mode can be used for any climatology. The same ne120np4L72 climo will also finish blazingly fast in background mode on a 512 GB compute node, so MPI mode is unnecessary on the beefiest nodes. In general, the fatter the memory, the better the performance.

This implementation of parallelism for climatology generation once had relatively poor granularity: nodes using background or MPI mode always computed 12 monthly climatologies simultaneously, nodes using serial mode always computed only one climatology at a time, and there was no granularity in between these extremes. The -j $job_nbr option (also in ncremap) allows the user to specify the exact granularity to match the node's resources. Here $job_nbr specifies the maximum number of simultaneous climo tasks (averaging, regridding) to send to a node at one time. The default value of job_nbr is 12 for monthly climatologies in both MPI and Background parallelism modes. This can be overridden to improve granularity: for example, if --job_nbr=4 is explicitly requested, then the 12 monthly climos will be computed in three sequential batches of four months each. ncclimo automatically sets job_nbr to the number of nodes available when working in splitter (not climo) mode, so invoking ncclimo with four nodes in splitter mode means each of the four nodes will receive one splitter task. Some nodes, e.g., your personal workstation, are underpowered for 12 climo tasks yet overpowered for one task, and so benefit from improved granularity.
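The batching behavior of --job_nbr can be sketched as follows (a plain-shell simulation, not actual ncclimo scheduling):

```shell
#!/bin/sh
# Illustrative only: with --job_nbr=4, the 12 monthly climo tasks run as
# three sequential batches of four simultaneous jobs.
job_nbr=4
mth=1
while [ "$mth" -le 12 ]; do
  batch=$(( (mth - 1) / job_nbr + 1 ))
  printf 'month %02d -> batch %d\n' "$mth" "$batch"
  mth=$((mth + 1))
done
```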

Climos on Single Compute Nodes at LCFs

The basic approach above (running the script from a standard terminal window) works well for small cases, yet can be unpleasantly slow on login nodes of LCFs and for longer or higher resolution (e.g., ne120) climatologies. As a baseline, generating a climatology of 5 years of ne30pg2 (~1x1 degree) EAM output with ncclimo takes 1-2 minutes. To make things a bit faster at LCFs, you can ask for your own dedicated node (note this approach does not make sense except on supercomputers that have a job-control queue). On Perlmutter do this via:

srun -A e3sm --constraint=cpu --nodes=1 --time=00:30:00 --qos=debug --job-name=ncclimo --pty bash

Acquiring a dedicated node is useful for any calculation you want to do quickly, not just creating climo files, though it does burn through our computing allocation, so be prudent. This command returns a prompt once a node is assigned (the prompt is returned in your home directory, so you may then have to cd to the location you mean to run from). At that point you can simply invoke ncclimo as described above. It will be faster because you are not sharing the node with other people. Again, ne30pg2L72 climos only require < 2 minutes, so the 30 minutes requested in the example is excessive and conservative. Tune it with experience. Here is the meaning of each flag used:

-A: Name of the account to charge for time used
--constraint: Node type to request (cpu or gpu on Perlmutter)
--nodes=1: Number of nodes to request. ncclimo will use multiple cores per node.
--time: How long to keep this dedicated node for
--qos: Quality of service (queue); debug has a short wait but limited walltime
--job-name: Name to display for this job in the queue
--pty bash: Submit in interactive mode = return a prompt rather than running a program

Climos on Multiple Nodes at LCFs

The above parallel approaches will fail when a single node lacks enough RAM (plus swap) to store all twelve monthly input files, plus extra RAM for computations. One should employ MPI multi-node parallelism (-p mpi) on nodes with less RAM than 12*3*sizeof(monthly input). The longest an ne120 climo will take is less than half an hour (~25 minutes on Edison or Rhea), so the simplest method to run MPI jobs is to request 12 interactive nodes using the above commands (though remember to add -p mpi), then execute the script at the command line. It is also possible, and sometimes preferable, to request non-interactive compute nodes in a batch queue. Executing an MPI-mode climo (on machines with job scheduling and, optimally, 12 available nodes) in a batch queue can be done in two commands. First, write an executable file that calls the ncclimo script with appropriate arguments. We do this below by echoing to a file ~/ncclimo.pbs, but you could also open an editor, copy the material in quotes below into a file, and save it:

echo "ncclimo -p mpi -c famipc5_ne120_v0.3_00003 -s 1980 -e 1983 -i /lustre/atlas1/cli115/world-shared/mbranst/famipc5_ne120_v0.3_00003-wget-test -o ${DATA}/ne120/clm" > ~/ncclimo.pbs

The only new argument here is "-p mpi", which tells the script to use MPI parallelism. Once this file exists, submit a 12 node, non-interactive job to execute it:

qsub -A CLI115 -V -l nodes=12 -l walltime=00:30:00 -j oe -m e -N ncclimo -o ~/ncclimo.txt ~/ncclimo.pbs

This command adds the following new flags:

"-j oe": combine output and error streams into standard error. "-m e": send email to the job submitter when the job ends "-o ~/ncclimo.txt": write all output to ~/ncclimo.txt

The above commands are meant for Rhea/Titan. The equivalent commands for Cooley (Cobalt) and Cori (SLURM) are analogous.

Notice that both Cooley/Mira (Cobalt) and Cori/Edison (SLURM) require the introductory shebang-interpreter line (#!/bin/bash), which PBS does not need. Set only the batch queue parameters mentioned above. In MPI mode, ncclimo determines the appropriate number of tasks per node based on the number of nodes available and script internals (like load-balancing for regridding). Hence do not set a tasks-per-node parameter in your scheduler configuration, as this could cause conflicts.
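As a hedged sketch of the SLURM variant (the account name, output file, and input/output paths here are illustrative assumptions, not the actual site values), the batch script might look like:

```shell
#!/bin/bash
# Sketch of a SLURM batch script for an MPI-mode climo.
# Account, paths, and case details below are illustrative assumptions.
#SBATCH --account=e3sm
#SBATCH --nodes=12
#SBATCH --time=00:30:00
#SBATCH --job-name=ncclimo
#SBATCH --output=ncclimo.txt
# Note the shebang line above, which SLURM requires and PBS does not.
# Do NOT set a tasks-per-node parameter; ncclimo determines that itself.
ncclimo -p mpi -c famipc5_ne120_v0.3_00003 -s 1980 -e 1983 \
        -i /path/to/native/input -o /path/to/climo/output
```

Submit it with, e.g., sbatch ~/ncclimo.slurm and monitor it with squeue as usual.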

What does ncclimo do?

The basic idea of this script is very simple. For monthly climatologies (e.g. JAN), ncclimo passes the list of all relevant January monthly files to NCO's ncra command, which averages each variable in these monthly files over their time dimension (if it exists) or copies the value from the first month unchanged (if no time axis exists). Seasonal climos are then created by taking the average of the monthly climo files using ncra. In order to account for differing numbers of days per month, the -w flag in ncra is used, followed by the number of days in the relevant months. For example, the MAM climo is computed from: ncra -w 31,30,31 MAR_climo.nc APR_climo.nc MAY_climo.nc MAM_climo.nc (details about file names and other optimization flags have been stripped here to make the concept easier to follow). The ANN climo is then computed by doing a weighted average of the seasonal climos.
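The ANN step can be sketched the same way (file names are simplified here, as above). With E3SM's 365-day calendar, the seasonal day counts used as weights are fixed:

```shell
# Season lengths on a 365-day (noleap) calendar
djf=$((31+31+28))   # Dec+Jan+Feb = 90
mam=$((31+30+31))   # Mar+Apr+May = 92
jja=$((30+31+31))   # Jun+Jul+Aug = 92
son=$((30+31+30))   # Sep+Oct+Nov = 91

# Weighted annual mean from the four seasonal climos (simplified file names)
cmd="ncra -w $djf,$mam,$jja,$son DJF_climo.nc MAM_climo.nc JJA_climo.nc SON_climo.nc ANN_climo.nc"
echo "$cmd"
```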

Assumptions, Approximations, and Algorithms (AAA) Employed:

A climatology embodies many algorithmic choices, and regridding from the native to the analysis grid involves still more choices. A separate method should reproduce the ncclimo and NCO answers to round-off precision if it implements the same algorithmic choices. For example, ncclimo agrees to round-off with AMWG diagnostics when making the same (sometimes questionable) choices. The most important choices concern conversion between single and double precision (SP and DP, respectively), treatment of missing values, and generation/application of regridding weights. For concreteness and clarity we describe the algorithmic choices made in processing EAM monthly output into a climatological annual mean (ANN) and then regridding that. Other climatologies (e.g., daily-to-monthly, or annual-to-climatological) involve similar choices.

ACME (and CESM) computes fields in DP and outputs history (not restart) files as monthly means in SP. The NCO climatology generator (ncclimo) processes these data in four stages. Stage N accesses input only from stage N-1, never from stage N-2 or earlier. Thus the (on-disk) files from stage N determine the highest precision achievable by stage N+1. The general principle is to perform math (addition, weighting, normalization) in DP and output results to disk in the same precision in which they were input from disk (usually SP). In Stage 1, NCO ingests Stage 0 monthly means (raw EAM output), converts SP input to DP, performs the average across all years, then converts the answer from DP to SP for storage on-disk as the climatological monthly mean. In Stage 2, NCO ingests Stage 1 climatological monthly means, converts SP input to DP, performs the average across all months in the season (e.g., DJF), then converts the answer from DP to SP for storage on-disk as the climatological seasonal mean. In Stage 3, NCO ingests Stage 2 climatological seasonal means, converts SP input to DP, performs the average across all four seasons (DJF, MAM, JJA, SON), then converts the answer from DP to SP for storage on-disk as the climatological annual mean.

Stage 2 weights each input month by its number of days (e.g., 31 for January), and Stage 3 weights each input season by its number of days (e.g., 92 for MAM). ACME runs EAM with a 365-day calendar, so these weights are independent of year and never change.

The treatment of missing values in Stages 1-3 is limited by the lack of missing-value tallies in Stage 0 (model) output. Stage 0 records a value as missing if it is missing for the entire month, and present if the value is valid for one or more timesteps. Stage 0 does not record the tally (number of valid timesteps) for each spatial point. Thus a point with a single valid timestep during a month is weighted the same in Stages 1-4 as a point with 100% valid timesteps during the month. The absence of tallies inexorably degrades the accuracy of subsequent statistics by an amount that varies in time and space. On the positive side, it significantly reduces the output size (by a factor of two) and the complexity of analyzing fields that contain missing values. Given the inherently ambiguous nature of missing values, it is debatable whether they merit more exact treatment.

The vast majority of fields undergo three promotion/demotion cycles between EAM and ANN. No promotion/demotion cycles occur for history fields that EAM outputs in DP rather than SP, nor for fields without a time dimension. Typically these fields are grid coordinates (e.g., longitude, latitude) or model constants (e.g., CO2 mixing ratio). NCO never performs any arithmetic on grid coordinates or non-time-varying input, regardless of whether they are SP or DP. Instead, NCO copies these fields directly from the first input file.

Stage 4 uses a mapfile to regrid climos from the native to the desired analysis grid. ACME currently uses mapfiles generated by ESMF_RegridWeightGen (ERWG). The algorithmic choices, approximations, and commands used to generate mapfiles from input gridfiles are described here. As that page describes, the input gridfiles used by ACME until ~20150901 contained flaws that effectively reduced their precision, especially at regional scales. ACME (and CESM) mapfiles continue to approximate lat/lon grids as connected by great circles. This assumption may be removed in the future. Constraints imposed by ERWG during weight-generation ensure that global integrals of fields undergoing conservative regridding are exactly conserved.
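For reference, applying an already-generated mapfile by hand uses ncremap directly; the mapfile and climo file names below are hypothetical placeholders:

```shell
# Apply ERWG-generated weights in a mapfile to regrid a native-grid climo.
# Both file names here are hypothetical examples.
cmd="ncremap -m map_ne30pg2_to_cmip6_180x360.nc ANN_climo.nc ANN_climo_rgr.nc"
echo "$cmd"
```

ncclimo performs the equivalent of this step internally for every output file when regridding is requested.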

Application of weights from the mapfile to regrid the native data to the analysis grid is straightforward. Grid fields (e.g., latitude, longitude, area) are not regridded. Instead they are copied (and area is reconstructed if absent) directly from the mapfile. NCO ingests all other native grid (source) fields, converts SP to DP, and accumulates destination gridcell values as the sum of the DP weight (from the sparse matrix in the mapfile) times the (usually SP-promoted-to-DP) source values. Fields without missing values are then stored to disk in their original precision. Fields with missing values are treated (by default) with what NCO calls the "conservative" algorithm. The conservative algorithm uses all valid data from the source grid on the destination grid once and only once. Destination cells receive the weighted valid values of the source cells. This is conservative because the global integrals of the source and destination fields are equal. See the NCO documentation here for more description of the conservative and of the optional ("renormalized") algorithm.
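In symbols (a sketch consistent with the description above, not NCO's exact implementation), the destination value is a weighted sum of source values, and the two missing-value algorithms differ only in whether that sum is renormalized by the valid-weight fraction:

```latex
% Sparse-matrix weight application (sketch): v_j is destination cell j,
% v_i is source cell i, w_{ji} is the mapfile weight linking them.
% Conservative (default): sum over valid source cells only, no renormalization
v_j = \sum_{i \in \mathrm{valid}} w_{ji}\, v_i

% Renormalized (optional): divide by the summed weights of valid cells
v_j^{\mathrm{rnr}} = \frac{\sum_{i \in \mathrm{valid}} w_{ji}\, v_i}
                          {\sum_{i \in \mathrm{valid}} w_{ji}}
```

The conservative form preserves global integrals because every valid source contribution is counted once and only once; the renormalized form instead preserves local means over the valid fraction of each destination cell.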