
Step-by-step guide

  1. Choose the test category you want to run. In most cases, it will be one of the ACME-supported test categories:
    1. acme_integration test suite - A substantial set of tests run nightly on "next" on a few major platforms or workstations that establishes the correctness of the ACME code base. Used for verifying merged code. The set of target machines should cover all supported compilers.
    2. acme_developer test suite - A more minimal set of tests that can be run by developers on all supported platforms to instill a modicum of confidence that a set of changes does not break ACME before issuing a PR, without incurring the computational cost of running acme_full or acme_integration everywhere.
  2. To run a given test suite on a given machine, use the create_test command, issued from the scripts directory in your ACME source tree:
    1. cd ACME/scripts
    2. ./create_test -xml_mach machine -xml_compiler compiler -xml_category test_suite -testid test_label -testroot test_root_dir [-baselineroot acme_baseline_dir] -(compare|generate) baseline_subdir -project project_or_account
      1. machine - The ACME name for the machine you're on, all lowercase; consult the list of supported machines
      2. compiler - The compiler toolset you wish to use, examples are: gnu, intel, pgi
      3. test_suite - The name of the test category you want to run, like acme_developer
      4. test_root_dir - The path where your test cases will be dumped
      5. acme_baseline_dir - You only need to specify this if you want to use a different baseline area than is specified in <ACME>/scripts/ccsm_utils/Machines/config_machines.xml for your machine
      6. baseline_subdir - The name of the baselines (usually the major release you're on) you want to use. Use -generate if this is the first time running this test on this machine OR if you want to regenerate baseline results for this test (approved BFB change); otherwise, always use -compare. (A -generate variant is sketched after the example run below.)
      7. project_or_account - The id that lets you run batch jobs on this machine
    3. Once the tests are running, you'll want to see test results
      1. Case 1: Simple
        1. cd test_root_dir
        2. You will find a script named cs.status.(testid).(machine). Run it from the test root directory to see the status of the tests in this suite
      2. Case 2: A more sophisticated script, wait_for_tests, provides extra capabilities such as waiting for tests to finish, converting results to CTest format, and submitting them to a CDash dashboard (a small scripting sketch follows the command below)
        1. cd test_root_dir
        2. <ACME>/scripts/acme/wait_for_tests */TestStatus
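        For example, because wait_for_tests blocks until the tests complete, it can gate a wrapper script on the overall result. A minimal sketch, assuming wait_for_tests exits with a nonzero status when any test fails (worth verifying on your installation):

          cd test_root_dir
          if <ACME>/scripts/acme/wait_for_tests */TestStatus; then
            echo "all tests passed"
          else
            echo "at least one test failed" >&2
          fi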
    4. Example: to run the 'acme_developer' suite on Edison using the Intel compiler (assuming everything has been set up and this is not the first run), you would run:
      1. cd ACME/scripts
      2. ./create_test -xml_mach edison -xml_compiler intel -xml_category acme_developer -testid acme_dev -testroot $SCRATCH/acme_dev -compare v0.1 -project acme
      3. cd $SCRATCH/acme_dev
      4. ./cs.status.acme_dev.edison
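      If this were instead your first run of the suite on this machine, or you were regenerating baselines after an approved BFB change, you would use -generate in place of -compare, as described above. A hypothetical first-run variant of the same command:

        ./create_test -xml_mach edison -xml_compiler intel -xml_category acme_developer -testid acme_dev -testroot $SCRATCH/acme_dev -generate v0.1 -project acme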
    5. Interpreting test results:
      1. Test result output looks like the following:

        PASS ERS.f19_g16_rx1.A.edison_intel
        PASS ERS.f19_g16_rx1.A.edison_intel.memleak
        PASS ERS.f19_g16_rx1.A.edison_intel.generate./scratch2/scratchdirs/johnson/acme-baseline-testcases
        FAIL ERS_IOP4c.f19_g16_rx1.A.edison_intel
        BFAIL ERS_IOP4c.f19_g16_rx1.A.edison_intel.generate./scratch2/scratchdirs/johnson/acme-baseline-testcases
        RUN PEA_P1_M.f45_g37_rx1.A.edison_intel.G.acme_dev
        PEND SMS.ne30_f19_g16_rx1.A.edison_intel

        Briefly, PASS means the test passed; FAIL means it failed (SFAIL, CFAIL, and BFAIL denote failures in the script generation, configuration, and baseline generation stages, respectively); RUN means the test is running; and PEND means the test is waiting to be run.

        See the CSEG presentation on testing for more information on the testing system.
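        For larger suites, it can help to filter the status output down to problem cases. A minimal sketch using standard grep against the output format shown above (the script name is taken from the example; substitute your own testid and machine):

          ./cs.status.acme_dev.edison | grep FAIL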

 
