
@samhatfield
Collaborator

Currently we have the following tests in ecTrans:

  • ectrans_test_install: test whether a program can be built against ecTrans
  • ectrans_test_setup_trans0: a basic test for setup_trans0 with one fixed set of arguments
  • ectrans_test_adjoint: a high-level test of the tangent-linear/adjoint correspondence of INV_TRANS, INV_TRANSAD, DIR_TRANS, DIR_TRANSAD, all in one
  • Separate tests of tangent-linear/adjoint correspondence of INV_TRANS/AD and DIR_TRANS/AD
  • A test of tangent-linear/adjoint correspondence of GPNORM_TRANSTL/AD
  • Tests of ecTrans global and LAM transforms under many settings, using the benchmark program. These are effectively integration tests
  • transi tests

I propose to add a mechanism for unit tests, by which I mean specifically an isolated test for a single feature of one of the ecTrans subroutines. I suggest focusing on the external subroutines (at least for now). These tests will complement the tests above, and in some cases possibly replace them (e.g. we won't need ectrans_test_setup_trans0 anymore, and the benchmark program could perhaps finally be repurposed for what it was built for: benchmarking).

This draft PR gives an idea of how this could be achieved in vanilla CMake. I have followed this example. Here is how it works:

  • There is a new directory: tests/unit.
  • In this directory we have a directory for each external subroutine (currently just setup_trans0 and setup_trans).
  • In each of these, we write a plain old Fortran module with an INTEGER FUNCTION for each unit test. These are all named UNIT_TEST_*. Each function carries out a unit test and returns 0 for success or 1 for failure. They all have BIND(C) specified so they can be called from a C function linked into the final executable.
  • In the CMakeLists.txt file for this suite, we list the names of the tests defined in the corresponding Fortran file.
  • Using create_test_sourcelist we create a C program which provides a callable wrapper for each of the Fortran unit tests. From this we create an executable which CTest can call.
  • We then add each unit test one by one so CTest is aware of them, using our C program as the test executable.
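The Fortran side of one of these modules might look roughly like this. This is a minimal sketch: the module name, function name, and the trivial success criterion are purely illustrative, not the actual ecTrans test code; only the 0/1 return convention and the BIND(C) interface follow the description above.

```fortran
! Hypothetical sketch of a unit test module following the convention above.
! A real test would call the subroutine under test (e.g. SETUP_TRANS0) and
! check its effects; here the check is a placeholder.
MODULE UNIT_TEST_SETUP_TRANS0_MOD

USE, INTRINSIC :: ISO_C_BINDING, ONLY: C_INT

IMPLICIT NONE

CONTAINS

FUNCTION UNIT_TEST_SETUP_TRANS0_BASIC() RESULT(IRET) BIND(C)
  ! Return 0 for success, 1 for failure; the C driver generated by
  ! create_test_sourcelist passes this back to CTest as the exit code
  INTEGER(C_INT) :: IRET
  LOGICAL :: LLSUCCESS

  LLSUCCESS = .TRUE.  ! placeholder for a real call and its checks

  IF (LLSUCCESS) THEN
    IRET = 0
  ELSE
    IRET = 1
  END IF
END FUNCTION UNIT_TEST_SETUP_TRANS0_BASIC

END MODULE UNIT_TEST_SETUP_TRANS0_MOD
```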

The above logic is repeated for each enabled precision.
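In CMake terms, the per-suite logic might look something like the following sketch. The target and file names and the test list are illustrative; create_test_sourcelist, add_executable, and add_test are the real CMake/CTest commands, and the outer foreach stands in for the per-precision repetition.

```cmake
# Hypothetical sketch of a per-suite CMakeLists.txt; names are illustrative.
foreach(prec dp sp)
  # The list of unit test functions defined in the Fortran module
  set(tests
    unit_test_setup_trans0_eq_regions)

  # Generate a C driver that dispatches to the BIND(C) Fortran functions
  create_test_sourcelist(test_sources unit_test_driver_${prec}.c ${tests})

  # Build one executable per precision from the driver plus the Fortran module
  add_executable(unit_test_setup_trans0_${prec}
    ${test_sources} unit_test_setup_trans0_mod.F90)

  # Register each test with CTest, passing the test name to the driver
  foreach(test ${tests})
    add_test(NAME ${test}_${prec}
             COMMAND unit_test_setup_trans0_${prec} ${test})
  endforeach()
endforeach()
```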

All unit tests will then be visible when running ctest -R unit_test.

Here are the tests I've implemented so far:

➜  build ctest -N -R unit_test
Test project /Users/samhatfield/Work/ECMWF/ectrans/build
  Test #188: unit_test_setup_trans0_eq_regions_dp
  Test #189: unit_test_setup_trans0_eq_regions_sp
  Test #190: unit_test_setup_trans_without_setup_trans0_dp
  Test #191: unit_test_setup_trans_basic_dp
  Test #192: unit_test_setup_trans_odd_ndgl_dp
  Test #193: unit_test_setup_trans_octahedral_dp
  Test #194: unit_test_setup_trans_stretching_dp
  Test #195: unit_test_setup_trans_all_fftw_dp
  Test #196: unit_test_setup_trans_belusov_dp
  Test #197: unit_test_setup_trans_flt_dp
  Test #198: unit_test_setup_trans_without_setup_trans0_sp
  Test #199: unit_test_setup_trans_basic_sp
  Test #200: unit_test_setup_trans_odd_ndgl_sp
  Test #201: unit_test_setup_trans_octahedral_sp
  Test #202: unit_test_setup_trans_stretching_sp
  Test #203: unit_test_setup_trans_all_fftw_sp
  Test #204: unit_test_setup_trans_belusov_sp
  Test #205: unit_test_setup_trans_flt_sp

Total Tests: 18

I have implemented "will fail" logic to handle cases that are expected to fail (e.g. calling SETUP_TRANS without SETUP_TRANS0, or initialising with an odd number of latitudes - both covered above). However, this requires modifying the abort behaviour of SDL_SRLABORT: we need to somehow disable the raising of SIGABRT and use ERROR STOP instead of STOP so that an exit code of 1 is returned. Otherwise I don't think there's any way to recover from ABORT_TRANS.
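On the CTest side, the expected-failure cases can be expressed with the standard WILL_FAIL test property (the test names below are from the listing above). Note that WILL_FAIL only inverts the pass/fail sense of the exit code; a test that dies on a signal such as SIGABRT is still reported as failed, which is why the SDL_SRLABORT change is needed.

```cmake
# Mark tests that are expected to abort as "will fail". This only works
# once ABORT_TRANS exits with a nonzero status rather than raising
# SIGABRT, since CTest treats a signal as a hard failure even for
# WILL_FAIL tests.
set_tests_properties(
  unit_test_setup_trans_without_setup_trans0_dp
  unit_test_setup_trans_odd_ndgl_dp
  PROPERTIES WILL_FAIL TRUE)
```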

Also, for now I've disabled MPI in these tests. We obviously need to test at least some of the subroutines with MPI enabled (e.g. DIST_GRID), but I'm not sure yet how to do this without wrapping the whole suite in a loop over 0, 1, 2 MPI tasks and having the number of tests explode.
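For what it's worth, one possible shape for the MPI-enabled suites would be a small foreach over task counts, at the cost of exactly the test-count blow-up mentioned above. The test and target names here are hypothetical; MPIEXEC_EXECUTABLE and MPIEXEC_NUMPROC_FLAG are the variables provided by CMake's FindMPI module.

```cmake
# Hypothetical sketch: run the same driver under 1 and 2 MPI tasks.
# Target and test names are illustrative.
foreach(ntasks 1 2)
  add_test(NAME unit_test_dist_grid_basic_dp_np${ntasks}
           COMMAND ${MPIEXEC_EXECUTABLE} ${MPIEXEC_NUMPROC_FLAG} ${ntasks}
                   $<TARGET_FILE:unit_test_dist_grid_dp>
                   unit_test_dist_grid_basic)
endforeach()
```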

Anyway, any thoughts @wdeconinck @marsdeno @ddegrauwe @dhaumont @RyadElKhatibMF?

@samhatfield added the "enhancement" and "idea 💡" labels on Sep 24, 2025
@samhatfield
Collaborator Author

Ninja doesn't like the fact that I generate the same Fortran modules twice. I need to find a way to emit a precision-qualified module file when each test suite is compiled.

@dhaumont
Contributor

dhaumont commented Sep 25, 2025

Very nice initiative.
On top of what you describe, we also need to think about:

  • testing for GPU and CPU
  • test for different grid partitioning (this is probably linked to MPI)
  • test the new field-api interface
  • cover the LAM

@samhatfield
Collaborator Author

Very nice initiative. On top of what you describe, we also need to think about:

  • testing for GPU and CPU
  • test for different grid partitioning (this is probably linked to MPI)
  • test the new field-api interface
  • cover the LAM

Thanks! Yes good points.

  • CPU and GPU should be straightforward - basically following the same logic as the precisions.
  • Different grid partitioning - you mean like different NPROMA values?
  • Yup, should be straightforward to add those tests.
  • Yeah I thought maybe you and Daan could add a tests/etrans/unit directory which mirrors the test/trans/unit one.

@samhatfield force-pushed the unit_test branch 2 times, most recently from bd09849 to 2b76047, on September 26, 2025 at 12:51
@samhatfield
Collaborator Author

Discussions with @marsdeno highlighted that what I've done here is not really "unit tests". A better name would be "interface tests".
