test: Add a way to detect a test that breaks another

When running unit tests, some may have side effects which cause a
subsequent test to break. This can sometimes be seen when using 'ut dm'
or similar.

Add a new argument which allows a particular (failing) test to be run
immediately after a certain number of tests have run. This allows the
test causing the failure to be determined.

Update the documentation also.
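
For example, the following would run dm_test_host after 82 other tests
from the 'dm' suite have run (the value 82 is just for illustration):

   ut dm -I82:dm_test_host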

Signed-off-by: Simon Glass <sjg@chromium.org>
Simon Glass <sjg@chromium.org>, 2022-10-29 19:47:13 -06:00
parent 6580b61830
commit d1b4659570
6 changed files with 127 additions and 10 deletions


@@ -132,7 +132,7 @@ void spl_board_init(void)
 		int ret;
 
 		ret = ut_run_list("spl", NULL, tests, count,
-				  state->select_unittests, 1, false);
+				  state->select_unittests, 1, false, NULL);
 		/* continue execution into U-Boot */
 	}
 }


@@ -143,6 +143,75 @@ For example::
 
     Test dm_test_rtc_reset failed 3 times
 
+Isolating a test that breaks another
+------------------------------------
+
+When running unit tests, some may have side effects which cause a subsequent
+test to break. This can sometimes be seen when using 'ut dm' or similar.
+
+You can use the `-I` argument to the `ut` command to isolate this problem.
+First use `ut info` to see how many tests there are, then use a binary search to
+home in on the problem. Note that you might need to restart U-Boot after each
+iteration, so the `-c` argument to U-Boot is useful.
+
+For example, let's say that dm_test_host() is failing::
+
+    => ut dm
+    ...
+    Test: dm_test_get_stats: core.c
+    Test: dm_test_get_stats: core.c (flat tree)
+    Test: dm_test_host: host.c
+    test/dm/host.c:71, dm_test_host(): 0 == ut_check_delta(mem_start): Expected 0x0 (0), got 0xffffcbb0 (-13392)
+    Test: dm_test_host: host.c (flat tree)
+    Test <NULL> failed 1 times
+    Test: dm_test_host_dup: host.c
+    Test: dm_test_host_dup: host.c (flat tree)
+    ...
+
+You can then tell U-Boot to run the failing test at different points in the
+sequence::
+
+    => ut info
+    Test suites: 21
+    Total tests: 645
+
+::
+
+    $ ./u-boot -T -c "ut dm -I300:dm_test_host"
+    ...
+    Test: dm_test_pinctrl_single: pinmux.c (flat tree)
+    Test: dm_test_host: host.c
+    test/dm/host.c:71, dm_test_host(): 0 == ut_check_delta(mem_start): Expected 0x0 (0), got 0xfffffdb0 (-592)
+    Test: dm_test_host: host.c (flat tree)
+    Test dm_test_host failed 1 times (position 300)
+    Failures: 4
+
+So it happened before position 300. Trying 150 shows it failing, so we try 75::
+
+    $ ./u-boot -T -c "ut dm -I75:dm_test_host"
+    ...
+    Test: dm_test_autoprobe: core.c
+    Test: dm_test_autoprobe: core.c (flat tree)
+    Test: dm_test_host: host.c
+    Test: dm_test_host: host.c (flat tree)
+    Failures: 0
+
+That succeeds, so we try 120, etc. until eventually we can figure out that the
+problem first happens at position 82::
+
+    $ ./u-boot -T -c "ut dm -I82:dm_test_host"
+    ...
+    Test: dm_test_blk_flags: blk.c
+    Test: dm_test_blk_flags: blk.c (flat tree)
+    Test: dm_test_host: host.c
+    test/dm/host.c:71, dm_test_host(): 0 == ut_check_delta(mem_start): Expected 0x0 (0), got 0xffffc960 (-13984)
+    Test: dm_test_host: host.c (flat tree)
+    Test dm_test_host failed 1 times (position 82)
+    Failures: 1
+
+From this we can deduce that `dm_test_blk_flags()` causes the problem with
+`dm_test_host()`.
+
 Running sandbox_spl tests directly
 ----------------------------------


@@ -8,10 +8,12 @@ Synopsis
 
 ::
 
-    ut [-r<runs>] [-f] [<suite> [<test>]]
+    ut [-r<runs>] [-f] [-I<n>:<one_test>] [<suite> [<test>]]
 
     <runs> Number of times to run each test
    -f Force 'manual' tests to run as well
+    <n> Run <one_test> after <n> other tests have run
+    <one_test> Name of the 'one' test to run
     <suite> Test suite to run, or `all`
     <test> Name of single test to run
 
@@ -35,6 +37,13 @@ Manual tests are normally skipped by this command. Use `-f` to run them. See
 See :ref:`develop/tests_writing:mixing python and c` for more information on
 manual tests.
 
+When running unit tests, some may have side effects which cause a subsequent
+test to break. This can sometimes be seen when using 'ut dm' or similar. To
+fix this, select the 'one' test which breaks. Then tell the 'ut' command to
+run this one test after a certain number of other tests have run. Using a
+binary search method with `-I` you can quickly figure out which test is
+causing the problem.
+
 Generally all tests in the suite are run. To run just a single test from the
 suite, provide the <test> argument.
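
To make the `-I` option concrete, a session using it might look like this
(the position and test name are borrowed from the testing.rst walkthrough
above)::

    => ut info
    Test suites: 21
    Total tests: 645
    => ut dm -I82:dm_test_host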


@@ -410,10 +410,15 @@ void test_set_state(struct unit_test_state *uts);
  *	then all tests are run
  * @runs_per_test: Number of times to run each test (typically 1)
  * @force_run: Run tests that are marked as manual-only (UT_TESTF_MANUAL)
+ * @test_insert: String describing a test to run after n other tests run, in
+ *	the format n:name, where n is the number of tests to run before this
+ *	one and name is the name of the test to run. This is used to find
+ *	which test causes another test to fail. If the one test fails, testing
+ *	stops immediately. Pass NULL to disable this.
  * Return: 0 if all tests passed, -1 if any failed
  */
 int ut_run_list(const char *name, const char *prefix, struct unit_test *tests,
 		int count, const char *select_name, int runs_per_test,
-		bool force_run);
+		bool force_run, const char *test_insert);
 
 #endif
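
For comparison with the spl.c hunk above, which passes NULL to disable the
insertion, a call site that enables it might look like this sketch (the
position and test name are hypothetical, chosen to match the documentation
example):

	/* Run each test once, skip manual tests, and insert dm_test_host
	 * after 82 other tests have run -- the C equivalent of
	 * 'ut dm -I82:dm_test_host'.
	 */
	ret = ut_run_list("spl", NULL, tests, count,
			  state->select_unittests, 1, false,
			  "82:dm_test_host");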


@@ -21,6 +21,7 @@ int cmd_ut_category(const char *name, const char *prefix,
 		    struct unit_test *tests, int n_ents,
 		    int argc, char *const argv[])
 {
+	const char *test_insert = NULL;
 	int runs_per_text = 1;
 	bool force_run = false;
 	int ret;
@@ -35,13 +36,17 @@ int cmd_ut_category(const char *name, const char *prefix,
 		case 'f':
 			force_run = true;
 			break;
+		case 'I':
+			test_insert = str + 2;
+			break;
 		}
 		argv++;
 		argc--;
 	}
 
 	ret = ut_run_list(name, prefix, tests, n_ents,
			  argc > 1 ? argv[1] : NULL, runs_per_text, force_run,
			  test_insert);
 
 	return ret ? CMD_RET_FAILURE : 0;
 }
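
Here `str + 2` simply skips the leading "-I", leaving a string such as
"300:dm_test_host" to be split later in ut_run_tests() (see the next hunk).
A minimal standalone sketch of that split, using standard strtoul() in place
of U-Boot's dectoul():

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		const char *test_insert = "300:dm_test_host";
		const char *p;
		unsigned long pos;

		pos = strtoul(test_insert, NULL, 10);	/* tests to run first */
		p = strchr(test_insert, ':');		/* test name follows ':' */
		if (p)
			p++;

		printf("insert '%s' after %lu tests\n", p, pos);

		return 0;
	}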


@@ -498,12 +498,29 @@ static int ut_run_test_live_flat(struct unit_test_state *uts,
  */
 static int ut_run_tests(struct unit_test_state *uts, const char *prefix,
 			struct unit_test *tests, int count,
-			const char *select_name)
+			const char *select_name, const char *test_insert)
 {
-	struct unit_test *test;
+	struct unit_test *test, *one;
 	int found = 0;
+	int pos = 0;
+	int upto;
 
-	for (test = tests; test < tests + count; test++) {
+	one = NULL;
+	if (test_insert) {
+		char *p;
+
+		pos = dectoul(test_insert, NULL);
+		p = strchr(test_insert, ':');
+		if (p)
+			p++;
+		for (test = tests; test < tests + count; test++) {
+			if (!strcmp(p, test->name))
+				one = test;
+		}
+	}
+
+	for (upto = 0, test = tests; test < tests + count; test++, upto++) {
 		const char *test_name = test->name;
 		int ret, i, old_fail_count;
 
@@ -534,6 +551,17 @@ static int ut_run_tests(struct unit_test_state *uts, const char *prefix,
 			}
 		}
 		old_fail_count = uts->fail_count;
+
+		if (one && upto == pos) {
+			ret = ut_run_test_live_flat(uts, one);
+			if (uts->fail_count != old_fail_count) {
+				printf("Test %s failed %d times (position %d)\n",
+				       one->name,
+				       uts->fail_count - old_fail_count, pos);
+			}
+			return -EBADF;
+		}
+
 		for (i = 0; i < uts->runs_per_test; i++)
 			ret = ut_run_test_live_flat(uts, test);
 		if (uts->fail_count != old_fail_count) {
@@ -554,7 +582,7 @@ static int ut_run_tests(struct unit_test_state *uts, const char *prefix,
 
 int ut_run_list(const char *category, const char *prefix,
 		struct unit_test *tests, int count, const char *select_name,
-		int runs_per_test, bool force_run)
+		int runs_per_test, bool force_run, const char *test_insert)
 {
 	struct unit_test_state uts = { .fail_count = 0 };
 	bool has_dm_tests = false;
@@ -589,7 +617,8 @@ int ut_run_list(const char *category, const char *prefix,
 		memcpy(uts.fdt_copy, gd->fdt_blob, uts.fdt_size);
 	}
 	uts.force_run = force_run;
-	ret = ut_run_tests(&uts, prefix, tests, count, select_name);
+	ret = ut_run_tests(&uts, prefix, tests, count, select_name,
+			   test_insert);
 
 	/* Best efforts only...ignore errors */
 	if (has_dm_tests)