"Das U-Boot" Source Tree
at master 374 lines 14 kB view raw
.. SPDX-License-Identifier: GPL-2.0+
.. Copyright 2021 Google LLC
.. sectionauthor:: Simon Glass <sjg@chromium.org>

Writing Tests
=============

This describes how to write tests in U-Boot and the possible options.

Test types
----------

There are two basic types of test in U-Boot:

   - Python tests, in test/py/tests
   - C tests, in test/ and its subdirectories

(There are also UEFI tests in lib/efi_selftest/, not considered here.)

Python tests talk to U-Boot via the command line. They support both sandbox and
real hardware. They typically do not require building test code into U-Boot
itself. They are fairly slow to run, due to the command-line interface and the
two separate processes involved. Python tests are fairly easy to write, but can
be a little tricky to debug at times due to the voluminous output of pytest.

C tests are written directly in U-Boot. While they can be used on boards, they
are more commonly used with sandbox, as they obviously add to U-Boot's code
size. C tests are easy to write so long as the required facilities exist; where
they do not, writing one can involve refactoring or adding new features to
sandbox. They are fast to run and easy to debug.

Regardless of which test type is used, all tests are collected and run by the
pytest framework, so there is typically no need to run them separately. This
means that C tests can be used when it makes sense, and Python tests when it
doesn't.


This table shows how to decide whether to write a C or Python test:

===================== =========================== =============================
Attribute             C test                      Python test
===================== =========================== =============================
Fast to run?          Yes                         No (two separate processes)
Easy to write?        Yes, if required test       Yes
                      features exist in sandbox
                      or the target system
Needs code in U-Boot? Yes                         No, provided the test can be
                                                  executed and the result
                                                  determined using the command
                                                  line
Easy to debug?        Yes                         No, since access to the
                                                  U-Boot state is not
                                                  available and the amount of
                                                  output can sometimes require
                                                  a bit of digging
Can use gdb?          Yes, directly               Yes, with --gdbserver
Can run on boards?    Some can, but only if       Some
                      compiled in and not
                      dependent on sandbox
===================== =========================== =============================


Python or C
-----------

Typically in U-Boot we encourage C tests using sandbox for all features. This
allows fast testing and easy development, and lets contributors make changes
without needing dozens of boards to test with.

When a test requires setup or interaction with the running host (such as
generating images and then running U-Boot to check that they can be loaded),
or cannot be run on sandbox, Python tests should be used. These should
typically NOT rely on running with sandbox, but instead should function
correctly on any board supported by U-Boot.


Mixing Python and C
-------------------

The best of both worlds is sometimes to have a Python test set things up and
perform some operations, with a 'checker' C unit test doing the checks
afterwards. This can be achieved with these steps:

- Add the `UTF_MANUAL` flag to the checker test so that the `ut` command
  does not run it by default
- Add a `_norun` suffix to the name so that pytest knows to skip it too

In your Python test use the `-f` flag to the `ut` command to force the checker
test to run, e.g.::

    # Do the Python part
    host load ...
    bootm ...

    # Run the checker to make sure that everything worked
    ut -f bootstd vbe_test_fixup_norun

Note that apart from the `UTF_MANUAL` flag, the code in a 'manual' C test
is just like any other C test.
It still uses ut_assert...() and other such
constructs, in this case to check that the expected things happened in the
Python test.


How slow are Python tests?
--------------------------

Under the hood, when running on sandbox, Python tests work by starting a
sandbox test and connecting to it via a pipe. Each interaction with the U-Boot
process requires at least a context switch to handle the pipe interaction. The
test sends a command to U-Boot, which then reacts and shows some output, then
the test sees that and continues. Of course on real hardware, communications
delays (e.g. with a serial console) make this slower.

For comparison, consider a test that checks the 'md' (memory dump) command.
All times below are approximate, as measured on an AMD 2950X system. Here is
the test in Python::

    @pytest.mark.buildconfigspec('cmd_memory')
    def test_md(u_boot_console):
        """Test that md reads memory as expected, and that memory can be modified
        using the mw command."""

        ram_base = u_boot_utils.find_ram_base(u_boot_console)
        addr = '%08x' % ram_base
        val = 'a5f09876'
        expected_response = addr + ': ' + val
        u_boot_console.run_command('mw ' + addr + ' 0 10')
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(not (expected_response in response))
        u_boot_console.run_command('mw ' + addr + ' ' + val)
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(expected_response in response)

This runs a few commands and checks the output. Note that it runs a command,
waits for the response and then checks it against what is expected. If run by
itself it takes around 800ms, including test collection. For 1000 runs it
takes 19 seconds, or 19ms per run. Of course 1000 runs is not that useful
since we only want to run it once.
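As a back-of-envelope sketch (using only the approximate figures quoted above;
nothing here is a new measurement), the fixed start-up cost can be separated
from the per-run cost::

    # Rough breakdown of the Python-test timings quoted above.
    single_run_ms = 800          # one invocation, including test collection
    runs = 1000
    total_ms = 19 * 1000         # 19 seconds for 1000 runs

    per_run_ms = total_ms / runs             # steady-state cost per run
    startup_ms = single_run_ms - per_run_ms  # fixed start-up/collection cost

    print(per_run_ms)   # 19.0
    print(startup_ms)   # 781.0

In other words, almost all of a single pytest invocation is fixed collection
and start-up overhead, which is why batching many tests into one run is
comparatively cheap.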
There is no exactly equivalent C test, but here is a similar one that tests
'ms' (memory search)::

    /* Test 'ms' command with bytes */
    static int mem_test_ms_b(struct unit_test_state *uts)
    {
        u8 *buf;

        buf = map_sysmem(0, BUF_SIZE + 1);
        memset(buf, '\0', BUF_SIZE);
        buf[0x0] = 0x12;
        buf[0x31] = 0x12;
        buf[0xff] = 0x12;
        buf[0x100] = 0x12;
        run_command("ms.b 1 ff 12", 0);
        ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................");
        ut_assert_nextline("--");
        ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12 ................");
        ut_assert_nextline("2 matches");
        ut_assert_console_end();

        ut_asserteq(2, env_get_hex("memmatches", 0));
        ut_asserteq(0xff, env_get_hex("memaddr", 0));
        ut_asserteq(0xfe, env_get_hex("mempos", 0));

        unmap_sysmem(buf);

        return 0;
    }
    MEM_TEST(mem_test_ms_b, UTF_CONSOLE);

This runs the command directly in U-Boot, then checks the console output, also
directly in U-Boot. If run by itself this takes 100ms. For 1000 runs it takes
660ms, or 0.66ms per run.

So overall, running a C test is perhaps 8 times faster individually, and the
interactions are perhaps 25 times faster.

It should also be noted that the C test is fairly easy to debug. You can set a
breakpoint on do_mem_search(), which is what implements the 'ms' command,
single-step to see what might be wrong, etc. That is also possible with
pytest, but requires two terminals and --gdbserver.


Why does speed matter?
----------------------

Many development activities rely on running tests:

 - 'git bisect run make qcheck' can be used to find a failing commit
 - test-driven development relies on quick iteration of build/test
 - U-Boot's continuous integration (CI) systems make use of tests.
   Running all sandbox tests typically takes 90 seconds, and running each qemu
   test takes about 30 seconds. This is currently dwarfed by the time taken to
   build all boards.

As U-Boot continues to grow its feature set, fast and reliable tests are a
critical factor in developer productivity and happiness.


Writing C tests
---------------

C tests are arranged into suites which are typically executed by the 'ut'
command. Each suite is in its own file. This section describes how to
accomplish some common test tasks.

(There are also UEFI C tests in lib/efi_selftest/, not considered here.)

Add a new driver model test
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when adding a test for a new or existing uclass, adding new
operations or features to a uclass, adding new ofnode or dev_read_()
functions, or anything else related to driver model.

Find a suitable place for your test, perhaps near other test functions in
existing code, or in a new file. Each uclass should have its own test file.

Declare the test with::

    /* Test that ... */
    static int dm_test_uclassname_what(struct unit_test_state *uts)
    {
        /* test code here */

        return 0;
    }
    DM_TEST(dm_test_uclassname_what, UTF_SCAN_FDT);

Note that the convention is to NOT add a blank line before the macro, so that
the function it relates to is more obvious.

Replace 'uclassname' with the name of your uclass, if applicable. Replace
'what' with what you are testing.

The flags for DM_TEST() are defined in test/test.h and you typically want
UTF_SCAN_FDT so that the devicetree is scanned and all devices are bound
and ready for use. The DM_TEST macro adds UTF_DM automatically so that
the test runner knows it is a driver model test.

Driver model tests are special in that the entire driver model state is
recreated anew for each test.
This ensures that if a previous test deletes a
device, for example, it does not affect subsequent tests. Driver model tests
also run with both livetree and flattree, to ensure that both devicetree
implementations work as expected.

Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4


Add a C test to an existing suite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when you are adding to or modifying an existing feature outside
driver model. An example is bloblist.

Add a new function in the same file as the rest of the suite and register it
with the suite. For example, to add a new mem_search test::

    /* Test 'ms' command with 32-bit values */
    static int mem_test_ms_new_thing(struct unit_test_state *uts)
    {
        /* test code here */

        return 0;
    }
    MEM_TEST(mem_test_ms_new_thing, UTF_CONSOLE);

Note that the MEM_TEST() macro is defined at the top of the file.

Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2


Add a new test suite
~~~~~~~~~~~~~~~~~~~~

Each suite should focus on one feature or subsystem, so if you are writing a
new one of those, you should add a new suite.

Create a new file in test/ or a subdirectory and define a macro to register
the suite.
For example::

    #include <console.h>
    #include <mapmem.h>
    #include <dm/test.h>
    #include <test/ut.h>

    /* Declare a new wibble test */
    #define WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, wibble_test)

    /* Tests go here */

    /* At the bottom of the file: */

    int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
    {
        struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
        const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);

        return cmd_ut_category("cmd_wibble", "wibble_test_", tests, n_ents,
                               argc, argv);
    }

Then add new tests to it as above.

Register this new suite in test/cmd_ut.c by adding to cmd_ut_sub[]::

    /* Within cmd_ut_sub[]... */

    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),

and adding new help to ut_help_text[]::

    "ut wibble - Test the wibble feature\n"

If your feature is conditional on a particular Kconfig option, you can use
#ifdef to control that.

Finally, add the test to the build by adding to the Makefile in the same
directory::

    obj-$(CONFIG_$(XPL_)CMDLINE) += wibble.o

Note that CMDLINE is never enabled in SPL, so this test will only be present
in U-Boot proper. See below for how to do SPL tests.

As before, you can add an extra Kconfig check if needed::

    ifneq ($(CONFIG_$(XPL_)WIBBLE),)
    obj-$(CONFIG_$(XPL_)CMDLINE) += wibble.o
    endif

Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64


Making the test run from pytest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All C tests must run from pytest. Typically this is automatic, since pytest
scans the U-Boot executable for available tests to run. So long as you have a
'ut' subcommand for your test suite, it will run.
The same applies for driver
model tests, since they use the 'ut dm' subcommand.

See test/py/tests/test_ut.py for how unit tests are run.


Add a C test for SPL
~~~~~~~~~~~~~~~~~~~~

Note: C tests are only available for sandbox_spl at present. There is
currently no mechanism in other boards to run SPL tests, even if they are
built into the image.

SPL tests cannot be run from the 'ut' command since there are no commands
available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up,
when the -u flag is given. This runs the available unit tests, no matter what
suite they are in.

To create a new SPL test, follow the same rules as above, either adding to an
existing suite or creating a new one.

An example SPL test is spl_test_load().


Writing Python tests
--------------------

See :doc:`py_testing` for brief notes on how to write Python tests. You
should be able to use the existing tests in test/py/tests as examples.
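As a starting point, here is a minimal sketch in the style of the test_md
example earlier. It assumes the standard `u_boot_console` fixture; the 'echo'
command and the `cmd_echo` buildconfigspec name are illustrative assumptions
rather than an existing test::

    import pytest

    # Skip this test unless CONFIG_CMD_ECHO is enabled in the build
    # (illustrative; pick the Kconfig option your feature depends on).
    @pytest.mark.buildconfigspec('cmd_echo')
    def test_echo(u_boot_console):
        """Check that the 'echo' command prints its argument back."""
        response = u_boot_console.run_command('echo hello')
        assert 'hello' in response

The pattern is always the same: run a command over the console, capture the
output, and assert on it.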