Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

xarray: port tests to kunit

Minimally rewrite the XArray unit tests to use kunit. This integrates
nicely with existing kunit tools, which produce more readable
human-friendly output than the existing machinery.

Running the xarray tests before this change requires an obscure
invocation

```
tools/testing/kunit/kunit.py run --arch arm64 --make_options LLVM=1 \
--kconfig_add CONFIG_TEST_XARRAY=y --raw_output=all nothing
```

which on failure produces

```
BUG at check_reserve:513
...
XArray: 6782340 of 6782364 tests passed
```

and exits 0.
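The old harness's failure mode can be seen in miniature with a plain shell
sketch (not kernel code; `old_runner` below merely mimics the output quoted
above):

```shell
# Sketch: a test runner that prints a failure summary but always exits 0,
# like the pre-KUnit harness. CI gates keyed on exit status never trip.
old_runner() {
	printf 'BUG at check_reserve:513\n'
	printf 'XArray: 6782340 of 6782364 tests passed\n'
	return 0	# reports "success" regardless of the BUG line above
}

if old_runner; then
	echo 'CI would report: PASS'
fi
```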

Running the xarray tests after this change requires a simpler invocation

```
tools/testing/kunit/kunit.py run --arch arm64 --make_options LLVM=1 \
xarray
```

which on failure produces (colors omitted)

```
[09:50:53] ====================== check_reserve ======================
[09:50:53] [FAILED] param-0
[09:50:53] # check_reserve: EXPECTATION FAILED at lib/test_xarray.c:536
[09:50:53] xa_erase(xa, 12345678) != NULL
...
[09:50:53] # module: test_xarray
[09:50:53] # xarray: pass:26 fail:3 skip:0 total:29
[09:50:53] # Totals: pass:28 fail:3 skip:0 total:31
[09:50:53] ===================== [FAILED] xarray ======================
```

and exits 1.
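Since failures now surface through the exit status, the per-run totals line
is also easy to post-process. A small shell sketch (the sample line is
copied from the output above; the `pass:`/`fail:` field layout is assumed
stable):

```shell
# Sketch: pull the failure count out of a KUnit totals line.
totals='# Totals: pass:28 fail:3 skip:0 total:31'
fails=$(printf '%s\n' "$totals" | sed -n 's/.*fail:\([0-9]*\).*/\1/p')
echo "failures: $fails"
```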

Richer kunit assertions are intentionally not used, to keep the scope of
the change small.

[akpm@linux-foundation.org: fix cocci warning]
Link: https://lore.kernel.org/oe-kbuild-all/202412081700.YXB3vBbg-lkp@intel.com/
Link: https://lkml.kernel.org/r/20241205-xarray-kunit-port-v1-1-ee44bc7aa201@gmail.com
Signed-off-by: Tamir Duberstein <tamird@gmail.com>
Cc: Bill Wendling <morbo@google.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Tamir Duberstein; committed by Andrew Morton
c7bb5cf9 7a77edf4

+403 -287
arch/m68k/configs/amiga_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/apollo_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/atari_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/bvme6000_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/hp300_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/mac_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/multi_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/mvme147_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/mvme16x_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/q40_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/sun3_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/m68k/configs/sun3x_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
arch/powerpc/configs/ppc64_defconfig (-1)

```
···
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_TEST_UUID=m
-CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
 CONFIG_TEST_IDA=m
```
lib/Kconfig.debug (+16 -2)

```
···
 config TEST_UUID
 	tristate "Test functions located in the uuid module at runtime"

-config TEST_XARRAY
-	tristate "Test the XArray code at runtime"
+config XARRAY_KUNIT
+	tristate "KUnit test XArray code at runtime" if !KUNIT_ALL_TESTS
+	depends on KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  Enable this option to test the Xarray code at boot.
+
+	  KUnit tests run during boot and output the results to the debug log
+	  in TAP format (http://testanything.org/). Only useful for kernel devs
+	  running the KUnit test harness, and not intended for inclusion into a
+	  production build.
+
+	  For more information on KUnit and unit tests in general please refer
+	  to the KUnit documentation in Documentation/dev-tools/kunit/.
+
+	  If unsure, say N.

 config TEST_MAPLE_TREE
 	tristate "Test the Maple Tree code at runtime or module load"
```
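The Kconfig help text notes that KUnit reports results in TAP format
(http://testanything.org/). As a rough illustration (the sample result
lines below are made up, not actual kernel output), TAP's `ok`/`not ok`
prefixes make tallying results trivial:

```shell
# Sketch: tally pass/fail over TAP-style result lines.
printf '%s\n' \
	'ok 1 - check_xa_err' \
	'not ok 2 - check_reserve' \
	'ok 3 - check_xa_load' |
awk '/^ok /{p++} /^not ok /{f++} END { printf "pass:%d fail:%d\n", p, f }'
```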
lib/Makefile (+1 -1)

```
···
 endif

 obj-$(CONFIG_TEST_UUID) += test_uuid.o
-obj-$(CONFIG_TEST_XARRAY) += test_xarray.o
 obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o
 obj-$(CONFIG_TEST_PARMAN) += test_parman.o
 obj-$(CONFIG_TEST_KMOD) += test_kmod.o
···
 obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
 obj-$(CONFIG_CHECKSUM_KUNIT) += checksum_kunit.o
 obj-$(CONFIG_UTIL_MACROS_KUNIT) += util_macros_kunit.o
+obj-$(CONFIG_XARRAY_KUNIT) += test_xarray.o
 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
 obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o
 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
```
lib/test_xarray.c (+386 -271)

```
···
  * Author: Matthew Wilcox <willy@infradead.org>
  */

-#include <linux/xarray.h>
-#include <linux/module.h>
+#include <kunit/test.h>

-static unsigned int tests_run;
-static unsigned int tests_passed;
+#include <linux/module.h>
+#include <linux/xarray.h>

 static const unsigned int order_limit =
 		IS_ENABLED(CONFIG_XARRAY_MULTI) ? BITS_PER_LONG : 1;
···
 void xa_dump(const struct xarray *xa) { }
 # endif
 #undef XA_BUG_ON
-#define XA_BUG_ON(xa, x) do {					\
-	tests_run++;						\
-	if (x) {						\
-		printk("BUG at %s:%d\n", __func__, __LINE__);	\
-		xa_dump(xa);					\
-		dump_stack();					\
-	} else {						\
-		tests_passed++;					\
-	}							\
+#define XA_BUG_ON(xa, x) do {			\
+	if (x) {				\
+		KUNIT_FAIL(test, #x);		\
+		xa_dump(xa);			\
+		dump_stack();			\
+	}					\
 } while (0)
 #endif
···
 	return xa_store(xa, index, xa_mk_index(index), gfp);
 }

-static void xa_insert_index(struct xarray *xa, unsigned long index)
+static void xa_insert_index(struct kunit *test, struct xarray *xa, unsigned long index)
 {
 	XA_BUG_ON(xa, xa_insert(xa, index, xa_mk_index(index),
 				GFP_KERNEL) != 0);
 }

-static void xa_alloc_index(struct xarray *xa, unsigned long index, gfp_t gfp)
+static void xa_alloc_index(struct kunit *test, struct xarray *xa, unsigned long index, gfp_t gfp)
 {
 	u32 id;

···
 	XA_BUG_ON(xa, id != index);
 }

-static void xa_erase_index(struct xarray *xa, unsigned long index)
+static void xa_erase_index(struct kunit *test, struct xarray *xa, unsigned long index)
 {
 	XA_BUG_ON(xa, xa_erase(xa, index) != xa_mk_index(index));
 	XA_BUG_ON(xa, xa_load(xa, index) != NULL);
···
 	return curr;
 }

-static noinline void check_xa_err(struct xarray *xa)
+static inline struct xarray *xa_param(struct kunit *test)
 {
+	return *(struct xarray **)test->param_value;
+}
+
+static noinline void check_xa_err(struct kunit *test)
+{
+	struct xarray *xa = xa_param(test);
+
 	XA_BUG_ON(xa, xa_err(xa_store_index(xa, 0, GFP_NOWAIT)) != 0);
 	XA_BUG_ON(xa, xa_err(xa_erase(xa, 0)) != 0);
 #ifndef __KERNEL__
···
 //	XA_BUG_ON(xa, xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) != -EINVAL);
 }

-static noinline void check_xas_retry(struct xarray *xa)
+static noinline void check_xas_retry(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, 0);
 	void *entry;

···

 	rcu_read_lock();
 	XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != xa_mk_value(0));
-	xa_erase_index(xa, 1);
+	xa_erase_index(test, xa, 1);
 	XA_BUG_ON(xa, !xa_is_retry(xas_reload(&xas)));
 	XA_BUG_ON(xa, xas_retry(&xas, NULL));
 	XA_BUG_ON(xa, xas_retry(&xas, xa_mk_value(0)));
···
 	}
 	xas_unlock(&xas);

-	xa_erase_index(xa, 0);
-	xa_erase_index(xa, 1);
+	xa_erase_index(test, xa, 0);
+	xa_erase_index(test, xa, 1);
 }

-static noinline void check_xa_load(struct xarray *xa)
+static noinline void check_xa_load(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned long i, j;

 	for (i = 0; i < 1024; i++) {
···
 			else
 				XA_BUG_ON(xa, entry);
 		}
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);
 	}
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_xa_mark_1(struct xarray *xa, unsigned long index)
+static noinline void check_xa_mark_1(struct kunit *test, unsigned long index)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned int order;
 	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 8 : 1;

···
 	XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_1));

 	/* Storing NULL clears marks, and they can't be set again */
-	xa_erase_index(xa, index);
+	xa_erase_index(test, xa, index);
 	XA_BUG_ON(xa, !xa_empty(xa));
 	XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_0));
 	xa_set_mark(xa, index, XA_MARK_0);
···
 		XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
 		XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_1));
 		XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_2));
-		xa_erase_index(xa, index);
-		xa_erase_index(xa, next);
+		xa_erase_index(test, xa, index);
+		xa_erase_index(test, xa, next);
 		XA_BUG_ON(xa, !xa_empty(xa));
 	}
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_xa_mark_2(struct xarray *xa)
+static noinline void check_xa_mark_2(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, 0);
 	unsigned long index;
 	unsigned int count = 0;
···
 	xa_destroy(xa);
 }

-static noinline void check_xa_mark_3(struct xarray *xa)
+static noinline void check_xa_mark_3(struct kunit *test)
 {
 #ifdef CONFIG_XARRAY_MULTI
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, 0x41);
 	void *entry;
 	int count = 0;
···
 #endif
 }

-static noinline void check_xa_mark(struct xarray *xa)
+static noinline void check_xa_mark(struct kunit *test)
 {
 	unsigned long index;

 	for (index = 0; index < 16384; index += 4)
-		check_xa_mark_1(xa, index);
+		check_xa_mark_1(test, index);

-	check_xa_mark_2(xa);
-	check_xa_mark_3(xa);
+	check_xa_mark_2(test);
+	check_xa_mark_3(test);
 }

-static noinline void check_xa_shrink(struct xarray *xa)
+static noinline void check_xa_shrink(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, 1);
 	struct xa_node *node;
 	unsigned int order;
···
 	XA_BUG_ON(xa, xas_load(&xas) != NULL);
 	xas_unlock(&xas);
 	XA_BUG_ON(xa, xa_load(xa, 0) != xa_mk_value(0));
-	xa_erase_index(xa, 0);
+	xa_erase_index(test, xa, 0);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	for (order = 0; order < max_order; order++) {
···
 		XA_BUG_ON(xa, xa_head(xa) == node);
 		rcu_read_unlock();
 		XA_BUG_ON(xa, xa_load(xa, max + 1) != NULL);
-		xa_erase_index(xa, ULONG_MAX);
+		xa_erase_index(test, xa, ULONG_MAX);
 		XA_BUG_ON(xa, xa->xa_head != node);
-		xa_erase_index(xa, 0);
+		xa_erase_index(test, xa, 0);
 	}
 }

-static noinline void check_insert(struct xarray *xa)
+static noinline void check_insert(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned long i;

 	for (i = 0; i < 1024; i++) {
-		xa_insert_index(xa, i);
+		xa_insert_index(test, xa, i);
 		XA_BUG_ON(xa, xa_load(xa, i - 1) != NULL);
 		XA_BUG_ON(xa, xa_load(xa, i + 1) != NULL);
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);
 	}

 	for (i = 10; i < BITS_PER_LONG; i++) {
-		xa_insert_index(xa, 1UL << i);
+		xa_insert_index(test, xa, 1UL << i);
 		XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 1) != NULL);
 		XA_BUG_ON(xa, xa_load(xa, (1UL << i) + 1) != NULL);
-		xa_erase_index(xa, 1UL << i);
+		xa_erase_index(test, xa, 1UL << i);

-		xa_insert_index(xa, (1UL << i) - 1);
+		xa_insert_index(test, xa, (1UL << i) - 1);
 		XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 2) != NULL);
 		XA_BUG_ON(xa, xa_load(xa, 1UL << i) != NULL);
-		xa_erase_index(xa, (1UL << i) - 1);
+		xa_erase_index(test, xa, (1UL << i) - 1);
 	}

-	xa_insert_index(xa, ~0UL);
+	xa_insert_index(test, xa, ~0UL);
 	XA_BUG_ON(xa, xa_load(xa, 0UL) != NULL);
 	XA_BUG_ON(xa, xa_load(xa, ~1UL) != NULL);
-	xa_erase_index(xa, ~0UL);
+	xa_erase_index(test, xa, ~0UL);

 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_cmpxchg(struct xarray *xa)
+static noinline void check_cmpxchg(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	void *FIVE = xa_mk_value(5);
 	void *SIX = xa_mk_value(6);
 	void *LOTS = xa_mk_value(12345678);
···
 	XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY);
 	XA_BUG_ON(xa, xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL) != FIVE);
 	XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) == -EBUSY);
-	xa_erase_index(xa, 12345678);
-	xa_erase_index(xa, 5);
+	xa_erase_index(test, xa, 12345678);
+	xa_erase_index(test, xa, 5);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_cmpxchg_order(struct xarray *xa)
+static noinline void check_cmpxchg_order(struct kunit *test)
 {
 #ifdef CONFIG_XARRAY_MULTI
+	struct xarray *xa = xa_param(test);
+
 	void *FIVE = xa_mk_value(5);
 	unsigned int i, order = 3;
···
 #endif
 }

-static noinline void check_reserve(struct xarray *xa)
+static noinline void check_reserve(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	void *entry;
 	unsigned long index;
 	int count;
···
 	XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, xa_store_index(xa, 12345678, GFP_NOWAIT) != NULL);
 	xa_release(xa, 12345678);
-	xa_erase_index(xa, 12345678);
+	xa_erase_index(test, xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	/* cmpxchg sees a reserved entry as ZERO */
···
 	XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, XA_ZERO_ENTRY,
 				xa_mk_value(12345678), GFP_NOWAIT) != NULL);
 	xa_release(xa, 12345678);
-	xa_erase_index(xa, 12345678);
+	xa_erase_index(test, xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	/* xa_insert treats it as busy */
···
 	xa_destroy(xa);
 }

-static noinline void check_xas_erase(struct xarray *xa)
+static noinline void check_xas_erase(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, 0);
 	void *entry;
 	unsigned long i, j;
···
 }

 #ifdef CONFIG_XARRAY_MULTI
-static noinline void check_multi_store_1(struct xarray *xa, unsigned long index,
+static noinline void check_multi_store_1(struct kunit *test, unsigned long index,
 		unsigned int order)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, index);
 	unsigned long min = index & ~((1UL << order) - 1);
 	unsigned long max = min + (1UL << order);
···
 	XA_BUG_ON(xa, xa_load(xa, max) != NULL);
 	XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);

-	xa_erase_index(xa, min);
+	xa_erase_index(test, xa, min);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_multi_store_2(struct xarray *xa, unsigned long index,
+static noinline void check_multi_store_2(struct kunit *test, unsigned long index,
 		unsigned int order)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, index);
 	xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
···
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_multi_store_3(struct xarray *xa, unsigned long index,
+static noinline void check_multi_store_3(struct kunit *test, unsigned long index,
 		unsigned int order)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, 0);
 	void *entry;
 	int n = 0;
···
 }
 #endif

-static noinline void check_multi_store(struct xarray *xa)
+static noinline void check_multi_store(struct kunit *test)
 {
 #ifdef CONFIG_XARRAY_MULTI
+	struct xarray *xa = xa_param(test);
+
 	unsigned long i, j, k;
 	unsigned int max_order = (sizeof(long) == 4) ? 30 : 60;
···
 	}

 	for (i = 0; i < 20; i++) {
-		check_multi_store_1(xa, 200, i);
-		check_multi_store_1(xa, 0, i);
-		check_multi_store_1(xa, (1UL << i) + 1, i);
+		check_multi_store_1(test, 200, i);
+		check_multi_store_1(test, 0, i);
+		check_multi_store_1(test, (1UL << i) + 1, i);
 	}
-	check_multi_store_2(xa, 4095, 9);
+	check_multi_store_2(test, 4095, 9);

 	for (i = 1; i < 20; i++) {
-		check_multi_store_3(xa, 0, i);
-		check_multi_store_3(xa, 1UL << i, i);
+		check_multi_store_3(test, 0, i);
+		check_multi_store_3(test, 1UL << i, i);
 	}
 #endif
 }

 #ifdef CONFIG_XARRAY_MULTI
 /* mimics page cache __filemap_add_folio() */
-static noinline void check_xa_multi_store_adv_add(struct xarray *xa,
+static noinline void check_xa_multi_store_adv_add(struct kunit *test,
 				unsigned long index,
 				unsigned int order,
 				void *p)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, index);
 	unsigned int nrpages = 1UL << order;
···
 }

 /* mimics page_cache_delete() */
-static noinline void check_xa_multi_store_adv_del_entry(struct xarray *xa,
+static noinline void check_xa_multi_store_adv_del_entry(struct kunit *test,
 				unsigned long index,
 				unsigned int order)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE(xas, xa, index);

 	xas_set_order(&xas, index, order);
···
 	xas_init_marks(&xas);
 }

-static noinline void check_xa_multi_store_adv_delete(struct xarray *xa,
+static noinline void check_xa_multi_store_adv_delete(struct kunit *test,
 				unsigned long index,
 				unsigned int order)
 {
+	struct xarray *xa = xa_param(test);
+
 	xa_lock_irq(xa);
-	check_xa_multi_store_adv_del_entry(xa, index, order);
+	check_xa_multi_store_adv_del_entry(test, index, order);
 	xa_unlock_irq(xa);
 }
···
 static unsigned long some_val_2 = 0xdeaddead;

 /* mimics the page cache usage */
-static noinline void check_xa_multi_store_adv(struct xarray *xa,
+static noinline void check_xa_multi_store_adv(struct kunit *test,
 				unsigned long pos,
 				unsigned int order)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned int nrpages = 1UL << order;
 	unsigned long index, base, next_index, next_next_index;
 	unsigned int i;
···
 	next_index = round_down(base + nrpages, nrpages);
 	next_next_index = round_down(next_index + nrpages, nrpages);

-	check_xa_multi_store_adv_add(xa, base, order, &some_val);
+	check_xa_multi_store_adv_add(test, base, order, &some_val);

 	for (i = 0; i < nrpages; i++)
 		XA_BUG_ON(xa, test_get_entry(xa, base + i) != &some_val);
···
 	XA_BUG_ON(xa, test_get_entry(xa, next_index) != NULL);

 	/* Use order 0 for the next item */
-	check_xa_multi_store_adv_add(xa, next_index, 0, &some_val_2);
+	check_xa_multi_store_adv_add(test, next_index, 0, &some_val_2);
 	XA_BUG_ON(xa, test_get_entry(xa, next_index) != &some_val_2);

 	/* Remove the next item */
-	check_xa_multi_store_adv_delete(xa, next_index, 0);
+	check_xa_multi_store_adv_delete(test, next_index, 0);

 	/* Now use order for a new pointer */
-	check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2);
+	check_xa_multi_store_adv_add(test, next_index, order, &some_val_2);

 	for (i = 0; i < nrpages; i++)
 		XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != &some_val_2);

-	check_xa_multi_store_adv_delete(xa, next_index, order);
-	check_xa_multi_store_adv_delete(xa, base, order);
+	check_xa_multi_store_adv_delete(test, next_index, order);
+	check_xa_multi_store_adv_delete(test, base, order);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	/* starting fresh again */
···
 	/* let's test some holes now */

 	/* hole at base and next_next */
-	check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2);
+	check_xa_multi_store_adv_add(test, next_index, order, &some_val_2);

 	for (i = 0; i < nrpages; i++)
 		XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL);
···
 	for (i = 0; i < nrpages; i++)
 		XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != NULL);

-	check_xa_multi_store_adv_delete(xa, next_index, order);
+	check_xa_multi_store_adv_delete(test, next_index, order);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	/* hole at base and next */

-	check_xa_multi_store_adv_add(xa, next_next_index, order, &some_val_2);
+	check_xa_multi_store_adv_add(test, next_next_index, order, &some_val_2);

 	for (i = 0; i < nrpages; i++)
 		XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL);
···
 	for (i = 0; i < nrpages; i++)
 		XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != &some_val_2);

-	check_xa_multi_store_adv_delete(xa, next_next_index, order);
+	check_xa_multi_store_adv_delete(test, next_next_index, order);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }
 #endif

-static noinline void check_multi_store_advanced(struct xarray *xa)
+static noinline void check_multi_store_advanced(struct kunit *test)
 {
 #ifdef CONFIG_XARRAY_MULTI
 	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1;
···
 	 */
 	for (pos = 7; pos < end; pos = (pos * pos) + 564) {
 		for (i = 0; i < max_order; i++) {
-			check_xa_multi_store_adv(xa, pos, i);
-			check_xa_multi_store_adv(xa, pos + 157, i);
+			check_xa_multi_store_adv(test, pos, i);
+			check_xa_multi_store_adv(test, pos + 157, i);
 		}
 	}
 #endif
 }

-static noinline void check_xa_alloc_1(struct xarray *xa, unsigned int base)
+static noinline void check_xa_alloc_1(struct kunit *test, struct xarray *xa, unsigned int base)
 {
 	int i;
 	u32 id;

 	XA_BUG_ON(xa, !xa_empty(xa));
 	/* An empty array should assign %base to the first alloc */
-	xa_alloc_index(xa, base, GFP_KERNEL);
+	xa_alloc_index(test, xa, base, GFP_KERNEL);

 	/* Erasing it should make the array empty again */
-	xa_erase_index(xa, base);
+	xa_erase_index(test, xa, base);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	/* And it should assign %base again */
-	xa_alloc_index(xa, base, GFP_KERNEL);
+	xa_alloc_index(test, xa, base, GFP_KERNEL);

 	/* Allocating and then erasing a lot should not lose base */
 	for (i = base + 1; i < 2 * XA_CHUNK_SIZE; i++)
-		xa_alloc_index(xa, i, GFP_KERNEL);
+		xa_alloc_index(test, xa, i, GFP_KERNEL);
 	for (i = base; i < 2 * XA_CHUNK_SIZE; i++)
-		xa_erase_index(xa, i);
-	xa_alloc_index(xa, base, GFP_KERNEL);
+		xa_erase_index(test, xa, i);
+	xa_alloc_index(test, xa, base, GFP_KERNEL);

 	/* Destroying the array should do the same as erasing */
 	xa_destroy(xa);

 	/* And it should assign %base again */
-	xa_alloc_index(xa, base, GFP_KERNEL);
+	xa_alloc_index(test, xa, base, GFP_KERNEL);

 	/* The next assigned ID should be base+1 */
-	xa_alloc_index(xa, base + 1, GFP_KERNEL);
-	xa_erase_index(xa, base + 1);
+	xa_alloc_index(test, xa, base + 1, GFP_KERNEL);
+	xa_erase_index(test, xa, base + 1);

 	/* Storing a value should mark it used */
 	xa_store_index(xa, base + 1, GFP_KERNEL);
-	xa_alloc_index(xa, base + 2, GFP_KERNEL);
+	xa_alloc_index(test, xa, base + 2, GFP_KERNEL);

 	/* If we then erase base, it should be free */
-	xa_erase_index(xa, base);
-	xa_alloc_index(xa, base, GFP_KERNEL);
+	xa_erase_index(test, xa, base);
+	xa_alloc_index(test, xa, base, GFP_KERNEL);

-	xa_erase_index(xa, base + 1);
-	xa_erase_index(xa, base + 2);
+	xa_erase_index(test, xa, base + 1);
+	xa_erase_index(test, xa, base + 2);

 	for (i = 1; i < 5000; i++) {
-		xa_alloc_index(xa, base + i, GFP_KERNEL);
+		xa_alloc_index(test, xa, base + i, GFP_KERNEL);
 	}

 	xa_destroy(xa);
···

 	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5),
 				GFP_KERNEL) != -EBUSY);
-	XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != NULL);
 	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5),
 				GFP_KERNEL) != -EBUSY);
-	xa_erase_index(xa, 3);
+	xa_erase_index(test, xa, 3);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_xa_alloc_2(struct xarray *xa, unsigned int base)
+static noinline void check_xa_alloc_2(struct kunit *test, struct xarray *xa, unsigned int base)
 {
 	unsigned int i, id;
 	unsigned long index;
···
 	XA_BUG_ON(xa, id != 5);

 	xa_for_each(xa, index, entry) {
-		xa_erase_index(xa, index);
+		xa_erase_index(test, xa, index);
 	}

 	for (i = base; i < base + 9; i++) {
···
 	xa_destroy(xa);
 }

-static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base)
+static noinline void check_xa_alloc_3(struct kunit *test, struct xarray *xa, unsigned int base)
 {
 	struct xa_limit limit = XA_LIMIT(1, 0x3fff);
 	u32 next = 0;
···
 	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(0x3ffd), limit,
 				&next, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, id != 0x3ffd);
-	xa_erase_index(xa, 0x3ffd);
-	xa_erase_index(xa, 1);
+	xa_erase_index(test, xa, 0x3ffd);
+	xa_erase_index(test, xa, 1);
 	XA_BUG_ON(xa, !xa_empty(xa));

 	for (i = 0x3ffe; i < 0x4003; i++) {
···

 	/* Check wrap-around is handled correctly */
 	if (base != 0)
-		xa_erase_index(xa, base);
-	xa_erase_index(xa, base + 1);
+		xa_erase_index(test, xa, base);
+	xa_erase_index(test, xa, base + 1);
 	next = UINT_MAX;
 	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(UINT_MAX),
 				xa_limit_32b, &next, GFP_KERNEL) != 0);
···
 	XA_BUG_ON(xa, id != base + 1);

 	xa_for_each(xa, index, entry)
-		xa_erase_index(xa, index);
+		xa_erase_index(test, xa, index);

 	XA_BUG_ON(xa, !xa_empty(xa));
 }
···
 static DEFINE_XARRAY_ALLOC(xa0);
 static DEFINE_XARRAY_ALLOC1(xa1);

-static noinline void check_xa_alloc(void)
+static noinline void check_xa_alloc(struct kunit *test)
 {
-	check_xa_alloc_1(&xa0, 0);
-	check_xa_alloc_1(&xa1, 1);
-	check_xa_alloc_2(&xa0, 0);
-	check_xa_alloc_2(&xa1, 1);
-	check_xa_alloc_3(&xa0, 0);
-	check_xa_alloc_3(&xa1, 1);
+	check_xa_alloc_1(test, &xa0, 0);
+	check_xa_alloc_1(test, &xa1, 1);
+	check_xa_alloc_2(test, &xa0, 0);
+	check_xa_alloc_2(test, &xa1, 1);
+	check_xa_alloc_3(test, &xa0, 0);
+	check_xa_alloc_3(test, &xa1, 1);
 }

-static noinline void __check_store_iter(struct xarray *xa, unsigned long start,
+static noinline void __check_store_iter(struct kunit *test, unsigned long start,
 			unsigned int order, unsigned int present)
 {
+	struct xarray *xa = xa_param(test);
+
 	XA_STATE_ORDER(xas, xa, start, order);
 	void *entry;
 	unsigned int count = 0;
···
 	XA_BUG_ON(xa, xa_load(xa, start) != xa_mk_index(start));
 	XA_BUG_ON(xa, xa_load(xa, start + (1UL << order) - 1) !=
 			xa_mk_index(start));
-	xa_erase_index(xa, start);
+	xa_erase_index(test, xa, start);
 }

-static noinline void check_store_iter(struct xarray *xa)
+static noinline void check_store_iter(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned int i, j;
 	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1;

 	for (i = 0; i < max_order; i++) {
 		unsigned int min = 1 << i;
 		unsigned int max = (2 << i) - 1;
-		__check_store_iter(xa, 0, i, 0);
+		__check_store_iter(test, 0, i, 0);
 		XA_BUG_ON(xa, !xa_empty(xa));
-		__check_store_iter(xa, min, i, 0);
+		__check_store_iter(test, min, i, 0);
 		XA_BUG_ON(xa, !xa_empty(xa));

 		xa_store_index(xa, min, GFP_KERNEL);
-		__check_store_iter(xa, min, i, 1);
+		__check_store_iter(test, min, i, 1);
 		XA_BUG_ON(xa, !xa_empty(xa));
 		xa_store_index(xa, max, GFP_KERNEL);
-		__check_store_iter(xa, min, i, 1);
+		__check_store_iter(test, min, i, 1);
 		XA_BUG_ON(xa, !xa_empty(xa));

 		for (j = 0; j < min; j++)
 			xa_store_index(xa, j, GFP_KERNEL);
-		__check_store_iter(xa, 0, i, min);
+		__check_store_iter(test, 0, i, min);
 		XA_BUG_ON(xa, !xa_empty(xa));
 		for (j = 0; j < min; j++)
 			xa_store_index(xa, min + j, GFP_KERNEL);
-		__check_store_iter(xa, min, i, min);
+		__check_store_iter(test, min, i, min);
 		XA_BUG_ON(xa, !xa_empty(xa));
 	}
 #ifdef CONFIG_XARRAY_MULTI
 	xa_store_index(xa, 63, GFP_KERNEL);
 	xa_store_index(xa, 65, GFP_KERNEL);
-	__check_store_iter(xa, 64, 2, 1);
-	xa_erase_index(xa, 63);
+	__check_store_iter(test, 64, 2, 1);
+	xa_erase_index(test, xa, 63);
 #endif
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_multi_find_1(struct xarray *xa, unsigned order)
+static noinline void check_multi_find_1(struct kunit *test, unsigned int order)
 {
 #ifdef CONFIG_XARRAY_MULTI
+	struct xarray *xa = xa_param(test);
+
 	unsigned long multi = 3 << order;
 	unsigned long next = 4 << order;
 	unsigned long index;
···
 	XA_BUG_ON(xa, xa_find_after(xa, &index, next, XA_PRESENT) != NULL);
 	XA_BUG_ON(xa, index != next);

-	xa_erase_index(xa, multi);
-	xa_erase_index(xa, next);
-	xa_erase_index(xa, next + 1);
+	xa_erase_index(test, xa, multi);
+	xa_erase_index(test, xa, next);
+	xa_erase_index(test, xa, next + 1);
 	XA_BUG_ON(xa, !xa_empty(xa));
 #endif
 }

-static noinline void check_multi_find_2(struct xarray *xa)
+static noinline void check_multi_find_2(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 10 : 1;
 	unsigned int i, j;
 	void *entry;
···
 				GFP_KERNEL);
 			rcu_read_lock();
 			xas_for_each(&xas, entry, ULONG_MAX) {
-				xa_erase_index(xa, index);
+				xa_erase_index(test, xa, index);
 			}
 			rcu_read_unlock();
-			xa_erase_index(xa, index - 1);
+			xa_erase_index(test, xa, index - 1);
 			XA_BUG_ON(xa, !xa_empty(xa));
 		}
 	}
 }

-static noinline void check_multi_find_3(struct xarray *xa)
+static noinline void check_multi_find_3(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned int order;

 	for (order = 5; order < order_limit; order++) {
···
 		XA_BUG_ON(xa, !xa_empty(xa));
 		xa_store_order(xa, 0, order - 4, xa_mk_index(0), GFP_KERNEL);
 		XA_BUG_ON(xa, xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT));
-		xa_erase_index(xa, 0);
+		xa_erase_index(test, xa, 0);
 	}
 }

-static noinline void check_find_1(struct xarray *xa)
+static noinline void check_find_1(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	unsigned long i, j, k;

 	XA_BUG_ON(xa, !xa_empty(xa));
···
 			else
 				XA_BUG_ON(xa, entry != NULL);
 			}
-			xa_erase_index(xa, j);
+			xa_erase_index(test, xa, j);
 			XA_BUG_ON(xa, xa_get_mark(xa, j, XA_MARK_0));
 			XA_BUG_ON(xa, !xa_get_mark(xa, i, XA_MARK_0));
 		}
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);
 		XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
 	}
 	XA_BUG_ON(xa, !xa_empty(xa));
 }

-static noinline void check_find_2(struct xarray *xa)
+static noinline void check_find_2(struct kunit *test)
 {
+	struct xarray *xa = xa_param(test);
+
 	void *entry;
 	unsigned long i, j, index;
```
1342 1291 ··· 1358 1303 xa_destroy(xa); 1359 1304 } 1360 1305 1361 - static noinline void check_find_3(struct xarray *xa) 1306 + static noinline void check_find_3(struct kunit *test) 1362 1307 { 1308 + struct xarray *xa = xa_param(test); 1309 + 1363 1310 XA_STATE(xas, xa, 0); 1364 1311 unsigned long i, j, k; 1365 1312 void *entry; ··· 1385 1328 xa_destroy(xa); 1386 1329 } 1387 1330 1388 - static noinline void check_find_4(struct xarray *xa) 1331 + static noinline void check_find_4(struct kunit *test) 1389 1332 { 1333 + struct xarray *xa = xa_param(test); 1334 + 1390 1335 unsigned long index = 0; 1391 1336 void *entry; 1392 1337 ··· 1400 1341 entry = xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT); 1401 1342 XA_BUG_ON(xa, entry); 1402 1343 1403 - xa_erase_index(xa, ULONG_MAX); 1344 + xa_erase_index(test, xa, ULONG_MAX); 1404 1345 } 1405 1346 1406 - static noinline void check_find(struct xarray *xa) 1347 + static noinline void check_find(struct kunit *test) 1407 1348 { 1408 1349 unsigned i; 1409 1350 1410 - check_find_1(xa); 1411 - check_find_2(xa); 1412 - check_find_3(xa); 1413 - check_find_4(xa); 1351 + check_find_1(test); 1352 + check_find_2(test); 1353 + check_find_3(test); 1354 + check_find_4(test); 1414 1355 1415 1356 for (i = 2; i < 10; i++) 1416 - check_multi_find_1(xa, i); 1417 - check_multi_find_2(xa); 1418 - check_multi_find_3(xa); 1357 + check_multi_find_1(test, i); 1358 + check_multi_find_2(test); 1359 + check_multi_find_3(test); 1419 1360 } 1420 1361 1421 1362 /* See find_swap_entry() in mm/shmem.c */ ··· 1441 1382 return entry ? 
xas.xa_index : -1; 1442 1383 } 1443 1384 1444 - static noinline void check_find_entry(struct xarray *xa) 1385 + static noinline void check_find_entry(struct kunit *test) 1445 1386 { 1387 + struct xarray *xa = xa_param(test); 1388 + 1446 1389 #ifdef CONFIG_XARRAY_MULTI 1447 1390 unsigned int order; 1448 1391 unsigned long offset, index; ··· 1471 1410 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); 1472 1411 XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1); 1473 1412 XA_BUG_ON(xa, xa_find_entry(xa, xa_mk_index(ULONG_MAX)) != -1); 1474 - xa_erase_index(xa, ULONG_MAX); 1413 + xa_erase_index(test, xa, ULONG_MAX); 1475 1414 XA_BUG_ON(xa, !xa_empty(xa)); 1476 1415 } 1477 1416 1478 - static noinline void check_pause(struct xarray *xa) 1417 + static noinline void check_pause(struct kunit *test) 1479 1418 { 1419 + struct xarray *xa = xa_param(test); 1420 + 1480 1421 XA_STATE(xas, xa, 0); 1481 1422 void *entry; 1482 1423 unsigned int order; ··· 1513 1450 xa_destroy(xa); 1514 1451 } 1515 1452 1516 - static noinline void check_move_tiny(struct xarray *xa) 1453 + static noinline void check_move_tiny(struct kunit *test) 1517 1454 { 1455 + struct xarray *xa = xa_param(test); 1456 + 1518 1457 XA_STATE(xas, xa, 0); 1519 1458 1520 1459 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1533 1468 XA_BUG_ON(xa, xas_prev(&xas) != xa_mk_index(0)); 1534 1469 XA_BUG_ON(xa, xas_prev(&xas) != NULL); 1535 1470 rcu_read_unlock(); 1536 - xa_erase_index(xa, 0); 1471 + xa_erase_index(test, xa, 0); 1537 1472 XA_BUG_ON(xa, !xa_empty(xa)); 1538 1473 } 1539 1474 1540 - static noinline void check_move_max(struct xarray *xa) 1475 + static noinline void check_move_max(struct kunit *test) 1541 1476 { 1477 + struct xarray *xa = xa_param(test); 1478 + 1542 1479 XA_STATE(xas, xa, 0); 1543 1480 1544 1481 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); ··· 1556 1489 XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != NULL); 1557 1490 rcu_read_unlock(); 1558 1491 1559 - xa_erase_index(xa, ULONG_MAX); 1492 + xa_erase_index(test, xa, ULONG_MAX); 1560 
1493 XA_BUG_ON(xa, !xa_empty(xa)); 1561 1494 } 1562 1495 1563 - static noinline void check_move_small(struct xarray *xa, unsigned long idx) 1496 + static noinline void check_move_small(struct kunit *test, unsigned long idx) 1564 1497 { 1498 + struct xarray *xa = xa_param(test); 1499 + 1565 1500 XA_STATE(xas, xa, 0); 1566 1501 unsigned long i; 1567 1502 ··· 1605 1536 XA_BUG_ON(xa, xas.xa_index != ULONG_MAX); 1606 1537 rcu_read_unlock(); 1607 1538 1608 - xa_erase_index(xa, 0); 1609 - xa_erase_index(xa, idx); 1539 + xa_erase_index(test, xa, 0); 1540 + xa_erase_index(test, xa, idx); 1610 1541 XA_BUG_ON(xa, !xa_empty(xa)); 1611 1542 } 1612 1543 1613 - static noinline void check_move(struct xarray *xa) 1544 + static noinline void check_move(struct kunit *test) 1614 1545 { 1546 + struct xarray *xa = xa_param(test); 1547 + 1615 1548 XA_STATE(xas, xa, (1 << 16) - 1); 1616 1549 unsigned long i; 1617 1550 ··· 1640 1569 rcu_read_unlock(); 1641 1570 1642 1571 for (i = (1 << 8); i < (1 << 15); i++) 1643 - xa_erase_index(xa, i); 1572 + xa_erase_index(test, xa, i); 1644 1573 1645 1574 i = xas.xa_index; 1646 1575 ··· 1671 1600 1672 1601 xa_destroy(xa); 1673 1602 1674 - check_move_tiny(xa); 1675 - check_move_max(xa); 1603 + check_move_tiny(test); 1604 + check_move_max(test); 1676 1605 1677 1606 for (i = 0; i < 16; i++) 1678 - check_move_small(xa, 1UL << i); 1607 + check_move_small(test, 1UL << i); 1679 1608 1680 1609 for (i = 2; i < 16; i++) 1681 - check_move_small(xa, (1UL << i) - 1); 1610 + check_move_small(test, (1UL << i) - 1); 1682 1611 } 1683 1612 1684 - static noinline void xa_store_many_order(struct xarray *xa, 1613 + static noinline void xa_store_many_order(struct kunit *test, struct xarray *xa, 1685 1614 unsigned long index, unsigned order) 1686 1615 { 1687 1616 XA_STATE_ORDER(xas, xa, index, order); ··· 1704 1633 XA_BUG_ON(xa, xas_error(&xas)); 1705 1634 } 1706 1635 1707 - static noinline void check_create_range_1(struct xarray *xa, 1636 + static noinline void 
check_create_range_1(struct kunit *test, 1708 1637 unsigned long index, unsigned order) 1709 1638 { 1639 + struct xarray *xa = xa_param(test); 1640 + 1710 1641 unsigned long i; 1711 1642 1712 - xa_store_many_order(xa, index, order); 1643 + xa_store_many_order(test, xa, index, order); 1713 1644 for (i = index; i < index + (1UL << order); i++) 1714 - xa_erase_index(xa, i); 1645 + xa_erase_index(test, xa, i); 1715 1646 XA_BUG_ON(xa, !xa_empty(xa)); 1716 1647 } 1717 1648 1718 - static noinline void check_create_range_2(struct xarray *xa, unsigned order) 1649 + static noinline void check_create_range_2(struct kunit *test, unsigned int order) 1719 1650 { 1651 + struct xarray *xa = xa_param(test); 1652 + 1720 1653 unsigned long i; 1721 1654 unsigned long nr = 1UL << order; 1722 1655 1723 1656 for (i = 0; i < nr * nr; i += nr) 1724 - xa_store_many_order(xa, i, order); 1657 + xa_store_many_order(test, xa, i, order); 1725 1658 for (i = 0; i < nr * nr; i++) 1726 - xa_erase_index(xa, i); 1659 + xa_erase_index(test, xa, i); 1727 1660 XA_BUG_ON(xa, !xa_empty(xa)); 1728 1661 } 1729 1662 1730 - static noinline void check_create_range_3(void) 1663 + static noinline void check_create_range_3(struct kunit *test) 1731 1664 { 1732 1665 XA_STATE(xas, NULL, 0); 1733 1666 xas_set_err(&xas, -EEXIST); ··· 1739 1664 XA_BUG_ON(NULL, xas_error(&xas) != -EEXIST); 1740 1665 } 1741 1666 1742 - static noinline void check_create_range_4(struct xarray *xa, 1667 + static noinline void check_create_range_4(struct kunit *test, 1743 1668 unsigned long index, unsigned order) 1744 1669 { 1670 + struct xarray *xa = xa_param(test); 1671 + 1745 1672 XA_STATE_ORDER(xas, xa, index, order); 1746 1673 unsigned long base = xas.xa_index; 1747 1674 unsigned long i = 0; ··· 1769 1692 XA_BUG_ON(xa, xas_error(&xas)); 1770 1693 1771 1694 for (i = base; i < base + (1UL << order); i++) 1772 - xa_erase_index(xa, i); 1695 + xa_erase_index(test, xa, i); 1773 1696 XA_BUG_ON(xa, !xa_empty(xa)); 1774 1697 } 1775 1698 1776 - 
static noinline void check_create_range_5(struct xarray *xa, 1699 + static noinline void check_create_range_5(struct kunit *test, 1777 1700 unsigned long index, unsigned int order) 1778 1701 { 1702 + struct xarray *xa = xa_param(test); 1703 + 1779 1704 XA_STATE_ORDER(xas, xa, index, order); 1780 1705 unsigned int i; 1781 1706 ··· 1794 1715 xa_destroy(xa); 1795 1716 } 1796 1717 1797 - static noinline void check_create_range(struct xarray *xa) 1718 + static noinline void check_create_range(struct kunit *test) 1798 1719 { 1799 1720 unsigned int order; 1800 1721 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 12 : 1; 1801 1722 1802 1723 for (order = 0; order < max_order; order++) { 1803 - check_create_range_1(xa, 0, order); 1804 - check_create_range_1(xa, 1U << order, order); 1805 - check_create_range_1(xa, 2U << order, order); 1806 - check_create_range_1(xa, 3U << order, order); 1807 - check_create_range_1(xa, 1U << 24, order); 1724 + check_create_range_1(test, 0, order); 1725 + check_create_range_1(test, 1U << order, order); 1726 + check_create_range_1(test, 2U << order, order); 1727 + check_create_range_1(test, 3U << order, order); 1728 + check_create_range_1(test, 1U << 24, order); 1808 1729 if (order < 10) 1809 - check_create_range_2(xa, order); 1730 + check_create_range_2(test, order); 1810 1731 1811 - check_create_range_4(xa, 0, order); 1812 - check_create_range_4(xa, 1U << order, order); 1813 - check_create_range_4(xa, 2U << order, order); 1814 - check_create_range_4(xa, 3U << order, order); 1815 - check_create_range_4(xa, 1U << 24, order); 1732 + check_create_range_4(test, 0, order); 1733 + check_create_range_4(test, 1U << order, order); 1734 + check_create_range_4(test, 2U << order, order); 1735 + check_create_range_4(test, 3U << order, order); 1736 + check_create_range_4(test, 1U << 24, order); 1816 1737 1817 - check_create_range_4(xa, 1, order); 1818 - check_create_range_4(xa, (1U << order) + 1, order); 1819 - check_create_range_4(xa, (2U << 
order) + 1, order); 1820 - check_create_range_4(xa, (2U << order) - 1, order); 1821 - check_create_range_4(xa, (3U << order) + 1, order); 1822 - check_create_range_4(xa, (3U << order) - 1, order); 1823 - check_create_range_4(xa, (1U << 24) + 1, order); 1738 + check_create_range_4(test, 1, order); 1739 + check_create_range_4(test, (1U << order) + 1, order); 1740 + check_create_range_4(test, (2U << order) + 1, order); 1741 + check_create_range_4(test, (2U << order) - 1, order); 1742 + check_create_range_4(test, (3U << order) + 1, order); 1743 + check_create_range_4(test, (3U << order) - 1, order); 1744 + check_create_range_4(test, (1U << 24) + 1, order); 1824 1745 1825 - check_create_range_5(xa, 0, order); 1826 - check_create_range_5(xa, (1U << order), order); 1746 + check_create_range_5(test, 0, order); 1747 + check_create_range_5(test, (1U << order), order); 1827 1748 } 1828 1749 1829 - check_create_range_3(); 1750 + check_create_range_3(test); 1830 1751 } 1831 1752 1832 - static noinline void __check_store_range(struct xarray *xa, unsigned long first, 1753 + static noinline void __check_store_range(struct kunit *test, unsigned long first, 1833 1754 unsigned long last) 1834 1755 { 1756 + struct xarray *xa = xa_param(test); 1757 + 1835 1758 #ifdef CONFIG_XARRAY_MULTI 1836 1759 xa_store_range(xa, first, last, xa_mk_index(first), GFP_KERNEL); 1837 1760 ··· 1848 1767 XA_BUG_ON(xa, !xa_empty(xa)); 1849 1768 } 1850 1769 1851 - static noinline void check_store_range(struct xarray *xa) 1770 + static noinline void check_store_range(struct kunit *test) 1852 1771 { 1853 1772 unsigned long i, j; 1854 1773 1855 1774 for (i = 0; i < 128; i++) { 1856 1775 for (j = i; j < 128; j++) { 1857 - __check_store_range(xa, i, j); 1858 - __check_store_range(xa, 128 + i, 128 + j); 1859 - __check_store_range(xa, 4095 + i, 4095 + j); 1860 - __check_store_range(xa, 4096 + i, 4096 + j); 1861 - __check_store_range(xa, 123456 + i, 123456 + j); 1862 - __check_store_range(xa, (1 << 24) + i, (1 << 
24) + j); 1776 + __check_store_range(test, i, j); 1777 + __check_store_range(test, 128 + i, 128 + j); 1778 + __check_store_range(test, 4095 + i, 4095 + j); 1779 + __check_store_range(test, 4096 + i, 4096 + j); 1780 + __check_store_range(test, 123456 + i, 123456 + j); 1781 + __check_store_range(test, (1 << 24) + i, (1 << 24) + j); 1863 1782 } 1864 1783 } 1865 1784 } 1866 1785 1867 1786 #ifdef CONFIG_XARRAY_MULTI 1868 - static void check_split_1(struct xarray *xa, unsigned long index, 1787 + static void check_split_1(struct kunit *test, unsigned long index, 1869 1788 unsigned int order, unsigned int new_order) 1870 1789 { 1790 + struct xarray *xa = xa_param(test); 1791 + 1871 1792 XA_STATE_ORDER(xas, xa, index, new_order); 1872 1793 unsigned int i, found; 1873 1794 void *entry; ··· 1905 1822 xa_destroy(xa); 1906 1823 } 1907 1824 1908 - static noinline void check_split(struct xarray *xa) 1825 + static noinline void check_split(struct kunit *test) 1909 1826 { 1827 + struct xarray *xa = xa_param(test); 1828 + 1910 1829 unsigned int order, new_order; 1911 1830 1912 1831 XA_BUG_ON(xa, !xa_empty(xa)); 1913 1832 1914 1833 for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) { 1915 1834 for (new_order = 0; new_order < order; new_order++) { 1916 - check_split_1(xa, 0, order, new_order); 1917 - check_split_1(xa, 1UL << order, order, new_order); 1918 - check_split_1(xa, 3UL << order, order, new_order); 1835 + check_split_1(test, 0, order, new_order); 1836 + check_split_1(test, 1UL << order, order, new_order); 1837 + check_split_1(test, 3UL << order, order, new_order); 1919 1838 } 1920 1839 } 1921 1840 } 1922 1841 #else 1923 - static void check_split(struct xarray *xa) { } 1842 + static void check_split(struct kunit *test) { } 1924 1843 #endif 1925 1844 1926 - static void check_align_1(struct xarray *xa, char *name) 1845 + static void check_align_1(struct kunit *test, char *name) 1927 1846 { 1847 + struct xarray *xa = xa_param(test); 1848 + 1928 1849 int i; 1929 1850 unsigned 
int id; 1930 1851 unsigned long index; ··· 1948 1861 * We should always be able to store without allocating memory after 1949 1862 * reserving a slot. 1950 1863 */ 1951 - static void check_align_2(struct xarray *xa, char *name) 1864 + static void check_align_2(struct kunit *test, char *name) 1952 1865 { 1866 + struct xarray *xa = xa_param(test); 1867 + 1953 1868 int i; 1954 1869 1955 1870 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1970 1881 XA_BUG_ON(xa, !xa_empty(xa)); 1971 1882 } 1972 1883 1973 - static noinline void check_align(struct xarray *xa) 1884 + static noinline void check_align(struct kunit *test) 1974 1885 { 1975 1886 char name[] = "Motorola 68000"; 1976 1887 1977 - check_align_1(xa, name); 1978 - check_align_1(xa, name + 1); 1979 - check_align_1(xa, name + 2); 1980 - check_align_1(xa, name + 3); 1981 - check_align_2(xa, name); 1888 + check_align_1(test, name); 1889 + check_align_1(test, name + 1); 1890 + check_align_1(test, name + 2); 1891 + check_align_1(test, name + 3); 1892 + check_align_2(test, name); 1982 1893 } 1983 1894 1984 1895 static LIST_HEAD(shadow_nodes); ··· 1994 1905 } 1995 1906 } 1996 1907 1997 - static noinline void shadow_remove(struct xarray *xa) 1908 + static noinline void shadow_remove(struct kunit *test, struct xarray *xa) 1998 1909 { 1999 1910 struct xa_node *node; 2000 1911 ··· 2008 1919 xa_unlock(xa); 2009 1920 } 2010 1921 2011 - static noinline void check_workingset(struct xarray *xa, unsigned long index) 1922 + struct workingset_testcase { 1923 + struct xarray *xa; 1924 + unsigned long index; 1925 + }; 1926 + 1927 + static noinline void check_workingset(struct kunit *test) 2012 1928 { 1929 + struct workingset_testcase tc = *(struct workingset_testcase *)test->param_value; 1930 + struct xarray *xa = tc.xa; 1931 + unsigned long index = tc.index; 1932 + 2013 1933 XA_STATE(xas, xa, index); 2014 1934 xas_set_update(&xas, test_update_node); 2015 1935 ··· 2041 1943 xas_unlock(&xas); 2042 1944 XA_BUG_ON(xa, list_empty(&shadow_nodes)); 2043 
1945 2044 - shadow_remove(xa); 1946 + shadow_remove(test, xa); 2045 1947 XA_BUG_ON(xa, !list_empty(&shadow_nodes)); 2046 1948 XA_BUG_ON(xa, !xa_empty(xa)); 2047 1949 } ··· 2050 1952 * Check that the pointer / value / sibling entries are accounted the 2051 1953 * way we expect them to be. 2052 1954 */ 2053 - static noinline void check_account(struct xarray *xa) 1955 + static noinline void check_account(struct kunit *test) 2054 1956 { 2055 1957 #ifdef CONFIG_XARRAY_MULTI 1958 + struct xarray *xa = xa_param(test); 1959 + 2056 1960 unsigned int order; 2057 1961 2058 1962 for (order = 1; order < 12; order++) { ··· 2081 1981 #endif 2082 1982 } 2083 1983 2084 - static noinline void check_get_order(struct xarray *xa) 1984 + static noinline void check_get_order(struct kunit *test) 2085 1985 { 1986 + struct xarray *xa = xa_param(test); 1987 + 2086 1988 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; 2087 1989 unsigned int order; 2088 1990 unsigned long i, j; ··· 2103 2001 } 2104 2002 } 2105 2003 2106 - static noinline void check_xas_get_order(struct xarray *xa) 2004 + static noinline void check_xas_get_order(struct kunit *test) 2107 2005 { 2006 + struct xarray *xa = xa_param(test); 2007 + 2108 2008 XA_STATE(xas, xa, 0); 2109 2009 2110 2010 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
20 : 1; ··· 2138 2034 } 2139 2035 } 2140 2036 2141 - static noinline void check_xas_conflict_get_order(struct xarray *xa) 2037 + static noinline void check_xas_conflict_get_order(struct kunit *test) 2142 2038 { 2039 + struct xarray *xa = xa_param(test); 2040 + 2143 2041 XA_STATE(xas, xa, 0); 2144 2042 2145 2043 void *entry; ··· 2198 2092 } 2199 2093 2200 2094 2201 - static noinline void check_destroy(struct xarray *xa) 2095 + static noinline void check_destroy(struct kunit *test) 2202 2096 { 2097 + struct xarray *xa = xa_param(test); 2098 + 2203 2099 unsigned long index; 2204 2100 2205 2101 XA_BUG_ON(xa, !xa_empty(xa)); ··· 2234 2126 } 2235 2127 2236 2128 static DEFINE_XARRAY(array); 2129 + static struct xarray *arrays[] = { &array }; 2130 + KUNIT_ARRAY_PARAM(array, arrays, NULL); 2237 2131 2238 - static int xarray_checks(void) 2239 - { 2240 - check_xa_err(&array); 2241 - check_xas_retry(&array); 2242 - check_xa_load(&array); 2243 - check_xa_mark(&array); 2244 - check_xa_shrink(&array); 2245 - check_xas_erase(&array); 2246 - check_insert(&array); 2247 - check_cmpxchg(&array); 2248 - check_cmpxchg_order(&array); 2249 - check_reserve(&array); 2250 - check_reserve(&xa0); 2251 - check_multi_store(&array); 2252 - check_multi_store_advanced(&array); 2253 - check_get_order(&array); 2254 - check_xas_get_order(&array); 2255 - check_xas_conflict_get_order(&array); 2256 - check_xa_alloc(); 2257 - check_find(&array); 2258 - check_find_entry(&array); 2259 - check_pause(&array); 2260 - check_account(&array); 2261 - check_destroy(&array); 2262 - check_move(&array); 2263 - check_create_range(&array); 2264 - check_store_range(&array); 2265 - check_store_iter(&array); 2266 - check_align(&xa0); 2267 - check_split(&array); 2132 + static struct xarray *xa0s[] = { &xa0 }; 2133 + KUNIT_ARRAY_PARAM(xa0, xa0s, NULL); 2268 2134 2269 - check_workingset(&array, 0); 2270 - check_workingset(&array, 64); 2271 - check_workingset(&array, 4096); 2135 + static struct workingset_testcase 
workingset_testcases[] = { 2136 + { &array, 0 }, 2137 + { &array, 64 }, 2138 + { &array, 4096 }, 2139 + }; 2140 + KUNIT_ARRAY_PARAM(workingset, workingset_testcases, NULL); 2272 2141 2273 - printk("XArray: %u of %u tests passed\n", tests_passed, tests_run); 2274 - return (tests_run == tests_passed) ? 0 : -EINVAL; 2275 - } 2142 + static struct kunit_case xarray_cases[] = { 2143 + KUNIT_CASE_PARAM(check_xa_err, array_gen_params), 2144 + KUNIT_CASE_PARAM(check_xas_retry, array_gen_params), 2145 + KUNIT_CASE_PARAM(check_xa_load, array_gen_params), 2146 + KUNIT_CASE_PARAM(check_xa_mark, array_gen_params), 2147 + KUNIT_CASE_PARAM(check_xa_shrink, array_gen_params), 2148 + KUNIT_CASE_PARAM(check_xas_erase, array_gen_params), 2149 + KUNIT_CASE_PARAM(check_insert, array_gen_params), 2150 + KUNIT_CASE_PARAM(check_cmpxchg, array_gen_params), 2151 + KUNIT_CASE_PARAM(check_cmpxchg_order, array_gen_params), 2152 + KUNIT_CASE_PARAM(check_reserve, array_gen_params), 2153 + KUNIT_CASE_PARAM(check_reserve, xa0_gen_params), 2154 + KUNIT_CASE_PARAM(check_multi_store, array_gen_params), 2155 + KUNIT_CASE_PARAM(check_multi_store_advanced, array_gen_params), 2156 + KUNIT_CASE_PARAM(check_get_order, array_gen_params), 2157 + KUNIT_CASE_PARAM(check_xas_get_order, array_gen_params), 2158 + KUNIT_CASE_PARAM(check_xas_conflict_get_order, array_gen_params), 2159 + KUNIT_CASE(check_xa_alloc), 2160 + KUNIT_CASE_PARAM(check_find, array_gen_params), 2161 + KUNIT_CASE_PARAM(check_find_entry, array_gen_params), 2162 + KUNIT_CASE_PARAM(check_pause, array_gen_params), 2163 + KUNIT_CASE_PARAM(check_account, array_gen_params), 2164 + KUNIT_CASE_PARAM(check_destroy, array_gen_params), 2165 + KUNIT_CASE_PARAM(check_move, array_gen_params), 2166 + KUNIT_CASE_PARAM(check_create_range, array_gen_params), 2167 + KUNIT_CASE_PARAM(check_store_range, array_gen_params), 2168 + KUNIT_CASE_PARAM(check_store_iter, array_gen_params), 2169 + KUNIT_CASE_PARAM(check_align, xa0_gen_params), 2170 + 
KUNIT_CASE_PARAM(check_split, array_gen_params), 2171 + KUNIT_CASE_PARAM(check_workingset, workingset_gen_params), 2172 + {}, 2173 + }; 2276 2174 2277 - static void xarray_exit(void) 2278 - { 2279 - } 2175 + static struct kunit_suite xarray_suite = { 2176 + .name = "xarray", 2177 + .test_cases = xarray_cases, 2178 + }; 2280 2179 2281 - module_init(xarray_checks); 2282 - module_exit(xarray_exit); 2180 + kunit_test_suite(xarray_suite); 2181 + 2283 2182 MODULE_AUTHOR("Matthew Wilcox <willy@infradead.org>"); 2284 2183 MODULE_DESCRIPTION("XArray API test module"); 2285 2184 MODULE_LICENSE("GPL");