
Merge tag 'linux_kselftest-next-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kselftest update from Shuah Khan:

- livepatch restructuring to move the test modules out of lib so they
are built as out-of-tree modules during the kselftest build. This makes
it easier to change, debug, and rebuild the tests by running make in the
selftests/livepatch directory, which is not currently possible since
the modules in lib/livepatch are built and installed using the main
makefile modules target.

- livepatch restructuring fixes for problems found by the kernel test
robot. The change skips the tests if kernel-devel isn't installed
(the default value of KDIR), or if the KDIR variable passed doesn't exist.

- resctrl test restructuring and new non-contiguous CBMs CAT test

- new ktap_helpers to print diagnostic messages, pass or fail tests based
on exit code, abort a test, and finish a test.

- a new test to verify power supply properties.

- a new ftrace test to exercise the function tracer across cpu hotplug.

- timeout increase for mqueue test to allow the test to run on i3.metal
AWS instances.

- minor spelling corrections in several tests.

- missing gitignore files and changes to existing gitignore files.
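The new ktap_helpers mentioned above emit KTAP v13 output. A minimal sketch of how a selftest script would use that API; the functions below are simplified stand-ins written for illustration (real scripts source tools/testing/selftests/kselftest/ktap_helpers.sh instead of redefining them):

```shell
# Simplified stand-ins for the ktap_helpers API, showing the
# KTAP v13 output format the helpers produce.
KTAP_CNT_PASS=0
KTAP_CNT_FAIL=0
KSFT_NUM_TESTS=0

ktap_print_header() { echo "TAP version 13"; }
ktap_set_plan() { KSFT_NUM_TESTS="$1"; echo "1..$KSFT_NUM_TESTS"; }
ktap_test_pass() {
	KTAP_CNT_PASS=$((KTAP_CNT_PASS + 1))
	echo "ok $((KTAP_CNT_PASS + KTAP_CNT_FAIL)) $1"
}
ktap_test_fail() {
	KTAP_CNT_FAIL=$((KTAP_CNT_FAIL + 1))
	echo "not ok $((KTAP_CNT_PASS + KTAP_CNT_FAIL)) $1"
}
# Pass or fail a test based on the exit code of the given command,
# mirroring the new ktap_test_result() helper.
ktap_test_result() {
	description="$1"
	shift
	if "$@"; then
		ktap_test_pass "$description"
	else
		ktap_test_fail "$description"
	fi
}

ktap_print_header
ktap_set_plan 2
ktap_test_result "true exits with success" true
ktap_test_result "false exits with failure" false
```

At the end of a real script, ktap_finished() prints the totals and exits with KSFT_PASS or KSFT_FAIL depending on whether pass+skip counts match the plan.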

* tag 'linux_kselftest-next-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (57 commits)
kselftest: Add basic test for probing the rust sample modules
selftests: lib.mk: Do not process TEST_GEN_MODS_DIR
selftests: livepatch: Avoid running the tests if kernel-devel is missing
selftests: livepatch: Add initial .gitignore
selftests/resctrl: Add non-contiguous CBMs CAT test
selftests/resctrl: Add resource_info_file_exists()
selftests/resctrl: Split validate_resctrl_feature_request()
selftests/resctrl: Add a helper for the non-contiguous test
selftests/resctrl: Add test groups and name L3 CAT test L3_CAT
selftests: sched: Fix spelling mistake "hiearchy" -> "hierarchy"
selftests/mqueue: Set timeout to 180 seconds
selftests/ftrace: Add test to exercize function tracer across cpu hotplug
selftest: ftrace: fix minor typo in log
selftests: thermal: intel: workload_hint: add missing gitignore
selftests: thermal: intel: power_floor: add missing gitignore
selftests: uevent: add missing gitignore
selftests: Add test to verify power supply properties
selftests: ktap_helpers: Add a helper to finish the test
selftests: ktap_helpers: Add a helper to abort the test
selftests: ktap_helpers: Add helper to pass/fail test based on exit code
...

+1958 -896
+4
Documentation/dev-tools/kselftest.rst
··· 245 245 TEST_PROGS, TEST_GEN_PROGS mean it is the executable tested by 246 246 default. 247 247 248 + TEST_GEN_MODS_DIR should be used by tests that require modules to be built 249 + before the test starts. The variable will contain the name of the directory 250 + containing the modules. 251 + 248 252 TEST_CUSTOM_PROGS should be used by tests that require custom build 249 253 rules and prevent common build rule use. 250 254
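Per the documentation above, a selftest Makefile that needs modules built before its tests run sets the variable before including lib.mk. A minimal sketch (the test script name and test directory are hypothetical; the livepatch Makefile below is the real in-tree user):

```make
# tools/testing/selftests/<mytest>/Makefile (sketch)
TEST_PROGS := run-test.sh
TEST_GEN_MODS_DIR := test_modules  # lib.mk runs "make -C test_modules" first

include ../lib.mk
```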
+2 -1
MAINTAINERS
··· 12517 12517 F: include/linux/livepatch.h 12518 12518 F: kernel/livepatch/ 12519 12519 F: kernel/module/livepatch.c 12520 - F: lib/livepatch/ 12521 12520 F: samples/livepatch/ 12522 12521 F: tools/testing/selftests/livepatch/ 12523 12522 ··· 17549 17550 F: drivers/power/supply/ 17550 17551 F: include/linux/power/ 17551 17552 F: include/linux/power_supply.h 17553 + F: tools/testing/selftests/power_supply/ 17552 17554 17553 17555 POWERNV OPERATOR PANEL LCD DISPLAY DRIVER 17554 17556 M: Suraj Jitindar Singh <sjitindarsingh@gmail.com> ··· 19120 19120 F: rust/ 19121 19121 F: samples/rust/ 19122 19122 F: scripts/*rust* 19123 + F: tools/testing/selftests/rust/ 19123 19124 K: \b(?i:rust)\b 19124 19125 19125 19126 RXRPC SOCKETS (AF_RXRPC)
-1
arch/s390/configs/debug_defconfig
··· 880 880 CONFIG_STRING_SELFTEST=y 881 881 CONFIG_TEST_BITOPS=m 882 882 CONFIG_TEST_BPF=m 883 - CONFIG_TEST_LIVEPATCH=m
-1
arch/s390/configs/defconfig
··· 808 808 CONFIG_PERCPU_TEST=m 809 809 CONFIG_ATOMIC64_SELFTEST=y 810 810 CONFIG_TEST_BPF=m 811 - CONFIG_TEST_LIVEPATCH=m
-22
lib/Kconfig.debug
··· 2858 2858 2859 2859 If unsure, say N. 2860 2860 2861 - config TEST_LIVEPATCH 2862 - tristate "Test livepatching" 2863 - default n 2864 - depends on DYNAMIC_DEBUG 2865 - depends on LIVEPATCH 2866 - depends on m 2867 - help 2868 - Test kernel livepatching features for correctness. The tests will 2869 - load test modules that will be livepatched in various scenarios. 2870 - 2871 - To run all the livepatching tests: 2872 - 2873 - make -C tools/testing/selftests TARGETS=livepatch run_tests 2874 - 2875 - Alternatively, individual tests may be invoked: 2876 - 2877 - tools/testing/selftests/livepatch/test-callbacks.sh 2878 - tools/testing/selftests/livepatch/test-livepatch.sh 2879 - tools/testing/selftests/livepatch/test-shadow-vars.sh 2880 - 2881 - If unsure, say N. 2882 - 2883 2861 config TEST_OBJAGG 2884 2862 tristate "Perform selftest on object aggreration manager" 2885 2863 default n
-2
lib/Makefile
··· 134 134 obj-$(CONFIG_TEST_FPU) += test_fpu.o 135 135 CFLAGS_test_fpu.o += $(FPU_CFLAGS) 136 136 137 - obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/ 138 - 139 137 # Some KUnit files (hooks.o) need to be built-in even when KUnit is a module, 140 138 # so we can't just use obj-$(CONFIG_KUNIT). 141 139 ifdef CONFIG_KUNIT
-14
lib/livepatch/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - # 3 - # Makefile for livepatch test code. 4 - 5 - obj-$(CONFIG_TEST_LIVEPATCH) += test_klp_atomic_replace.o \ 6 - test_klp_callbacks_demo.o \ 7 - test_klp_callbacks_demo2.o \ 8 - test_klp_callbacks_busy.o \ 9 - test_klp_callbacks_mod.o \ 10 - test_klp_livepatch.o \ 11 - test_klp_shadow_vars.o \ 12 - test_klp_state.o \ 13 - test_klp_state2.o \ 14 - test_klp_state3.o
lib/livepatch/test_klp_atomic_replace.c tools/testing/selftests/livepatch/test_modules/test_klp_atomic_replace.c
lib/livepatch/test_klp_callbacks_busy.c tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_busy.c
lib/livepatch/test_klp_callbacks_demo.c tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo.c
lib/livepatch/test_klp_callbacks_demo2.c tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo2.c
lib/livepatch/test_klp_callbacks_mod.c tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_mod.c
lib/livepatch/test_klp_livepatch.c tools/testing/selftests/livepatch/test_modules/test_klp_livepatch.c
lib/livepatch/test_klp_shadow_vars.c tools/testing/selftests/livepatch/test_modules/test_klp_shadow_vars.c
lib/livepatch/test_klp_state.c tools/testing/selftests/livepatch/test_modules/test_klp_state.c
lib/livepatch/test_klp_state2.c tools/testing/selftests/livepatch/test_modules/test_klp_state2.c
lib/livepatch/test_klp_state3.c tools/testing/selftests/livepatch/test_modules/test_klp_state3.c
+3
tools/testing/selftests/Makefile
··· 67 67 TARGETS += perf_events 68 68 TARGETS += pidfd 69 69 TARGETS += pid_namespace 70 + TARGETS += power_supply 70 71 TARGETS += powerpc 71 72 TARGETS += prctl 72 73 TARGETS += proc ··· 79 78 TARGETS += rlimits 80 79 TARGETS += rseq 81 80 TARGETS += rtc 81 + TARGETS += rust 82 82 TARGETS += seccomp 83 83 TARGETS += sgx 84 84 TARGETS += sigaltstack ··· 238 236 install -m 744 kselftest/module.sh $(INSTALL_PATH)/kselftest/ 239 237 install -m 744 kselftest/runner.sh $(INSTALL_PATH)/kselftest/ 240 238 install -m 744 kselftest/prefix.pl $(INSTALL_PATH)/kselftest/ 239 + install -m 744 kselftest/ktap_helpers.sh $(INSTALL_PATH)/kselftest/ 241 240 install -m 744 run_kselftest.sh $(INSTALL_PATH)/ 242 241 rm -f $(TEST_LIST) 243 242 @ret=1; \
+1 -1
tools/testing/selftests/dt/Makefile
··· 4 4 5 5 TEST_PROGS := test_unprobed_devices.sh 6 6 TEST_GEN_FILES := compatible_list 7 - TEST_FILES := compatible_ignore_list ktap_helpers.sh 7 + TEST_FILES := compatible_ignore_list 8 8 9 9 include ../lib.mk 10 10
+44 -3
tools/testing/selftests/dt/ktap_helpers.sh tools/testing/selftests/kselftest/ktap_helpers.sh
··· 9 9 KTAP_CNT_FAIL=0 10 10 KTAP_CNT_SKIP=0 11 11 12 + KSFT_PASS=0 13 + KSFT_FAIL=1 14 + KSFT_XFAIL=2 15 + KSFT_XPASS=3 16 + KSFT_SKIP=4 17 + 18 + KSFT_NUM_TESTS=0 19 + 12 20 ktap_print_header() { 13 21 echo "TAP version 13" 14 22 } 15 23 16 - ktap_set_plan() { 17 - num_tests="$1" 24 + ktap_print_msg() 25 + { 26 + echo "#" $@ 27 + } 18 28 19 - echo "1..$num_tests" 29 + ktap_set_plan() { 30 + KSFT_NUM_TESTS="$1" 31 + 32 + echo "1..$KSFT_NUM_TESTS" 20 33 } 21 34 22 35 ktap_skip_all() { ··· 76 63 __ktap_test "$result" "$description" 77 64 78 65 KTAP_CNT_FAIL=$((KTAP_CNT_FAIL+1)) 66 + } 67 + 68 + ktap_test_result() { 69 + description="$1" 70 + shift 71 + 72 + if $@; then 73 + ktap_test_pass "$description" 74 + else 75 + ktap_test_fail "$description" 76 + fi 77 + } 78 + 79 + ktap_exit_fail_msg() { 80 + echo "Bail out! " $@ 81 + ktap_print_totals 82 + 83 + exit "$KSFT_FAIL" 84 + } 85 + 86 + ktap_finished() { 87 + ktap_print_totals 88 + 89 + if [ $(("$KTAP_CNT_PASS" + "$KTAP_CNT_SKIP")) -eq "$KSFT_NUM_TESTS" ]; then 90 + exit "$KSFT_PASS" 91 + else 92 + exit "$KSFT_FAIL" 93 + fi 79 94 } 80 95 81 96 ktap_print_totals() {
+1 -5
tools/testing/selftests/dt/test_unprobed_devices.sh
··· 15 15 16 16 DIR="$(dirname $(readlink -f "$0"))" 17 17 18 - source "${DIR}"/ktap_helpers.sh 18 + source "${DIR}"/../kselftest/ktap_helpers.sh 19 19 20 20 PDT=/proc/device-tree/ 21 21 COMPAT_LIST="${DIR}"/compatible_list 22 22 IGNORE_LIST="${DIR}"/compatible_ignore_list 23 - 24 - KSFT_PASS=0 25 - KSFT_FAIL=1 26 - KSFT_SKIP=4 27 23 28 24 ktap_print_header 29 25
+1 -1
tools/testing/selftests/ftrace/ftracetest
··· 504 504 if [ "$KTAP" = "1" ]; then 505 505 echo -n "# Totals:" 506 506 echo -n " pass:"`echo $PASSED_CASES | wc -w` 507 - echo -n " faii:"`echo $FAILED_CASES | wc -w` 507 + echo -n " fail:"`echo $FAILED_CASES | wc -w` 508 508 echo -n " xfail:"`echo $XFAILED_CASES | wc -w` 509 509 echo -n " xpass:0" 510 510 echo -n " skip:"`echo $UNTESTED_CASES $UNSUPPORTED_CASES | wc -w`
+1 -1
tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 - # description: Test file and directory owership changes for eventfs 3 + # description: Test file and directory ownership changes for eventfs 4 4 5 5 original_group=`stat -c "%g" .` 6 6 original_owner=`stat -c "%u" .`
+42
tools/testing/selftests/ftrace/test.d/ftrace/func_hotplug.tc
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0-or-later 3 + # description: ftrace - function trace across cpu hotplug 4 + # requires: function:tracer 5 + 6 + if ! which nproc ; then 7 + nproc() { 8 + ls -d /sys/devices/system/cpu/cpu[0-9]* | wc -l 9 + } 10 + fi 11 + 12 + NP=`nproc` 13 + 14 + if [ $NP -eq 1 ] ;then 15 + echo "We cannot test cpu hotplug in UP environment" 16 + exit_unresolved 17 + fi 18 + 19 + # Find online cpu 20 + for i in /sys/devices/system/cpu/cpu[1-9]*; do 21 + if [ -f $i/online ] && [ "$(cat $i/online)" = "1" ]; then 22 + cpu=$i 23 + break 24 + fi 25 + done 26 + 27 + if [ -z "$cpu" ]; then 28 + echo "We cannot test cpu hotplug with a single cpu online" 29 + exit_unresolved 30 + fi 31 + 32 + echo 0 > tracing_on 33 + echo > trace 34 + 35 + : "Set $(basename $cpu) offline/online with function tracer enabled" 36 + echo function > current_tracer 37 + echo 1 > tracing_on 38 + (echo 0 > $cpu/online) 39 + (echo "forked"; sleep 1) 40 + (echo 1 > $cpu/online) 41 + echo 0 > tracing_on 42 + echo nop > current_tracer
+1 -1
tools/testing/selftests/ftrace/test.d/trigger/trigger-hist-mod.tc
··· 40 40 41 41 reset_trigger 42 42 43 - echo "Test histgram with log2 modifier" 43 + echo "Test histogram with log2 modifier" 44 44 45 45 echo 'hist:keys=bytes_req.log2' > events/kmem/kmalloc/trigger 46 46 for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
+12 -1
tools/testing/selftests/futex/functional/futex_requeue_pi.c
··· 17 17 * 18 18 *****************************************************************************/ 19 19 20 + #define _GNU_SOURCE 21 + 20 22 #include <errno.h> 21 23 #include <limits.h> 22 24 #include <pthread.h> ··· 360 358 361 359 int main(int argc, char *argv[]) 362 360 { 361 + const char *test_name; 363 362 int c, ret; 364 363 365 364 while ((c = getopt(argc, argv, "bchlot:v:")) != -1) { ··· 400 397 "\tArguments: broadcast=%d locked=%d owner=%d timeout=%ldns\n", 401 398 broadcast, locked, owner, timeout_ns); 402 399 400 + ret = asprintf(&test_name, 401 + "%s broadcast=%d locked=%d owner=%d timeout=%ldns", 402 + TEST_NAME, broadcast, locked, owner, timeout_ns); 403 + if (ret < 0) { 404 + ksft_print_msg("Failed to generate test name\n"); 405 + test_name = TEST_NAME; 406 + } 407 + 403 408 /* 404 409 * FIXME: unit_test is obsolete now that we parse options and the 405 410 * various style of runs are done by run.sh - simplify the code and move ··· 415 404 */ 416 405 ret = unit_test(broadcast, locked, owner, timeout_ns); 417 406 418 - print_result(TEST_NAME, ret); 407 + print_result(test_name, ret); 419 408 return ret; 420 409 }
+18 -5
tools/testing/selftests/lib.mk
··· 58 58 TEST_GEN_PROGS_EXTENDED := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS_EXTENDED)) 59 59 TEST_GEN_FILES := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_FILES)) 60 60 61 - all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) 61 + all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) \ 62 + $(if $(TEST_GEN_MODS_DIR),gen_mods_dir) 62 63 63 64 define RUN_TESTS 64 65 BASE_DIR="$(selfdir)"; \ ··· 72 71 73 72 run_tests: all 74 73 ifdef building_out_of_srctree 75 - @if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \ 76 - rsync -aq --copy-unsafe-links $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \ 74 + @if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)$(TEST_GEN_MODS_DIR)" != "X" ]; then \ 75 + rsync -aq --copy-unsafe-links $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(TEST_GEN_MODS_DIR) $(OUTPUT); \ 77 76 fi 78 77 @if [ "X$(TEST_PROGS)" != "X" ]; then \ 79 78 $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \ ··· 85 84 @$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(TEST_PROGS)) 86 85 endif 87 86 87 + gen_mods_dir: 88 + $(Q)$(MAKE) -C $(TEST_GEN_MODS_DIR) 89 + 90 + clean_mods_dir: 91 + $(Q)$(MAKE) -C $(TEST_GEN_MODS_DIR) clean 92 + 88 93 define INSTALL_SINGLE_RULE 89 94 $(if $(INSTALL_LIST),@mkdir -p $(INSTALL_PATH)) 90 95 $(if $(INSTALL_LIST),rsync -a --copy-unsafe-links $(INSTALL_LIST) $(INSTALL_PATH)/) 96 + endef 97 + 98 + define INSTALL_MODS_RULE 99 + $(if $(INSTALL_LIST),@mkdir -p $(INSTALL_PATH)/$(INSTALL_LIST)) 100 + $(if $(INSTALL_LIST),rsync -a --copy-unsafe-links $(INSTALL_LIST)/*.ko $(INSTALL_PATH)/$(INSTALL_LIST)) 91 101 endef 92 102 93 103 define INSTALL_RULE ··· 109 97 $(eval INSTALL_LIST = $(TEST_CUSTOM_PROGS)) $(INSTALL_SINGLE_RULE) 110 98 $(eval INSTALL_LIST = $(TEST_GEN_PROGS_EXTENDED)) $(INSTALL_SINGLE_RULE) 111 99 $(eval INSTALL_LIST = $(TEST_GEN_FILES)) $(INSTALL_SINGLE_RULE) 100 + $(eval INSTALL_LIST = $(notdir $(TEST_GEN_MODS_DIR))) 
$(INSTALL_MODS_RULE) 112 101 $(eval INSTALL_LIST = $(wildcard config settings)) $(INSTALL_SINGLE_RULE) 113 102 endef 114 103 ··· 135 122 $(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN) 136 123 endef 137 124 138 - clean: 125 + clean: $(if $(TEST_GEN_MODS_DIR),clean_mods_dir) 139 126 $(CLEAN) 140 127 141 128 # Enables to extend CFLAGS and LDFLAGS from command line, e.g. ··· 166 153 $(LINK.S) $^ $(LDLIBS) -o $@ 167 154 endif 168 155 169 - .PHONY: run_tests all clean install emit_tests 156 + .PHONY: run_tests all clean install emit_tests gen_mods_dir clean_mods_dir
+1
tools/testing/selftests/livepatch/.gitignore
··· 1 + test_klp-call_getpid
+4 -1
tools/testing/selftests/livepatch/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + TEST_GEN_FILES := test_klp-call_getpid 4 + TEST_GEN_MODS_DIR := test_modules 3 5 TEST_PROGS_EXTENDED := functions.sh 4 6 TEST_PROGS := \ 5 7 test-livepatch.sh \ ··· 9 7 test-shadow-vars.sh \ 10 8 test-state.sh \ 11 9 test-ftrace.sh \ 12 - test-sysfs.sh 10 + test-sysfs.sh \ 11 + test-syscall.sh 13 12 14 13 TEST_FILES := settings 15 14
+19 -6
tools/testing/selftests/livepatch/README
··· 13 13 Config 14 14 ------ 15 15 16 - Set these config options and their prerequisites: 16 + Set the CONFIG_LIVEPATCH=y option and its prerequisites. 17 17 18 - CONFIG_LIVEPATCH=y 19 - CONFIG_TEST_LIVEPATCH=m 20 18 19 + Building the tests 20 + ------------------ 21 + 22 + To only build the tests without running them, run: 23 + 24 + % make -C tools/testing/selftests/livepatch 25 + 26 + The command above will compile all test modules and test programs, making them 27 + ready to be packaged if so desired. 21 28 22 29 Running the tests 23 30 ----------------- 24 31 25 - Test kernel modules are built as part of lib/ (make modules) and need to 26 - be installed (make modules_install) as the test scripts will modprobe 27 - them. 32 + Test kernel modules are built before running the livepatch selftests. The 33 + modules are located under the test_modules directory, and are built as 34 + out-of-tree modules. This is especially useful since the same sources can be 35 + built and tested on systems with different kABI, ensuring the tests are 36 + backwards compatible. The modules will be loaded by the test scripts using 
insmod. 28 37 29 38 To run the livepatch selftests, from the top of the kernel source tree: 30 39 31 40 % make -C tools/testing/selftests TARGETS=livepatch run_tests 41 + 42 + or 43 + 44 + % make kselftest TARGETS=livepatch 32 45 33 46 34 47 Adding tests
-1
tools/testing/selftests/livepatch/config
··· 1 1 CONFIG_LIVEPATCH=y 2 2 CONFIG_DYNAMIC_DEBUG=y 3 - CONFIG_TEST_LIVEPATCH=m
+26 -21
tools/testing/selftests/livepatch/functions.sh
··· 34 34 fi 35 35 } 36 36 37 + # Check if we can compile the modules before loading them 38 + function has_kdir() { 39 + if [ -z "$KDIR" ]; then 40 + KDIR="/lib/modules/$(uname -r)/build" 41 + fi 42 + 43 + if [ ! -d "$KDIR" ]; then 44 + echo "skip all tests: KDIR ($KDIR) not available to compile modules." 45 + exit $ksft_skip 46 + fi 47 + } 48 + 37 49 # die(msg) - game over, man 38 50 # msg - dying words 39 51 function die() { ··· 108 96 # the ftrace_enabled sysctl. 109 97 function setup_config() { 110 98 is_root 99 + has_kdir 111 100 push_config 112 101 set_dynamic_debug 113 102 set_ftrace_enabled 1 ··· 128 115 done 129 116 } 130 117 131 - function assert_mod() { 132 - local mod="$1" 133 - 134 - modprobe --dry-run "$mod" &>/dev/null 135 - } 136 - 137 118 function is_livepatch_mod() { 138 119 local mod="$1" 139 120 140 - if [[ $(modinfo "$mod" | awk '/^livepatch:/{print $NF}') == "Y" ]]; then 121 + if [[ ! -f "test_modules/$mod.ko" ]]; then 122 + die "Can't find \"test_modules/$mod.ko\", try \"make\"" 123 + fi 124 + 125 + if [[ $(modinfo "test_modules/$mod.ko" | awk '/^livepatch:/{print $NF}') == "Y" ]]; then 141 126 return 0 142 127 fi 143 128 ··· 145 134 function __load_mod() { 146 135 local mod="$1"; shift 147 136 148 - local msg="% modprobe $mod $*" 137 + local msg="% insmod test_modules/$mod.ko $*" 149 138 log "${msg%% }" 150 - ret=$(modprobe "$mod" "$@" 2>&1) 139 + ret=$(insmod "test_modules/$mod.ko" "$@" 2>&1) 151 140 if [[ "$ret" != "" ]]; then 152 141 die "$ret" 153 142 fi ··· 160 149 161 150 # load_mod(modname, params) - load a kernel module 162 151 # modname - module name to load 163 - # params - module parameters to pass to modprobe 152 + # params - module parameters to pass to insmod 164 153 function load_mod() { 165 154 local mod="$1"; shift 166 - 167 - assert_mod "$mod" || 168 - skip "unable to load module ${mod}, verify CONFIG_TEST_LIVEPATCH=m and run self-tests as root" 169 155 170 156 is_livepatch_mod "$mod" && 171 157 die "use load_lp() to load 
the livepatch module $mod" ··· 173 165 # load_lp_nowait(modname, params) - load a kernel module with a livepatch 174 166 # but do not wait on until the transition finishes 175 167 # modname - module name to load 176 - # params - module parameters to pass to modprobe 168 + # params - module parameters to pass to insmod 177 169 function load_lp_nowait() { 178 170 local mod="$1"; shift 179 - 180 - assert_mod "$mod" || 181 - skip "unable to load module ${mod}, verify CONFIG_TEST_LIVEPATCH=m and run self-tests as root" 182 171 183 172 is_livepatch_mod "$mod" || 184 173 die "module $mod is not a livepatch" ··· 189 184 190 185 # load_lp(modname, params) - load a kernel module with a livepatch 191 186 # modname - module name to load 192 - # params - module parameters to pass to modprobe 187 + # params - module parameters to pass to insmod 193 188 function load_lp() { 194 189 local mod="$1"; shift 195 190 ··· 202 197 203 198 # load_failing_mod(modname, params) - load a kernel module, expect to fail 204 199 # modname - module name to load 205 - # params - module parameters to pass to modprobe 200 + # params - module parameters to pass to insmod 206 201 function load_failing_mod() { 207 202 local mod="$1"; shift 208 203 209 - local msg="% modprobe $mod $*" 204 + local msg="% insmod test_modules/$mod.ko $*" 210 205 log "${msg%% }" 211 - ret=$(modprobe "$mod" "$@" 2>&1) 206 + ret=$(insmod "test_modules/$mod.ko" "$@" 2>&1) 212 207 if [[ "$ret" == "" ]]; then 213 208 die "$mod unexpectedly loaded" 214 209 fi
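The skip logic added in has_kdir() boils down to a KDIR fallback. A minimal standalone sketch (the exit via ksft_skip is elided here and replaced with a plain message):

```shell
# Default KDIR to the running kernel's build tree when unset, and skip
# (rather than fail) when no kernel-devel tree is available to compile
# the out-of-tree test modules.
KDIR="${KDIR:-/lib/modules/$(uname -r)/build}"

if [ ! -d "$KDIR" ]; then
	echo "skip all tests: KDIR ($KDIR) not available to compile modules."
fi
```

Passing KDIR=/path/to/kernel-source on the make command line points the module build at a different tree, which is what lets the same test sources build against kernels with a different kABI.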
+25 -25
tools/testing/selftests/livepatch/test-callbacks.sh
··· 34 34 unload_lp $MOD_LIVEPATCH 35 35 unload_mod $MOD_TARGET 36 36 37 - check_result "% modprobe $MOD_TARGET 37 + check_result "% insmod test_modules/$MOD_TARGET.ko 38 38 $MOD_TARGET: ${MOD_TARGET}_init 39 - % modprobe $MOD_LIVEPATCH 39 + % insmod test_modules/$MOD_LIVEPATCH.ko 40 40 livepatch: enabling patch '$MOD_LIVEPATCH' 41 41 livepatch: '$MOD_LIVEPATCH': initializing patching transition 42 42 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 81 81 unload_lp $MOD_LIVEPATCH 82 82 unload_mod $MOD_TARGET 83 83 84 - check_result "% modprobe $MOD_LIVEPATCH 84 + check_result "% insmod test_modules/$MOD_LIVEPATCH.ko 85 85 livepatch: enabling patch '$MOD_LIVEPATCH' 86 86 livepatch: '$MOD_LIVEPATCH': initializing patching transition 87 87 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 89 89 livepatch: '$MOD_LIVEPATCH': completing patching transition 90 90 $MOD_LIVEPATCH: post_patch_callback: vmlinux 91 91 livepatch: '$MOD_LIVEPATCH': patching complete 92 - % modprobe $MOD_TARGET 92 + % insmod test_modules/$MOD_TARGET.ko 93 93 livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET' 94 94 $MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init 95 95 $MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init ··· 129 129 disable_lp $MOD_LIVEPATCH 130 130 unload_lp $MOD_LIVEPATCH 131 131 132 - check_result "% modprobe $MOD_TARGET 132 + check_result "% insmod test_modules/$MOD_TARGET.ko 133 133 $MOD_TARGET: ${MOD_TARGET}_init 134 - % modprobe $MOD_LIVEPATCH 134 + % insmod test_modules/$MOD_LIVEPATCH.ko 135 135 livepatch: enabling patch '$MOD_LIVEPATCH' 136 136 livepatch: '$MOD_LIVEPATCH': initializing patching transition 137 137 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 177 177 disable_lp $MOD_LIVEPATCH 178 178 unload_lp $MOD_LIVEPATCH 179 179 180 - check_result "% modprobe $MOD_LIVEPATCH 180 + check_result "% insmod test_modules/$MOD_LIVEPATCH.ko 181 
181 livepatch: enabling patch '$MOD_LIVEPATCH' 182 182 livepatch: '$MOD_LIVEPATCH': initializing patching transition 183 183 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 185 185 livepatch: '$MOD_LIVEPATCH': completing patching transition 186 186 $MOD_LIVEPATCH: post_patch_callback: vmlinux 187 187 livepatch: '$MOD_LIVEPATCH': patching complete 188 - % modprobe $MOD_TARGET 188 + % insmod test_modules/$MOD_TARGET.ko 189 189 livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET' 190 190 $MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init 191 191 $MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init ··· 219 219 disable_lp $MOD_LIVEPATCH 220 220 unload_lp $MOD_LIVEPATCH 221 221 222 - check_result "% modprobe $MOD_LIVEPATCH 222 + check_result "% insmod test_modules/$MOD_LIVEPATCH.ko 223 223 livepatch: enabling patch '$MOD_LIVEPATCH' 224 224 livepatch: '$MOD_LIVEPATCH': initializing patching transition 225 225 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 254 254 load_failing_mod $MOD_LIVEPATCH pre_patch_ret=-19 255 255 unload_mod $MOD_TARGET 256 256 257 - check_result "% modprobe $MOD_TARGET 257 + check_result "% insmod test_modules/$MOD_TARGET.ko 258 258 $MOD_TARGET: ${MOD_TARGET}_init 259 - % modprobe $MOD_LIVEPATCH pre_patch_ret=-19 259 + % insmod test_modules/$MOD_LIVEPATCH.ko pre_patch_ret=-19 260 260 livepatch: enabling patch '$MOD_LIVEPATCH' 261 261 livepatch: '$MOD_LIVEPATCH': initializing patching transition 262 262 test_klp_callbacks_demo: pre_patch_callback: vmlinux ··· 265 265 livepatch: '$MOD_LIVEPATCH': canceling patching transition, going to unpatch 266 266 livepatch: '$MOD_LIVEPATCH': completing unpatching transition 267 267 livepatch: '$MOD_LIVEPATCH': unpatching complete 268 - modprobe: ERROR: could not insert '$MOD_LIVEPATCH': No such device 268 + insmod: ERROR: could not insert module test_modules/$MOD_LIVEPATCH.ko: No 
such device 269 269 % rmmod $MOD_TARGET 270 270 $MOD_TARGET: ${MOD_TARGET}_exit" 271 271 ··· 295 295 disable_lp $MOD_LIVEPATCH 296 296 unload_lp $MOD_LIVEPATCH 297 297 298 - check_result "% modprobe $MOD_LIVEPATCH 298 + check_result "% insmod test_modules/$MOD_LIVEPATCH.ko 299 299 livepatch: enabling patch '$MOD_LIVEPATCH' 300 300 livepatch: '$MOD_LIVEPATCH': initializing patching transition 301 301 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 304 304 $MOD_LIVEPATCH: post_patch_callback: vmlinux 305 305 livepatch: '$MOD_LIVEPATCH': patching complete 306 306 % echo -19 > /sys/module/$MOD_LIVEPATCH/parameters/pre_patch_ret 307 - % modprobe $MOD_TARGET 307 + % insmod test_modules/$MOD_TARGET.ko 308 308 livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET' 309 309 $MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init 310 310 livepatch: pre-patch callback failed for object '$MOD_TARGET' 311 311 livepatch: patch '$MOD_LIVEPATCH' failed for module '$MOD_TARGET', refusing to load module '$MOD_TARGET' 312 - modprobe: ERROR: could not insert '$MOD_TARGET': No such device 312 + insmod: ERROR: could not insert module test_modules/$MOD_TARGET.ko: No such device 313 313 % echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled 314 314 livepatch: '$MOD_LIVEPATCH': initializing unpatching transition 315 315 $MOD_LIVEPATCH: pre_unpatch_callback: vmlinux ··· 340 340 unload_lp $MOD_LIVEPATCH 341 341 unload_mod $MOD_TARGET_BUSY 342 342 343 - check_result "% modprobe $MOD_TARGET_BUSY block_transition=N 343 + check_result "% insmod test_modules/$MOD_TARGET_BUSY.ko block_transition=N 344 344 $MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_init 345 345 $MOD_TARGET_BUSY: busymod_work_func enter 346 346 $MOD_TARGET_BUSY: busymod_work_func exit 347 - % modprobe $MOD_LIVEPATCH 347 + % insmod test_modules/$MOD_LIVEPATCH.ko 348 348 livepatch: enabling patch '$MOD_LIVEPATCH' 349 349 livepatch: '$MOD_LIVEPATCH': initializing patching 
transition 350 350 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 354 354 $MOD_LIVEPATCH: post_patch_callback: vmlinux 355 355 $MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state 356 356 livepatch: '$MOD_LIVEPATCH': patching complete 357 - % modprobe $MOD_TARGET 357 + % insmod test_modules/$MOD_TARGET.ko 358 358 livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET' 359 359 $MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init 360 360 $MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init ··· 421 421 unload_lp $MOD_LIVEPATCH 422 422 unload_mod $MOD_TARGET_BUSY 423 423 424 - check_result "% modprobe $MOD_TARGET_BUSY block_transition=Y 424 + check_result "% insmod test_modules/$MOD_TARGET_BUSY.ko block_transition=Y 425 425 $MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_init 426 426 $MOD_TARGET_BUSY: busymod_work_func enter 427 - % modprobe $MOD_LIVEPATCH 427 + % insmod test_modules/$MOD_LIVEPATCH.ko 428 428 livepatch: enabling patch '$MOD_LIVEPATCH' 429 429 livepatch: '$MOD_LIVEPATCH': initializing patching transition 430 430 $MOD_LIVEPATCH: pre_patch_callback: vmlinux 431 431 $MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state 432 432 livepatch: '$MOD_LIVEPATCH': starting patching transition 433 - % modprobe $MOD_TARGET 433 + % insmod test_modules/$MOD_TARGET.ko 434 434 livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET' 435 435 $MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init 436 436 $MOD_TARGET: ${MOD_TARGET}_init ··· 467 467 unload_lp $MOD_LIVEPATCH2 468 468 unload_lp $MOD_LIVEPATCH 469 469 470 - check_result "% modprobe $MOD_LIVEPATCH 470 + check_result "% insmod test_modules/$MOD_LIVEPATCH.ko 471 471 livepatch: enabling patch '$MOD_LIVEPATCH' 472 472 livepatch: '$MOD_LIVEPATCH': initializing 
patching transition 473 473 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 475 475 livepatch: '$MOD_LIVEPATCH': completing patching transition 476 476 $MOD_LIVEPATCH: post_patch_callback: vmlinux 477 477 livepatch: '$MOD_LIVEPATCH': patching complete 478 - % modprobe $MOD_LIVEPATCH2 478 + % insmod test_modules/$MOD_LIVEPATCH2.ko 479 479 livepatch: enabling patch '$MOD_LIVEPATCH2' 480 480 livepatch: '$MOD_LIVEPATCH2': initializing patching transition 481 481 $MOD_LIVEPATCH2: pre_patch_callback: vmlinux ··· 523 523 unload_lp $MOD_LIVEPATCH2 524 524 unload_lp $MOD_LIVEPATCH 525 525 526 - check_result "% modprobe $MOD_LIVEPATCH 526 + check_result "% insmod test_modules/$MOD_LIVEPATCH.ko 527 527 livepatch: enabling patch '$MOD_LIVEPATCH' 528 528 livepatch: '$MOD_LIVEPATCH': initializing patching transition 529 529 $MOD_LIVEPATCH: pre_patch_callback: vmlinux ··· 531 531 livepatch: '$MOD_LIVEPATCH': completing patching transition 532 532 $MOD_LIVEPATCH: post_patch_callback: vmlinux 533 533 livepatch: '$MOD_LIVEPATCH': patching complete 534 - % modprobe $MOD_LIVEPATCH2 replace=1 534 + % insmod test_modules/$MOD_LIVEPATCH2.ko replace=1 535 535 livepatch: enabling patch '$MOD_LIVEPATCH2' 536 536 livepatch: '$MOD_LIVEPATCH2': initializing patching transition 537 537 $MOD_LIVEPATCH2: pre_patch_callback: vmlinux
+3 -3
tools/testing/selftests/livepatch/test-ftrace.sh
··· 35 35 unload_lp $MOD_LIVEPATCH 36 36 37 37 check_result "livepatch: kernel.ftrace_enabled = 0 38 - % modprobe $MOD_LIVEPATCH 38 + % insmod test_modules/$MOD_LIVEPATCH.ko 39 39 livepatch: enabling patch '$MOD_LIVEPATCH' 40 40 livepatch: '$MOD_LIVEPATCH': initializing patching transition 41 41 livepatch: failed to register ftrace handler for function 'cmdline_proc_show' (-16) ··· 44 44 livepatch: '$MOD_LIVEPATCH': canceling patching transition, going to unpatch 45 45 livepatch: '$MOD_LIVEPATCH': completing unpatching transition 46 46 livepatch: '$MOD_LIVEPATCH': unpatching complete 47 - modprobe: ERROR: could not insert '$MOD_LIVEPATCH': Device or resource busy 47 + insmod: ERROR: could not insert module test_modules/$MOD_LIVEPATCH.ko: Device or resource busy 48 48 livepatch: kernel.ftrace_enabled = 1 49 - % modprobe $MOD_LIVEPATCH 49 + % insmod test_modules/$MOD_LIVEPATCH.ko 50 50 livepatch: enabling patch '$MOD_LIVEPATCH' 51 51 livepatch: '$MOD_LIVEPATCH': initializing patching transition 52 52 livepatch: '$MOD_LIVEPATCH': starting patching transition
+5 -5
tools/testing/selftests/livepatch/test-livepatch.sh
···
 	die "livepatch kselftest(s) failed"
 fi
 
-check_result "% modprobe $MOD_LIVEPATCH
+check_result "% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: enabling patch '$MOD_LIVEPATCH'
 livepatch: '$MOD_LIVEPATCH': initializing patching transition
 livepatch: '$MOD_LIVEPATCH': starting patching transition
···
 grep 'live patched' /proc/cmdline > /dev/kmsg
 grep 'live patched' /proc/meminfo > /dev/kmsg
 
-check_result "% modprobe $MOD_LIVEPATCH
+check_result "% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: enabling patch '$MOD_LIVEPATCH'
 livepatch: '$MOD_LIVEPATCH': initializing patching transition
 livepatch: '$MOD_LIVEPATCH': starting patching transition
 livepatch: '$MOD_LIVEPATCH': completing patching transition
 livepatch: '$MOD_LIVEPATCH': patching complete
 $MOD_LIVEPATCH: this has been live patched
-% modprobe $MOD_REPLACE replace=0
+% insmod test_modules/$MOD_REPLACE.ko replace=0
 livepatch: enabling patch '$MOD_REPLACE'
 livepatch: '$MOD_REPLACE': initializing patching transition
 livepatch: '$MOD_REPLACE': starting patching transition
···
 grep 'live patched' /proc/cmdline > /dev/kmsg
 grep 'live patched' /proc/meminfo > /dev/kmsg
 
-check_result "% modprobe $MOD_LIVEPATCH
+check_result "% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: enabling patch '$MOD_LIVEPATCH'
 livepatch: '$MOD_LIVEPATCH': initializing patching transition
 livepatch: '$MOD_LIVEPATCH': starting patching transition
 livepatch: '$MOD_LIVEPATCH': completing patching transition
 livepatch: '$MOD_LIVEPATCH': patching complete
 $MOD_LIVEPATCH: this has been live patched
-% modprobe $MOD_REPLACE replace=1
+% insmod test_modules/$MOD_REPLACE.ko replace=1
 livepatch: enabling patch '$MOD_REPLACE'
 livepatch: '$MOD_REPLACE': initializing patching transition
 livepatch: '$MOD_REPLACE': starting patching transition
+1 -1
tools/testing/selftests/livepatch/test-shadow-vars.sh
···
 load_mod $MOD_TEST
 unload_mod $MOD_TEST
 
-check_result "% modprobe $MOD_TEST
+check_result "% insmod test_modules/$MOD_TEST.ko
 $MOD_TEST: klp_shadow_get(obj=PTR1, id=0x1234) = PTR0
 $MOD_TEST: got expected NULL result
 $MOD_TEST: shadow_ctor: PTR3 -> PTR2
+9 -9
tools/testing/selftests/livepatch/test-state.sh
···
 disable_lp $MOD_LIVEPATCH
 unload_lp $MOD_LIVEPATCH
 
-check_result "% modprobe $MOD_LIVEPATCH
+check_result "% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: enabling patch '$MOD_LIVEPATCH'
 livepatch: '$MOD_LIVEPATCH': initializing patching transition
 $MOD_LIVEPATCH: pre_patch_callback: vmlinux
···
 disable_lp $MOD_LIVEPATCH2
 unload_lp $MOD_LIVEPATCH2
 
-check_result "% modprobe $MOD_LIVEPATCH
+check_result "% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: enabling patch '$MOD_LIVEPATCH'
 livepatch: '$MOD_LIVEPATCH': initializing patching transition
 $MOD_LIVEPATCH: pre_patch_callback: vmlinux
···
 $MOD_LIVEPATCH: post_patch_callback: vmlinux
 $MOD_LIVEPATCH: fix_console_loglevel: fixing console_loglevel
 livepatch: '$MOD_LIVEPATCH': patching complete
-% modprobe $MOD_LIVEPATCH2
+% insmod test_modules/$MOD_LIVEPATCH2.ko
 livepatch: enabling patch '$MOD_LIVEPATCH2'
 livepatch: '$MOD_LIVEPATCH2': initializing patching transition
 $MOD_LIVEPATCH2: pre_patch_callback: vmlinux
···
 unload_lp $MOD_LIVEPATCH2
 unload_lp $MOD_LIVEPATCH3
 
-check_result "% modprobe $MOD_LIVEPATCH2
+check_result "% insmod test_modules/$MOD_LIVEPATCH2.ko
 livepatch: enabling patch '$MOD_LIVEPATCH2'
 livepatch: '$MOD_LIVEPATCH2': initializing patching transition
 $MOD_LIVEPATCH2: pre_patch_callback: vmlinux
···
 $MOD_LIVEPATCH2: post_patch_callback: vmlinux
 $MOD_LIVEPATCH2: fix_console_loglevel: fixing console_loglevel
 livepatch: '$MOD_LIVEPATCH2': patching complete
-% modprobe $MOD_LIVEPATCH3
+% insmod test_modules/$MOD_LIVEPATCH3.ko
 livepatch: enabling patch '$MOD_LIVEPATCH3'
 livepatch: '$MOD_LIVEPATCH3': initializing patching transition
 $MOD_LIVEPATCH3: pre_patch_callback: vmlinux
···
 $MOD_LIVEPATCH3: fix_console_loglevel: taking over the console_loglevel change
 livepatch: '$MOD_LIVEPATCH3': patching complete
 % rmmod $MOD_LIVEPATCH2
-% modprobe $MOD_LIVEPATCH2
+% insmod test_modules/$MOD_LIVEPATCH2.ko
 livepatch: enabling patch '$MOD_LIVEPATCH2'
 livepatch: '$MOD_LIVEPATCH2': initializing patching transition
 $MOD_LIVEPATCH2: pre_patch_callback: vmlinux
···
 disable_lp $MOD_LIVEPATCH2
 unload_lp $MOD_LIVEPATCH2
 
-check_result "% modprobe $MOD_LIVEPATCH2
+check_result "% insmod test_modules/$MOD_LIVEPATCH2.ko
 livepatch: enabling patch '$MOD_LIVEPATCH2'
 livepatch: '$MOD_LIVEPATCH2': initializing patching transition
 $MOD_LIVEPATCH2: pre_patch_callback: vmlinux
···
 $MOD_LIVEPATCH2: post_patch_callback: vmlinux
 $MOD_LIVEPATCH2: fix_console_loglevel: fixing console_loglevel
 livepatch: '$MOD_LIVEPATCH2': patching complete
-% modprobe $MOD_LIVEPATCH
+% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: Livepatch patch ($MOD_LIVEPATCH) is not compatible with the already installed livepatches.
-modprobe: ERROR: could not insert '$MOD_LIVEPATCH': Invalid argument
+insmod: ERROR: could not insert module test_modules/$MOD_LIVEPATCH.ko: Invalid parameters
 % echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH2/enabled
 livepatch: '$MOD_LIVEPATCH2': initializing unpatching transition
 $MOD_LIVEPATCH2: pre_unpatch_callback: vmlinux
+53
tools/testing/selftests/livepatch/test-syscall.sh
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2023 SUSE
+# Author: Marcos Paulo de Souza <mpdesouza@suse.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_SYSCALL=test_klp_syscall
+
+setup_config
+
+# - Start _NRPROC processes calling getpid and load a livepatch to patch the
+#   getpid syscall. Check if all the processes transitioned to the livepatched
+#   state.
+
+start_test "patch getpid syscall while being heavily hammered"
+
+for i in $(seq 1 $(getconf _NPROCESSORS_ONLN)); do
+	./test_klp-call_getpid &
+	pids[$i]="$!"
+done
+
+pid_list=$(echo ${pids[@]} | tr ' ' ',')
+load_lp $MOD_SYSCALL klp_pids=$pid_list
+
+# wait for all tasks to transition to patched state
+loop_until 'grep -q '^0$' /sys/kernel/test_klp_syscall/npids'
+
+pending_pids=$(cat /sys/kernel/test_klp_syscall/npids)
+log "$MOD_SYSCALL: Remaining not livepatched processes: $pending_pids"
+
+for pid in ${pids[@]}; do
+	kill $pid || true
+done
+
+disable_lp $MOD_SYSCALL
+unload_lp $MOD_SYSCALL
+
+check_result "% insmod test_modules/$MOD_SYSCALL.ko klp_pids=$pid_list
+livepatch: enabling patch '$MOD_SYSCALL'
+livepatch: '$MOD_SYSCALL': initializing patching transition
+livepatch: '$MOD_SYSCALL': starting patching transition
+livepatch: '$MOD_SYSCALL': completing patching transition
+livepatch: '$MOD_SYSCALL': patching complete
+$MOD_SYSCALL: Remaining not livepatched processes: 0
+% echo 0 > /sys/kernel/livepatch/$MOD_SYSCALL/enabled
+livepatch: '$MOD_SYSCALL': initializing unpatching transition
+livepatch: '$MOD_SYSCALL': starting unpatching transition
+livepatch: '$MOD_SYSCALL': completing unpatching transition
+livepatch: '$MOD_SYSCALL': unpatching complete
+% rmmod $MOD_SYSCALL"
+
+exit 0
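The pid-list plumbing in test-syscall.sh is just a comma join over the background workers' PIDs, handed to the module as the klp_pids= parameter. A minimal stand-alone sketch, using made-up PIDs instead of real background processes:

```shell
# Sketch of how test-syscall.sh builds the klp_pids= module parameter:
# the PIDs of the background getpid loops are joined with commas.
# The PIDs below are illustrative stand-ins, not real processes.
pids="101 202 303"
pid_list=$(echo $pids | tr ' ' ',')
echo "$pid_list"
```

load_lp would then receive klp_pids=101,202,303 as the module parameter.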
+3 -3
tools/testing/selftests/livepatch/test-sysfs.sh
···
 
 unload_lp $MOD_LIVEPATCH
 
-check_result "% modprobe $MOD_LIVEPATCH
+check_result "% insmod test_modules/$MOD_LIVEPATCH.ko
 livepatch: enabling patch '$MOD_LIVEPATCH'
 livepatch: '$MOD_LIVEPATCH': initializing patching transition
 livepatch: '$MOD_LIVEPATCH': starting patching transition
···
 disable_lp $MOD_LIVEPATCH
 unload_lp $MOD_LIVEPATCH
 
-check_result "% modprobe test_klp_callbacks_demo
+check_result "% insmod test_modules/test_klp_callbacks_demo.ko
 livepatch: enabling patch 'test_klp_callbacks_demo'
 livepatch: 'test_klp_callbacks_demo': initializing patching transition
 test_klp_callbacks_demo: pre_patch_callback: vmlinux
···
 livepatch: 'test_klp_callbacks_demo': completing patching transition
 test_klp_callbacks_demo: post_patch_callback: vmlinux
 livepatch: 'test_klp_callbacks_demo': patching complete
-% modprobe test_klp_callbacks_mod
+% insmod test_modules/test_klp_callbacks_mod.ko
 livepatch: applying patch 'test_klp_callbacks_demo' to loading module 'test_klp_callbacks_mod'
 test_klp_callbacks_demo: pre_patch_callback: test_klp_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
 test_klp_callbacks_demo: post_patch_callback: test_klp_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
+44
tools/testing/selftests/livepatch/test_klp-call_getpid.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2023 SUSE
+ * Authors: Libor Pechacek <lpechacek@suse.cz>
+ *          Marcos Paulo de Souza <mpdesouza@suse.com>
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <sys/types.h>
+#include <signal.h>
+
+static int stop;
+static int sig_int;
+
+void hup_handler(int signum)
+{
+	stop = 1;
+}
+
+void int_handler(int signum)
+{
+	stop = 1;
+	sig_int = 1;
+}
+
+int main(int argc, char *argv[])
+{
+	long count = 0;
+
+	signal(SIGHUP, &hup_handler);
+	signal(SIGINT, &int_handler);
+
+	while (!stop) {
+		(void)syscall(SYS_getpid);
+		count++;
+	}
+
+	if (sig_int)
+		printf("%ld iterations done\n", count);
+
+	return 0;
+}
+26
tools/testing/selftests/livepatch/test_modules/Makefile
+TESTMODS_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= /lib/modules/$(shell uname -r)/build
+
+obj-m += test_klp_atomic_replace.o \
+	test_klp_callbacks_busy.o \
+	test_klp_callbacks_demo.o \
+	test_klp_callbacks_demo2.o \
+	test_klp_callbacks_mod.o \
+	test_klp_livepatch.o \
+	test_klp_state.o \
+	test_klp_state2.o \
+	test_klp_state3.o \
+	test_klp_shadow_vars.o \
+	test_klp_syscall.o
+
+# Ensure that KDIR exists, otherwise skip the compilation
+modules:
+ifneq ("$(wildcard $(KDIR))", "")
+	$(Q)$(MAKE) -C $(KDIR) modules KBUILD_EXTMOD=$(TESTMODS_DIR)
+endif
+
+# Ensure that KDIR exists, otherwise skip the clean target
+clean:
+ifneq ("$(wildcard $(KDIR))", "")
+	$(Q)$(MAKE) -C $(KDIR) clean KBUILD_EXTMOD=$(TESTMODS_DIR)
+endif
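The new Makefile only invokes kbuild when KDIR resolves to an existing directory (the `$(wildcard $(KDIR))` test), which is what makes the tests skip cleanly when kernel-devel is not installed. A rough shell equivalent of that guard, with KDIR deliberately pointed at a nonexistent path for illustration:

```shell
# Emulates the Makefile's "skip if kernel-devel is missing" check.
# KDIR is forced to a path that does not exist, so the build is skipped;
# the make command in the comment is what the real Makefile would run.
KDIR="/nonexistent/kernel/build"

if [ -d "$KDIR" ]; then
        action="build"  # would run: make -C "$KDIR" modules KBUILD_EXTMOD="$PWD"
else
        action="skip"
fi
echo "$action"
```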
+116
tools/testing/selftests/livepatch/test_modules/test_klp_syscall.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2017-2023 SUSE
+ * Authors: Libor Pechacek <lpechacek@suse.cz>
+ *          Nicolai Stange <nstange@suse.de>
+ *          Marcos Paulo de Souza <mpdesouza@suse.com>
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/livepatch.h>
+
+#if defined(__x86_64__)
+#define FN_PREFIX __x64_
+#elif defined(__s390x__)
+#define FN_PREFIX __s390x_
+#elif defined(__aarch64__)
+#define FN_PREFIX __arm64_
+#else
+/* powerpc does not select ARCH_HAS_SYSCALL_WRAPPER */
+#define FN_PREFIX
+#endif
+
+/* Protects klp_pids */
+static DEFINE_MUTEX(kpid_mutex);
+
+static unsigned int npids, npids_pending;
+static int klp_pids[NR_CPUS];
+module_param_array(klp_pids, int, &npids_pending, 0);
+MODULE_PARM_DESC(klp_pids, "Array of pids to be transitioned to livepatched state.");
+
+static ssize_t npids_show(struct kobject *kobj, struct kobj_attribute *attr,
+			  char *buf)
+{
+	return sprintf(buf, "%u\n", npids_pending);
+}
+
+static struct kobj_attribute klp_attr = __ATTR_RO(npids);
+static struct kobject *klp_kobj;
+
+static asmlinkage long lp_sys_getpid(void)
+{
+	int i;
+
+	mutex_lock(&kpid_mutex);
+	if (npids_pending > 0) {
+		for (i = 0; i < npids; i++) {
+			if (current->pid == klp_pids[i]) {
+				klp_pids[i] = 0;
+				npids_pending--;
+				break;
+			}
+		}
+	}
+	mutex_unlock(&kpid_mutex);
+
+	return task_tgid_vnr(current);
+}
+
+static struct klp_func vmlinux_funcs[] = {
+	{
+		.old_name = __stringify(FN_PREFIX) "sys_getpid",
+		.new_func = lp_sys_getpid,
+	}, {}
+};
+
+static struct klp_object objs[] = {
+	{
+		/* name being NULL means vmlinux */
+		.funcs = vmlinux_funcs,
+	}, {}
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+};
+
+static int livepatch_init(void)
+{
+	int ret;
+
+	klp_kobj = kobject_create_and_add("test_klp_syscall", kernel_kobj);
+	if (!klp_kobj)
+		return -ENOMEM;
+
+	ret = sysfs_create_file(klp_kobj, &klp_attr.attr);
+	if (ret) {
+		kobject_put(klp_kobj);
+		return ret;
+	}
+
+	/*
+	 * Save the number pids to transition to livepatched state before the
+	 * number of pending pids is decremented.
+	 */
+	npids = npids_pending;
+
+	return klp_enable_patch(&patch);
+}
+
+static void livepatch_exit(void)
+{
+	kobject_put(klp_kobj);
+}
+
+module_init(livepatch_init);
+module_exit(livepatch_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Libor Pechacek <lpechacek@suse.cz>");
+MODULE_AUTHOR("Nicolai Stange <nstange@suse.de>");
+MODULE_AUTHOR("Marcos Paulo de Souza <mpdesouza@suse.com>");
+MODULE_DESCRIPTION("Livepatch test: syscall transition");
+1
tools/testing/selftests/mqueue/setting
+timeout=180
+4
tools/testing/selftests/power_supply/Makefile
+TEST_PROGS := test_power_supply_properties.sh
+TEST_FILES := helpers.sh
+
+include ../lib.mk
+178
tools/testing/selftests/power_supply/helpers.sh
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2022, 2024 Collabora Ltd
+SYSFS_SUPPLIES=/sys/class/power_supply
+
+calc() {
+	awk "BEGIN { print $* }";
+}
+
+test_sysfs_prop() {
+	PROP="$1"
+	VALUE="$2" # optional
+
+	PROP_PATH="$SYSFS_SUPPLIES"/"$DEVNAME"/"$PROP"
+	TEST_NAME="$DEVNAME".sysfs."$PROP"
+
+	if [ -z "$VALUE" ]; then
+		ktap_test_result "$TEST_NAME" [ -f "$PROP_PATH" ]
+	else
+		ktap_test_result "$TEST_NAME" grep -q "$VALUE" "$PROP_PATH"
+	fi
+}
+
+to_human_readable_unit() {
+	VALUE="$1"
+	UNIT="$2"
+
+	case "$VALUE" in
+	*[!0-9]* ) return ;; # Not a number
+	esac
+
+	if [ "$UNIT" = "uA" ]; then
+		new_unit="mA"
+		div=1000
+	elif [ "$UNIT" = "uV" ]; then
+		new_unit="V"
+		div=1000000
+	elif [ "$UNIT" = "uAh" ]; then
+		new_unit="Ah"
+		div=1000000
+	elif [ "$UNIT" = "uW" ]; then
+		new_unit="mW"
+		div=1000
+	elif [ "$UNIT" = "uWh" ]; then
+		new_unit="Wh"
+		div=1000000
+	else
+		return
+	fi
+
+	value_converted=$(calc "$VALUE"/"$div")
+	echo "$value_converted" "$new_unit"
+}
+
+_check_sysfs_prop_available() {
+	PROP=$1
+
+	PROP_PATH="$SYSFS_SUPPLIES"/"$DEVNAME"/"$PROP"
+	TEST_NAME="$DEVNAME".sysfs."$PROP"
+
+	if [ ! -e "$PROP_PATH" ] ; then
+		ktap_test_skip "$TEST_NAME"
+		return 1
+	fi
+
+	if ! cat "$PROP_PATH" >/dev/null; then
+		ktap_print_msg "Failed to read"
+		ktap_test_fail "$TEST_NAME"
+		return 1
+	fi
+
+	return 0
+}
+
+test_sysfs_prop_optional() {
+	PROP=$1
+	UNIT=$2 # optional
+
+	TEST_NAME="$DEVNAME".sysfs."$PROP"
+
+	_check_sysfs_prop_available "$PROP" || return
+	DATA=$(cat "$SYSFS_SUPPLIES"/"$DEVNAME"/"$PROP")
+
+	ktap_print_msg "Reported: '$DATA' $UNIT ($(to_human_readable_unit "$DATA" "$UNIT"))"
+	ktap_test_pass "$TEST_NAME"
+}
+
+test_sysfs_prop_optional_range() {
+	PROP=$1
+	MIN=$2
+	MAX=$3
+	UNIT=$4 # optional
+
+	TEST_NAME="$DEVNAME".sysfs."$PROP"
+
+	_check_sysfs_prop_available "$PROP" || return
+	DATA=$(cat "$SYSFS_SUPPLIES"/"$DEVNAME"/"$PROP")
+
+	if [ "$DATA" -lt "$MIN" ] || [ "$DATA" -gt "$MAX" ]; then
+		ktap_print_msg "'$DATA' is out of range (min=$MIN, max=$MAX)"
+		ktap_test_fail "$TEST_NAME"
+	else
+		ktap_print_msg "Reported: '$DATA' $UNIT ($(to_human_readable_unit "$DATA" "$UNIT"))"
+		ktap_test_pass "$TEST_NAME"
+	fi
+}
+
+test_sysfs_prop_optional_list() {
+	PROP=$1
+	LIST=$2
+
+	TEST_NAME="$DEVNAME".sysfs."$PROP"
+
+	_check_sysfs_prop_available "$PROP" || return
+	DATA=$(cat "$SYSFS_SUPPLIES"/"$DEVNAME"/"$PROP")
+
+	valid=0
+
+	OLDIFS=$IFS
+	IFS=","
+	for item in $LIST; do
+		if [ "$DATA" = "$item" ]; then
+			valid=1
+			break
+		fi
+	done
+
+	if [ "$valid" -eq 1 ]; then
+		ktap_print_msg "Reported: '$DATA'"
+		ktap_test_pass "$TEST_NAME"
+	else
+		ktap_print_msg "'$DATA' is not a valid value for this property"
+		ktap_test_fail "$TEST_NAME"
+	fi
+	IFS=$OLDIFS
+}
+
+dump_file() {
+	FILE="$1"
+	while read -r line; do
+		ktap_print_msg "$line"
+	done < "$FILE"
+}
+
+__test_uevent_prop() {
+	PROP="$1"
+	OPTIONAL="$2"
+	VALUE="$3" # optional
+
+	UEVENT_PATH="$SYSFS_SUPPLIES"/"$DEVNAME"/uevent
+	TEST_NAME="$DEVNAME".uevent."$PROP"
+
+	if ! grep -q "POWER_SUPPLY_$PROP=" "$UEVENT_PATH"; then
+		if [ "$OPTIONAL" -eq 1 ]; then
+			ktap_test_skip "$TEST_NAME"
+		else
+			ktap_print_msg "Missing property"
+			ktap_test_fail "$TEST_NAME"
+		fi
+		return
+	fi
+
+	if ! grep -q "POWER_SUPPLY_$PROP=$VALUE" "$UEVENT_PATH"; then
+		ktap_print_msg "Invalid value for uevent property, dumping..."
+		dump_file "$UEVENT_PATH"
+		ktap_test_fail "$TEST_NAME"
+	else
+		ktap_test_pass "$TEST_NAME"
+	fi
+}
+
+test_uevent_prop() {
+	__test_uevent_prop "$1" 0 "$2"
+}
+
+test_uevent_prop_optional() {
+	__test_uevent_prop "$1" 1 "$2"
+}
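The calc()/to_human_readable_unit() pair in helpers.sh leans on awk because POSIX sh has no floating-point arithmetic. A condensed sketch of the µV-to-V case, with an illustrative value rather than a real sysfs reading:

```shell
# Floating-point division via awk, as helpers.sh's calc() does.
calc() {
        awk "BEGIN { print $* }"
}

uv=3700000                     # microvolts, illustrative stand-in for sysfs
volts=$(calc "$uv"/1000000)    # the uV -> V case uses div=1000000
echo "$volts V"
```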
+114
tools/testing/selftests/power_supply/test_power_supply_properties.sh
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2022, 2024 Collabora Ltd
+#
+# This test validates the power supply uAPI: namely, the files in sysfs and
+# lines in uevent that expose the power supply properties.
+#
+# By default all power supplies available are tested. Optionally the name of a
+# power supply can be passed as a parameter to test only that one instead.
+DIR="$(dirname "$(readlink -f "$0")")"
+
+. "${DIR}"/../kselftest/ktap_helpers.sh
+
+. "${DIR}"/helpers.sh
+
+count_tests() {
+	SUPPLIES=$1
+
+	# This needs to be updated every time a new test is added.
+	NUM_TESTS=33
+
+	total_tests=0
+
+	for i in $SUPPLIES; do
+		total_tests=$(("$total_tests" + "$NUM_TESTS"))
+	done
+
+	echo "$total_tests"
+}
+
+ktap_print_header
+
+SYSFS_SUPPLIES=/sys/class/power_supply/
+
+if [ $# -eq 0 ]; then
+	supplies=$(ls "$SYSFS_SUPPLIES")
+else
+	supplies=$1
+fi
+
+ktap_set_plan "$(count_tests "$supplies")"
+
+for DEVNAME in $supplies; do
+	ktap_print_msg Testing device "$DEVNAME"
+
+	if [ ! -d "$SYSFS_SUPPLIES"/"$DEVNAME" ]; then
+		ktap_test_fail "$DEVNAME".exists
+		ktap_exit_fail_msg Device does not exist
+	fi
+
+	ktap_test_pass "$DEVNAME".exists
+
+	test_uevent_prop NAME "$DEVNAME"
+
+	test_sysfs_prop type
+	SUPPLY_TYPE=$(cat "$SYSFS_SUPPLIES"/"$DEVNAME"/type)
+	# This fails on kernels < 5.8 (needs 2ad3d74e3c69f)
+	test_uevent_prop TYPE "$SUPPLY_TYPE"
+
+	test_sysfs_prop_optional usb_type
+
+	test_sysfs_prop_optional_range online 0 2
+	test_sysfs_prop_optional_range present 0 1
+
+	test_sysfs_prop_optional_list status "Unknown","Charging","Discharging","Not charging","Full"
+
+	# Capacity is reported as percentage, thus any value less than 0 and
+	# greater than 100 are not allowed.
+	test_sysfs_prop_optional_range capacity 0 100 "%"
+
+	test_sysfs_prop_optional_list capacity_level "Unknown","Critical","Low","Normal","High","Full"
+
+	test_sysfs_prop_optional model_name
+	test_sysfs_prop_optional manufacturer
+	test_sysfs_prop_optional serial_number
+	test_sysfs_prop_optional_list technology "Unknown","NiMH","Li-ion","Li-poly","LiFe","NiCd","LiMn"
+
+	test_sysfs_prop_optional cycle_count
+
+	test_sysfs_prop_optional_list scope "Unknown","System","Device"
+
+	test_sysfs_prop_optional input_current_limit "uA"
+	test_sysfs_prop_optional input_voltage_limit "uV"
+
+	# Technically the power-supply class does not limit reported values.
+	# E.g. one could expose an RTC backup-battery, which goes below 1.5V or
+	# an electric vehicle battery with over 300V. But most devices do not
+	# have a step-up capable regulator behind the battery and operate with
+	# voltages considered safe to touch, so we limit the allowed range to
+	# 1.8V-60V to catch drivers reporting incorrectly scaled values. E.g. a
+	# common mistake is reporting data in mV instead of µV.
+	test_sysfs_prop_optional_range voltage_now 1800000 60000000 "uV"
+	test_sysfs_prop_optional_range voltage_min 1800000 60000000 "uV"
+	test_sysfs_prop_optional_range voltage_max 1800000 60000000 "uV"
+	test_sysfs_prop_optional_range voltage_min_design 1800000 60000000 "uV"
+	test_sysfs_prop_optional_range voltage_max_design 1800000 60000000 "uV"
+
+	# current based systems
+	test_sysfs_prop_optional current_now "uA"
+	test_sysfs_prop_optional current_max "uA"
+	test_sysfs_prop_optional charge_now "uAh"
+	test_sysfs_prop_optional charge_full "uAh"
+	test_sysfs_prop_optional charge_full_design "uAh"
+
+	# power based systems
+	test_sysfs_prop_optional power_now "uW"
+	test_sysfs_prop_optional energy_now "uWh"
+	test_sysfs_prop_optional energy_full "uWh"
+	test_sysfs_prop_optional energy_full_design "uWh"
+	test_sysfs_prop_optional energy_full_design "uWh"
+done
+
+ktap_finished
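count_tests() in test_power_supply_properties.sh sizes the KTAP plan as a fixed per-supply count (NUM_TESTS) summed over all supplies under test. Sketched with hypothetical supply names instead of a real /sys/class/power_supply listing:

```shell
# Mirrors count_tests(): NUM_TESTS per power supply, summed over supplies.
# "BAT0 AC" is a made-up supply list for illustration only.
NUM_TESTS=33
supplies="BAT0 AC"

total_tests=0
for i in $supplies; do
        total_tests=$((total_tests + NUM_TESTS))
done
echo "$total_tests"
```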
+85 -204
tools/testing/selftests/resctrl/cache.c
···
 #include <stdint.h>
 #include "resctrl.h"
 
-struct read_format {
-	__u64 nr;		/* The number of events */
-	struct {
-		__u64 value;	/* The value of the event */
-	} values[2];
-};
-
-static struct perf_event_attr pea_llc_miss;
-static struct read_format rf_cqm;
-static int fd_lm;
 char llc_occup_path[1024];
 
-static void initialize_perf_event_attr(void)
+void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config)
 {
-	pea_llc_miss.type = PERF_TYPE_HARDWARE;
-	pea_llc_miss.size = sizeof(struct perf_event_attr);
-	pea_llc_miss.read_format = PERF_FORMAT_GROUP;
-	pea_llc_miss.exclude_kernel = 1;
-	pea_llc_miss.exclude_hv = 1;
-	pea_llc_miss.exclude_idle = 1;
-	pea_llc_miss.exclude_callchain_kernel = 1;
-	pea_llc_miss.inherit = 1;
-	pea_llc_miss.exclude_guest = 1;
-	pea_llc_miss.disabled = 1;
+	memset(pea, 0, sizeof(*pea));
+	pea->type = PERF_TYPE_HARDWARE;
+	pea->size = sizeof(*pea);
+	pea->read_format = PERF_FORMAT_GROUP;
+	pea->exclude_kernel = 1;
+	pea->exclude_hv = 1;
+	pea->exclude_idle = 1;
+	pea->exclude_callchain_kernel = 1;
+	pea->inherit = 1;
+	pea->exclude_guest = 1;
+	pea->disabled = 1;
+	pea->config = config;
 }
 
-static void ioctl_perf_event_ioc_reset_enable(void)
+/* Start counters to log values */
+int perf_event_reset_enable(int pe_fd)
 {
-	ioctl(fd_lm, PERF_EVENT_IOC_RESET, 0);
-	ioctl(fd_lm, PERF_EVENT_IOC_ENABLE, 0);
-}
+	int ret;
 
-static int perf_event_open_llc_miss(pid_t pid, int cpu_no)
-{
-	fd_lm = perf_event_open(&pea_llc_miss, pid, cpu_no, -1,
-				PERF_FLAG_FD_CLOEXEC);
-	if (fd_lm == -1) {
-		perror("Error opening leader");
-		ctrlc_handler(0, NULL, NULL);
-		return -1;
-	}
-
-	return 0;
-}
-
-static void initialize_llc_perf(void)
-{
-	memset(&pea_llc_miss, 0, sizeof(struct perf_event_attr));
-	memset(&rf_cqm, 0, sizeof(struct read_format));
-
-	/* Initialize perf_event_attr structures for HW_CACHE_MISSES */
-	initialize_perf_event_attr();
-
-	pea_llc_miss.config = PERF_COUNT_HW_CACHE_MISSES;
-
-	rf_cqm.nr = 1;
-}
-
-static int reset_enable_llc_perf(pid_t pid, int cpu_no)
-{
-	int ret = 0;
-
-	ret = perf_event_open_llc_miss(pid, cpu_no);
+	ret = ioctl(pe_fd, PERF_EVENT_IOC_RESET, 0);
 	if (ret < 0)
 		return ret;
 
-	/* Start counters to log values */
-	ioctl_perf_event_ioc_reset_enable();
+	ret = ioctl(pe_fd, PERF_EVENT_IOC_ENABLE, 0);
+	if (ret < 0)
+		return ret;
 
 	return 0;
 }
 
-/*
- * get_llc_perf:	llc cache miss through perf events
- * @llc_perf_miss:	LLC miss counter that is filled on success
- *
- * Perf events like HW_CACHE_MISSES could be used to validate number of
- * cache lines allocated.
- *
- * Return: =0 on success.  <0 on failure.
- */
-static int get_llc_perf(unsigned long *llc_perf_miss)
+void perf_event_initialize_read_format(struct perf_event_read *pe_read)
 {
-	__u64 total_misses;
-	int ret;
+	memset(pe_read, 0, sizeof(*pe_read));
+	pe_read->nr = 1;
+}
 
-	/* Stop counters after one span to get miss rate */
+int perf_open(struct perf_event_attr *pea, pid_t pid, int cpu_no)
+{
+	int pe_fd;
 
-	ioctl(fd_lm, PERF_EVENT_IOC_DISABLE, 0);
-
-	ret = read(fd_lm, &rf_cqm, sizeof(struct read_format));
-	if (ret == -1) {
-		perror("Could not get llc misses through perf");
+	pe_fd = perf_event_open(pea, pid, cpu_no, -1, PERF_FLAG_FD_CLOEXEC);
+	if (pe_fd == -1) {
+		ksft_perror("Error opening leader");
 		return -1;
 	}
 
-	total_misses = rf_cqm.values[0].value;
-	*llc_perf_miss = total_misses;
+	perf_event_reset_enable(pe_fd);
 
-	return 0;
+	return pe_fd;
 }
 
···
 
 	fp = fopen(llc_occup_path, "r");
 	if (!fp) {
-		perror("Failed to open results file");
+		ksft_perror("Failed to open results file");
 
-		return errno;
+		return -1;
 	}
 	if (fscanf(fp, "%lu", llc_occupancy) <= 0) {
-		perror("Could not get llc occupancy");
+		ksft_perror("Could not get llc occupancy");
 		fclose(fp);
 
 		return -1;
···
  * @llc_value:		perf miss value /
  *			llc occupancy value reported by resctrl FS
  *
- * Return: 0 on success. non-zero on failure.
+ * Return: 0 on success, < 0 on error.
  */
-static int print_results_cache(char *filename, int bm_pid,
-			       unsigned long llc_value)
+static int print_results_cache(const char *filename, int bm_pid, __u64 llc_value)
 {
 	FILE *fp;
 
 	if (strcmp(filename, "stdio") == 0 || strcmp(filename, "stderr") == 0) {
-		printf("Pid: %d \t LLC_value: %lu\n", bm_pid,
-		       llc_value);
+		printf("Pid: %d \t LLC_value: %llu\n", bm_pid, llc_value);
 	} else {
 		fp = fopen(filename, "a");
 		if (!fp) {
-			perror("Cannot open results file");
+			ksft_perror("Cannot open results file");
 
-			return errno;
+			return -1;
 		}
-		fprintf(fp, "Pid: %d \t llc_value: %lu\n", bm_pid, llc_value);
+		fprintf(fp, "Pid: %d \t llc_value: %llu\n", bm_pid, llc_value);
 		fclose(fp);
 	}
 
 	return 0;
 }
 
-int measure_cache_vals(struct resctrl_val_param *param, int bm_pid)
+/*
+ * perf_event_measure - Measure perf events
+ * @filename:	Filename for writing the results
+ * @bm_pid:	PID that runs the benchmark
+ *
+ * Measures perf events (e.g., cache misses) and writes the results into
+ * @filename. @bm_pid is written to the results file along with the measured
+ * value.
+ *
+ * Return: =0 on success. <0 on failure.
+ */
+int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
+		       const char *filename, int bm_pid)
 {
-	unsigned long llc_perf_miss = 0, llc_occu_resc = 0, llc_value = 0;
 	int ret;
 
-	/*
-	 * Measure cache miss from perf.
-	 */
-	if (!strncmp(param->resctrl_val, CAT_STR, sizeof(CAT_STR))) {
-		ret = get_llc_perf(&llc_perf_miss);
-		if (ret < 0)
-			return ret;
-		llc_value = llc_perf_miss;
-	}
-
-	/*
-	 * Measure llc occupancy from resctrl.
-	 */
-	if (!strncmp(param->resctrl_val, CMT_STR, sizeof(CMT_STR))) {
-		ret = get_llc_occu_resctrl(&llc_occu_resc);
-		if (ret < 0)
-			return ret;
-		llc_value = llc_occu_resc;
-	}
-	ret = print_results_cache(param->filename, bm_pid, llc_value);
-	if (ret)
+	/* Stop counters after one span to get miss rate */
+	ret = ioctl(pe_fd, PERF_EVENT_IOC_DISABLE, 0);
+	if (ret < 0)
 		return ret;
 
-	return 0;
+	ret = read(pe_fd, pe_read, sizeof(*pe_read));
+	if (ret == -1) {
+		ksft_perror("Could not get perf value");
+		return -1;
+	}
+
+	return print_results_cache(filename, bm_pid, pe_read->values[0].value);
 }
 
 /*
- * cache_val:		execute benchmark and measure LLC occupancy resctrl
- * and perf cache miss for the benchmark
- * @param:		parameters passed to cache_val()
- * @span:		buffer size for the benchmark
+ * measure_llc_resctrl - Measure resctrl LLC value from resctrl
+ * @filename:	Filename for writing the results
+ * @bm_pid:	PID that runs the benchmark
  *
- * Return:		0 on success. non-zero on failure.
+ * Measures LLC occupancy from resctrl and writes the results into @filename.
+ * @bm_pid is written to the results file along with the measured value.
+ *
+ * Return: =0 on success. <0 on failure.
  */
-int cat_val(struct resctrl_val_param *param, size_t span)
+int measure_llc_resctrl(const char *filename, int bm_pid)
 {
-	int memflush = 1, operation = 0, ret = 0;
-	char *resctrl_val = param->resctrl_val;
-	pid_t bm_pid;
+	unsigned long llc_occu_resc = 0;
+	int ret;
 
-	if (strcmp(param->filename, "") == 0)
-		sprintf(param->filename, "stdio");
-
-	bm_pid = getpid();
-
-	/* Taskset benchmark to specified cpu */
-	ret = taskset_benchmark(bm_pid, param->cpu_no);
-	if (ret)
+	ret = get_llc_occu_resctrl(&llc_occu_resc);
+	if (ret < 0)
 		return ret;
 
-	/* Write benchmark to specified con_mon grp, mon_grp in resctrl FS*/
-	ret = write_bm_pid_to_resctrl(bm_pid, param->ctrlgrp, param->mongrp,
-				      resctrl_val);
-	if (ret)
-		return ret;
-
-	initialize_llc_perf();
-
-	/* Test runs until the callback setup() tells the test to stop. */
-	while (1) {
-		ret = param->setup(param);
-		if (ret == END_OF_TESTS) {
-			ret = 0;
-			break;
-		}
-		if (ret < 0)
-			break;
-		ret = reset_enable_llc_perf(bm_pid, param->cpu_no);
-		if (ret)
-			break;
-
-		if (run_fill_buf(span, memflush, operation, true)) {
-			fprintf(stderr, "Error-running fill buffer\n");
-			ret = -1;
-			goto pe_close;
-		}
-
-		sleep(1);
-		ret = measure_cache_vals(param, bm_pid);
-		if (ret)
-			goto pe_close;
-	}
-
-	return ret;
-
-pe_close:
-	close(fd_lm);
-	return ret;
+	return print_results_cache(filename, bm_pid, llc_occu_resc);
 }
 
 /*
- * show_cache_info:	show cache test result information
- * @sum_llc_val:	sum of LLC cache result data
- * @no_of_bits:		number of bits
- * @cache_span:		cache span in bytes for CMT or in lines for CAT
- * @max_diff:		max difference
- * @max_diff_percent:	max difference percentage
- * @num_of_runs:	number of runs
- * @platform:		show test information on this platform
- * @cmt:		CMT test or CAT test
- *
- * Return: 0 on success. non-zero on failure.
+ * show_cache_info - Show generic cache test information
+ * @no_of_bits:		Number of bits
+ * @avg_llc_val:	Average of LLC cache result data
+ * @cache_span:		Cache span
+ * @lines:		@cache_span in lines or bytes
  */
-int show_cache_info(unsigned long sum_llc_val, int no_of_bits,
-		    size_t cache_span, unsigned long max_diff,
-		    unsigned long max_diff_percent, unsigned long num_of_runs,
-		    bool platform, bool cmt)
+void show_cache_info(int no_of_bits, __u64 avg_llc_val, size_t cache_span, bool lines)
 {
-	unsigned long avg_llc_val = 0;
-	float diff_percent;
-	long avg_diff = 0;
-	int ret;
-
-	avg_llc_val = sum_llc_val / num_of_runs;
-	avg_diff = (long)abs(cache_span - avg_llc_val);
-	diff_percent = ((float)cache_span - avg_llc_val) / cache_span * 100;
-
-	ret = platform && abs((int)diff_percent) > max_diff_percent &&
-	      (cmt ? (abs(avg_diff) > max_diff) : true);
-
-	ksft_print_msg("%s Check cache miss rate within %lu%%\n",
-		       ret ? "Fail:" : "Pass:", max_diff_percent);
-
-	ksft_print_msg("Percent diff=%d\n", abs((int)diff_percent));
 	ksft_print_msg("Number of bits: %d\n", no_of_bits);
-	ksft_print_msg("Average LLC val: %lu\n", avg_llc_val);
-	ksft_print_msg("Cache span (%s): %zu\n", cmt ? "bytes" : "lines",
+	ksft_print_msg("Average LLC val: %llu\n", avg_llc_val);
+	ksft_print_msg("Cache span (%s): %zu\n", lines ? "lines" : "bytes",
 		       cache_span);
-
-	return ret;
 }
+305 -124
tools/testing/selftests/resctrl/cat_test.c
···
 #include "resctrl.h"
 #include <unistd.h>
 
-#define RESULT_FILE_NAME1	"result_cat1"
-#define RESULT_FILE_NAME2	"result_cat2"
+#define RESULT_FILE_NAME	"result_cat"
 #define NUM_OF_RUNS		5
-#define MAX_DIFF_PERCENT	4
-#define MAX_DIFF		1000000
 
 /*
- * Change schemata. Write schemata to specified
- * con_mon grp, mon_grp in resctrl FS.
- * Run 5 times in order to get average values.
+ * Minimum difference in LLC misses between a test with n+1 bits CBM to the
+ * test with n bits is MIN_DIFF_PERCENT_PER_BIT * (n - 1). With e.g. 5 vs 4
+ * bits in the CBM mask, the minimum difference must be at least
+ * MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.
+ *
+ * The relationship between number of used CBM bits and difference in LLC
+ * misses is not expected to be linear. With a small number of bits, the
+ * margin is smaller than with larger number of bits. For selftest purposes,
+ * however, linear approach is enough because ultimately only pass/fail
+ * decision has to be made and distinction between strong and stronger
+ * signal is irrelevant.
  */
-static int cat_setup(struct resctrl_val_param *p)
+#define MIN_DIFF_PERCENT_PER_BIT	1UL
+
+static int show_results_info(__u64 sum_llc_val, int no_of_bits,
+			     unsigned long cache_span,
+			     unsigned long min_diff_percent,
+			     unsigned long num_of_runs, bool platform,
+			     __s64 *prev_avg_llc_val)
 {
-	char schemata[64];
+	__u64 avg_llc_val = 0;
+	float avg_diff;
 	int ret = 0;
 
-	/* Run NUM_OF_RUNS times */
-	if (p->num_of_runs >= NUM_OF_RUNS)
-		return END_OF_TESTS;
+	avg_llc_val = sum_llc_val / num_of_runs;
+	if (*prev_avg_llc_val) {
+		float delta = (__s64)(avg_llc_val - *prev_avg_llc_val);
 
-	if (p->num_of_runs == 0) {
-		sprintf(schemata, "%lx", p->mask);
-		ret = write_schemata(p->ctrlgrp, schemata, p->cpu_no,
-				     p->resctrl_val);
+		avg_diff = delta / *prev_avg_llc_val;
+		ret = platform && (avg_diff * 100) < (float)min_diff_percent;
+
+		ksft_print_msg("%s Check cache miss rate changed more than %.1f%%\n",
+			       ret ? "Fail:" : "Pass:", (float)min_diff_percent);
+
+		ksft_print_msg("Percent diff=%.1f\n", avg_diff * 100);
 	}
-	p->num_of_runs++;
+	*prev_avg_llc_val = avg_llc_val;
+
+	show_cache_info(no_of_bits, avg_llc_val, cache_span, true);
 
 	return ret;
 }
 
-static int check_results(struct resctrl_val_param *param, size_t span)
+/* Remove the highest bit from CBM */
+static unsigned long next_mask(unsigned long current_mask)
+{
+	return current_mask & (current_mask >> 1);
+}
+
+static int check_results(struct resctrl_val_param *param, const char *cache_type,
+			 unsigned long cache_total_size, unsigned long full_cache_mask,
+			 unsigned long current_mask)
 {
 	char *token_array[8], temp[512];
-	unsigned long sum_llc_perf_miss = 0;
-	int runs = 0, no_of_bits = 0;
+	__u64 sum_llc_perf_miss = 0;
+	__s64 prev_avg_llc_val = 0;
+	unsigned long alloc_size;
+	int runs = 0;
+	int fail = 0;
+	int ret;
 	FILE *fp;
 
 	ksft_print_msg("Checking for pass/fail\n");
 	fp = fopen(param->filename, "r");
 	if (!fp) {
-		perror("# Cannot open file");
+		ksft_perror("Cannot open file");
 
-		return errno;
+		return -1;
 	}
 
 	while (fgets(temp, sizeof(temp), fp)) {
 		char *token = strtok(temp, ":\t");
 		int fields = 0;
+		int bits;
 
 		while (token) {
 			token_array[fields++] = token;
 			token = strtok(NULL, ":\t");
 		}
-		/*
-		 * Discard the first value which is inaccurate due to monitoring
-		 * setup transition phase.
-		 */
-		if (runs > 0)
-			sum_llc_perf_miss += strtoul(token_array[3], NULL, 0);
+
+		sum_llc_perf_miss += strtoull(token_array[3], NULL, 0);
 		runs++;
+
+		if (runs < NUM_OF_RUNS)
+			continue;
+
+		if (!current_mask) {
+			ksft_print_msg("Unexpected empty cache mask\n");
+			break;
+		}
+
+		alloc_size = cache_portion_size(cache_total_size, current_mask, full_cache_mask);
+
+		bits = count_bits(current_mask);
+
+		ret = show_results_info(sum_llc_perf_miss, bits,
+					alloc_size / 64,
+					MIN_DIFF_PERCENT_PER_BIT * (bits - 1),
+					runs, get_vendor() == ARCH_INTEL,
+					&prev_avg_llc_val);
+		if (ret)
+			fail = 1;
+
+		runs = 0;
+		sum_llc_perf_miss = 0;
+		current_mask = next_mask(current_mask);
 	}
 
 	fclose(fp);
-	no_of_bits = count_bits(param->mask);
 
-	return show_cache_info(sum_llc_perf_miss, no_of_bits, span / 64,
-			       MAX_DIFF, MAX_DIFF_PERCENT, runs - 1,
-			       get_vendor() == ARCH_INTEL, false);
+	return fail;
 }
 
 void cat_test_cleanup(void)
 {
-	remove(RESULT_FILE_NAME1);
-	remove(RESULT_FILE_NAME2);
+	remove(RESULT_FILE_NAME);
 }
 
-int cat_perf_miss_val(int cpu_no, int n, char *cache_type)
+/*
+ * cat_test - Execute CAT benchmark and measure cache misses
+ * @test:		Test information structure
+ * @uparams:		User supplied parameters
+ * @param:		Parameters passed to cat_test()
+ * @span:		Buffer size for the benchmark
+ * @current_mask	Start mask for the first iteration
+ *
+ * Run CAT selftest by varying the allocated cache portion and comparing the
+ * impact on cache misses (the result analysis is done in check_results()
+ * and show_results_info(), not in this function).
+ *
+ * One bit is removed from the CAT allocation bit mask (in current_mask) for
+ * each subsequent test which keeps reducing the size of the allocated cache
+ * portion. A single test flushes the buffer, reads it to warm up the cache,
+ * and reads the buffer again. The cache misses are measured during the last
+ * read pass.
+ *
+ * Return:		0 when the test was run, < 0 on error.
+ */
+static int cat_test(const struct resctrl_test *test,
+		    const struct user_params *uparams,
+		    struct resctrl_val_param *param,
+		    size_t span, unsigned long current_mask)
 {
-	unsigned long l_mask, l_mask_1;
-	int ret, pipefd[2], sibling_cpu_no;
-	unsigned long cache_size = 0;
-	unsigned long long_mask;
-	char cbm_mask[256];
-	int count_of_bits;
-	char pipe_message;
-	size_t span;
+	char *resctrl_val = param->resctrl_val;
+	struct perf_event_read pe_read;
+	struct perf_event_attr pea;
+	cpu_set_t old_affinity;
+	unsigned char *buf;
+	char schemata[64];
+	int ret, i, pe_fd;
+	pid_t bm_pid;
 
-	/* Get default cbm mask for L3/L2 cache */
-	ret = get_cbm_mask(cache_type, cbm_mask);
+	if (strcmp(param->filename, "") == 0)
+		sprintf(param->filename, "stdio");
+
+	bm_pid = getpid();
+
+	/* Taskset benchmark to specified cpu */
+	ret = taskset_benchmark(bm_pid, uparams->cpu, &old_affinity);
 	if (ret)
 		return ret;
 
-	long_mask = strtoul(cbm_mask, NULL, 16);
+	/* Write benchmark to specified con_mon grp, mon_grp in resctrl FS*/
+	ret = write_bm_pid_to_resctrl(bm_pid, param->ctrlgrp, param->mongrp,
+				      resctrl_val);
+	if (ret)
+		goto reset_affinity;
+
+	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
+	perf_event_initialize_read_format(&pe_read);
+	pe_fd = perf_open(&pea, bm_pid, uparams->cpu);
+	if (pe_fd < 0) {
+		ret = -1;
+		goto reset_affinity;
+	}
+
+	buf = alloc_buffer(span, 1);
+	if (!buf) {
+		ret = -1;
+		goto pe_close;
+	}
+
+	while (current_mask) {
+		snprintf(schemata, sizeof(schemata), "%lx", param->mask & ~current_mask);
+		ret = write_schemata("", schemata, uparams->cpu, test->resource);
+		if (ret)
+			goto free_buf;
+		snprintf(schemata, sizeof(schemata), "%lx", current_mask);
+		ret = write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
+		if (ret)
+			goto free_buf;
+
+		for (i = 0; i < NUM_OF_RUNS; i++) {
+			mem_flush(buf, span);
+			fill_cache_read(buf, span, true);
+
+			ret = perf_event_reset_enable(pe_fd);
+			if (ret)
+				goto free_buf;
+
+			fill_cache_read(buf, span, true);
+
+			ret = perf_event_measure(pe_fd, &pe_read, param->filename, bm_pid);
+			if (ret)
+				goto free_buf;
+		}
+		current_mask = next_mask(current_mask);
+	}
+
+free_buf:
+	free(buf);
+pe_close:
+	close(pe_fd);
+reset_affinity:
+	taskset_restore(bm_pid, &old_affinity);
+
+	return ret;
+}
+
+static int cat_run_test(const struct resctrl_test *test, const struct user_params *uparams)
+{
+	unsigned long long_mask, start_mask, full_cache_mask;
+	unsigned long cache_total_size = 0;
+	int n = uparams->bits;
+	unsigned int start;
+	int count_of_bits;
+	size_t span;
+	int ret;
+
+	ret = get_full_cbm(test->resource, &full_cache_mask);
+	if (ret)
+		return ret;
+	/* Get the largest contiguous exclusive portion of the cache */
+	ret = get_mask_no_shareable(test->resource, &long_mask);
+	if (ret)
+		return ret;
 
 	/* Get L3/L2 cache size */
-	ret = get_cache_size(cpu_no, cache_type, &cache_size);
+	ret = get_cache_size(uparams->cpu, test->resource, &cache_total_size);
 	if (ret)
 		return ret;
-	ksft_print_msg("Cache size :%lu\n", cache_size);
+	ksft_print_msg("Cache size :%lu\n", cache_total_size);
 
-	/* Get max number of bits from default-cabm mask */
-	count_of_bits = count_bits(long_mask);
+	count_of_bits = count_contiguous_bits(long_mask, &start);
 
 	if (!n)
 		n = count_of_bits / 2;
···
 			       count_of_bits - 1);
 		return -1;
 	}
-
-	/* Get core id from same socket for running another thread */
-	sibling_cpu_no = get_core_sibling(cpu_no);
-	if (sibling_cpu_no < 0)
-		return -1;
+	start_mask = create_bit_mask(start, n);
 
 	struct resctrl_val_param param = {
 		.resctrl_val = CAT_STR,
-		.cpu_no = cpu_no,
-		.setup = cat_setup,
+		.ctrlgrp = "c1",
+		.filename = RESULT_FILE_NAME,
+		.num_of_runs = 0,
 	};
-
-	l_mask = long_mask >> n;
-	l_mask_1 = ~l_mask & long_mask;
-
-	/* Set param values for parent thread which will be allocated bitmask
-	 * with (max_bits - n) bits
-	 */
-	span = cache_size * (count_of_bits - n) / count_of_bits;
-	strcpy(param.ctrlgrp, "c2");
-	strcpy(param.mongrp, "m2");
-	strcpy(param.filename, RESULT_FILE_NAME2);
-	param.mask = l_mask;
-	param.num_of_runs = 0;
-
-	if (pipe(pipefd)) {
-		perror("# Unable to create pipe");
-		return errno;
-	}
-
-	fflush(stdout);
-	bm_pid = fork();
-
-	/* Set param values for child thread which will be allocated bitmask
-	 * with n bits
-	 */
-	if (bm_pid == 0) {
-		param.mask = l_mask_1;
-		strcpy(param.ctrlgrp, "c1");
-		strcpy(param.mongrp, "m1");
-		span = cache_size * n / count_of_bits;
-		strcpy(param.filename, RESULT_FILE_NAME1);
-		param.num_of_runs = 0;
-		param.cpu_no = sibling_cpu_no;
-	}
+	param.mask = long_mask;
+	span = cache_portion_size(cache_total_size, start_mask, full_cache_mask);
 
 	remove(param.filename);
 
-	ret = cat_val(&param, span);
-	if (ret == 0)
-		ret = check_results(&param, span);
+	ret = cat_test(test, uparams, &param, span, start_mask);
+	if (ret)
+		goto out;
 
-	if (bm_pid == 0) {
-		/* Tell parent that child is ready */
-		close(pipefd[0]);
-		pipe_message = 1;
-		if (write(pipefd[1], &pipe_message, sizeof(pipe_message)) <
-		    sizeof(pipe_message))
-			/*
-			 * Just print the error message.
-			 * Let while(1) run and wait for itself to be killed.
-			 */
-			perror("# failed signaling parent process");
-
-		close(pipefd[1]);
-		while (1)
-			;
-	} else {
-		/* Parent waits for child to be ready. */
-		close(pipefd[1]);
-		pipe_message = 0;
-		while (pipe_message != 1) {
-			if (read(pipefd[0], &pipe_message,
-				 sizeof(pipe_message)) < sizeof(pipe_message)) {
-				perror("# failed reading from child process");
-				break;
-			}
-		}
-		close(pipefd[0]);
-		kill(bm_pid, SIGKILL);
-	}
-
+	ret = check_results(&param, test->resource,
+			    cache_total_size, full_cache_mask, start_mask);
+out:
 	cat_test_cleanup();
 
 	return ret;
 }
+
+static int noncont_cat_run_test(const struct resctrl_test *test,
+				const struct user_params *uparams)
+{
+	unsigned long full_cache_mask, cont_mask, noncont_mask;
+	unsigned int eax, ebx, ecx, edx, sparse_masks;
+	int bit_center, ret;
+	char schemata[64];
+
+	/* Check to compare sparse_masks content to CPUID output. */
+	ret = resource_info_unsigned_get(test->resource, "sparse_masks", &sparse_masks);
+	if (ret)
+		return ret;
+
+	if (!strcmp(test->resource, "L3"))
+		__cpuid_count(0x10, 1, eax, ebx, ecx, edx);
+	else if (!strcmp(test->resource, "L2"))
+		__cpuid_count(0x10, 2, eax, ebx, ecx, edx);
+	else
+		return -EINVAL;
+
+	if (sparse_masks != ((ecx >> 3) & 1)) {
+		ksft_print_msg("CPUID output doesn't match 'sparse_masks' file content!\n");
+		return 1;
+	}
+
+	/* Write checks initialization. */
+	ret = get_full_cbm(test->resource, &full_cache_mask);
+	if (ret < 0)
+		return ret;
+	bit_center = count_bits(full_cache_mask) / 2;
+
+	/*
+	 * The bit_center needs to be at least 3 to properly calculate the CBM
+	 * hole in the noncont_mask. If it's smaller return an error since the
+	 * cache mask is too short and that shouldn't happen.
+	 */
+	if (bit_center < 3)
+		return -EINVAL;
+	cont_mask = full_cache_mask >> bit_center;
+
+	/* Contiguous mask write check. */
+	snprintf(schemata, sizeof(schemata), "%lx", cont_mask);
+	ret = write_schemata("", schemata, uparams->cpu, test->resource);
+	if (ret) {
+		ksft_print_msg("Write of contiguous CBM failed\n");
+		return 1;
+	}
+
+	/*
+	 * Non-contiguous mask write check. CBM has a 0xf hole approximately in the middle.
+	 * Output is compared with support information to catch any edge case errors.
+	 */
+	noncont_mask = ~(0xfUL << (bit_center - 2)) & full_cache_mask;
+	snprintf(schemata, sizeof(schemata), "%lx", noncont_mask);
+	ret = write_schemata("", schemata, uparams->cpu, test->resource);
+	if (ret && sparse_masks)
+		ksft_print_msg("Non-contiguous CBMs supported but write of non-contiguous CBM failed\n");
+	else if (ret && !sparse_masks)
+		ksft_print_msg("Non-contiguous CBMs not supported and write of non-contiguous CBM failed as expected\n");
+	else if (!ret && !sparse_masks)
+		ksft_print_msg("Non-contiguous CBMs not supported but write of non-contiguous CBM succeeded\n");
+
+	return !ret == !sparse_masks;
+}
+
+static bool noncont_cat_feature_check(const struct resctrl_test *test)
+{
+	if (!resctrl_resource_exists(test->resource))
+		return false;
+
+	return resource_info_file_exists(test->resource, "sparse_masks");
+}
+
+struct resctrl_test l3_cat_test = {
+	.name = "L3_CAT",
+	.group = "CAT",
+	.resource = "L3",
+	.feature_check = test_resource_feature_check,
+	.run_test = cat_run_test,
+};
+
+struct resctrl_test l3_noncont_cat_test = {
+	.name = "L3_NONCONT_CAT",
+	.group = "CAT",
+	.resource = "L3",
+	.feature_check = noncont_cat_feature_check,
+	.run_test = noncont_cat_run_test,
+};
+
+struct resctrl_test l2_noncont_cat_test = {
+	.name = "L2_NONCONT_CAT",
+	.group = "CAT",
+	.resource = "L2",
+	.feature_check = noncont_cat_feature_check,
+	.run_test = noncont_cat_run_test,
+};
+60 -20
tools/testing/selftests/resctrl/cmt_test.c
···
 #define MAX_DIFF		2000000
 #define MAX_DIFF_PERCENT	15
 
-static int cmt_setup(struct resctrl_val_param *p)
+static int cmt_setup(const struct resctrl_test *test,
+		     const struct user_params *uparams,
+		     struct resctrl_val_param *p)
 {
 	/* Run NUM_OF_RUNS times */
 	if (p->num_of_runs >= NUM_OF_RUNS)
···
 	p->num_of_runs++;
 
 	return 0;
+}
+
+static int show_results_info(unsigned long sum_llc_val, int no_of_bits,
+			     unsigned long cache_span, unsigned long max_diff,
+			     unsigned long max_diff_percent, unsigned long num_of_runs,
+			     bool platform)
+{
+	unsigned long avg_llc_val = 0;
+	float diff_percent;
+	long avg_diff = 0;
+	int ret;
+
+	avg_llc_val = sum_llc_val / num_of_runs;
+	avg_diff = (long)abs(cache_span - avg_llc_val);
+	diff_percent = ((float)cache_span - avg_llc_val) / cache_span * 100;
+
+	ret = platform && abs((int)diff_percent) > max_diff_percent &&
+	      abs(avg_diff) > max_diff;
+
+	ksft_print_msg("%s Check cache miss rate within %lu%%\n",
+		       ret ? "Fail:" : "Pass:", max_diff_percent);
+
+	ksft_print_msg("Percent diff=%d\n", abs((int)diff_percent));
+
+	show_cache_info(no_of_bits, avg_llc_val, cache_span, false);
+
+	return ret;
 }
 
 static int check_results(struct resctrl_val_param *param, size_t span, int no_of_bits)
···
 	ksft_print_msg("Checking for pass/fail\n");
 	fp = fopen(param->filename, "r");
 	if (!fp) {
-		perror("# Error in opening file\n");
+		ksft_perror("Error in opening file");
 
-		return errno;
+		return -1;
 	}
 
 	while (fgets(temp, sizeof(temp), fp)) {
···
 	}
 	fclose(fp);
 
-	return show_cache_info(sum_llc_occu_resc, no_of_bits, span,
-			       MAX_DIFF, MAX_DIFF_PERCENT, runs - 1,
-			       true, true);
+	return show_results_info(sum_llc_occu_resc, no_of_bits, span,
+				 MAX_DIFF, MAX_DIFF_PERCENT, runs - 1, true);
 }
 
 void cmt_test_cleanup(void)
···
 	remove(RESULT_FILE_NAME);
 }
 
-int cmt_resctrl_val(int cpu_no, int n, const char * const *benchmark_cmd)
+static int cmt_run_test(const struct resctrl_test *test, const struct user_params *uparams)
 {
-	const char * const *cmd = benchmark_cmd;
+	const char * const *cmd = uparams->benchmark_cmd;
 	const char *new_cmd[BENCHMARK_ARGS];
-	unsigned long cache_size = 0;
+	unsigned long cache_total_size = 0;
+	int n = uparams->bits ? : 5;
 	unsigned long long_mask;
 	char *span_str = NULL;
-	char cbm_mask[256];
 	int count_of_bits;
 	size_t span;
 	int ret, i;
 
-	ret = get_cbm_mask("L3", cbm_mask);
+	ret = get_full_cbm("L3", &long_mask);
 	if (ret)
 		return ret;
 
-	long_mask = strtoul(cbm_mask, NULL, 16);
-
-	ret = get_cache_size(cpu_no, "L3", &cache_size);
+	ret = get_cache_size(uparams->cpu, "L3", &cache_total_size);
 	if (ret)
 		return ret;
-	ksft_print_msg("Cache size :%lu\n", cache_size);
+	ksft_print_msg("Cache size :%lu\n", cache_total_size);
 
 	count_of_bits = count_bits(long_mask);
 
···
 		.resctrl_val = CMT_STR,
 		.ctrlgrp = "c1",
 		.mongrp = "m1",
-		.cpu_no = cpu_no,
 		.filename = RESULT_FILE_NAME,
 		.mask = ~(long_mask << n) & long_mask,
 		.num_of_runs = 0,
 		.setup = cmt_setup,
 	};
 
-	span = cache_size * n / count_of_bits;
+	span = cache_portion_size(cache_total_size, param.mask, long_mask);
 
 	if (strcmp(cmd[0], "fill_buf") == 0) {
 		/* Duplicate the command to be able to replace span in it */
-		for (i = 0; benchmark_cmd[i]; i++)
-			new_cmd[i] = benchmark_cmd[i];
+		for (i = 0; uparams->benchmark_cmd[i]; i++)
+			new_cmd[i] = uparams->benchmark_cmd[i];
 		new_cmd[i] = NULL;
 
 		ret = asprintf(&span_str, "%zu", span);
···
 
 	remove(RESULT_FILE_NAME);
 
-	ret = resctrl_val(cmd, &param);
+	ret = resctrl_val(test, uparams, cmd, &param);
 	if (ret)
 		goto out;
 
 	ret = check_results(&param, span, n);
+	if (ret && (get_vendor() == ARCH_INTEL))
+		ksft_print_msg("Intel CMT may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
 
 out:
 	cmt_test_cleanup();
···
 
 	return ret;
 }
+
+static bool cmt_feature_check(const struct resctrl_test *test)
+{
+	return test_resource_feature_check(test) &&
+	       resctrl_mon_feature_exists("L3_MON", "llc_occupancy");
+}
+
+struct resctrl_test cmt_test = {
+	.name = "CMT",
+	.resource = "L3",
+	.feature_check = cmt_feature_check,
+	.run_test = cmt_run_test,
+};
+60 -70
tools/testing/selftests/resctrl/fill_buf.c
···
 #endif
 }
 
-static void mem_flush(unsigned char *buf, size_t buf_size)
+void mem_flush(unsigned char *buf, size_t buf_size)
 {
 	unsigned char *cp = buf;
 	size_t i = 0;
···
 	sb();
 }
 
-static void *malloc_and_init_memory(size_t buf_size)
-{
-	void *p = NULL;
-	uint64_t *p64;
-	size_t s64;
-	int ret;
-
-	ret = posix_memalign(&p, PAGE_SIZE, buf_size);
-	if (ret < 0)
-		return NULL;
-
-	p64 = (uint64_t *)p;
-	s64 = buf_size / sizeof(uint64_t);
-
-	while (s64 > 0) {
-		*p64 = (uint64_t)rand();
-		p64 += (CL_SIZE / sizeof(uint64_t));
-		s64 -= (CL_SIZE / sizeof(uint64_t));
-	}
-
-	return p;
-}
+/*
+ * Buffer index step advance to workaround HW prefetching interfering with
+ * the measurements.
+ *
+ * Must be a prime to step through all indexes of the buffer.
+ *
+ * Some primes work better than others on some architectures (from MBA/MBM
+ * result stability point of view).
+ */
+#define FILL_IDX_MULT	23
 
 static int fill_one_span_read(unsigned char *buf, size_t buf_size)
 {
-	unsigned char *end_ptr = buf + buf_size;
-	unsigned char sum, *p;
+	unsigned int size = buf_size / (CL_SIZE / 2);
+	unsigned int i, idx = 0;
+	unsigned char sum = 0;
 
-	sum = 0;
-	p = buf;
-	while (p < end_ptr) {
-		sum += *p;
-		p += (CL_SIZE / 2);
+	/*
+	 * Read the buffer in an order that is unexpected by HW prefetching
+	 * optimizations to prevent them interfering with the caching pattern.
+	 *
+	 * The read order is (in terms of halves of cachelines):
+	 *	i * FILL_IDX_MULT % size
+	 * The formula is open-coded below to avoiding modulo inside the loop
+	 * as it improves MBA/MBM result stability on some architectures.
+	 */
+	for (i = 0; i < size; i++) {
+		sum += buf[idx * (CL_SIZE / 2)];
+
+		idx += FILL_IDX_MULT;
+		while (idx >= size)
+			idx -= size;
 	}
 
 	return sum;
···
 	}
 }
 
-static int fill_cache_read(unsigned char *buf, size_t buf_size, bool once)
+void fill_cache_read(unsigned char *buf, size_t buf_size, bool once)
 {
 	int ret = 0;
-	FILE *fp;
 
 	while (1) {
 		ret = fill_one_span_read(buf, buf_size);
···
 	}
 
 	/* Consume read result so that reading memory is not optimized out. */
-	fp = fopen("/dev/null", "w");
-	if (!fp) {
-		perror("Unable to write to /dev/null");
-		return -1;
-	}
-	fprintf(fp, "Sum: %d ", ret);
-	fclose(fp);
-
-	return 0;
+	*value_sink = ret;
 }
 
-static int fill_cache_write(unsigned char *buf, size_t buf_size, bool once)
+static void fill_cache_write(unsigned char *buf, size_t buf_size, bool once)
 {
 	while (1) {
 		fill_one_span_write(buf, buf_size);
 		if (once)
 			break;
 	}
-
-	return 0;
 }
 
-static int fill_cache(size_t buf_size, int memflush, int op, bool once)
+unsigned char *alloc_buffer(size_t buf_size, int memflush)
 {
-	unsigned char *buf;
+	void *buf = NULL;
+	uint64_t *p64;
+	size_t s64;
 	int ret;
 
-	buf = malloc_and_init_memory(buf_size);
-	if (!buf)
-		return -1;
+	ret = posix_memalign(&buf, PAGE_SIZE, buf_size);
+	if (ret < 0)
+		return NULL;
+
+	/* Initialize the buffer */
+	p64 = buf;
+	s64 = buf_size / sizeof(uint64_t);
+
+	while (s64 > 0) {
+		*p64 = (uint64_t)rand();
+		p64 += (CL_SIZE / sizeof(uint64_t));
+		s64 -= (CL_SIZE / sizeof(uint64_t));
+	}
 
 	/* Flush the memory before using to avoid "cache hot pages" effect */
 	if (memflush)
 		mem_flush(buf, buf_size);
 
-	if (op == 0)
-		ret = fill_cache_read(buf, buf_size, once);
-	else
-		ret = fill_cache_write(buf, buf_size, once);
-
-	free(buf);
-
-	if (ret) {
-		printf("\n Error in fill cache read/write...\n");
-		return -1;
-	}
-
-	return 0;
+	return buf;
 }
 
-int run_fill_buf(size_t span, int memflush, int op, bool once)
+int run_fill_buf(size_t buf_size, int memflush, int op, bool once)
 {
-	size_t cache_size = span;
-	int ret;
+	unsigned char *buf;
 
-	ret = fill_cache(cache_size, memflush, op, once);
-	if (ret) {
-		printf("\n Error in fill cache\n");
+	buf = alloc_buffer(buf_size, memflush);
+	if (!buf)
 		return -1;
-	}
+
+	if (op == 0)
+		fill_cache_read(buf, buf_size, once);
+	else
+		fill_cache_write(buf, buf_size, once);
+	free(buf);
 
 	return 0;
 }
+22 -8
tools/testing/selftests/resctrl/mba_test.c
···
  * con_mon grp, mon_grp in resctrl FS.
  * For each allocation, run 5 times in order to get average values.
  */
-static int mba_setup(struct resctrl_val_param *p)
+static int mba_setup(const struct resctrl_test *test,
+		     const struct user_params *uparams,
+		     struct resctrl_val_param *p)
 {
 	static int runs_per_allocation, allocation = 100;
 	char allocation_str[64];
···
 
 	sprintf(allocation_str, "%d", allocation);
 
-	ret = write_schemata(p->ctrlgrp, allocation_str, p->cpu_no,
-			     p->resctrl_val);
+	ret = write_schemata(p->ctrlgrp, allocation_str, uparams->cpu, test->resource);
 	if (ret < 0)
 		return ret;
 
···
 
 	fp = fopen(output, "r");
 	if (!fp) {
-		perror(output);
+		ksft_perror(output);
 
-		return errno;
+		return -1;
 	}
 
 	runs = 0;
···
 	remove(RESULT_FILE_NAME);
 }
 
-int mba_schemata_change(int cpu_no, const char * const *benchmark_cmd)
+static int mba_run_test(const struct resctrl_test *test, const struct user_params *uparams)
 {
 	struct resctrl_val_param param = {
 		.resctrl_val = MBA_STR,
 		.ctrlgrp = "c1",
 		.mongrp = "m1",
-		.cpu_no = cpu_no,
 		.filename = RESULT_FILE_NAME,
 		.bw_report = "reads",
 		.setup = mba_setup
···
 
 	remove(RESULT_FILE_NAME);
 
-	ret = resctrl_val(benchmark_cmd, &param);
+	ret = resctrl_val(test, uparams, uparams->benchmark_cmd, &param);
 	if (ret)
 		goto out;
 
···
 
 	return ret;
 }
+
+static bool mba_feature_check(const struct resctrl_test *test)
+{
+	return test_resource_feature_check(test) &&
+	       resctrl_mon_feature_exists("L3_MON", "mbm_local_bytes");
+}
+
+struct resctrl_test mba_test = {
+	.name = "MBA",
+	.resource = "MB",
+	.vendor_specific = ARCH_INTEL,
+	.feature_check = mba_feature_check,
+	.run_test = mba_run_test,
+};
+25 -9
tools/testing/selftests/resctrl/mbm_test.c
···
 
 	fp = fopen(output, "r");
 	if (!fp) {
-		perror(output);
+		ksft_perror(output);
 
-		return errno;
+		return -1;
 	}
 
 	runs = 0;
···
 	return ret;
 }
 
-static int mbm_setup(struct resctrl_val_param *p)
+static int mbm_setup(const struct resctrl_test *test,
+		     const struct user_params *uparams,
+		     struct resctrl_val_param *p)
 {
 	int ret = 0;
 
···
 		return END_OF_TESTS;
 
 	/* Set up shemata with 100% allocation on the first run. */
-	if (p->num_of_runs == 0 && validate_resctrl_feature_request("MB", NULL))
-		ret = write_schemata(p->ctrlgrp, "100", p->cpu_no,
-				     p->resctrl_val);
+	if (p->num_of_runs == 0 && resctrl_resource_exists("MB"))
+		ret = write_schemata(p->ctrlgrp, "100", uparams->cpu, test->resource);
 
 	p->num_of_runs++;
 
···
 	remove(RESULT_FILE_NAME);
 }
 
-int mbm_bw_change(int cpu_no, const char * const *benchmark_cmd)
+static int mbm_run_test(const struct resctrl_test *test, const struct user_params *uparams)
 {
 	struct resctrl_val_param param = {
 		.resctrl_val = MBM_STR,
 		.ctrlgrp = "c1",
 		.mongrp = "m1",
-		.cpu_no = cpu_no,
 		.filename = RESULT_FILE_NAME,
 		.bw_report = "reads",
 		.setup = mbm_setup
···
 
 	remove(RESULT_FILE_NAME);
 
-	ret = resctrl_val(benchmark_cmd, &param);
+	ret = resctrl_val(test, uparams, uparams->benchmark_cmd, &param);
 	if (ret)
 		goto out;
 
 	ret = check_results(DEFAULT_SPAN);
+	if (ret && (get_vendor() == ARCH_INTEL))
+		ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
 
 out:
 	mbm_test_cleanup();
 
 	return ret;
 }
+
+static bool mbm_feature_check(const struct resctrl_test *test)
+{
+	return resctrl_mon_feature_exists("L3_MON", "mbm_total_bytes") &&
+	       resctrl_mon_feature_exists("L3_MON", "mbm_local_bytes");
+}
+
+struct resctrl_test mbm_test = {
+	.name = "MBM",
+	.resource = "MB",
+	.vendor_specific = ARCH_INTEL,
+	.feature_check = mbm_feature_check,
+	.run_test = mbm_run_test,
+};
+120 -25
tools/testing/selftests/resctrl/resctrl.h
```diff
···
 #define PHYS_ID_PATH		"/sys/devices/system/cpu/cpu"
 #define INFO_PATH		"/sys/fs/resctrl/info"
 
+/*
+ * CPU vendor IDs
+ *
+ * Define as bits because they're used for vendor_specific bitmask in
+ * the struct resctrl_test.
+ */
 #define ARCH_INTEL	1
 #define ARCH_AMD	2
···
 #define DEFAULT_SPAN	(250 * MB)
 
-#define PARENT_EXIT(err_msg)			\
+#define PARENT_EXIT()				\
 	do {					\
-		perror(err_msg);		\
 		kill(ppid, SIGKILL);		\
 		umount_resctrlfs();		\
 		exit(EXIT_FAILURE);		\
 	} while (0)
 
 /*
+ * user_params:		User supplied parameters
+ * @cpu:		CPU number to which the benchmark will be bound to
+ * @bits:		Number of bits used for cache allocation size
+ * @benchmark_cmd:	Benchmark command to run during (some of the) tests
+ */
+struct user_params {
+	int cpu;
+	int bits;
+	const char *benchmark_cmd[BENCHMARK_ARGS];
+};
+
+/*
+ * resctrl_test:	resctrl test definition
+ * @name:		Test name
+ * @group:		Test group - a common name for tests that share some characteristic
+ *			(e.g., L3 CAT test belongs to the CAT group). Can be NULL
+ * @resource:		Resource to test (e.g., MB, L3, L2, etc.)
+ * @vendor_specific:	Bitmask for vendor-specific tests (can be 0 for universal tests)
+ * @disabled:		Test is disabled
+ * @feature_check:	Callback to check required resctrl features
+ * @run_test:		Callback to run the test
+ */
+struct resctrl_test {
+	const char	*name;
+	const char	*group;
+	const char	*resource;
+	unsigned int	vendor_specific;
+	bool		disabled;
+	bool		(*feature_check)(const struct resctrl_test *test);
+	int		(*run_test)(const struct resctrl_test *test,
+				    const struct user_params *uparams);
+};
+
+/*
  * resctrl_val_param:	resctrl test parameters
  * @resctrl_val:	Resctrl feature (Eg: mbm, mba.. etc)
  * @ctrlgrp:		Name of the control monitor group (con_mon grp)
  * @mongrp:		Name of the monitor group (mon grp)
- * @cpu_no:		CPU number to which the benchmark would be binded
  * @filename:		Name of file to which the o/p should be written
  * @bw_report:		Bandwidth report type (reads vs writes)
  * @setup:		Call back function to setup test environment
···
 	char		*resctrl_val;
 	char		ctrlgrp[64];
 	char		mongrp[64];
-	int		cpu_no;
 	char		filename[64];
 	char		*bw_report;
 	unsigned long	mask;
 	int		num_of_runs;
-	int		(*setup)(struct resctrl_val_param *param);
+	int		(*setup)(const struct resctrl_test *test,
+				 const struct user_params *uparams,
+				 struct resctrl_val_param *param);
+};
+
+struct perf_event_read {
+	__u64 nr;			/* The number of events */
+	struct {
+		__u64 value;		/* The value of the event */
+	} values[2];
 };
 
 #define MBM_STR			"mbm"
 #define MBA_STR			"mba"
 #define CMT_STR			"cmt"
 #define CAT_STR			"cat"
+
+/*
+ * Memory location that consumes values compiler must not optimize away.
+ * Volatile ensures writes to this location cannot be optimized away by
+ * compiler.
+ */
+extern volatile int *value_sink;
 
 extern pid_t bm_pid, ppid;
···
 int get_vendor(void);
 bool check_resctrlfs_support(void);
 int filter_dmesg(void);
-int get_resource_id(int cpu_no, int *resource_id);
+int get_domain_id(const char *resource, int cpu_no, int *domain_id);
 int mount_resctrlfs(void);
 int umount_resctrlfs(void);
 int validate_bw_report_request(char *bw_report);
-bool validate_resctrl_feature_request(const char *resource, const char *feature);
+bool resctrl_resource_exists(const char *resource);
+bool resctrl_mon_feature_exists(const char *resource, const char *feature);
+bool resource_info_file_exists(const char *resource, const char *file);
+bool test_resource_feature_check(const struct resctrl_test *test);
 char *fgrep(FILE *inf, const char *str);
-int taskset_benchmark(pid_t bm_pid, int cpu_no);
-int write_schemata(char *ctrlgrp, char *schemata, int cpu_no,
-		   char *resctrl_val);
+int taskset_benchmark(pid_t bm_pid, int cpu_no, cpu_set_t *old_affinity);
+int taskset_restore(pid_t bm_pid, cpu_set_t *old_affinity);
+int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, const char *resource);
 int write_bm_pid_to_resctrl(pid_t bm_pid, char *ctrlgrp, char *mongrp,
 			    char *resctrl_val);
 int perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu,
 		    int group_fd, unsigned long flags);
-int run_fill_buf(size_t span, int memflush, int op, bool once);
-int resctrl_val(const char * const *benchmark_cmd, struct resctrl_val_param *param);
-int mbm_bw_change(int cpu_no, const char * const *benchmark_cmd);
+unsigned char *alloc_buffer(size_t buf_size, int memflush);
+void mem_flush(unsigned char *buf, size_t buf_size);
+void fill_cache_read(unsigned char *buf, size_t buf_size, bool once);
+int run_fill_buf(size_t buf_size, int memflush, int op, bool once);
+int resctrl_val(const struct resctrl_test *test,
+		const struct user_params *uparams,
+		const char * const *benchmark_cmd,
+		struct resctrl_val_param *param);
 void tests_cleanup(void);
 void mbm_test_cleanup(void);
-int mba_schemata_change(int cpu_no, const char * const *benchmark_cmd);
 void mba_test_cleanup(void);
-int get_cbm_mask(char *cache_type, char *cbm_mask);
-int get_cache_size(int cpu_no, char *cache_type, unsigned long *cache_size);
+unsigned long create_bit_mask(unsigned int start, unsigned int len);
+unsigned int count_contiguous_bits(unsigned long val, unsigned int *start);
+int get_full_cbm(const char *cache_type, unsigned long *mask);
+int get_mask_no_shareable(const char *cache_type, unsigned long *mask);
+int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size);
+int resource_info_unsigned_get(const char *resource, const char *filename, unsigned int *val);
 void ctrlc_handler(int signum, siginfo_t *info, void *ptr);
 int signal_handler_register(void);
 void signal_handler_unregister(void);
-int cat_val(struct resctrl_val_param *param, size_t span);
 void cat_test_cleanup(void);
-int cat_perf_miss_val(int cpu_no, int no_of_bits, char *cache_type);
-int cmt_resctrl_val(int cpu_no, int n, const char * const *benchmark_cmd);
 unsigned int count_bits(unsigned long n);
 void cmt_test_cleanup(void);
-int get_core_sibling(int cpu_no);
-int measure_cache_vals(struct resctrl_val_param *param, int bm_pid);
-int show_cache_info(unsigned long sum_llc_val, int no_of_bits,
-		    size_t cache_span, unsigned long max_diff,
-		    unsigned long max_diff_percent, unsigned long num_of_runs,
-		    bool platform, bool cmt);
+
+void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config);
+void perf_event_initialize_read_format(struct perf_event_read *pe_read);
+int perf_open(struct perf_event_attr *pea, pid_t pid, int cpu_no);
+int perf_event_reset_enable(int pe_fd);
+int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
+		       const char *filename, int bm_pid);
+int measure_llc_resctrl(const char *filename, int bm_pid);
+void show_cache_info(int no_of_bits, __u64 avg_llc_val, size_t cache_span, bool lines);
+
+/*
+ * cache_portion_size - Calculate the size of a cache portion
+ * @cache_size:		Total cache size in bytes
+ * @portion_mask:	Cache portion mask
+ * @full_cache_mask:	Full Cache Bit Mask (CBM) for the cache
+ *
+ * Return: The size of the cache portion in bytes.
+ */
+static inline unsigned long cache_portion_size(unsigned long cache_size,
+					       unsigned long portion_mask,
+					       unsigned long full_cache_mask)
+{
+	unsigned int bits = count_bits(full_cache_mask);
+
+	/*
+	 * With no bits the full CBM, assume cache cannot be split into
+	 * smaller portions. To avoid divide by zero, return cache_size.
+	 */
+	if (!bits)
+		return cache_size;
+
+	return cache_size * count_bits(portion_mask) / bits;
+}
+
+extern struct resctrl_test mbm_test;
+extern struct resctrl_test mba_test;
+extern struct resctrl_test cmt_test;
+extern struct resctrl_test l3_cat_test;
+extern struct resctrl_test l3_noncont_cat_test;
+extern struct resctrl_test l2_noncont_cat_test;
 
 #endif /* RESCTRL_H */
```
+93 -130
tools/testing/selftests/resctrl/resctrl_tests.c
```diff
···
  */
 #include "resctrl.h"
 
+/* Volatile memory sink to prevent compiler optimizations */
+static volatile int sink_target;
+volatile int *value_sink = &sink_target;
+
+static struct resctrl_test *resctrl_tests[] = {
+	&mbm_test,
+	&mba_test,
+	&cmt_test,
+	&l3_cat_test,
+	&l3_noncont_cat_test,
+	&l2_noncont_cat_test,
+};
+
 static int detect_vendor(void)
 {
 	FILE *inf = fopen("/proc/cpuinfo", "r");
···
 
 static void cmd_help(void)
 {
+	int i;
+
 	printf("usage: resctrl_tests [-h] [-t test list] [-n no_of_bits] [-b benchmark_cmd [option]...]\n");
 	printf("\t-b benchmark_cmd [option]...: run specified benchmark for MBM, MBA and CMT\n");
 	printf("\t   default benchmark is builtin fill_buf\n");
-	printf("\t-t test list: run tests specified in the test list, ");
+	printf("\t-t test list: run tests/groups specified by the list, ");
 	printf("e.g. -t mbm,mba,cmt,cat\n");
+	printf("\t\tSupported tests (group):\n");
+	for (i = 0; i < ARRAY_SIZE(resctrl_tests); i++) {
+		if (resctrl_tests[i]->group)
+			printf("\t\t\t%s (%s)\n", resctrl_tests[i]->name, resctrl_tests[i]->group);
+		else
+			printf("\t\t\t%s\n", resctrl_tests[i]->name);
+	}
 	printf("\t-n no_of_bits: run cache tests using specified no of bits in cache bit mask\n");
 	printf("\t-p cpu_no: specify CPU number to run the test. 1 is default\n");
 	printf("\t-h: help\n");
···
 	signal_handler_unregister();
 }
 
-static void run_mbm_test(const char * const *benchmark_cmd, int cpu_no)
+static bool test_vendor_specific_check(const struct resctrl_test *test)
 {
-	int res;
+	if (!test->vendor_specific)
+		return true;
 
-	ksft_print_msg("Starting MBM BW change ...\n");
+	return get_vendor() & test->vendor_specific;
+}
+
+static void run_single_test(const struct resctrl_test *test, const struct user_params *uparams)
+{
+	int ret;
+
+	if (test->disabled)
+		return;
+
+	if (!test_vendor_specific_check(test)) {
+		ksft_test_result_skip("Hardware does not support %s\n", test->name);
+		return;
+	}
+
+	ksft_print_msg("Starting %s test ...\n", test->name);
 
 	if (test_prepare()) {
 		ksft_exit_fail_msg("Abnormal failure when preparing for the test\n");
 		return;
 	}
 
-	if (!validate_resctrl_feature_request("L3_MON", "mbm_total_bytes") ||
-	    !validate_resctrl_feature_request("L3_MON", "mbm_local_bytes") ||
-	    (get_vendor() != ARCH_INTEL)) {
-		ksft_test_result_skip("Hardware does not support MBM or MBM is disabled\n");
+	if (!test->feature_check(test)) {
+		ksft_test_result_skip("Hardware does not support %s or %s is disabled\n",
+				      test->name, test->name);
 		goto cleanup;
 	}
 
-	res = mbm_bw_change(cpu_no, benchmark_cmd);
-	ksft_test_result(!res, "MBM: bw change\n");
-	if ((get_vendor() == ARCH_INTEL) && res)
-		ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
+	ret = test->run_test(test, uparams);
+	ksft_test_result(!ret, "%s: test\n", test->name);
 
 cleanup:
 	test_cleanup();
 }
 
-static void run_mba_test(const char * const *benchmark_cmd, int cpu_no)
+static void init_user_params(struct user_params *uparams)
 {
-	int res;
+	memset(uparams, 0, sizeof(*uparams));
 
-	ksft_print_msg("Starting MBA Schemata change ...\n");
-
-	if (test_prepare()) {
-		ksft_exit_fail_msg("Abnormal failure when preparing for the test\n");
-		return;
-	}
-
-	if (!validate_resctrl_feature_request("MB", NULL) ||
-	    !validate_resctrl_feature_request("L3_MON", "mbm_local_bytes") ||
-	    (get_vendor() != ARCH_INTEL)) {
-		ksft_test_result_skip("Hardware does not support MBA or MBA is disabled\n");
-		goto cleanup;
-	}
-
-	res = mba_schemata_change(cpu_no, benchmark_cmd);
-	ksft_test_result(!res, "MBA: schemata change\n");
-
-cleanup:
-	test_cleanup();
-}
-
-static void run_cmt_test(const char * const *benchmark_cmd, int cpu_no)
-{
-	int res;
-
-	ksft_print_msg("Starting CMT test ...\n");
-
-	if (test_prepare()) {
-		ksft_exit_fail_msg("Abnormal failure when preparing for the test\n");
-		return;
-	}
-
-	if (!validate_resctrl_feature_request("L3_MON", "llc_occupancy") ||
-	    !validate_resctrl_feature_request("L3", NULL)) {
-		ksft_test_result_skip("Hardware does not support CMT or CMT is disabled\n");
-		goto cleanup;
-	}
-
-	res = cmt_resctrl_val(cpu_no, 5, benchmark_cmd);
-	ksft_test_result(!res, "CMT: test\n");
-	if ((get_vendor() == ARCH_INTEL) && res)
-		ksft_print_msg("Intel CMT may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
-
-cleanup:
-	test_cleanup();
-}
-
-static void run_cat_test(int cpu_no, int no_of_bits)
-{
-	int res;
-
-	ksft_print_msg("Starting CAT test ...\n");
-
-	if (test_prepare()) {
-		ksft_exit_fail_msg("Abnormal failure when preparing for the test\n");
-		return;
-	}
-
-	if (!validate_resctrl_feature_request("L3", NULL)) {
-		ksft_test_result_skip("Hardware does not support CAT or CAT is disabled\n");
-		goto cleanup;
-	}
-
-	res = cat_perf_miss_val(cpu_no, no_of_bits, "L3");
-	ksft_test_result(!res, "CAT: test\n");
-
-cleanup:
-	test_cleanup();
+	uparams->cpu = 1;
+	uparams->bits = 0;
 }
 
 int main(int argc, char **argv)
 {
-	bool mbm_test = true, mba_test = true, cmt_test = true;
-	const char *benchmark_cmd[BENCHMARK_ARGS] = {};
-	int c, cpu_no = 1, i, no_of_bits = 0;
+	int tests = ARRAY_SIZE(resctrl_tests);
+	bool test_param_seen = false;
+	struct user_params uparams;
 	char *span_str = NULL;
-	bool cat_test = true;
-	int tests = 0;
-	int ret;
+	int ret, c, i;
+
+	init_user_params(&uparams);
 
 	while ((c = getopt(argc, argv, "ht:b:n:p:")) != -1) {
 		char *token;
···
 
 			/* Extract benchmark command from command line. */
 			for (i = 0; i < argc - optind; i++)
-				benchmark_cmd[i] = argv[i + optind];
-			benchmark_cmd[i] = NULL;
+				uparams.benchmark_cmd[i] = argv[i + optind];
+			uparams.benchmark_cmd[i] = NULL;
 
 			goto last_arg;
 		case 't':
 			token = strtok(optarg, ",");
 
-			mbm_test = false;
-			mba_test = false;
-			cmt_test = false;
-			cat_test = false;
+			if (!test_param_seen) {
+				for (i = 0; i < ARRAY_SIZE(resctrl_tests); i++)
+					resctrl_tests[i]->disabled = true;
+				tests = 0;
+				test_param_seen = true;
+			}
 			while (token) {
-				if (!strncmp(token, MBM_STR, sizeof(MBM_STR))) {
-					mbm_test = true;
-					tests++;
-				} else if (!strncmp(token, MBA_STR, sizeof(MBA_STR))) {
-					mba_test = true;
-					tests++;
-				} else if (!strncmp(token, CMT_STR, sizeof(CMT_STR))) {
-					cmt_test = true;
-					tests++;
-				} else if (!strncmp(token, CAT_STR, sizeof(CAT_STR))) {
-					cat_test = true;
-					tests++;
-				} else {
-					printf("invalid argument\n");
+				bool found = false;
+
+				for (i = 0; i < ARRAY_SIZE(resctrl_tests); i++) {
+					if (!strcasecmp(token, resctrl_tests[i]->name) ||
+					    (resctrl_tests[i]->group &&
+					     !strcasecmp(token, resctrl_tests[i]->group))) {
+						if (resctrl_tests[i]->disabled)
+							tests++;
+						resctrl_tests[i]->disabled = false;
+						found = true;
+					}
+				}
+
+				if (!found) {
+					printf("invalid test: %s\n", token);
 
 					return -1;
 				}
···
 			}
 			break;
 		case 'p':
-			cpu_no = atoi(optarg);
+			uparams.cpu = atoi(optarg);
 			break;
 		case 'n':
-			no_of_bits = atoi(optarg);
-			if (no_of_bits <= 0) {
+			uparams.bits = atoi(optarg);
+			if (uparams.bits <= 0) {
 				printf("Bail out! invalid argument for no_of_bits\n");
 				return -1;
 			}
···
 
 	filter_dmesg();
 
-	if (!benchmark_cmd[0]) {
+	if (!uparams.benchmark_cmd[0]) {
 		/* If no benchmark is given by "-b" argument, use fill_buf. */
-		benchmark_cmd[0] = "fill_buf";
+		uparams.benchmark_cmd[0] = "fill_buf";
 		ret = asprintf(&span_str, "%u", DEFAULT_SPAN);
 		if (ret < 0)
 			ksft_exit_fail_msg("Out of memory!\n");
-		benchmark_cmd[1] = span_str;
-		benchmark_cmd[2] = "1";
-		benchmark_cmd[3] = "0";
-		benchmark_cmd[4] = "false";
-		benchmark_cmd[5] = NULL;
+		uparams.benchmark_cmd[1] = span_str;
+		uparams.benchmark_cmd[2] = "1";
+		uparams.benchmark_cmd[3] = "0";
+		uparams.benchmark_cmd[4] = "false";
+		uparams.benchmark_cmd[5] = NULL;
 	}
 
-	ksft_set_plan(tests ? : 4);
+	ksft_set_plan(tests);
 
-	if (mbm_test)
-		run_mbm_test(benchmark_cmd, cpu_no);
-
-	if (mba_test)
-		run_mba_test(benchmark_cmd, cpu_no);
-
-	if (cmt_test)
-		run_cmt_test(benchmark_cmd, cpu_no);
-
-	if (cat_test)
-		run_cat_test(cpu_no, no_of_bits);
+	for (i = 0; i < ARRAY_SIZE(resctrl_tests); i++)
+		run_single_test(resctrl_tests[i], &uparams);
 
 	free(span_str);
 	ksft_finished();
```
+76 -62
tools/testing/selftests/resctrl/resctrl_val.c
```diff
···
 	sprintf(imc_counter_type, "%s%s", imc_dir, "type");
 	fp = fopen(imc_counter_type, "r");
 	if (!fp) {
-		perror("Failed to open imc counter type file");
+		ksft_perror("Failed to open iMC counter type file");
 
 		return -1;
 	}
 	if (fscanf(fp, "%u", &imc_counters_config[count][READ].type) <= 0) {
-		perror("Could not get imc type");
+		ksft_perror("Could not get iMC type");
 		fclose(fp);
 
 		return -1;
···
 	sprintf(imc_counter_cfg, "%s%s", imc_dir, READ_FILE_NAME);
 	fp = fopen(imc_counter_cfg, "r");
 	if (!fp) {
-		perror("Failed to open imc config file");
+		ksft_perror("Failed to open iMC config file");
 
 		return -1;
 	}
 	if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
-		perror("Could not get imc cas count read");
+		ksft_perror("Could not get iMC cas count read");
 		fclose(fp);
 
 		return -1;
···
 	sprintf(imc_counter_cfg, "%s%s", imc_dir, WRITE_FILE_NAME);
 	fp = fopen(imc_counter_cfg, "r");
 	if (!fp) {
-		perror("Failed to open imc config file");
+		ksft_perror("Failed to open iMC config file");
 
 		return -1;
 	}
 	if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
-		perror("Could not get imc cas count write");
+		ksft_perror("Could not get iMC cas count write");
 		fclose(fp);
 
 		return -1;
···
 		}
 		closedir(dp);
 		if (count == 0) {
-			perror("Unable find iMC counters!\n");
+			ksft_print_msg("Unable to find iMC counters\n");
 
 			return -1;
 		}
 	} else {
-		perror("Unable to open PMU directory!\n");
+		ksft_perror("Unable to open PMU directory");
 
 		return -1;
 	}
···
 
 	if (read(r->fd, &r->return_value,
 		 sizeof(struct membw_read_format)) == -1) {
-		perror("Couldn't get read b/w through iMC");
+		ksft_perror("Couldn't get read b/w through iMC");
 
 		return -1;
 	}
 
 	if (read(w->fd, &w->return_value,
 		 sizeof(struct membw_read_format)) == -1) {
-		perror("Couldn't get write bw through iMC");
+		ksft_perror("Couldn't get write bw through iMC");
 
 		return -1;
 	}
···
 	return 0;
 }
 
-void set_mbm_path(const char *ctrlgrp, const char *mongrp, int resource_id)
+void set_mbm_path(const char *ctrlgrp, const char *mongrp, int domain_id)
 {
 	if (ctrlgrp && mongrp)
 		sprintf(mbm_total_path, CON_MON_MBM_LOCAL_BYTES_PATH,
-			RESCTRL_PATH, ctrlgrp, mongrp, resource_id);
+			RESCTRL_PATH, ctrlgrp, mongrp, domain_id);
 	else if (!ctrlgrp && mongrp)
 		sprintf(mbm_total_path, MON_MBM_LOCAL_BYTES_PATH, RESCTRL_PATH,
-			mongrp, resource_id);
+			mongrp, domain_id);
 	else if (ctrlgrp && !mongrp)
 		sprintf(mbm_total_path, CON_MBM_LOCAL_BYTES_PATH, RESCTRL_PATH,
-			ctrlgrp, resource_id);
+			ctrlgrp, domain_id);
 	else if (!ctrlgrp && !mongrp)
 		sprintf(mbm_total_path, MBM_LOCAL_BYTES_PATH, RESCTRL_PATH,
-			resource_id);
+			domain_id);
 }
 
 /*
···
 static void initialize_mem_bw_resctrl(const char *ctrlgrp, const char *mongrp,
 				      int cpu_no, char *resctrl_val)
 {
-	int resource_id;
+	int domain_id;
 
-	if (get_resource_id(cpu_no, &resource_id) < 0) {
-		perror("Could not get resource_id");
+	if (get_domain_id("MB", cpu_no, &domain_id) < 0) {
+		ksft_print_msg("Could not get domain ID\n");
 		return;
 	}
 
 	if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)))
-		set_mbm_path(ctrlgrp, mongrp, resource_id);
+		set_mbm_path(ctrlgrp, mongrp, domain_id);
 
 	if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
 		if (ctrlgrp)
 			sprintf(mbm_total_path, CON_MBM_LOCAL_BYTES_PATH,
-				RESCTRL_PATH, ctrlgrp, resource_id);
+				RESCTRL_PATH, ctrlgrp, domain_id);
 		else
 			sprintf(mbm_total_path, MBM_LOCAL_BYTES_PATH,
-				RESCTRL_PATH, resource_id);
+				RESCTRL_PATH, domain_id);
 	}
 }
···
 
 	fp = fopen(mbm_total_path, "r");
 	if (!fp) {
-		perror("Failed to open total bw file");
+		ksft_perror("Failed to open total bw file");
 
 		return -1;
 	}
 	if (fscanf(fp, "%lu", mbm_total) <= 0) {
-		perror("Could not get mbm local bytes");
+		ksft_perror("Could not get mbm local bytes");
 		fclose(fp);
 
 		return -1;
···
 	if (sigaction(SIGINT, &sigact, NULL) ||
 	    sigaction(SIGTERM, &sigact, NULL) ||
 	    sigaction(SIGHUP, &sigact, NULL)) {
-		perror("# sigaction");
+		ksft_perror("sigaction");
 		ret = -1;
 	}
 	return ret;
···
 	if (sigaction(SIGINT, &sigact, NULL) ||
 	    sigaction(SIGTERM, &sigact, NULL) ||
 	    sigaction(SIGHUP, &sigact, NULL)) {
-		perror("# sigaction");
+		ksft_perror("sigaction");
 	}
 }
···
 * @bw_imc:		perf imc counter value
 * @bw_resc:		memory bandwidth value
 *
- * Return:		0 on success. non-zero on failure.
+ * Return:		0 on success, < 0 on error.
 */
 static int print_results_bw(char *filename, int bm_pid, float bw_imc,
 			    unsigned long bw_resc)
···
 	} else {
 		fp = fopen(filename, "a");
 		if (!fp) {
-			perror("Cannot open results file");
+			ksft_perror("Cannot open results file");
 
-			return errno;
+			return -1;
 		}
 		if (fprintf(fp, "Pid: %d \t Mem_BW_iMC: %f \t Mem_BW_resc: %lu \t Difference: %lu\n",
 			    bm_pid, bw_imc, bw_resc, diff) <= 0) {
+			ksft_print_msg("Could not log results\n");
 			fclose(fp);
-			perror("Could not log results.");
 
-			return errno;
+			return -1;
 		}
 		fclose(fp);
 	}
···
 static void initialize_llc_occu_resctrl(const char *ctrlgrp, const char *mongrp,
 					int cpu_no, char *resctrl_val)
 {
-	int resource_id;
+	int domain_id;
 
-	if (get_resource_id(cpu_no, &resource_id) < 0) {
-		perror("# Unable to resource_id");
+	if (get_domain_id("L3", cpu_no, &domain_id) < 0) {
+		ksft_print_msg("Could not get domain ID\n");
 		return;
 	}
 
 	if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR)))
-		set_cmt_path(ctrlgrp, mongrp, resource_id);
+		set_cmt_path(ctrlgrp, mongrp, domain_id);
 }
 
-static int
-measure_vals(struct resctrl_val_param *param, unsigned long *bw_resc_start)
+static int measure_vals(const struct user_params *uparams,
+			struct resctrl_val_param *param,
+			unsigned long *bw_resc_start)
 {
 	unsigned long bw_resc, bw_resc_end;
 	float bw_imc;
···
 	 * Compare the two values to validate resctrl value.
 	 * It takes 1sec to measure the data.
 	 */
-	ret = get_mem_bw_imc(param->cpu_no, param->bw_report, &bw_imc);
+	ret = get_mem_bw_imc(uparams->cpu, param->bw_report, &bw_imc);
 	if (ret < 0)
 		return ret;
···
 	 * stdio (console)
 	 */
 	fp = freopen("/dev/null", "w", stdout);
-	if (!fp)
-		PARENT_EXIT("Unable to direct benchmark status to /dev/null");
+	if (!fp) {
+		ksft_perror("Unable to direct benchmark status to /dev/null");
+		PARENT_EXIT();
+	}
 
 	if (strcmp(benchmark_cmd[0], "fill_buf") == 0) {
 		/* Execute default fill_buf benchmark */
 		span = strtoul(benchmark_cmd[1], NULL, 10);
 		memflush = atoi(benchmark_cmd[2]);
 		operation = atoi(benchmark_cmd[3]);
-		if (!strcmp(benchmark_cmd[4], "true"))
+		if (!strcmp(benchmark_cmd[4], "true")) {
 			once = true;
-		else if (!strcmp(benchmark_cmd[4], "false"))
+		} else if (!strcmp(benchmark_cmd[4], "false")) {
 			once = false;
-		else
-			PARENT_EXIT("Invalid once parameter");
+		} else {
+			ksft_print_msg("Invalid once parameter\n");
+			PARENT_EXIT();
+		}
 
 		if (run_fill_buf(span, memflush, operation, once))
 			fprintf(stderr, "Error in running fill buffer\n");
···
 		/* Execute specified benchmark */
 		ret = execvp(benchmark_cmd[0], benchmark_cmd);
 		if (ret)
-			perror("wrong\n");
+			ksft_perror("execvp");
 	}
 
 	fclose(stdout);
-	PARENT_EXIT("Unable to run specified benchmark");
+	ksft_print_msg("Unable to run specified benchmark\n");
+	PARENT_EXIT();
 }
 
 /*
 * resctrl_val:		execute benchmark and measure memory bandwidth on
 *			the benchmark
+ * @test:		test information structure
+ * @uparams:		user supplied parameters
 * @benchmark_cmd:	benchmark command and its arguments
 * @param:		parameters passed to resctrl_val()
 *
- * Return:		0 on success. non-zero on failure.
+ * Return:		0 when the test was run, < 0 on error.
 */
-int resctrl_val(const char * const *benchmark_cmd, struct resctrl_val_param *param)
+int resctrl_val(const struct resctrl_test *test,
+		const struct user_params *uparams,
+		const char * const *benchmark_cmd,
+		struct resctrl_val_param *param)
 {
 	char *resctrl_val = param->resctrl_val;
 	unsigned long bw_resc_start = 0;
···
 	ppid = getpid();
 
 	if (pipe(pipefd)) {
-		perror("# Unable to create pipe");
+		ksft_perror("Unable to create pipe");
 
 		return -1;
 	}
···
 	fflush(stdout);
 	bm_pid = fork();
 	if (bm_pid == -1) {
-		perror("# Unable to fork");
+		ksft_perror("Unable to fork");
 
 		return -1;
 	}
···
 		sigact.sa_flags = SA_SIGINFO;
 
 		/* Register for "SIGUSR1" signal from parent */
-		if (sigaction(SIGUSR1, &sigact, NULL))
-			PARENT_EXIT("Can't register child for signal");
+		if (sigaction(SIGUSR1, &sigact, NULL)) {
+			ksft_perror("Can't register child for signal");
+			PARENT_EXIT();
+		}
 
 		/* Tell parent that child is ready */
 		close(pipefd[0]);
 		pipe_message = 1;
 		if (write(pipefd[1], &pipe_message, sizeof(pipe_message)) <
 		    sizeof(pipe_message)) {
-			perror("# failed signaling parent process");
+			ksft_perror("Failed signaling parent process");
 			close(pipefd[1]);
 			return -1;
 		}
···
 		/* Suspend child until delivery of "SIGUSR1" from parent */
 		sigsuspend(&sigact.sa_mask);
 
-		PARENT_EXIT("Child is done");
+		ksft_perror("Child is done");
+		PARENT_EXIT();
 	}
 
 	ksft_print_msg("Benchmark PID: %d\n", bm_pid);
···
 	value.sival_ptr = (void *)benchmark_cmd;
 
 	/* Taskset benchmark to specified cpu */
-	ret = taskset_benchmark(bm_pid, param->cpu_no);
+	ret = taskset_benchmark(bm_pid, uparams->cpu, NULL);
 	if (ret)
 		goto out;
···
 			goto out;
 
 		initialize_mem_bw_resctrl(param->ctrlgrp, param->mongrp,
-					  param->cpu_no, resctrl_val);
+					  uparams->cpu, resctrl_val);
 	} else if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR)))
 		initialize_llc_occu_resctrl(param->ctrlgrp, param->mongrp,
-					    param->cpu_no, resctrl_val);
+					    uparams->cpu, resctrl_val);
 
 	/* Parent waits for child to be ready. */
 	close(pipefd[1]);
 	while (pipe_message != 1) {
 		if (read(pipefd[0], &pipe_message, sizeof(pipe_message)) <
 		    sizeof(pipe_message)) {
-			perror("# failed reading message from child process");
+			ksft_perror("Failed reading message from child process");
 			close(pipefd[0]);
 			goto out;
 		}
···
 
 	/* Signal child to start benchmark */
 	if (sigqueue(bm_pid, SIGUSR1, value) == -1) {
-		perror("# sigqueue SIGUSR1 to child");
-		ret = errno;
+		ksft_perror("sigqueue SIGUSR1 to child");
+		ret = -1;
 		goto out;
 	}
···
 
 	/* Test runs until the callback setup() tells the test to stop. */
 	while (1) {
-		ret = param->setup(param);
+		ret = param->setup(test, uparams, param);
 		if (ret == END_OF_TESTS) {
 			ret = 0;
 			break;
···
 
 		if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
 		    !strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
-			ret = measure_vals(param, &bw_resc_start);
+			ret = measure_vals(uparams, param, &bw_resc_start);
 			if (ret)
 				break;
 		} else if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
 			sleep(1);
-			ret = measure_llc_resctrl(param->filename, bm_pid);
+			ret = measure_llc_resctrl(param->filename, bm_pid);
 			if (ret)
 				break;
 		}
```
+296 -109
tools/testing/selftests/resctrl/resctrlfs.c
··· 20 20 21 21 mounts = fopen("/proc/mounts", "r"); 22 22 if (!mounts) { 23 - perror("/proc/mounts"); 23 + ksft_perror("/proc/mounts"); 24 24 return -ENXIO; 25 25 } 26 26 while (!feof(mounts)) { ··· 56 56 * Mounts resctrl FS. Fails if resctrl FS is already mounted to avoid 57 57 * pre-existing settings interfering with the test results. 58 58 * 59 - * Return: 0 on success, non-zero on failure 59 + * Return: 0 on success, < 0 on error. 60 60 */ 61 61 int mount_resctrlfs(void) 62 62 { ··· 69 69 ksft_print_msg("Mounting resctrl to \"%s\"\n", RESCTRL_PATH); 70 70 ret = mount("resctrl", RESCTRL_PATH, "resctrl", 0, NULL); 71 71 if (ret) 72 - perror("# mount"); 72 + ksft_perror("mount"); 73 73 74 74 return ret; 75 75 } ··· 86 86 return ret; 87 87 88 88 if (umount(mountpoint)) { 89 - perror("# Unable to umount resctrl"); 89 + ksft_perror("Unable to umount resctrl"); 90 90 91 - return errno; 91 + return -1; 92 92 } 93 93 94 94 return 0; 95 95 } 96 96 97 97 /* 98 - * get_resource_id - Get socket number/l3 id for a specified CPU 98 + * get_cache_level - Convert cache level from string to integer 99 + * @cache_type: Cache level as string 100 + * 101 + * Return: cache level as integer or -1 if @cache_type is invalid. 
102 + */ 103 + static int get_cache_level(const char *cache_type) 104 + { 105 + if (!strcmp(cache_type, "L3")) 106 + return 3; 107 + if (!strcmp(cache_type, "L2")) 108 + return 2; 109 + 110 + ksft_print_msg("Invalid cache level\n"); 111 + return -1; 112 + } 113 + 114 + static int get_resource_cache_level(const char *resource) 115 + { 116 + /* "MB" use L3 (LLC) as resource */ 117 + if (!strcmp(resource, "MB")) 118 + return 3; 119 + return get_cache_level(resource); 120 + } 121 + 122 + /* 123 + * get_domain_id - Get resctrl domain ID for a specified CPU 124 + * @resource: resource name 99 125 * @cpu_no: CPU number 100 - * @resource_id: Socket number or l3_id 126 + * @domain_id: domain ID (cache ID; for MB, L3 cache ID) 101 127 * 102 128 * Return: >= 0 on success, < 0 on failure. 103 129 */ 104 - int get_resource_id(int cpu_no, int *resource_id) 130 + int get_domain_id(const char *resource, int cpu_no, int *domain_id) 105 131 { 106 132 char phys_pkg_path[1024]; 133 + int cache_num; 107 134 FILE *fp; 108 135 109 - if (get_vendor() == ARCH_AMD) 110 - sprintf(phys_pkg_path, "%s%d/cache/index3/id", 111 - PHYS_ID_PATH, cpu_no); 112 - else 113 - sprintf(phys_pkg_path, "%s%d/topology/physical_package_id", 114 - PHYS_ID_PATH, cpu_no); 136 + cache_num = get_resource_cache_level(resource); 137 + if (cache_num < 0) 138 + return cache_num; 139 + 140 + sprintf(phys_pkg_path, "%s%d/cache/index%d/id", PHYS_ID_PATH, cpu_no, cache_num); 115 141 116 142 fp = fopen(phys_pkg_path, "r"); 117 143 if (!fp) { 118 - perror("Failed to open physical_package_id"); 144 + ksft_perror("Failed to open cache id file"); 119 145 120 146 return -1; 121 147 } 122 - if (fscanf(fp, "%d", resource_id) <= 0) { 123 - perror("Could not get socket number or l3 id"); 148 + if (fscanf(fp, "%d", domain_id) <= 0) { 149 + ksft_perror("Could not get domain ID"); 124 150 fclose(fp); 125 151 126 152 return -1; ··· 164 138 * 165 139 * Return: = 0 on success, < 0 on failure. 
  */
-int get_cache_size(int cpu_no, char *cache_type, unsigned long *cache_size)
+int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size)
 {
 	char cache_path[1024], cache_str[64];
 	int length, i, cache_num;
 	FILE *fp;
 
-	if (!strcmp(cache_type, "L3")) {
-		cache_num = 3;
-	} else if (!strcmp(cache_type, "L2")) {
-		cache_num = 2;
-	} else {
-		perror("Invalid cache level");
-		return -1;
-	}
+	cache_num = get_cache_level(cache_type);
+	if (cache_num < 0)
+		return cache_num;
 
 	sprintf(cache_path, "/sys/bus/cpu/devices/cpu%d/cache/index%d/size",
 		cpu_no, cache_num);
 	fp = fopen(cache_path, "r");
 	if (!fp) {
-		perror("Failed to open cache size");
+		ksft_perror("Failed to open cache size");
 
 		return -1;
 	}
 	if (fscanf(fp, "%s", cache_str) <= 0) {
-		perror("Could not get cache_size");
+		ksft_perror("Could not get cache_size");
 		fclose(fp);
 
 		return -1;
···
 #define CORE_SIBLINGS_PATH	"/sys/bus/cpu/devices/cpu"
 
 /*
- * get_cbm_mask - Get cbm mask for given cache
- * @cache_type:	Cache level L2/L3
- * @cbm_mask:	cbm_mask returned as a string
+ * get_bit_mask - Get bit mask from given file
+ * @filename:	File containing the mask
+ * @mask:	The bit mask returned as unsigned long
  *
  * Return: = 0 on success, < 0 on failure.
  */
-int get_cbm_mask(char *cache_type, char *cbm_mask)
+static int get_bit_mask(const char *filename, unsigned long *mask)
 {
-	char cbm_mask_path[1024];
 	FILE *fp;
 
-	if (!cbm_mask)
+	if (!filename || !mask)
 		return -1;
 
-	sprintf(cbm_mask_path, "%s/%s/cbm_mask", INFO_PATH, cache_type);
-
-	fp = fopen(cbm_mask_path, "r");
+	fp = fopen(filename, "r");
 	if (!fp) {
-		perror("Failed to open cache level");
-
+		ksft_print_msg("Failed to open bit mask file '%s': %s\n",
+			       filename, strerror(errno));
 		return -1;
 	}
-	if (fscanf(fp, "%s", cbm_mask) <= 0) {
-		perror("Could not get max cbm_mask");
+
+	if (fscanf(fp, "%lx", mask) <= 0) {
+		ksft_print_msg("Could not read bit mask file '%s': %s\n",
+			       filename, strerror(errno));
 		fclose(fp);
 
 		return -1;
···
 }
 
 /*
- * get_core_sibling - Get sibling core id from the same socket for given CPU
- * @cpu_no:	CPU number
+ * resource_info_unsigned_get - Read an unsigned value from
+ *				/sys/fs/resctrl/info/@resource/@filename
+ * @resource:	Resource name that matches directory name in
+ *		/sys/fs/resctrl/info
+ * @filename:	File in /sys/fs/resctrl/info/@resource
+ * @val:	Contains read value on success.
  *
- * Return: > 0 on success, < 0 on failure.
+ * Return: = 0 on success, < 0 on failure. On success the read
+ * value is saved into @val.
  */
-int get_core_sibling(int cpu_no)
+int resource_info_unsigned_get(const char *resource, const char *filename,
+			       unsigned int *val)
 {
-	char core_siblings_path[1024], cpu_list_str[64];
-	int sibling_cpu_no = -1;
+	char file_path[PATH_MAX];
 	FILE *fp;
 
-	sprintf(core_siblings_path, "%s%d/topology/core_siblings_list",
-		CORE_SIBLINGS_PATH, cpu_no);
+	snprintf(file_path, sizeof(file_path), "%s/%s/%s", INFO_PATH, resource,
+		 filename);
 
-	fp = fopen(core_siblings_path, "r");
+	fp = fopen(file_path, "r");
 	if (!fp) {
-		perror("Failed to open core siblings path");
-
+		ksft_print_msg("Error opening %s: %m\n", file_path);
 		return -1;
 	}
-	if (fscanf(fp, "%s", cpu_list_str) <= 0) {
-		perror("Could not get core_siblings list");
+
+	if (fscanf(fp, "%u", val) <= 0) {
+		ksft_print_msg("Could not get contents of %s: %m\n", file_path);
 		fclose(fp);
-
 		return -1;
 	}
+
 	fclose(fp);
+	return 0;
+}
 
-	char *token = strtok(cpu_list_str, "-,");
+/*
+ * create_bit_mask - Create bit mask from start, len pair
+ * @start:	LSB of the mask
+ * @len:	Number of bits in the mask
+ */
+unsigned long create_bit_mask(unsigned int start, unsigned int len)
+{
+	return ((1UL << len) - 1UL) << start;
+}
 
-	while (token) {
-		sibling_cpu_no = atoi(token);
-		/* Skipping core 0 as we don't want to run test on core 0 */
-		if (sibling_cpu_no != 0 && sibling_cpu_no != cpu_no)
-			break;
-		token = strtok(NULL, "-,");
+/*
+ * count_contiguous_bits - Returns the longest train of bits in a bit mask
+ * @val:	A bit mask
+ * @start:	The location of the least-significant bit of the longest train
+ *
+ * Return: The length of the contiguous bits in the longest train of bits
+ */
+unsigned int count_contiguous_bits(unsigned long val, unsigned int *start)
+{
+	unsigned long last_val;
+	unsigned int count = 0;
+
+	while (val) {
+		last_val = val;
+		val &= (val >> 1);
+		count++;
 	}
 
-	return sibling_cpu_no;
+	if (start) {
+		if (count)
+			*start = ffsl(last_val) - 1;
+		else
+			*start = 0;
+	}
+
+	return count;
+}
+
+/*
+ * get_full_cbm - Get full Cache Bit Mask (CBM)
+ * @cache_type:	Cache type as "L2" or "L3"
+ * @mask:	Full cache bit mask representing the maximal portion of cache
+ *		available for allocation, returned as unsigned long.
+ *
+ * Return: = 0 on success, < 0 on failure.
+ */
+int get_full_cbm(const char *cache_type, unsigned long *mask)
+{
+	char cbm_path[PATH_MAX];
+	int ret;
+
+	if (!cache_type)
+		return -1;
+
+	snprintf(cbm_path, sizeof(cbm_path), "%s/%s/cbm_mask",
+		 INFO_PATH, cache_type);
+
+	ret = get_bit_mask(cbm_path, mask);
+	if (ret || !*mask)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * get_shareable_mask - Get shareable mask from shareable_bits
+ * @cache_type:	Cache type as "L2" or "L3"
+ * @shareable_mask:	Shareable mask returned as unsigned long
+ *
+ * Return: = 0 on success, < 0 on failure.
+ */
+static int get_shareable_mask(const char *cache_type, unsigned long *shareable_mask)
+{
+	char mask_path[PATH_MAX];
+
+	if (!cache_type)
+		return -1;
+
+	snprintf(mask_path, sizeof(mask_path), "%s/%s/shareable_bits",
+		 INFO_PATH, cache_type);
+
+	return get_bit_mask(mask_path, shareable_mask);
+}
+
+/*
+ * get_mask_no_shareable - Get Cache Bit Mask (CBM) without shareable bits
+ * @cache_type:	Cache type as "L2" or "L3"
+ * @mask:	The largest exclusive portion of the cache out of the
+ *		full CBM, returned as unsigned long
+ *
+ * Parts of a cache may be shared with other devices such as GPU. This function
+ * calculates the largest exclusive portion of the cache where no other devices
+ * besides CPU have access to the cache portion.
+ *
+ * Return: = 0 on success, < 0 on failure.
+ */
+int get_mask_no_shareable(const char *cache_type, unsigned long *mask)
+{
+	unsigned long full_mask, shareable_mask;
+	unsigned int start, len;
+
+	if (get_full_cbm(cache_type, &full_mask) < 0)
+		return -1;
+	if (get_shareable_mask(cache_type, &shareable_mask) < 0)
+		return -1;
+
+	len = count_contiguous_bits(full_mask & ~shareable_mask, &start);
+	if (!len)
+		return -1;
+
+	*mask = create_bit_mask(start, len);
+
+	return 0;
 }
 
 /*
  * taskset_benchmark - Taskset PID (i.e. benchmark) to a specified cpu
- * @bm_pid:	PID that should be binded
- * @cpu_no:	CPU number at which the PID would be binded
+ * @bm_pid:		PID that should be bound
+ * @cpu_no:		CPU number at which the PID would be bound
+ * @old_affinity:	When not NULL, set to old CPU affinity
  *
- * Return: 0 on success, non-zero on failure
+ * Return: 0 on success, < 0 on error.
  */
-int taskset_benchmark(pid_t bm_pid, int cpu_no)
+int taskset_benchmark(pid_t bm_pid, int cpu_no, cpu_set_t *old_affinity)
 {
 	cpu_set_t my_set;
+
+	if (old_affinity) {
+		CPU_ZERO(old_affinity);
+		if (sched_getaffinity(bm_pid, sizeof(*old_affinity),
+				      old_affinity)) {
+			ksft_perror("Unable to read CPU affinity");
+			return -1;
+		}
+	}
 
 	CPU_ZERO(&my_set);
 	CPU_SET(cpu_no, &my_set);
 
 	if (sched_setaffinity(bm_pid, sizeof(cpu_set_t), &my_set)) {
-		perror("Unable to taskset benchmark");
+		ksft_perror("Unable to taskset benchmark");
 
+		return -1;
+	}
+
+	return 0;
+}
+
+/*
+ * taskset_restore - Taskset PID to the earlier CPU affinity
+ * @bm_pid:		PID that should be reset
+ * @old_affinity:	The old CPU affinity to restore
+ *
+ * Return: 0 on success, < 0 on error.
+ */
+int taskset_restore(pid_t bm_pid, cpu_set_t *old_affinity)
+{
+	if (sched_setaffinity(bm_pid, sizeof(*old_affinity), old_affinity)) {
+		ksft_perror("Unable to restore CPU affinity");
 		return -1;
 	}
···
  * @grp:	Full path and name of the group
  * @parent_grp:	Full path and name of the parent group
  *
- * Return: 0 on success, non-zero on failure
+ * Return: 0 on success, < 0 on error.
  */
 static int create_grp(const char *grp_name, char *grp, const char *parent_grp)
 {
···
 		}
 		closedir(dp);
 	} else {
-		perror("Unable to open resctrl for group");
+		ksft_perror("Unable to open resctrl for group");
 
 		return -1;
 	}
···
 	/* Requested grp doesn't exist, hence create it */
 	if (found_grp == 0) {
 		if (mkdir(grp, 0) == -1) {
-			perror("Unable to create group");
+			ksft_perror("Unable to create group");
 
 			return -1;
 		}
···
 	fp = fopen(tasks, "w");
 	if (!fp) {
-		perror("Failed to open tasks file");
+		ksft_perror("Failed to open tasks file");
 
 		return -1;
 	}
 	if (fprintf(fp, "%d\n", pid) < 0) {
-		perror("Failed to wr pid to tasks file");
+		ksft_print_msg("Failed to write pid to tasks file\n");
 		fclose(fp);
 
 		return -1;
···
  * pid is not written, this means that pid is in con_mon grp and hence
  * should consult con_mon grp's mon_data directory for results.
  *
- * Return: 0 on success, non-zero on failure
+ * Return: 0 on success, < 0 on error.
  */
 int write_bm_pid_to_resctrl(pid_t bm_pid, char *ctrlgrp, char *mongrp,
 			    char *resctrl_val)
···
 out:
 	ksft_print_msg("Writing benchmark parameters to resctrl FS\n");
 	if (ret)
-		perror("# writing to resctrlfs");
+		ksft_print_msg("Failed writing to resctrlfs\n");
 
 	return ret;
 }
···
  * @ctrlgrp:	Name of the con_mon grp
  * @schemata:	Schemata that should be updated to
  * @cpu_no:	CPU number that the benchmark PID is binded to
- * @resctrl_val:	Resctrl feature (Eg: mbm, mba.. etc)
+ * @resource:	Resctrl resource (Eg: MB, L3, L2, etc.)
  *
- * Update schemata of a con_mon grp *only* if requested resctrl feature is
+ * Update schemata of a con_mon grp *only* if requested resctrl resource is
  * allocation type
  *
- * Return: 0 on success, non-zero on failure
+ * Return: 0 on success, < 0 on error.
  */
-int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, char *resctrl_val)
+int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, const char *resource)
 {
 	char controlgroup[1024], reason[128], schema[1024] = {};
-	int resource_id, fd, schema_len = -1, ret = 0;
-
-	if (strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)) &&
-	    strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) &&
-	    strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)) &&
-	    strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR)))
-		return -ENOENT;
+	int domain_id, fd, schema_len, ret = 0;
 
 	if (!schemata) {
 		ksft_print_msg("Skipping empty schemata update\n");
···
 		return -1;
 	}
 
-	if (get_resource_id(cpu_no, &resource_id) < 0) {
-		sprintf(reason, "Failed to get resource id");
+	if (get_domain_id(resource, cpu_no, &domain_id) < 0) {
+		sprintf(reason, "Failed to get domain ID");
 		ret = -1;
 
 		goto out;
···
 	else
 		sprintf(controlgroup, "%s/schemata", RESCTRL_PATH);
 
-	if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)) ||
-	    !strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR)))
-		schema_len = snprintf(schema, sizeof(schema), "%s%d%c%s\n",
-				      "L3:", resource_id, '=', schemata);
-	if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)) ||
-	    !strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)))
-		schema_len = snprintf(schema, sizeof(schema), "%s%d%c%s\n",
-				      "MB:", resource_id, '=', schemata);
+	schema_len = snprintf(schema, sizeof(schema), "%s:%d=%s\n",
+			      resource, domain_id, schemata);
 	if (schema_len < 0 || schema_len >= sizeof(schema)) {
 		snprintf(reason, sizeof(reason),
 			 "snprintf() failed with return value : %d", schema_len);
···
 }
 
 /*
- * validate_resctrl_feature_request - Check if requested feature is valid.
- * @resource:	Required resource (e.g., MB, L3, L2, L3_MON, etc.)
- * @feature:	Required monitor feature (in mon_features file). Can only be
- *		set for L3_MON. Must be NULL for all other resources.
+ * resctrl_resource_exists - Check if a resource is supported.
+ * @resource:	Resctrl resource (e.g., MB, L3, L2, L3_MON, etc.)
  *
- * Return: True if the resource/feature is supported, else false. False is
+ * Return: True if the resource is supported, else false. False is
  * also returned if resctrl FS is not mounted.
  */
-bool validate_resctrl_feature_request(const char *resource, const char *feature)
+bool resctrl_resource_exists(const char *resource)
 {
 	char res_path[PATH_MAX];
 	struct stat statbuf;
-	char *res;
-	FILE *inf;
 	int ret;
 
 	if (!resource)
···
 	if (stat(res_path, &statbuf))
 		return false;
 
-	if (!feature)
-		return true;
+	return true;
+}
+
+/*
+ * resctrl_mon_feature_exists - Check if requested monitoring feature is valid.
+ * @resource:	Resource that uses the mon_features file. Currently only L3_MON
+ *		is valid.
+ * @feature:	Required monitor feature (in mon_features file).
+ *
+ * Return: True if the feature is supported, else false.
+ */
+bool resctrl_mon_feature_exists(const char *resource, const char *feature)
+{
+	char res_path[PATH_MAX];
+	char *res;
+	FILE *inf;
+
+	if (!feature || !resource)
+		return false;
 
 	snprintf(res_path, sizeof(res_path), "%s/%s/mon_features", INFO_PATH, resource);
 	inf = fopen(res_path, "r");
···
 	return !!res;
 }
 
+/*
+ * resource_info_file_exists - Check if a file is present inside
+ *			       /sys/fs/resctrl/info/@resource.
+ * @resource:	Required resource (Eg: MB, L3, L2, etc.)
+ * @file:	Required file.
+ *
+ * Return: True if the /sys/fs/resctrl/info/@resource/@file exists, else false.
+ */
+bool resource_info_file_exists(const char *resource, const char *file)
+{
+	char res_path[PATH_MAX];
+	struct stat statbuf;
+
+	if (!file || !resource)
+		return false;
+
+	snprintf(res_path, sizeof(res_path), "%s/%s/%s", INFO_PATH, resource,
+		 file);
+
+	if (stat(res_path, &statbuf))
+		return false;
+
+	return true;
+}
+
+bool test_resource_feature_check(const struct resctrl_test *test)
+{
+	return resctrl_resource_exists(test->resource);
+}
+
 int filter_dmesg(void)
 {
 	char line[1024];
···
 	ret = pipe(pipefds);
 	if (ret) {
-		perror("pipe");
+		ksft_perror("pipe");
 		return ret;
 	}
 	fflush(stdout);
···
 		close(pipefds[0]);
 		dup2(pipefds[1], STDOUT_FILENO);
 		execlp("dmesg", "dmesg", NULL);
-		perror("executing dmesg");
+		ksft_perror("Executing dmesg");
 		exit(1);
 	}
 	close(pipefds[1]);
 	fp = fdopen(pipefds[0], "r");
 	if (!fp) {
-		perror("fdopen(pipe)");
+		ksft_perror("fdopen(pipe)");
 		kill(pid, SIGTERM);
 
 		return -1;
tools/testing/selftests/rust/Makefile (+4)
···
+# SPDX-License-Identifier: GPL-2.0
+TEST_PROGS += test_probe_samples.sh
+
+include ../lib.mk
tools/testing/selftests/rust/config (+5)
···
+CONFIG_RUST=y
+CONFIG_SAMPLES=y
+CONFIG_SAMPLES_RUST=y
+CONFIG_SAMPLE_RUST_MINIMAL=m
+CONFIG_SAMPLE_RUST_PRINT=m
tools/testing/selftests/rust/test_probe_samples.sh (+41)
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2023 Collabora Ltd
+#
+# This script tests whether the rust sample modules can
+# be added and removed correctly.
+#
+DIR="$(dirname "$(readlink -f "$0")")"
+
+KTAP_HELPERS="${DIR}/../kselftest/ktap_helpers.sh"
+if [ -e "$KTAP_HELPERS" ]; then
+	source "$KTAP_HELPERS"
+else
+	echo "$KTAP_HELPERS file not found [SKIP]"
+	exit 4
+fi
+
+rust_sample_modules=("rust_minimal" "rust_print")
+
+ktap_print_header
+
+for sample in "${rust_sample_modules[@]}"; do
+	if ! /sbin/modprobe -n -q "$sample"; then
+		ktap_skip_all "module $sample is not found in /lib/modules/$(uname -r)"
+		exit "$KSFT_SKIP"
+	fi
+done
+
+ktap_set_plan "${#rust_sample_modules[@]}"
+
+for sample in "${rust_sample_modules[@]}"; do
+	if /sbin/modprobe -q "$sample"; then
+		/sbin/modprobe -q -r "$sample"
+		ktap_test_pass "$sample"
+	else
+		ktap_test_fail "$sample"
+	fi
+done
+
+ktap_finished
tools/testing/selftests/sched/cs_prctl_test.c (+1 -1)
···
 	if (setpgid(0, 0) != 0)
 		handle_error("process group");
 
-	printf("\n## Create a thread/process/process group hiearchy\n");
+	printf("\n## Create a thread/process/process group hierarchy\n");
 	create_processes(num_processes, num_threads, procs);
 	need_cleanup = 1;
 	disp_processes(num_processes, procs);
tools/testing/selftests/thermal/intel/power_floor/.gitignore (+1)
···
+power_floor_test

tools/testing/selftests/thermal/intel/workload_hint/.gitignore (+1)
···
+workload_hint_test

tools/testing/selftests/uevent/.gitignore (+1)
···
+uevent_filtering