Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2024-03-14-09-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

- Kuan-Wei Chiu has developed the well-named series "lib min_heap: Min
heap optimizations".

- Kuan-Wei Chiu has also sped up the library sorting code in the series
"lib/sort: Optimize the number of swaps and comparisons".

- Alexey Gladkov has added the ability for code running within an IPC
namespace to alter its IPC and MQ limits. The series is "Allow to
change ipc/mq sysctls inside ipc namespace".

- Geert Uytterhoeven has contributed some dhrystone maintenance work in
the series "lib: dhry: miscellaneous cleanups".

- Ryusuke Konishi continues nilfs2 maintenance work in the series

"nilfs2: eliminate kmap and kmap_atomic calls"
"nilfs2: fix kernel bug at submit_bh_wbc()"

- Nathan Chancellor has updated our build tools requirements in the
series "Bump the minimum supported version of LLVM to 13.0.1".

- Muhammad Usama Anjum continues with the selftests maintenance work in
the series "selftests/mm: Improve run_vmtests.sh".

- Oleg Nesterov has done some maintenance work against the signal code
in the series "get_signal: minor cleanups and fix".

Plus the usual shower of singleton patches in various parts of the tree.
Please see the individual changelogs for details.
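For readers unfamiliar with the structure behind the min_heap series above, here is a minimal userspace sketch of the binary min-heap sift-down operation such a library optimizes. The kernel's actual lib/min_heap API is macro-based and differs in detail; all names below are illustrative only:

```c
#include <assert.h>
#include <stddef.h>

/* Restore the min-heap property at index i by pushing the element down
 * until both children are >= it (array layout: children of i are
 * 2*i+1 and 2*i+2). */
static void sift_down(int *heap, size_t n, size_t i)
{
	for (;;) {
		size_t smallest = i, l = 2 * i + 1, r = 2 * i + 2;

		if (l < n && heap[l] < heap[smallest])
			smallest = l;
		if (r < n && heap[r] < heap[smallest])
			smallest = r;
		if (smallest == i)
			return;
		int tmp = heap[i];
		heap[i] = heap[smallest];
		heap[smallest] = tmp;
		i = smallest;
	}
}

/* Build a min-heap in place: sift down every internal node, bottom up. */
static void heapify(int *heap, size_t n)
{
	for (size_t i = n / 2; i-- > 0; )
		sift_down(heap, n, i);
}
```

Bottom-up heap construction like this runs in O(n) total, which is also why sift-down variants that defer comparisons (as in the lib/sort series above) can reduce the number of swaps.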

* tag 'mm-nonmm-stable-2024-03-14-09-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (77 commits)
nilfs2: prevent kernel bug at submit_bh_wbc()
nilfs2: fix failure to detect DAT corruption in btree and direct mappings
ocfs2: enable ocfs2_listxattr for special files
ocfs2: remove SLAB_MEM_SPREAD flag usage
assoc_array: fix the return value in assoc_array_insert_mid_shortcut()
buildid: use kmap_local_page()
watchdog/core: remove sysctl handlers from public header
nilfs2: use div64_ul() instead of do_div()
mul_u64_u64_div_u64: increase precision by conditionally swapping a and b
kexec: copy only happens before uchunk goes to zero
get_signal: don't initialize ksig->info if SIGNAL_GROUP_EXIT/group_exec_task
get_signal: hide_si_addr_tag_bits: fix the usage of uninitialized ksig
get_signal: don't abuse ksig->info.si_signo and ksig->sig
const_structs.checkpatch: add device_type
Normalise "name (ad@dr)" MODULE_AUTHORs to "name <ad@dr>"
dyndbg: replace kstrdup() + strchr() with kstrdup_and_replace()
list: leverage list_is_head() for list_entry_is_head()
nilfs2: MAINTAINERS: drop unreachable project mirror site
smp: make __smp_processor_id() 0-argument macro
fat: fix uninitialized field in nostale filehandles
...

+1053 -767
+1
Documentation/admin-guide/kernel-parameters.txt
···
 	bit 4: print ftrace buffer
 	bit 5: print all printk messages in buffer
 	bit 6: print all CPUs backtrace (if available in the arch)
+	bit 7: print only tasks in uninterruptible (blocked) state
 	*Be aware* that this option may print a _lot_ of lines,
 	so there are risks of losing older messages in the log.
 	Use this option carefully, maybe worth to setup a
+12 -3
Documentation/admin-guide/sysctl/kernel.rst
···
 ``msgmni`` is the maximum number of IPC queues. 32000 by default
 (``MSGMNI``).
 
+All of these parameters are set per ipc namespace. The maximum number of bytes
+in POSIX message queues is limited by ``RLIMIT_MSGQUEUE``. This limit is
+respected hierarchically in the each user namespace.
 
 msg_next_id, sem_next_id, and shm_next_id (System V IPC)
 ========================================================
···
 bit 4  print ftrace buffer
 bit 5  print all printk messages in buffer
 bit 6  print all CPUs backtrace (if available in the arch)
+bit 7  print only tasks in uninterruptible (blocked) state
 =====  ============================================
 
 So for example to print tasks and memory info on panic, user can::
···
 shmall
 ======
 
-This parameter sets the total amount of shared memory pages that
-can be used system wide. Hence, ``shmall`` should always be at least
-``ceil(shmmax/PAGE_SIZE)``.
+This parameter sets the total amount of shared memory pages that can be used
+inside ipc namespace. The shared memory pages counting occurs for each ipc
+namespace separately and is not inherited. Hence, ``shmall`` should always be at
+least ``ceil(shmmax/PAGE_SIZE)``.
 
 If you are not sure what the default ``PAGE_SIZE`` is on your Linux
 system, you can run the following command::
 
 	# getconf PAGE_SIZE
 
+To reduce or disable the ability to allocate shared memory, you must create a
+new ipc namespace, set this parameter to the required value and prohibit the
+creation of a new ipc namespace in the current user namespace or cgroups can
+be used.
 
 shmmax
 ======
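The ``ceil(shmmax/PAGE_SIZE)`` lower bound in the shmall documentation above is integer round-up division. A standalone sketch (the function name is mine, not kernel code; the real page size comes from sysconf(_SC_PAGESIZE)):

```c
#include <assert.h>
#include <unistd.h>

/* Round shmmax up to whole pages: the documented lower bound for shmall.
 * ceil(shmmax / page_size) computed without floating point. */
static unsigned long shmall_floor(unsigned long shmmax, unsigned long page_size)
{
	return (shmmax + page_size - 1) / page_size;
}

int query_page_size(void)
{
	/* On a live system the divisor would be sysconf(_SC_PAGESIZE),
	 * i.e. what `getconf PAGE_SIZE` prints. */
	return (int)sysconf(_SC_PAGESIZE);
}
```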
+1 -1
Documentation/process/changes.rst
···
 ====================== ===============  ========================================
 Program                Minimal version  Command to check the version
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
-Clang/LLVM (optional)  11.0.0           clang --version
+Clang/LLVM (optional)  13.0.1           clang --version
 Rust (optional)        1.76.0           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
-1
MAINTAINERS
···
 L:	linux-nilfs@vger.kernel.org
 S:	Supported
 W:	https://nilfs.sourceforge.io/
-W:	https://nilfs.osdn.jp/
 T:	git https://github.com/konis/nilfs2.git
 F:	Documentation/filesystems/nilfs2.rst
 F:	fs/nilfs2/
-8
Makefile
···
 
 # Limit inlining across translation units to reduce binary size
 KBUILD_LDFLAGS += -mllvm -import-instr-limit=5
-
-# Check for frame size exceeding threshold during prolog/epilog insertion
-# when using lld < 13.0.0.
-ifneq ($(CONFIG_FRAME_WARN),0)
-ifeq ($(call test-lt, $(CONFIG_LLD_VERSION), 130000),y)
-KBUILD_LDFLAGS += -plugin-opt=-warn-stack-size=$(CONFIG_FRAME_WARN)
-endif
-endif
 endif
 
 ifdef CONFIG_LTO
+1 -7
arch/arm/include/asm/current.h
···
 {
 	struct task_struct *cur;
 
-#if __has_builtin(__builtin_thread_pointer) && \
-    defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) && \
-    !(defined(CONFIG_THUMB2_KERNEL) && \
-      defined(CONFIG_CC_IS_CLANG) && CONFIG_CLANG_VERSION < 130001)
+#if __has_builtin(__builtin_thread_pointer) && defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO)
 	/*
 	 * Use the __builtin helper when available - this results in better
 	 * code, especially when using GCC in combination with the per-task
 	 * stack protector, as the compiler will recognize that it needs to
 	 * load the TLS register only once in every function.
-	 *
-	 * Clang < 13.0.1 gets this wrong for Thumb2 builds:
-	 * https://github.com/ClangBuiltLinux/linux/issues/1485
 	 */
 	cur = __builtin_thread_pointer();
 #elif defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || defined(CONFIG_SMP)
+3 -6
arch/arm64/Kconfig
···
 config BUILTIN_RETURN_ADDRESS_STRIPS_PAC
 	bool
 	# Clang's __builtin_return_adddress() strips the PAC since 12.0.0
-	# https://reviews.llvm.org/D75044
-	default y if CC_IS_CLANG && (CLANG_VERSION >= 120000)
+	# https://github.com/llvm/llvm-project/commit/2a96f47c5ffca84cd774ad402cacd137f4bf45e2
+	default y if CC_IS_CLANG
 	# GCC's __builtin_return_address() strips the PAC since 11.1.0,
 	# and this was backported to 10.2.0, 9.4.0, 8.5.0, but not earlier
 	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94891
···
 config CPU_BIG_ENDIAN
 	bool "Build big-endian kernel"
-	depends on !LD_IS_LLD || LLD_VERSION >= 130000
 	# https://github.com/llvm/llvm-project/commit/1379b150991f70a5782e9a143c2ba5308da1161c
 	depends on AS_IS_GNU || AS_VERSION >= 150000
 	help
···
 	depends on !CC_IS_GCC || GCC_VERSION >= 100100
 	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
 	depends on !CC_IS_GCC
-	# https://github.com/llvm/llvm-project/commit/a88c722e687e6780dcd6a58718350dc76fcc4cc9
-	depends on !CC_IS_CLANG || CLANG_VERSION >= 120000
 	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_ARGS)
 	help
 	  Build the kernel with Branch Target Identification annotations
···
 config UNWIND_PATCH_PAC_INTO_SCS
 	bool "Enable shadow call stack dynamically using code patching"
-	# needs Clang with https://reviews.llvm.org/D111780 incorporated
+	# needs Clang with https://github.com/llvm/llvm-project/commit/de07cde67b5d205d58690be012106022aea6d2b3 incorporated
 	depends on CC_IS_CLANG && CLANG_VERSION >= 150000
 	depends on ARM64_PTR_AUTH_KERNEL && CC_HAS_BRANCH_PROT_PAC_RET
 	depends on SHADOW_CALL_STACK
-1
arch/powerpc/Kconfig
···
 config COMPAT
 	bool "Enable support for 32bit binaries"
 	depends on PPC64
-	depends on !CC_IS_CLANG || CLANG_VERSION >= 120000
 	default y if !CPU_LITTLE_ENDIAN
 	select ARCH_WANT_OLD_COMPAT_IPC
 	select COMPAT_OLD_SIGACTION
+2 -2
arch/powerpc/Makefile
···
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mlong-double-128)
 
 # Clang unconditionally reserves r2 on ppc32 and does not support the flag
-# https://bugs.llvm.org/show_bug.cgi?id=39555
+# https://llvm.org/pr39555
 CFLAGS-$(CONFIG_PPC32)	:= $(call cc-option, -ffixed-r2)
 
 # Clang doesn't support -mmultiple / -mno-multiple
-# https://bugs.llvm.org/show_bug.cgi?id=39556
+# https://llvm.org/pr39556
 CFLAGS-$(CONFIG_PPC32)	+= $(call cc-option, $(MULTIPLEWORD))
 
 CFLAGS-$(CONFIG_PPC32)	+= $(call cc-option,-mno-readonly-in-sdata)
+1 -1
arch/powerpc/kvm/book3s_hv_nested.c
···
 	hr->dawrx1 = vcpu->arch.dawrx1;
 }
 
-/* Use noinline_for_stack due to https://bugs.llvm.org/show_bug.cgi?id=49610 */
+/* Use noinline_for_stack due to https://llvm.org/pr49610 */
 static noinline_for_stack void byteswap_pt_regs(struct pt_regs *regs)
 {
 	unsigned long *addr = (unsigned long *) regs;
+1 -3
arch/riscv/Kconfig
···
 
 config CLANG_SUPPORTS_DYNAMIC_FTRACE
 	def_bool CC_IS_CLANG
-	# https://github.com/llvm/llvm-project/commit/6ab8927931851bb42b2c93a00801dc499d7d9b1e
-	depends on CLANG_VERSION >= 130000
 	# https://github.com/ClangBuiltLinux/linux/issues/1817
 	depends on AS_IS_GNU || (AS_IS_LLVM && (LD_IS_LLD || LD_VERSION >= 23600))
···
 	def_bool $(as-instr,.insn r 51$(comma) 0$(comma) 0$(comma) t0$(comma) t0$(comma) zero)
 
 config AS_HAS_OPTION_ARCH
-	# https://reviews.llvm.org/D123515
+	# https://github.com/llvm/llvm-project/commit/9e8ed3403c191ab9c4903e8eeb8f732ff8a43cb4
 	def_bool y
 	depends on $(as-instr, .option arch$(comma) +m)
+2 -12
arch/riscv/include/asm/ftrace.h
···
 #endif
 #define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
 
-/*
- * Clang prior to 13 had "mcount" instead of "_mcount":
- * https://reviews.llvm.org/D98881
- */
-#if defined(CONFIG_CC_IS_GCC) || CONFIG_CLANG_VERSION >= 130000
-#define MCOUNT_NAME _mcount
-#else
-#define MCOUNT_NAME mcount
-#endif
-
 #define ARCH_SUPPORTS_FTRACE_OPS 1
 #ifndef __ASSEMBLY__
···
 
 #define ftrace_return_address(n) return_address(n)
 
-void MCOUNT_NAME(void);
+void _mcount(void);
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
 	return addr;
···
  * both auipc and jalr at the same time.
  */
 
-#define MCOUNT_ADDR		((unsigned long)MCOUNT_NAME)
+#define MCOUNT_ADDR		((unsigned long)_mcount)
 #define JALR_SIGN_MASK		(0x00000800)
 #define JALR_OFFSET_MASK	(0x00000fff)
 #define AUIPC_OFFSET_MASK	(0xfffff000)
+5 -5
arch/riscv/kernel/mcount.S
···
 
 SYM_TYPED_FUNC_START(ftrace_stub)
 #ifdef CONFIG_DYNAMIC_FTRACE
-	.global MCOUNT_NAME
-	.set    MCOUNT_NAME, ftrace_stub
+	.global _mcount
+	.set    _mcount, ftrace_stub
 #endif
 	ret
 SYM_FUNC_END(ftrace_stub)
···
 #endif
 
 #ifndef CONFIG_DYNAMIC_FTRACE
-SYM_FUNC_START(MCOUNT_NAME)
+SYM_FUNC_START(_mcount)
 	la	t4, ftrace_stub
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	la	t0, ftrace_graph_return
···
 	jalr	t5
 	RESTORE_ABI_STATE
 	ret
-SYM_FUNC_END(MCOUNT_NAME)
+SYM_FUNC_END(_mcount)
 #endif
-EXPORT_SYMBOL(MCOUNT_NAME)
+EXPORT_SYMBOL(_mcount)
+1 -1
arch/s390/include/asm/ftrace.h
···
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_CC_IS_CLANG
-/* https://bugs.llvm.org/show_bug.cgi?id=41424 */
+/* https://llvm.org/pr41424 */
 #define ftrace_return_address(n) 0UL
 #else
 #define ftrace_return_address(n) __builtin_return_address(n)
+1 -1
arch/sparc/kernel/chmc.c
···
 #define PFX DRV_MODULE_NAME ": "
 #define DRV_MODULE_VERSION "0.2"
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("UltraSPARC-III memory controller driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
arch/sparc/kernel/ds.c
···
 
 static char version[] =
 	DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Sun LDOM domain services driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
-6
arch/x86/Makefile
···
 
 KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE)
 
-ifdef CONFIG_LTO_CLANG
-ifeq ($(call test-lt, $(CONFIG_LLD_VERSION), 130000),y)
-KBUILD_LDFLAGS	+= -plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8)
-endif
-endif
-
 ifdef CONFIG_X86_NEED_RELOCS
 LDFLAGS_vmlinux := --emit-relocs --discard-none
 else
+1 -1
arch/x86/power/Makefile
···
 CFLAGS_cpu.o	:= -fno-stack-protector
 
 # Clang may incorrectly inline functions with stack protector enabled into
-# __restore_processor_state(): https://bugs.llvm.org/show_bug.cgi?id=47479
+# __restore_processor_state(): https://llvm.org/pr47479
 CFLAGS_REMOVE_cpu.o := $(CC_FLAGS_LTO)
 
 obj-$(CONFIG_PM_SLEEP)	+= cpu.o
+1 -1
crypto/blake2b_generic.c
···
 	ROUND(10);
 	ROUND(11);
 #ifdef CONFIG_CC_IS_CLANG
-#pragma nounroll /* https://bugs.llvm.org/show_bug.cgi?id=45803 */
+#pragma nounroll /* https://llvm.org/pr45803 */
 #endif
 	for (i = 0; i < 8; ++i)
 		S->h[i] = S->h[i] ^ v[i] ^ v[i + 8];
+1 -3
drivers/android/binder.c
···
 	struct binder_work *w;
 	int count;
 
-	count = 0;
-	hlist_for_each_entry(ref, &node->refs, node_entry)
-		count++;
+	count = hlist_count_nodes(&node->refs);
 
 	seq_printf(m, "  node %d: u%016llx c%016llx hs %d hw %d ls %d lw %d is %d iw %d tr %d",
 		   node->debug_id, (u64)node->ptr, (u64)node->cookie,
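The open-coded counting loop removed above is what the new hlist_count_nodes() helper encapsulates. A standalone userspace model of the idea (simplified: the kernel's hlist_node also carries a pprev back-pointer, and the real helper lives in include/linux/list.h):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of a kernel-style singly-headed hash list. */
struct hlist_node { struct hlist_node *next; };
struct hlist_head { struct hlist_node *first; };

/* Push a node at the head of the list. */
static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	n->next = h->first;
	h->first = n;
}

/* Walk the chain and count nodes, as the new helper does. */
static unsigned long hlist_count_nodes(struct hlist_head *head)
{
	unsigned long count = 0;

	for (struct hlist_node *pos = head->first; pos; pos = pos->next)
		count++;
	return count;
}
```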
+1 -1
drivers/block/sunvdc.c
···
 
 static char version[] =
 	DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Sun LDOM virtual disk client driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
drivers/char/hw_random/n2-drv.c
···
 static char version[] =
 	DRV_MODULE_NAME " v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Niagara2 RNG driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
drivers/char/tpm/st33zp24/i2c.c
···
 
 module_i2c_driver(st33zp24_i2c_driver);
 
-MODULE_AUTHOR("TPM support (TPMsupport@list.st.com)");
+MODULE_AUTHOR("TPM support <TPMsupport@list.st.com>");
 MODULE_DESCRIPTION("STM TPM 1.2 I2C ST33 Driver");
 MODULE_VERSION("1.3.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/st33zp24/spi.c
···
 
 module_spi_driver(st33zp24_spi_driver);
 
-MODULE_AUTHOR("TPM support (TPMsupport@list.st.com)");
+MODULE_AUTHOR("TPM support <TPMsupport@list.st.com>");
 MODULE_DESCRIPTION("STM TPM 1.2 SPI ST33 Driver");
 MODULE_VERSION("1.3.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/st33zp24/st33zp24.c
···
 EXPORT_SYMBOL(st33zp24_pm_resume);
 #endif
 
-MODULE_AUTHOR("TPM support (TPMsupport@list.st.com)");
+MODULE_AUTHOR("TPM support <TPMsupport@list.st.com>");
 MODULE_DESCRIPTION("ST33ZP24 TPM 1.2 driver");
 MODULE_VERSION("1.3.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm-interface.c
···
 subsys_initcall(tpm_init);
 module_exit(tpm_exit);
 
-MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
+MODULE_AUTHOR("Leendert van Doorn <leendert@watson.ibm.com>");
 MODULE_DESCRIPTION("TPM Driver");
 MODULE_VERSION("2.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm_atmel.c
···
 module_init(init_atmel);
 module_exit(cleanup_atmel);
 
-MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
+MODULE_AUTHOR("Leendert van Doorn <leendert@watson.ibm.com>");
 MODULE_DESCRIPTION("TPM Driver");
 MODULE_VERSION("2.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm_i2c_nuvoton.c
···
 
 module_i2c_driver(i2c_nuvoton_driver);
 
-MODULE_AUTHOR("Dan Morav (dan.morav@nuvoton.com)");
+MODULE_AUTHOR("Dan Morav <dan.morav@nuvoton.com>");
 MODULE_DESCRIPTION("Nuvoton TPM I2C Driver");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm_nsc.c
···
 module_init(init_nsc);
 module_exit(cleanup_nsc);
 
-MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
+MODULE_AUTHOR("Leendert van Doorn <leendert@watson.ibm.com>");
 MODULE_DESCRIPTION("TPM Driver");
 MODULE_VERSION("2.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm_tis.c
···
 
 module_init(init_tis);
 module_exit(cleanup_tis);
-MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
+MODULE_AUTHOR("Leendert van Doorn <leendert@watson.ibm.com>");
 MODULE_DESCRIPTION("TPM Driver");
 MODULE_VERSION("2.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm_tis_core.c
···
 EXPORT_SYMBOL_GPL(tpm_tis_resume);
 #endif
 
-MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
+MODULE_AUTHOR("Leendert van Doorn <leendert@watson.ibm.com>");
 MODULE_DESCRIPTION("TPM Driver");
 MODULE_VERSION("2.0");
 MODULE_LICENSE("GPL");
+1 -1
drivers/char/tpm/tpm_vtpm_proxy.c
···
 module_init(vtpm_module_init);
 module_exit(vtpm_module_exit);
 
-MODULE_AUTHOR("Stefan Berger (stefanb@us.ibm.com)");
+MODULE_AUTHOR("Stefan Berger <stefanb@us.ibm.com>");
 MODULE_DESCRIPTION("vTPM Driver");
 MODULE_VERSION("0.1");
 MODULE_LICENSE("GPL");
+1 -1
drivers/crypto/n2_core.c
···
 static const char version[] =
 	DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Niagara2 Crypto driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
drivers/firmware/efi/libstub/Makefile
···
 # Even when -mbranch-protection=none is set, Clang will generate a
 # .note.gnu.property for code-less object files (like lib/ctype.c),
 # so work around this by explicitly removing the unwanted section.
-# https://bugs.llvm.org/show_bug.cgi?id=46480
+# https://llvm.org/pr46480
 STUBCOPY_FLAGS-y += --remove-section=.note.gnu.property
 
 STUBCOPY_RELOC-$(CONFIG_X86_32)	:= R_386_32
+1 -1
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
···
 	/* Set ring buffer size in dwords */
 	uint32_t rb_bufsz = order_base_2(ring->ring_size / 4);
 
-	barrier(); /* work around https://bugs.llvm.org/show_bug.cgi?id=42576 */
+	barrier(); /* work around https://llvm.org/pr42576 */
 	rb_cntl = REG_SET_FIELD(rb_cntl, SDMA_GFX_RB_CNTL, RB_SIZE, rb_bufsz);
 #ifdef __BIG_ENDIAN
 	rb_cntl = REG_SET_FIELD(rb_cntl, SDMA_GFX_RB_CNTL, RB_SWAP_ENABLE, 1);
+1 -1
drivers/hwmon/dell-smm-hwmon.c
···
 	struct dell_smm_data *data;
 };
 
-MODULE_AUTHOR("Massimo Dal Zotto (dz@debian.org)");
+MODULE_AUTHOR("Massimo Dal Zotto <dz@debian.org>");
 MODULE_AUTHOR("Pali Rohár <pali@kernel.org>");
 MODULE_DESCRIPTION("Dell laptop SMM BIOS hwmon driver");
 MODULE_LICENSE("GPL");
+1 -1
drivers/hwmon/ultra45_env.c
···
 
 #define DRV_MODULE_VERSION "0.1"
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Ultra45 environmental monitor driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
drivers/i2c/muxes/i2c-mux-mlxcpld.c
···
 
 module_platform_driver(mlxcpld_mux_driver);
 
-MODULE_AUTHOR("Michael Shych (michaels@mellanox.com)");
+MODULE_AUTHOR("Michael Shych <michaels@mellanox.com>");
 MODULE_DESCRIPTION("Mellanox I2C-CPLD-MUX driver");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("platform:i2c-mux-mlxcpld");
+1 -1
drivers/leds/leds-sunfire.c
···
 #include <asm/fhc.h>
 #include <asm/upa.h>
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Sun Fire LED driver");
 MODULE_LICENSE("GPL");
+1 -7
drivers/md/bcache/sysfs.c
···
 	for (h = c->bucket_hash;
 	     h < c->bucket_hash + (1 << BUCKET_HASH_BITS);
 	     h++) {
-		unsigned int i = 0;
-		struct hlist_node *p;
-
-		hlist_for_each(p, h)
-			i++;
-
-		ret = max(ret, i);
+		ret = max(ret, hlist_count_nodes(h));
 	}
 
 	mutex_unlock(&c->bucket_lock);
+1 -1
drivers/media/common/siano/smscoreapi.c
···
 module_exit(smscore_module_exit);
 
 MODULE_DESCRIPTION("Siano MDTV Core module");
-MODULE_AUTHOR("Siano Mobile Silicon, Inc. (uris@siano-ms.com)");
+MODULE_AUTHOR("Siano Mobile Silicon, Inc. <uris@siano-ms.com>");
 MODULE_LICENSE("GPL");
 
 /* This should match what's defined at smscoreapi.h */
+1 -1
drivers/media/common/siano/smsdvb-main.c
···
 module_exit(smsdvb_module_exit);
 
 MODULE_DESCRIPTION("SMS DVB subsystem adaptation module");
-MODULE_AUTHOR("Siano Mobile Silicon, Inc. (uris@siano-ms.com)");
+MODULE_AUTHOR("Siano Mobile Silicon, Inc. <uris@siano-ms.com>");
 MODULE_LICENSE("GPL");
+1 -1
drivers/media/dvb-frontends/cx24117.c
···
 
 
 MODULE_DESCRIPTION("DVB Frontend module for Conexant cx24117/cx24132 hardware");
-MODULE_AUTHOR("Luis Alves (ljalvs@gmail.com)");
+MODULE_AUTHOR("Luis Alves <ljalvs@gmail.com>");
 MODULE_LICENSE("GPL");
 MODULE_VERSION("1.1");
 MODULE_FIRMWARE(CX24117_DEFAULT_FIRMWARE);
+1 -1
drivers/media/test-drivers/vicodec/codec-fwht.c
···
 
 /*
  * noinline_for_stack to work around
- * https://bugs.llvm.org/show_bug.cgi?id=38809
+ * https://llvm.org/pr38809
  */
 static int noinline_for_stack
 rlc(const s16 *in, __be16 *output, int blocktype)
+1 -1
drivers/media/usb/siano/smsusb.c
···
 module_usb_driver(smsusb_driver);
 
 MODULE_DESCRIPTION("Driver for the Siano SMS1xxx USB dongle");
-MODULE_AUTHOR("Siano Mobile Silicon, INC. (uris@siano-ms.com)");
+MODULE_AUTHOR("Siano Mobile Silicon, Inc. <uris@siano-ms.com>");
 MODULE_LICENSE("GPL");
+1 -1
drivers/net/ethernet/broadcom/tg3.c
···
 #define FIRMWARE_TG3TSO		"tigon/tg3_tso.bin"
 #define FIRMWARE_TG3TSO5	"tigon/tg3_tso5.bin"
 
-MODULE_AUTHOR("David S. Miller (davem@redhat.com) and Jeff Garzik (jgarzik@pobox.com)");
+MODULE_AUTHOR("David S. Miller <davem@redhat.com> and Jeff Garzik <jgarzik@pobox.com>");
 MODULE_DESCRIPTION("Broadcom Tigon3 ethernet driver");
 MODULE_LICENSE("GPL");
 MODULE_FIRMWARE(FIRMWARE_TG3);
+1 -1
drivers/net/ethernet/sun/cassini.c
···
 static int cassini_debug = -1;	/* -1 == use CAS_DEF_MSG_ENABLE as value */
 static int link_mode;
 
-MODULE_AUTHOR("Adrian Sun (asun@darksunrising.com)");
+MODULE_AUTHOR("Adrian Sun <asun@darksunrising.com>");
 MODULE_DESCRIPTION("Sun Cassini(+) ethernet driver");
 MODULE_LICENSE("GPL");
 MODULE_FIRMWARE("sun/cassini.bin");
+1 -1
drivers/net/ethernet/sun/niu.c
···
 static char version[] =
 	DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("NIU ethernet driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
drivers/net/ethernet/sun/sunhme.c
···
 
 #define DRV_NAME	"sunhme"
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Sun HappyMealEthernet(HME) 10/100baseT ethernet driver");
 MODULE_LICENSE("GPL");
+1 -1
drivers/net/ethernet/sun/sunvnet.c
···
 
 static char version[] =
 	DRV_MODULE_NAME " " DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")";
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Sun LDOM virtual network driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_MODULE_VERSION);
+1 -1
drivers/net/ethernet/sun/sunvnet_common.c
···
  */
 #define VNET_MAX_RETRIES	10
 
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_DESCRIPTION("Sun LDOM virtual network support library");
 MODULE_LICENSE("GPL");
 MODULE_VERSION("1.1");
+1 -1
drivers/net/ppp/pptp.c
···
 module_exit(pptp_exit_module);
 
 MODULE_DESCRIPTION("Point-to-Point Tunneling Protocol");
-MODULE_AUTHOR("D. Kozlov (xeb@mail.ru)");
+MODULE_AUTHOR("D. Kozlov <xeb@mail.ru>");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_NET_PF_PROTO(PF_PPPOX, PX_PROTO_PPTP);
+1 -1
drivers/platform/x86/compal-laptop.c
···
 module_exit(compal_cleanup);
 
 MODULE_AUTHOR("Cezary Jackiewicz");
-MODULE_AUTHOR("Roald Frederickx (roald.frederickx@gmail.com)");
+MODULE_AUTHOR("Roald Frederickx <roald.frederickx@gmail.com>");
 MODULE_DESCRIPTION("Compal Laptop Support");
 MODULE_VERSION(DRIVER_VERSION);
 MODULE_LICENSE("GPL");
+1 -1
drivers/platform/x86/intel/oaktrail.c
···
 module_init(oaktrail_init);
 module_exit(oaktrail_cleanup);
 
-MODULE_AUTHOR("Yin Kangkai (kangkai.yin@intel.com)");
+MODULE_AUTHOR("Yin Kangkai <kangkai.yin@intel.com>");
 MODULE_DESCRIPTION("Intel Oaktrail Platform ACPI Extras");
 MODULE_VERSION(DRIVER_VERSION);
 MODULE_LICENSE("GPL");
+1 -1
drivers/platform/x86/mlx-platform.c
···
 }
 module_exit(mlxplat_exit);
 
-MODULE_AUTHOR("Vadim Pasternak (vadimp@mellanox.com)");
+MODULE_AUTHOR("Vadim Pasternak <vadimp@mellanox.com>");
 MODULE_DESCRIPTION("Mellanox platform driver");
 MODULE_LICENSE("Dual BSD/GPL");
+1 -1
drivers/regulator/Kconfig
···
 config REGULATOR_DA903X
 	tristate "Dialog Semiconductor DA9030/DA9034 regulators"
 	depends on PMIC_DA903X
-	depends on !CC_IS_CLANG # https://bugs.llvm.org/show_bug.cgi?id=38789
+	depends on !CC_IS_CLANG # https://llvm.org/pr38789
 	help
 	  Say y here to support the BUCKs and LDOs regulators found on
 	  Dialog Semiconductor DA9030/DA9034 PMIC.
+1 -1
drivers/s390/net/fsm.c
···
 #include <linux/slab.h>
 #include <linux/timer.h>
 
-MODULE_AUTHOR("(C) 2000 IBM Corp. by Fritz Elfert (felfert@millenux.com)");
+MODULE_AUTHOR("(C) 2000 IBM Corp. by Fritz Elfert <felfert@millenux.com>");
 MODULE_DESCRIPTION("Finite state machine helper functions");
 MODULE_LICENSE("GPL");
+1 -1
drivers/sbus/char/openprom.c
···
 #include <linux/pci.h>
 #endif
 
-MODULE_AUTHOR("Thomas K. Dyas (tdyas@noc.rutgers.edu) and Eddie C. Dost (ecd@skynet.be)");
+MODULE_AUTHOR("Thomas K. Dyas <tdyas@noc.rutgers.edu> and Eddie C. Dost <ecd@skynet.be>");
 MODULE_DESCRIPTION("OPENPROM Configuration Driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION("1.0");
+1 -1
drivers/scsi/esp_scsi.c
···
 }
 
 MODULE_DESCRIPTION("ESP SCSI driver core");
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_VERSION);
+1 -1
drivers/scsi/jazz_esp.c
···
 module_platform_driver(esp_jazz_driver);
 
 MODULE_DESCRIPTION("JAZZ ESP SCSI driver");
-MODULE_AUTHOR("Thomas Bogendoerfer (tsbogend@alpha.franken.de)");
+MODULE_AUTHOR("Thomas Bogendoerfer <tsbogend@alpha.franken.de>");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_VERSION);
+1 -1
drivers/scsi/mesh.c
···
 #define KERN_DEBUG KERN_WARNING
 #endif
 
-MODULE_AUTHOR("Paul Mackerras (paulus@samba.org)");
+MODULE_AUTHOR("Paul Mackerras <paulus@samba.org>");
 MODULE_DESCRIPTION("PowerMac MESH SCSI driver");
 MODULE_LICENSE("GPL");
+1 -1
drivers/scsi/qlogicpti.c
···
 module_platform_driver(qpti_sbus_driver);
 
 MODULE_DESCRIPTION("QlogicISP SBUS driver");
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_LICENSE("GPL");
 MODULE_VERSION("2.1");
 MODULE_FIRMWARE("qlogic/isp1000.bin");
+1 -1
drivers/scsi/sun3x_esp.c
···
 module_platform_driver(esp_sun3x_driver);
 
 MODULE_DESCRIPTION("Sun3x ESP SCSI driver");
-MODULE_AUTHOR("Thomas Bogendoerfer (tsbogend@alpha.franken.de)");
+MODULE_AUTHOR("Thomas Bogendoerfer <tsbogend@alpha.franken.de>");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_VERSION);
 MODULE_ALIAS("platform:sun3x_esp");
+1 -1
drivers/scsi/sun_esp.c
···
 module_platform_driver(esp_sbus_driver);
 
 MODULE_DESCRIPTION("Sun ESP SCSI driver");
-MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
+MODULE_AUTHOR("David S. Miller <davem@davemloft.net>");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(DRV_VERSION);
+1 -1
drivers/video/fbdev/hgafb.c
···
  *
  * ------------------------------------------------------------------------- */
 
-MODULE_AUTHOR("Ferenc Bakonyi (fero@drama.obuda.kando.hu)");
+MODULE_AUTHOR("Ferenc Bakonyi <fero@drama.obuda.kando.hu>");
 MODULE_DESCRIPTION("FBDev driver for Hercules Graphics Adaptor");
 MODULE_LICENSE("GPL");
+6
fs/fat/nfs.c
···
 		fid->parent_i_gen = parent->i_generation;
 		type = FILEID_FAT_WITH_PARENT;
 		*lenp = FAT_FID_SIZE_WITH_PARENT;
+	} else {
+		/*
+		 * We need to initialize this field because the fh is actually
+		 * 12 bytes long
+		 */
+		fid->parent_i_pos_hi = 0;
 	}

 	return type;
+45 -44
fs/nilfs2/alloc.c
···
 		ret = nilfs_palloc_get_desc_block(inode, group, 1, &desc_bh);
 		if (ret < 0)
 			return ret;
-		desc_kaddr = kmap(desc_bh->b_page);
+		desc_kaddr = kmap_local_page(desc_bh->b_page);
 		desc = nilfs_palloc_block_get_group_desc(
 			inode, group, desc_bh, desc_kaddr);
 		n = nilfs_palloc_rest_groups_in_desc_block(inode, group,
 							   maxgroup);
-		for (j = 0; j < n; j++, desc++, group++) {
+		for (j = 0; j < n; j++, desc++, group++, group_offset = 0) {
 			lock = nilfs_mdt_bgl_lock(inode, group);
-			if (nilfs_palloc_group_desc_nfrees(desc, lock) > 0) {
-				ret = nilfs_palloc_get_bitmap_block(
-					inode, group, 1, &bitmap_bh);
-				if (ret < 0)
-					goto out_desc;
-				bitmap_kaddr = kmap(bitmap_bh->b_page);
-				bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
-				pos = nilfs_palloc_find_available_slot(
-					bitmap, group_offset,
-					entries_per_group, lock);
-				if (pos >= 0) {
-					/* found a free entry */
-					nilfs_palloc_group_desc_add_entries(
-						desc, lock, -1);
-					req->pr_entry_nr =
-						entries_per_group * group + pos;
-					kunmap(desc_bh->b_page);
-					kunmap(bitmap_bh->b_page);
+			if (nilfs_palloc_group_desc_nfrees(desc, lock) == 0)
+				continue;

-					req->pr_desc_bh = desc_bh;
-					req->pr_bitmap_bh = bitmap_bh;
-					return 0;
-				}
-				kunmap(bitmap_bh->b_page);
-				brelse(bitmap_bh);
+			kunmap_local(desc_kaddr);
+			ret = nilfs_palloc_get_bitmap_block(inode, group, 1,
+							    &bitmap_bh);
+			if (unlikely(ret < 0)) {
+				brelse(desc_bh);
+				return ret;
 			}

-			group_offset = 0;
+			desc_kaddr = kmap_local_page(desc_bh->b_page);
+			desc = nilfs_palloc_block_get_group_desc(
+				inode, group, desc_bh, desc_kaddr);
+
+			bitmap_kaddr = kmap_local_page(bitmap_bh->b_page);
+			bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
+			pos = nilfs_palloc_find_available_slot(
+				bitmap, group_offset, entries_per_group, lock);
+			kunmap_local(bitmap_kaddr);
+			if (pos >= 0)
+				goto found;
+
+			brelse(bitmap_bh);
 		}

-		kunmap(desc_bh->b_page);
+		kunmap_local(desc_kaddr);
 		brelse(desc_bh);
 	}

 	/* no entries left */
 	return -ENOSPC;

-out_desc:
-	kunmap(desc_bh->b_page);
-	brelse(desc_bh);
-	return ret;
+found:
+	/* found a free entry */
+	nilfs_palloc_group_desc_add_entries(desc, lock, -1);
+	req->pr_entry_nr = entries_per_group * group + pos;
+	kunmap_local(desc_kaddr);
+
+	req->pr_desc_bh = desc_bh;
+	req->pr_bitmap_bh = bitmap_bh;
+	return 0;
 }

 /**
···
 	spinlock_t *lock;

 	group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
-	desc_kaddr = kmap(req->pr_desc_bh->b_page);
+	desc_kaddr = kmap_local_page(req->pr_desc_bh->b_page);
 	desc = nilfs_palloc_block_get_group_desc(inode, group,
 						 req->pr_desc_bh, desc_kaddr);
-	bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page);
+	bitmap_kaddr = kmap_local_page(req->pr_bitmap_bh->b_page);
 	bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
 	lock = nilfs_mdt_bgl_lock(inode, group);
···
 	else
 		nilfs_palloc_group_desc_add_entries(desc, lock, 1);

-	kunmap(req->pr_bitmap_bh->b_page);
-	kunmap(req->pr_desc_bh->b_page);
+	kunmap_local(bitmap_kaddr);
+	kunmap_local(desc_kaddr);

 	mark_buffer_dirty(req->pr_desc_bh);
 	mark_buffer_dirty(req->pr_bitmap_bh);
···
 	spinlock_t *lock;

 	group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
-	desc_kaddr = kmap(req->pr_desc_bh->b_page);
+	desc_kaddr = kmap_local_page(req->pr_desc_bh->b_page);
 	desc = nilfs_palloc_block_get_group_desc(inode, group,
 						 req->pr_desc_bh, desc_kaddr);
-	bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page);
+	bitmap_kaddr = kmap_local_page(req->pr_bitmap_bh->b_page);
 	bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
 	lock = nilfs_mdt_bgl_lock(inode, group);
···
 	else
 		nilfs_palloc_group_desc_add_entries(desc, lock, 1);

-	kunmap(req->pr_bitmap_bh->b_page);
-	kunmap(req->pr_desc_bh->b_page);
+	kunmap_local(bitmap_kaddr);
+	kunmap_local(desc_kaddr);

 	brelse(req->pr_bitmap_bh);
 	brelse(req->pr_desc_bh);
···
 	/* Get the first entry number of the group */
 	group_min_nr = (__u64)group * epg;

-	bitmap_kaddr = kmap(bitmap_bh->b_page);
+	bitmap_kaddr = kmap_local_page(bitmap_bh->b_page);
 	bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
 	lock = nilfs_mdt_bgl_lock(inode, group);
···
 		entry_start = rounddown(group_offset, epb);
 	} while (true);

-	kunmap(bitmap_bh->b_page);
+	kunmap_local(bitmap_kaddr);
 	mark_buffer_dirty(bitmap_bh);
 	brelse(bitmap_bh);
···
 			  inode->i_ino);
 	}

-	desc_kaddr = kmap_atomic(desc_bh->b_page);
+	desc_kaddr = kmap_local_page(desc_bh->b_page);
 	desc = nilfs_palloc_block_get_group_desc(
 		inode, group, desc_bh, desc_kaddr);
 	nfree = nilfs_palloc_group_desc_add_entries(desc, lock, n);
-	kunmap_atomic(desc_kaddr);
+	kunmap_local(desc_kaddr);
 	mark_buffer_dirty(desc_bh);
 	nilfs_mdt_mark_dirty(inode);
 	brelse(desc_bh);
-3
fs/nilfs2/bmap.c
···
  */
 void nilfs_bmap_write(struct nilfs_bmap *bmap, struct nilfs_inode *raw_inode)
 {
-	down_write(&bmap->b_sem);
 	memcpy(raw_inode->i_bmap, bmap->b_u.u_data,
 	       NILFS_INODE_BMAP_SIZE * sizeof(__le64));
 	if (bmap->b_inode->i_ino == NILFS_DAT_INO)
 		bmap->b_last_allocated_ptr = NILFS_BMAP_NEW_PTR_INIT;
-
-	up_write(&bmap->b_sem);
 }

 void nilfs_bmap_init_gc(struct nilfs_bmap *bmap)
+7 -2
fs/nilfs2/btree.c
···
 		dat = nilfs_bmap_get_dat(btree);
 		ret = nilfs_dat_translate(dat, ptr, &blocknr);
 		if (ret < 0)
-			goto out;
+			goto dat_error;
 		ptr = blocknr;
 	}
 	cnt = 1;
···
 		if (dat) {
 			ret = nilfs_dat_translate(dat, ptr2, &blocknr);
 			if (ret < 0)
-				goto out;
+				goto dat_error;
 			ptr2 = blocknr;
 		}
 		if (ptr2 != ptr + cnt || ++cnt == maxblocks)
···
 out:
 	nilfs_btree_free_path(path);
 	return ret;
+
+dat_error:
+	if (ret == -ENOENT)
+		ret = -EINVAL;	/* Notify bmap layer of metadata corruption */
+	goto out;
 }

 static void nilfs_btree_promote_key(struct nilfs_bmap *btree,
+219 -106
fs/nilfs2/cpfile.c
···
 {
 	__u64 tcno = cno + NILFS_MDT(cpfile)->mi_first_entry_offset - 1;

-	do_div(tcno, nilfs_cpfile_checkpoints_per_block(cpfile));
+	tcno = div64_ul(tcno, nilfs_cpfile_checkpoints_per_block(cpfile));
 	return (unsigned long)tcno;
 }
···
 }

 /**
- * nilfs_cpfile_get_checkpoint - get a checkpoint
- * @cpfile: inode of checkpoint file
- * @cno: checkpoint number
- * @create: create flag
- * @cpp: pointer to a checkpoint
- * @bhp: pointer to a buffer head
+ * nilfs_cpfile_read_checkpoint - read a checkpoint entry in cpfile
+ * @cpfile: checkpoint file inode
+ * @cno: number of checkpoint entry to read
+ * @root: nilfs root object
+ * @ifile: ifile's inode to read and attach to @root
  *
- * Description: nilfs_cpfile_get_checkpoint() acquires the checkpoint
- * specified by @cno. A new checkpoint will be created if @cno is the current
- * checkpoint number and @create is nonzero.
+ * This function imports checkpoint information from the checkpoint file and
+ * stores it to the inode file given by @ifile and the nilfs root object
+ * given by @root.
  *
- * Return Value: On success, 0 is returned, and the checkpoint and the
- * buffer head of the buffer on which the checkpoint is located are stored in
- * the place pointed by @cpp and @bhp, respectively. On error, one of the
- * following negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - No such checkpoint.
- *
- * %-EINVAL - invalid checkpoint.
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EINVAL	- Invalid checkpoint.
+ * * %-ENOMEM	- Insufficient memory available.
+ * * %-EIO	- I/O error (including metadata corruption).
  */
-int nilfs_cpfile_get_checkpoint(struct inode *cpfile,
-				__u64 cno,
-				int create,
-				struct nilfs_checkpoint **cpp,
-				struct buffer_head **bhp)
+int nilfs_cpfile_read_checkpoint(struct inode *cpfile, __u64 cno,
+				 struct nilfs_root *root, struct inode *ifile)
+{
+	struct buffer_head *cp_bh;
+	struct nilfs_checkpoint *cp;
+	void *kaddr;
+	int ret;
+
+	if (cno < 1 || cno > nilfs_mdt_cno(cpfile))
+		return -EINVAL;
+
+	down_read(&NILFS_MDT(cpfile)->mi_sem);
+	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh);
+	if (unlikely(ret < 0)) {
+		if (ret == -ENOENT)
+			ret = -EINVAL;
+		goto out_sem;
+	}
+
+	kaddr = kmap_local_page(cp_bh->b_page);
+	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	if (nilfs_checkpoint_invalid(cp)) {
+		ret = -EINVAL;
+		goto put_cp;
+	}
+
+	ret = nilfs_read_inode_common(ifile, &cp->cp_ifile_inode);
+	if (unlikely(ret)) {
+		/*
+		 * Since this inode is on a checkpoint entry, treat errors
+		 * as metadata corruption.
+		 */
+		nilfs_err(cpfile->i_sb,
+			  "ifile inode (checkpoint number=%llu) corrupted",
+			  (unsigned long long)cno);
+		ret = -EIO;
+		goto put_cp;
+	}
+
+	/* Configure the nilfs root object */
+	atomic64_set(&root->inodes_count, le64_to_cpu(cp->cp_inodes_count));
+	atomic64_set(&root->blocks_count, le64_to_cpu(cp->cp_blocks_count));
+	root->ifile = ifile;
+
+put_cp:
+	kunmap_local(kaddr);
+	brelse(cp_bh);
+out_sem:
+	up_read(&NILFS_MDT(cpfile)->mi_sem);
+	return ret;
+}
+
+/**
+ * nilfs_cpfile_create_checkpoint - create a checkpoint entry on cpfile
+ * @cpfile: checkpoint file inode
+ * @cno: number of checkpoint to set up
+ *
+ * This function creates a checkpoint with the number specified by @cno on
+ * cpfile.  If the specified checkpoint entry already exists due to a past
+ * failure, it will be reused without returning an error.
+ * In either case, the buffer of the block containing the checkpoint entry
+ * and the cpfile inode are made dirty for inclusion in the write log.
+ *
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-ENOMEM	- Insufficient memory available.
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-EROFS	- Read only filesystem
+ */
+int nilfs_cpfile_create_checkpoint(struct inode *cpfile, __u64 cno)
 {
 	struct buffer_head *header_bh, *cp_bh;
 	struct nilfs_cpfile_header *header;
···
 	void *kaddr;
 	int ret;

-	if (unlikely(cno < 1 || cno > nilfs_mdt_cno(cpfile) ||
-		     (cno < nilfs_mdt_cno(cpfile) && create)))
-		return -EINVAL;
+	if (WARN_ON_ONCE(cno < 1))
+		return -EIO;

 	down_write(&NILFS_MDT(cpfile)->mi_sem);
-
 	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
-	if (ret < 0)
+	if (unlikely(ret < 0)) {
+		if (ret == -ENOENT) {
+			nilfs_error(cpfile->i_sb,
+				    "checkpoint creation failed due to metadata corruption.");
+			ret = -EIO;
+		}
 		goto out_sem;
-	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, create, &cp_bh);
-	if (ret < 0)
+	}
+	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 1, &cp_bh);
+	if (unlikely(ret < 0))
 		goto out_header;
-	kaddr = kmap(cp_bh->b_page);
+
+	kaddr = kmap_local_page(cp_bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
 	if (nilfs_checkpoint_invalid(cp)) {
-		if (!create) {
-			kunmap(cp_bh->b_page);
-			brelse(cp_bh);
-			ret = -ENOENT;
-			goto out_header;
-		}
 		/* a newly-created checkpoint */
 		nilfs_checkpoint_clear_invalid(cp);
 		if (!nilfs_cpfile_is_in_first(cpfile, cno))
 			nilfs_cpfile_block_add_valid_checkpoints(cpfile, cp_bh,
								 kaddr, 1);
-		mark_buffer_dirty(cp_bh);
+		kunmap_local(kaddr);

-		kaddr = kmap_atomic(header_bh->b_page);
+		kaddr = kmap_local_page(header_bh->b_page);
 		header = nilfs_cpfile_block_get_header(cpfile, header_bh,
						       kaddr);
 		le64_add_cpu(&header->ch_ncheckpoints, 1);
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		mark_buffer_dirty(header_bh);
-		nilfs_mdt_mark_dirty(cpfile);
+	} else {
+		kunmap_local(kaddr);
 	}

-	if (cpp != NULL)
-		*cpp = cp;
-	*bhp = cp_bh;
+	/* Force the buffer and the inode to become dirty */
+	mark_buffer_dirty(cp_bh);
+	brelse(cp_bh);
+	nilfs_mdt_mark_dirty(cpfile);

- out_header:
+out_header:
 	brelse(header_bh);

- out_sem:
+out_sem:
 	up_write(&NILFS_MDT(cpfile)->mi_sem);
 	return ret;
 }

 /**
- * nilfs_cpfile_put_checkpoint - put a checkpoint
- * @cpfile: inode of checkpoint file
- * @cno: checkpoint number
- * @bh: buffer head
+ * nilfs_cpfile_finalize_checkpoint - fill in a checkpoint entry in cpfile
+ * @cpfile: checkpoint file inode
+ * @cno: checkpoint number
+ * @root: nilfs root object
+ * @blkinc: number of blocks added by this checkpoint
+ * @ctime: checkpoint creation time
+ * @minor: minor checkpoint flag
  *
- * Description: nilfs_cpfile_put_checkpoint() releases the checkpoint
- * specified by @cno. @bh must be the buffer head which has been returned by
- * a previous call to nilfs_cpfile_get_checkpoint() with @cno.
+ * This function completes the checkpoint entry numbered by @cno in the
+ * cpfile with the data given by the arguments @root, @blkinc, @ctime, and
+ * @minor.
+ *
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-ENOMEM	- Insufficient memory available.
+ * * %-EIO	- I/O error (including metadata corruption).
  */
-void nilfs_cpfile_put_checkpoint(struct inode *cpfile, __u64 cno,
-				 struct buffer_head *bh)
+int nilfs_cpfile_finalize_checkpoint(struct inode *cpfile, __u64 cno,
+				     struct nilfs_root *root, __u64 blkinc,
+				     time64_t ctime, bool minor)
 {
-	kunmap(bh->b_page);
-	brelse(bh);
+	struct buffer_head *cp_bh;
+	struct nilfs_checkpoint *cp;
+	void *kaddr;
+	int ret;
+
+	if (WARN_ON_ONCE(cno < 1))
+		return -EIO;
+
+	down_write(&NILFS_MDT(cpfile)->mi_sem);
+	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh);
+	if (unlikely(ret < 0)) {
+		if (ret == -ENOENT)
+			goto error;
+		goto out_sem;
+	}
+
+	kaddr = kmap_local_page(cp_bh->b_page);
+	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	if (unlikely(nilfs_checkpoint_invalid(cp))) {
+		kunmap_local(kaddr);
+		brelse(cp_bh);
+		goto error;
+	}
+
+	cp->cp_snapshot_list.ssl_next = 0;
+	cp->cp_snapshot_list.ssl_prev = 0;
+	cp->cp_inodes_count = cpu_to_le64(atomic64_read(&root->inodes_count));
+	cp->cp_blocks_count = cpu_to_le64(atomic64_read(&root->blocks_count));
+	cp->cp_nblk_inc = cpu_to_le64(blkinc);
+	cp->cp_create = cpu_to_le64(ctime);
+	cp->cp_cno = cpu_to_le64(cno);
+
+	if (minor)
+		nilfs_checkpoint_set_minor(cp);
+	else
+		nilfs_checkpoint_clear_minor(cp);
+
+	nilfs_write_inode_common(root->ifile, &cp->cp_ifile_inode);
+	nilfs_bmap_write(NILFS_I(root->ifile)->i_bmap, &cp->cp_ifile_inode);
+
+	kunmap_local(kaddr);
+	brelse(cp_bh);
+out_sem:
+	up_write(&NILFS_MDT(cpfile)->mi_sem);
+	return ret;
+
+error:
+	nilfs_error(cpfile->i_sb,
+		    "checkpoint finalization failed due to metadata corruption.");
+	ret = -EIO;
+	goto out_sem;
 }

 /**
···
 			continue;
 		}

-		kaddr = kmap_atomic(cp_bh->b_page);
+		kaddr = kmap_local_page(cp_bh->b_page);
 		cp = nilfs_cpfile_block_get_checkpoint(
 			cpfile, cno, cp_bh, kaddr);
 		nicps = 0;
···
 				cpfile, cp_bh, kaddr, nicps);
 			if (count == 0) {
 				/* make hole */
-				kunmap_atomic(kaddr);
+				kunmap_local(kaddr);
 				brelse(cp_bh);
 				ret =
 					nilfs_cpfile_delete_checkpoint_block(
···
 			}
 		}

-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		brelse(cp_bh);
 	}

 	if (tnicps > 0) {
-		kaddr = kmap_atomic(header_bh->b_page);
+		kaddr = kmap_local_page(header_bh->b_page);
 		header = nilfs_cpfile_block_get_header(cpfile, header_bh,
						       kaddr);
 		le64_add_cpu(&header->ch_ncheckpoints, -(u64)tnicps);
 		mark_buffer_dirty(header_bh);
 		nilfs_mdt_mark_dirty(cpfile);
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 	}

 	brelse(header_bh);
···
 	}
 	ncps = nilfs_cpfile_checkpoints_in_block(cpfile, cno, cur_cno);

-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, bh, kaddr);
 	for (i = 0; i < ncps && n < nci; i++, cp = (void *)cp + cpsz) {
 		if (!nilfs_checkpoint_invalid(cp)) {
···
 			n++;
 		}
 	}
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(bh);
 }
···
 	ret = nilfs_cpfile_get_header_block(cpfile, &bh);
 	if (ret < 0)
 		goto out;
-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
 	header = nilfs_cpfile_block_get_header(cpfile, bh, kaddr);
 	curr = le64_to_cpu(header->ch_snapshot_list.ssl_next);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(bh);
 	if (curr == 0) {
 		ret = 0;
···
 		ret = 0; /* No snapshots (started from a hole block) */
 		goto out;
 	}
-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
 	while (n < nci) {
 		cp = nilfs_cpfile_block_get_checkpoint(cpfile, curr, bh, kaddr);
 		curr = ~(__u64)0; /* Terminator */
···
 		next_blkoff = nilfs_cpfile_get_blkoff(cpfile, next);
 		if (curr_blkoff != next_blkoff) {
-			kunmap_atomic(kaddr);
+			kunmap_local(kaddr);
 			brelse(bh);
 			ret = nilfs_cpfile_get_checkpoint_block(cpfile, next,
								0, &bh);
···
 				WARN_ON(ret == -ENOENT);
 				goto out;
 			}
-			kaddr = kmap_atomic(bh->b_page);
+			kaddr = kmap_local_page(bh->b_page);
 		}
 		curr = next;
 		curr_blkoff = next_blkoff;
 	}
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(bh);
 	*cnop = curr;
 	ret = n;
···
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh);
 	if (ret < 0)
 		goto out_sem;
-	kaddr = kmap_atomic(cp_bh->b_page);
+	kaddr = kmap_local_page(cp_bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
 	if (nilfs_checkpoint_invalid(cp)) {
 		ret = -ENOENT;
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		goto out_cp;
 	}
 	if (nilfs_checkpoint_snapshot(cp)) {
 		ret = 0;
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		goto out_cp;
 	}
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
 	if (ret < 0)
 		goto out_cp;
-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
 	header = nilfs_cpfile_block_get_header(cpfile, header_bh, kaddr);
 	list = &header->ch_snapshot_list;
 	curr_bh = header_bh;
···
 		prev_blkoff = nilfs_cpfile_get_blkoff(cpfile, prev);
 		curr = prev;
 		if (curr_blkoff != prev_blkoff) {
-			kunmap_atomic(kaddr);
+			kunmap_local(kaddr);
 			brelse(curr_bh);
 			ret = nilfs_cpfile_get_checkpoint_block(cpfile, curr,
								0, &curr_bh);
 			if (ret < 0)
 				goto out_header;
-			kaddr = kmap_atomic(curr_bh->b_page);
+			kaddr = kmap_local_page(curr_bh->b_page);
 		}
 		curr_blkoff = prev_blkoff;
 		cp = nilfs_cpfile_block_get_checkpoint(
···
 		list = &cp->cp_snapshot_list;
 		prev = le64_to_cpu(list->ssl_prev);
 	}
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	if (prev != 0) {
 		ret = nilfs_cpfile_get_checkpoint_block(cpfile, prev, 0,
···
 		get_bh(prev_bh);
 	}

-	kaddr = kmap_atomic(curr_bh->b_page);
+	kaddr = kmap_local_page(curr_bh->b_page);
 	list = nilfs_cpfile_block_get_snapshot_list(
 		cpfile, curr, curr_bh, kaddr);
 	list->ssl_prev = cpu_to_le64(cno);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

-	kaddr = kmap_atomic(cp_bh->b_page);
+	kaddr = kmap_local_page(cp_bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
 	cp->cp_snapshot_list.ssl_next = cpu_to_le64(curr);
 	cp->cp_snapshot_list.ssl_prev = cpu_to_le64(prev);
 	nilfs_checkpoint_set_snapshot(cp);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

-	kaddr = kmap_atomic(prev_bh->b_page);
+	kaddr = kmap_local_page(prev_bh->b_page);
 	list = nilfs_cpfile_block_get_snapshot_list(
 		cpfile, prev, prev_bh, kaddr);
 	list->ssl_next = cpu_to_le64(cno);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
 	header = nilfs_cpfile_block_get_header(cpfile, header_bh, kaddr);
 	le64_add_cpu(&header->ch_nsnapshots, 1);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	mark_buffer_dirty(prev_bh);
 	mark_buffer_dirty(curr_bh);
···
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh);
 	if (ret < 0)
 		goto out_sem;
-	kaddr = kmap_atomic(cp_bh->b_page);
+	kaddr = kmap_local_page(cp_bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
 	if (nilfs_checkpoint_invalid(cp)) {
 		ret = -ENOENT;
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		goto out_cp;
 	}
 	if (!nilfs_checkpoint_snapshot(cp)) {
 		ret = 0;
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		goto out_cp;
 	}

 	list = &cp->cp_snapshot_list;
 	next = le64_to_cpu(list->ssl_next);
 	prev = le64_to_cpu(list->ssl_prev);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
 	if (ret < 0)
···
 		get_bh(prev_bh);
 	}

-	kaddr = kmap_atomic(next_bh->b_page);
+	kaddr = kmap_local_page(next_bh->b_page);
 	list = nilfs_cpfile_block_get_snapshot_list(
 		cpfile, next, next_bh, kaddr);
 	list->ssl_prev = cpu_to_le64(prev);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

-	kaddr = kmap_atomic(prev_bh->b_page);
+	kaddr = kmap_local_page(prev_bh->b_page);
 	list = nilfs_cpfile_block_get_snapshot_list(
 		cpfile, prev, prev_bh, kaddr);
 	list->ssl_next = cpu_to_le64(next);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

-	kaddr = kmap_atomic(cp_bh->b_page);
+	kaddr = kmap_local_page(cp_bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
 	cp->cp_snapshot_list.ssl_next = cpu_to_le64(0);
 	cp->cp_snapshot_list.ssl_prev = cpu_to_le64(0);
 	nilfs_checkpoint_clear_snapshot(cp);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
 	header = nilfs_cpfile_block_get_header(cpfile, header_bh, kaddr);
 	le64_add_cpu(&header->ch_nsnapshots, -1);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	mark_buffer_dirty(next_bh);
 	mark_buffer_dirty(prev_bh);
···
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &bh);
 	if (ret < 0)
 		goto out;
-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, bh, kaddr);
 	if (nilfs_checkpoint_invalid(cp))
 		ret = -ENOENT;
 	else
 		ret = nilfs_checkpoint_snapshot(cp);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(bh);

 out:
···
 	ret = nilfs_cpfile_get_header_block(cpfile, &bh);
 	if (ret < 0)
 		goto out_sem;
-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
 	header = nilfs_cpfile_block_get_header(cpfile, bh, kaddr);
 	cpstat->cs_cno = nilfs_mdt_cno(cpfile);
 	cpstat->cs_ncps = le64_to_cpu(header->ch_ncheckpoints);
 	cpstat->cs_nsss = le64_to_cpu(header->ch_nsnapshots);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(bh);

 out_sem:
+6 -4
fs/nilfs2/cpfile.h
···
 #include <linux/nilfs2_ondisk.h>	/* nilfs_inode, nilfs_checkpoint */


-int nilfs_cpfile_get_checkpoint(struct inode *, __u64, int,
-				struct nilfs_checkpoint **,
-				struct buffer_head **);
-void nilfs_cpfile_put_checkpoint(struct inode *, __u64, struct buffer_head *);
+int nilfs_cpfile_read_checkpoint(struct inode *cpfile, __u64 cno,
+				 struct nilfs_root *root, struct inode *ifile);
+int nilfs_cpfile_create_checkpoint(struct inode *cpfile, __u64 cno);
+int nilfs_cpfile_finalize_checkpoint(struct inode *cpfile, __u64 cno,
+				     struct nilfs_root *root, __u64 blkinc,
+				     time64_t ctime, bool minor);
 int nilfs_cpfile_delete_checkpoints(struct inode *, __u64, __u64);
 int nilfs_cpfile_delete_checkpoint(struct inode *, __u64);
 int nilfs_cpfile_change_cpmode(struct inode *, __u64, int);
+20 -20
fs/nilfs2/dat.c
···
 	struct nilfs_dat_entry *entry;
 	void *kaddr;

-	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req->pr_entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
 					     req->pr_entry_bh, kaddr);
 	entry->de_start = cpu_to_le64(NILFS_CNO_MIN);
 	entry->de_end = cpu_to_le64(NILFS_CNO_MAX);
 	entry->de_blocknr = cpu_to_le64(0);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	nilfs_palloc_commit_alloc_entry(dat, req);
 	nilfs_dat_commit_entry(dat, req);
···
 	struct nilfs_dat_entry *entry;
 	void *kaddr;

-	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req->pr_entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
 					     req->pr_entry_bh, kaddr);
 	entry->de_start = cpu_to_le64(NILFS_CNO_MIN);
 	entry->de_end = cpu_to_le64(NILFS_CNO_MIN);
 	entry->de_blocknr = cpu_to_le64(0);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	nilfs_dat_commit_entry(dat, req);

···
 	struct nilfs_dat_entry *entry;
 	void *kaddr;

-	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req->pr_entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
 					     req->pr_entry_bh, kaddr);
 	entry->de_start = cpu_to_le64(nilfs_mdt_cno(dat));
 	entry->de_blocknr = cpu_to_le64(blocknr);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	nilfs_dat_commit_entry(dat, req);
 }
···
 	if (ret < 0)
 		return ret;

-	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req->pr_entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
 					     req->pr_entry_bh, kaddr);
 	start = le64_to_cpu(entry->de_start);
 	blocknr = le64_to_cpu(entry->de_blocknr);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	if (blocknr == 0) {
 		ret = nilfs_palloc_prepare_free_entry(dat, req);
···
 	sector_t blocknr;
 	void *kaddr;

-	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req->pr_entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
 					     req->pr_entry_bh, kaddr);
 	end = start = le64_to_cpu(entry->de_start);
···
 	}
 	entry->de_end = cpu_to_le64(end);
 	blocknr = le64_to_cpu(entry->de_blocknr);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	if (blocknr == 0)
 		nilfs_dat_commit_free(dat, req);
···
 	sector_t blocknr;
 	void *kaddr;

-	kaddr = kmap_atomic(req->pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req->pr_entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
 					     req->pr_entry_bh, kaddr);
 	start = le64_to_cpu(entry->de_start);
 	blocknr = le64_to_cpu(entry->de_blocknr);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	if (start == nilfs_mdt_cno(dat) && blocknr == 0)
 		nilfs_palloc_abort_free_entry(dat, req);
···
 		}
 	}

-	kaddr = kmap_atomic(entry_bh->b_page);
+	kaddr = kmap_local_page(entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, vblocknr, entry_bh, kaddr);
 	if (unlikely(entry->de_blocknr == cpu_to_le64(0))) {
 		nilfs_crit(dat->i_sb,
···
 			   __func__, (unsigned long long)vblocknr,
 			   (unsigned long long)le64_to_cpu(entry->de_start),
 			   (unsigned long long)le64_to_cpu(entry->de_end));
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
 		brelse(entry_bh);
 		return -EINVAL;
 	}
 	WARN_ON(blocknr == 0);
 	entry->de_blocknr = cpu_to_le64(blocknr);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

 	mark_buffer_dirty(entry_bh);
 	nilfs_mdt_mark_dirty(dat);
···
 		}
 	}

-	kaddr = kmap_atomic(entry_bh->b_page);
+	kaddr = kmap_local_page(entry_bh->b_page);
 	entry = nilfs_palloc_block_get_entry(dat, vblocknr, entry_bh, kaddr);
 	blocknr = le64_to_cpu(entry->de_blocknr);
 	if (blocknr == 0) {
···
 	*blocknrp = blocknr;

 out:
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(entry_bh);
 	return ret;
 }
···
 					   0, &entry_bh);
 	if (ret < 0)
 		return ret;
-	kaddr = kmap_atomic(entry_bh->b_page);
+	kaddr = kmap_local_page(entry_bh->b_page);
 	/* last virtual block number in this block */
 	first = vinfo->vi_vblocknr;
-	do_div(first, entries_per_block);
+	first = div64_ul(first, entries_per_block);
 	first *= entries_per_block;
 	last = first + entries_per_block - 1;
 	for (j = i, n = 0;
···
 		vinfo->vi_end = le64_to_cpu(entry->de_end);
 		vinfo->vi_blocknr = le64_to_cpu(entry->de_blocknr);
 	}
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 	brelse(entry_bh);
 }
+7 -2
fs/nilfs2/direct.c
···
		dat = nilfs_bmap_get_dat(direct);
		ret = nilfs_dat_translate(dat, ptr, &blocknr);
		if (ret < 0)
-			return ret;
+			goto dat_error;
		ptr = blocknr;
	}
···
		if (dat) {
			ret = nilfs_dat_translate(dat, ptr2, &blocknr);
			if (ret < 0)
-				return ret;
+				goto dat_error;
			ptr2 = blocknr;
		}
		if (ptr2 != ptr + cnt)
···
	}
	*ptrp = ptr;
	return cnt;
+
+ dat_error:
+	if (ret == -ENOENT)
+		ret = -EINVAL;	/* Notify bmap layer of metadata corruption */
+	return ret;
}

static __u64
+13 -8
fs/nilfs2/ifile.c
···
 #include "mdt.h"
 #include "alloc.h"
 #include "ifile.h"
+#include "cpfile.h"

 /**
  * struct nilfs_ifile_info - on-memory private data of ifile
···
		return ret;
	}

-	kaddr = kmap_atomic(req.pr_entry_bh->b_page);
+	kaddr = kmap_local_page(req.pr_entry_bh->b_page);
	raw_inode = nilfs_palloc_block_get_entry(ifile, req.pr_entry_nr,
						 req.pr_entry_bh, kaddr);
	raw_inode->i_flags = 0;
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	mark_buffer_dirty(req.pr_entry_bh);
	brelse(req.pr_entry_bh);
···
 * nilfs_ifile_read - read or get ifile inode
 * @sb: super block instance
 * @root: root object
+ * @cno: number of checkpoint entry to read
 * @inode_size: size of an inode
- * @raw_inode: on-disk ifile inode
- * @inodep: buffer to store the inode
+ *
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EINVAL	- Invalid checkpoint.
+ * * %-ENOMEM	- Insufficient memory available.
+ * * %-EIO	- I/O error (including metadata corruption).
 */
int nilfs_ifile_read(struct super_block *sb, struct nilfs_root *root,
-		     size_t inode_size, struct nilfs_inode *raw_inode,
-		     struct inode **inodep)
+		     __u64 cno, size_t inode_size)
{
+	struct the_nilfs *nilfs;
	struct inode *ifile;
	int err;
···
	nilfs_palloc_setup_cache(ifile, &NILFS_IFILE_I(ifile)->palloc_cache);

-	err = nilfs_read_inode_common(ifile, raw_inode);
+	nilfs = sb->s_fs_info;
+	err = nilfs_cpfile_read_checkpoint(nilfs->ns_cpfile, cno, root, ifile);
	if (err)
		goto failed;

	unlock_new_inode(ifile);
 out:
-	*inodep = ifile;
	return 0;
 failed:
	iget_failed(ifile);
+4 -6
fs/nilfs2/ifile.h
···
static inline struct nilfs_inode *
nilfs_ifile_map_inode(struct inode *ifile, ino_t ino, struct buffer_head *ibh)
{
-	void *kaddr = kmap(ibh->b_page);
+	void *kaddr = kmap_local_page(ibh->b_page);

	return nilfs_palloc_block_get_entry(ifile, ino, ibh, kaddr);
}

-static inline void nilfs_ifile_unmap_inode(struct inode *ifile, ino_t ino,
-					   struct buffer_head *ibh)
+static inline void nilfs_ifile_unmap_inode(struct nilfs_inode *raw_inode)
{
-	kunmap(ibh->b_page);
+	kunmap_local(raw_inode);
}

int nilfs_ifile_create_inode(struct inode *, ino_t *, struct buffer_head **);
···
int nilfs_ifile_count_free_inodes(struct inode *, u64 *, u64 *);

int nilfs_ifile_read(struct super_block *sb, struct nilfs_root *root,
-		     size_t inode_size, struct nilfs_inode *raw_inode,
-		     struct inode **inodep);
+		     __u64 cno, size_t inode_size);

#endif	/* _NILFS_IFILE_H */
+20 -26
fs/nilfs2/inode.c
···
			  "%s (ino=%lu): a race condition while inserting a data block at offset=%llu",
			  __func__, inode->i_ino,
			  (unsigned long long)blkoff);
-			err = 0;
+			err = -EAGAIN;
		}
		nilfs_transaction_abort(inode->i_sb);
		goto out;
···
			 inode, inode->i_mode,
			 huge_decode_dev(le64_to_cpu(raw_inode->i_device_code)));
	}
-	nilfs_ifile_unmap_inode(root->ifile, ino, bh);
+	nilfs_ifile_unmap_inode(raw_inode);
	brelse(bh);
	up_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
	nilfs_set_inode_flags(inode);
···
	return 0;

 failed_unmap:
-	nilfs_ifile_unmap_inode(root->ifile, ino, bh);
+	nilfs_ifile_unmap_inode(raw_inode);
	brelse(bh);

 bad_inode:
···
	return s_inode;
}

+/**
+ * nilfs_write_inode_common - export common inode information to on-disk inode
+ * @inode:     inode object
+ * @raw_inode: on-disk inode
+ *
+ * This function writes standard information from the on-memory inode @inode
+ * to @raw_inode on ifile, cpfile or a super root block.  Since inode bmap
+ * data is not exported, nilfs_bmap_write() must be called separately during
+ * log writing.
+ */
void nilfs_write_inode_common(struct inode *inode,
-			      struct nilfs_inode *raw_inode, int has_bmap)
+			      struct nilfs_inode *raw_inode)
{
	struct nilfs_inode_info *ii = NILFS_I(inode);
···
	raw_inode->i_flags = cpu_to_le32(ii->i_flags);
	raw_inode->i_generation = cpu_to_le32(inode->i_generation);

-	if (NILFS_ROOT_METADATA_FILE(inode->i_ino)) {
-		struct the_nilfs *nilfs = inode->i_sb->s_fs_info;
-
-		/* zero-fill unused portion in the case of super root block */
-		raw_inode->i_xattr = 0;
-		raw_inode->i_pad = 0;
-		memset((void *)raw_inode + sizeof(*raw_inode), 0,
-		       nilfs->ns_inode_size - sizeof(*raw_inode));
-	}
-
-	if (has_bmap)
-		nilfs_bmap_write(ii->i_bmap, raw_inode);
-	else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode))
-		raw_inode->i_device_code =
-			cpu_to_le64(huge_encode_dev(inode->i_rdev));
	/*
	 * When extending inode, nilfs->ns_inode_size should be checked
	 * for substitutions of appended fields.
···
	if (flags & I_DIRTY_DATASYNC)
		set_bit(NILFS_I_INODE_SYNC, &ii->i_state);

-	nilfs_write_inode_common(inode, raw_inode, 0);
-	/*
-	 * XXX: call with has_bmap = 0 is a workaround to avoid
-	 * deadlock of bmap.  This delays update of i_bmap to just
-	 * before writing.
-	 */
+	nilfs_write_inode_common(inode, raw_inode);

-	nilfs_ifile_unmap_inode(ifile, ino, ibh);
+	if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode))
+		raw_inode->i_device_code =
+			cpu_to_le64(huge_encode_dev(inode->i_rdev));
+
+	nilfs_ifile_unmap_inode(raw_inode);
}

#define NILFS_MAX_TRUNCATE_BLOCKS	16384  /* 64MB for 4KB block */
+2 -2
fs/nilfs2/ioctl.c
···
	segbytes = nilfs->ns_blocks_per_segment * nilfs->ns_blocksize;

	minseg = range[0] + segbytes - 1;
-	do_div(minseg, segbytes);
+	minseg = div64_ul(minseg, segbytes);

	if (range[1] < 4096)
		goto out;
···
	if (maxseg < segbytes)
		goto out;

-	do_div(maxseg, segbytes);
+	maxseg = div64_ul(maxseg, segbytes);
	maxseg--;

	ret = nilfs_sufile_set_alloc_range(nilfs->ns_sufile, minseg, maxseg);
+2 -2
fs/nilfs2/mdt.c
···
	set_buffer_mapped(bh);

-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
	memset(kaddr + bh_offset(bh), 0, i_blocksize(inode));
	if (init_block)
		init_block(inode, bh, kaddr);
	flush_dcache_page(bh->b_page);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	set_buffer_uptodate(bh);
	mark_buffer_dirty(bh);
+2 -1
fs/nilfs2/nilfs.h
···
extern int nilfs_get_block(struct inode *, sector_t, struct buffer_head *, int);
extern void nilfs_set_inode_flags(struct inode *);
extern int nilfs_read_inode_common(struct inode *, struct nilfs_inode *);
-extern void nilfs_write_inode_common(struct inode *, struct nilfs_inode *, int);
+void nilfs_write_inode_common(struct inode *inode,
+			      struct nilfs_inode *raw_inode);
struct inode *nilfs_ilookup(struct super_block *sb, struct nilfs_root *root,
			    unsigned long ino);
struct inode *nilfs_iget_locked(struct super_block *sb, struct nilfs_root *root,
+4 -4
fs/nilfs2/page.c
···
	struct page *spage = sbh->b_page, *dpage = dbh->b_page;
	struct buffer_head *bh;

-	kaddr0 = kmap_atomic(spage);
-	kaddr1 = kmap_atomic(dpage);
+	kaddr0 = kmap_local_page(spage);
+	kaddr1 = kmap_local_page(dpage);
	memcpy(kaddr1 + bh_offset(dbh), kaddr0 + bh_offset(sbh), sbh->b_size);
-	kunmap_atomic(kaddr1);
-	kunmap_atomic(kaddr0);
+	kunmap_local(kaddr1);
+	kunmap_local(kaddr0);

	dbh->b_state = sbh->b_state & NILFS_BUFFER_INHERENT_BITS;
	dbh->b_blocknr = sbh->b_blocknr;
+2 -2
fs/nilfs2/recovery.c
···
	if (unlikely(!bh_org))
		return -EIO;

-	kaddr = kmap_atomic(page);
+	kaddr = kmap_local_page(page);
	memcpy(kaddr + from, bh_org->b_data, bh_org->b_size);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
	brelse(bh_org);
	return 0;
}
+2 -2
fs/nilfs2/segbuf.c
···
		crc = crc32_le(crc, bh->b_data, bh->b_size);
	}
	list_for_each_entry(bh, &segbuf->sb_payload_buffers, b_assoc_buffers) {
-		kaddr = kmap_atomic(bh->b_page);
+		kaddr = kmap_local_page(bh->b_page);
		crc = crc32_le(crc, kaddr + bh_offset(bh), bh->b_size);
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
	}
	raw_sum->ss_datasum = cpu_to_le32(crc);
}
+42 -79
fs/nilfs2/segment.c
···
	nilfs_mdt_clear_dirty(nilfs->ns_dat);
}

-static int nilfs_segctor_create_checkpoint(struct nilfs_sc_info *sci)
-{
-	struct the_nilfs *nilfs = sci->sc_super->s_fs_info;
-	struct buffer_head *bh_cp;
-	struct nilfs_checkpoint *raw_cp;
-	int err;
-
-	/* XXX: this interface will be changed */
-	err = nilfs_cpfile_get_checkpoint(nilfs->ns_cpfile, nilfs->ns_cno, 1,
-					  &raw_cp, &bh_cp);
-	if (likely(!err)) {
-		/*
-		 * The following code is duplicated with cpfile.  But, it is
-		 * needed to collect the checkpoint even if it was not newly
-		 * created.
-		 */
-		mark_buffer_dirty(bh_cp);
-		nilfs_mdt_mark_dirty(nilfs->ns_cpfile);
-		nilfs_cpfile_put_checkpoint(
-			nilfs->ns_cpfile, nilfs->ns_cno, bh_cp);
-	} else if (err == -EINVAL || err == -ENOENT) {
-		nilfs_error(sci->sc_super,
-			    "checkpoint creation failed due to metadata corruption.");
-		err = -EIO;
-	}
-	return err;
-}
-
-static int nilfs_segctor_fill_in_checkpoint(struct nilfs_sc_info *sci)
-{
-	struct the_nilfs *nilfs = sci->sc_super->s_fs_info;
-	struct buffer_head *bh_cp;
-	struct nilfs_checkpoint *raw_cp;
-	int err;
-
-	err = nilfs_cpfile_get_checkpoint(nilfs->ns_cpfile, nilfs->ns_cno, 0,
-					  &raw_cp, &bh_cp);
-	if (unlikely(err)) {
-		if (err == -EINVAL || err == -ENOENT) {
-			nilfs_error(sci->sc_super,
-				    "checkpoint finalization failed due to metadata corruption.");
-			err = -EIO;
-		}
-		goto failed_ibh;
-	}
-	raw_cp->cp_snapshot_list.ssl_next = 0;
-	raw_cp->cp_snapshot_list.ssl_prev = 0;
-	raw_cp->cp_inodes_count =
-		cpu_to_le64(atomic64_read(&sci->sc_root->inodes_count));
-	raw_cp->cp_blocks_count =
-		cpu_to_le64(atomic64_read(&sci->sc_root->blocks_count));
-	raw_cp->cp_nblk_inc =
-		cpu_to_le64(sci->sc_nblk_inc + sci->sc_nblk_this_inc);
-	raw_cp->cp_create = cpu_to_le64(sci->sc_seg_ctime);
-	raw_cp->cp_cno = cpu_to_le64(nilfs->ns_cno);
-
-	if (test_bit(NILFS_SC_HAVE_DELTA, &sci->sc_flags))
-		nilfs_checkpoint_clear_minor(raw_cp);
-	else
-		nilfs_checkpoint_set_minor(raw_cp);
-
-	nilfs_write_inode_common(sci->sc_root->ifile,
-				 &raw_cp->cp_ifile_inode, 1);
-	nilfs_cpfile_put_checkpoint(nilfs->ns_cpfile, nilfs->ns_cno, bh_cp);
-	return 0;
-
- failed_ibh:
-	return err;
-}
-
static void nilfs_fill_in_file_bmap(struct inode *ifile,
				    struct nilfs_inode_info *ii)
···
		raw_inode = nilfs_ifile_map_inode(ifile, ii->vfs_inode.i_ino,
						  ibh);
		nilfs_bmap_write(ii->i_bmap, raw_inode);
-		nilfs_ifile_unmap_inode(ifile, ii->vfs_inode.i_ino, ibh);
+		nilfs_ifile_unmap_inode(raw_inode);
	}
}
···
		nilfs_fill_in_file_bmap(sci->sc_root->ifile, ii);
		set_bit(NILFS_I_COLLECTED, &ii->i_state);
	}
+}
+
+/**
+ * nilfs_write_root_mdt_inode - export root metadata inode information to
+ *                              the on-disk inode
+ * @inode:     inode object of the root metadata file
+ * @raw_inode: on-disk inode
+ *
+ * nilfs_write_root_mdt_inode() writes inode information and bmap data of
+ * @inode to the inode area of the metadata file allocated on the super root
+ * block created to finalize the log.  Since super root blocks are configured
+ * each time, this function zero-fills the unused area of @raw_inode.
+ */
+static void nilfs_write_root_mdt_inode(struct inode *inode,
+				       struct nilfs_inode *raw_inode)
+{
+	struct the_nilfs *nilfs = inode->i_sb->s_fs_info;
+
+	nilfs_write_inode_common(inode, raw_inode);
+
+	/* zero-fill unused portion of raw_inode */
+	raw_inode->i_xattr = 0;
+	raw_inode->i_pad = 0;
+	memset((void *)raw_inode + sizeof(*raw_inode), 0,
+	       nilfs->ns_inode_size - sizeof(*raw_inode));
+
+	nilfs_bmap_write(NILFS_I(inode)->i_bmap, raw_inode);
}

static void nilfs_segctor_fill_in_super_root(struct nilfs_sc_info *sci,
···
			nilfs->ns_nongc_ctime : sci->sc_seg_ctime);
	raw_sr->sr_flags = 0;

-	nilfs_write_inode_common(nilfs->ns_dat, (void *)raw_sr +
-				 NILFS_SR_DAT_OFFSET(isz), 1);
-	nilfs_write_inode_common(nilfs->ns_cpfile, (void *)raw_sr +
-				 NILFS_SR_CPFILE_OFFSET(isz), 1);
-	nilfs_write_inode_common(nilfs->ns_sufile, (void *)raw_sr +
-				 NILFS_SR_SUFILE_OFFSET(isz), 1);
+	nilfs_write_root_mdt_inode(nilfs->ns_dat, (void *)raw_sr +
+				   NILFS_SR_DAT_OFFSET(isz));
+	nilfs_write_root_mdt_inode(nilfs->ns_cpfile, (void *)raw_sr +
+				   NILFS_SR_CPFILE_OFFSET(isz));
+	nilfs_write_root_mdt_inode(nilfs->ns_sufile, (void *)raw_sr +
+				   NILFS_SR_SUFILE_OFFSET(isz));
+
	memset((void *)raw_sr + srsz, 0, nilfs->ns_blocksize - srsz);
	set_buffer_uptodate(bh_sr);
	unlock_buffer(bh_sr);
···
			break;
		nilfs_sc_cstage_inc(sci);
		/* Creating a checkpoint */
-		err = nilfs_segctor_create_checkpoint(sci);
+		err = nilfs_cpfile_create_checkpoint(nilfs->ns_cpfile,
+						     nilfs->ns_cno);
		if (unlikely(err))
			break;
		fallthrough;
···
	if (mode == SC_LSEG_SR &&
	    nilfs_sc_cstage_get(sci) >= NILFS_ST_CPFILE) {
-		err = nilfs_segctor_fill_in_checkpoint(sci);
+		err = nilfs_cpfile_finalize_checkpoint(
+			nilfs->ns_cpfile, nilfs->ns_cno, sci->sc_root,
+			sci->sc_nblk_inc + sci->sc_nblk_this_inc,
+			sci->sc_seg_ctime,
+			!test_bit(NILFS_SC_HAVE_DELTA, &sci->sc_flags));
		if (unlikely(err))
			goto failed_to_write;
+44 -44
fs/nilfs2/sufile.c
···
{
	__u64 t = segnum + NILFS_MDT(sufile)->mi_first_entry_offset;

-	do_div(t, nilfs_sufile_segment_usages_per_block(sufile));
+	t = div64_ul(t, nilfs_sufile_segment_usages_per_block(sufile));
	return (unsigned long)t;
}
···
	struct nilfs_sufile_header *header;
	void *kaddr;

-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
	header = kaddr + bh_offset(header_bh);
	le64_add_cpu(&header->sh_ncleansegs, ncleanadd);
	le64_add_cpu(&header->sh_ndirtysegs, ndirtyadd);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	mark_buffer_dirty(header_bh);
}
···
	ret = nilfs_sufile_get_header_block(sufile, &header_bh);
	if (ret < 0)
		goto out_sem;
-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
	header = kaddr + bh_offset(header_bh);
	last_alloc = le64_to_cpu(header->sh_last_alloc);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	nsegments = nilfs_sufile_get_nsegments(sufile);
	maxsegnum = sui->allocmax;
···
						   &su_bh);
		if (ret < 0)
			goto out_header;
-		kaddr = kmap_atomic(su_bh->b_page);
+		kaddr = kmap_local_page(su_bh->b_page);
		su = nilfs_sufile_block_get_segment_usage(
			sufile, segnum, su_bh, kaddr);
···
				continue;
			/* found a clean segment */
			nilfs_segment_usage_set_dirty(su);
-			kunmap_atomic(kaddr);
+			kunmap_local(kaddr);

-			kaddr = kmap_atomic(header_bh->b_page);
+			kaddr = kmap_local_page(header_bh->b_page);
			header = kaddr + bh_offset(header_bh);
			le64_add_cpu(&header->sh_ncleansegs, -1);
			le64_add_cpu(&header->sh_ndirtysegs, 1);
			header->sh_last_alloc = cpu_to_le64(segnum);
-			kunmap_atomic(kaddr);
+			kunmap_local(kaddr);

			sui->ncleansegs--;
			mark_buffer_dirty(header_bh);
···
			goto out_header;
		}

-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		brelse(su_bh);
	}
···
	struct nilfs_segment_usage *su;
	void *kaddr;

-	kaddr = kmap_atomic(su_bh->b_page);
+	kaddr = kmap_local_page(su_bh->b_page);
	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
	if (unlikely(!nilfs_segment_usage_clean(su))) {
		nilfs_warn(sufile->i_sb, "%s: segment %llu must be clean",
			   __func__, (unsigned long long)segnum);
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		return;
	}
	nilfs_segment_usage_set_dirty(su);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	nilfs_sufile_mod_counter(header_bh, -1, 1);
	NILFS_SUI(sufile)->ncleansegs--;
···
	void *kaddr;
	int clean, dirty;

-	kaddr = kmap_atomic(su_bh->b_page);
+	kaddr = kmap_local_page(su_bh->b_page);
	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
	if (su->su_flags == cpu_to_le32(BIT(NILFS_SEGMENT_USAGE_DIRTY)) &&
	    su->su_nblocks == cpu_to_le32(0)) {
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		return;
	}
	clean = nilfs_segment_usage_clean(su);
···
	su->su_lastmod = cpu_to_le64(0);
	su->su_nblocks = cpu_to_le32(0);
	su->su_flags = cpu_to_le32(BIT(NILFS_SEGMENT_USAGE_DIRTY));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	nilfs_sufile_mod_counter(header_bh, clean ? (u64)-1 : 0, dirty ? 0 : 1);
	NILFS_SUI(sufile)->ncleansegs -= clean;
···
	void *kaddr;
	int sudirty;

-	kaddr = kmap_atomic(su_bh->b_page);
+	kaddr = kmap_local_page(su_bh->b_page);
	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
	if (nilfs_segment_usage_clean(su)) {
		nilfs_warn(sufile->i_sb, "%s: segment %llu is already clean",
			   __func__, (unsigned long long)segnum);
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		return;
	}
	if (unlikely(nilfs_segment_usage_error(su)))
···
		   (unsigned long long)segnum);

	nilfs_segment_usage_set_clean(su);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
	mark_buffer_dirty(su_bh);

	nilfs_sufile_mod_counter(header_bh, 1, sudirty ? (u64)-1 : 0);
···
	if (ret)
		goto out_sem;

-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
	if (unlikely(nilfs_segment_usage_error(su))) {
		struct the_nilfs *nilfs = sufile->i_sb->s_fs_info;

-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		brelse(bh);
		if (nilfs_segment_is_active(nilfs, segnum)) {
			nilfs_error(sufile->i_sb,
···
		ret = -EIO;
	} else {
		nilfs_segment_usage_set_dirty(su);
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		mark_buffer_dirty(bh);
		nilfs_mdt_mark_dirty(sufile);
		brelse(bh);
···
	if (ret < 0)
		goto out_sem;

-	kaddr = kmap_atomic(bh->b_page);
+	kaddr = kmap_local_page(bh->b_page);
	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
	if (modtime) {
···
		su->su_lastmod = cpu_to_le64(modtime);
	}
	su->su_nblocks = cpu_to_le32(nblocks);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	mark_buffer_dirty(bh);
	nilfs_mdt_mark_dirty(sufile);
···
	if (ret < 0)
		goto out_sem;

-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
	header = kaddr + bh_offset(header_bh);
	sustat->ss_nsegs = nilfs_sufile_get_nsegments(sufile);
	sustat->ss_ncleansegs = le64_to_cpu(header->sh_ncleansegs);
···
	spin_lock(&nilfs->ns_last_segment_lock);
	sustat->ss_prot_seq = nilfs->ns_prot_seq;
	spin_unlock(&nilfs->ns_last_segment_lock);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
	brelse(header_bh);

 out_sem:
···
	void *kaddr;
	int suclean;

-	kaddr = kmap_atomic(su_bh->b_page);
+	kaddr = kmap_local_page(su_bh->b_page);
	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
	if (nilfs_segment_usage_error(su)) {
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		return;
	}
	suclean = nilfs_segment_usage_clean(su);
	nilfs_segment_usage_set_error(su);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	if (suclean) {
		nilfs_sufile_mod_counter(header_bh, -1, 0);
···
			/* hole */
			continue;
		}
-		kaddr = kmap_atomic(su_bh->b_page);
+		kaddr = kmap_local_page(su_bh->b_page);
		su = nilfs_sufile_block_get_segment_usage(
			sufile, segnum, su_bh, kaddr);
		su2 = su;
···
			     ~BIT(NILFS_SEGMENT_USAGE_ERROR)) ||
			    nilfs_segment_is_active(nilfs, segnum + j)) {
				ret = -EBUSY;
-				kunmap_atomic(kaddr);
+				kunmap_local(kaddr);
				brelse(su_bh);
				goto out_header;
			}
···
				nc++;
			}
		}
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		if (nc > 0) {
			mark_buffer_dirty(su_bh);
			ncleaned += nc;
···
		sui->allocmin = 0;
	}

-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
	header = kaddr + bh_offset(header_bh);
	header->sh_ncleansegs = cpu_to_le64(sui->ncleansegs);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);

	mark_buffer_dirty(header_bh);
	nilfs_mdt_mark_dirty(sufile);
···
			continue;
		}

-		kaddr = kmap_atomic(su_bh->b_page);
+		kaddr = kmap_local_page(su_bh->b_page);
		su = nilfs_sufile_block_get_segment_usage(
			sufile, segnum, su_bh, kaddr);
		for (j = 0; j < n;
···
			si->sui_flags |=
				BIT(NILFS_SEGMENT_USAGE_ACTIVE);
		}
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		brelse(su_bh);
	}
	ret = nsegs;
···
		goto out_header;

	for (;;) {
-		kaddr = kmap_atomic(bh->b_page);
+		kaddr = kmap_local_page(bh->b_page);
		su = nilfs_sufile_block_get_segment_usage(
			sufile, sup->sup_segnum, bh, kaddr);
···
			su->su_flags = cpu_to_le32(sup->sup_sui.sui_flags);
		}

-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);

		sup = (void *)sup + supsz;
		if (sup >= supend)
···
			continue;
		}

-		kaddr = kmap_atomic(su_bh->b_page);
+		kaddr = kmap_local_page(su_bh->b_page);
		su = nilfs_sufile_block_get_segment_usage(sufile, segnum,
							  su_bh, kaddr);
		for (i = 0; i < n; ++i, ++segnum, su = (void *)su + susz) {
···
			}

			if (nblocks >= minlen) {
-				kunmap_atomic(kaddr);
+				kunmap_local(kaddr);

				ret = blkdev_issue_discard(nilfs->ns_bdev,
							   start * sects_per_block,
···
				}

				ndiscarded += nblocks;
-				kaddr = kmap_atomic(su_bh->b_page);
+				kaddr = kmap_local_page(su_bh->b_page);
				su = nilfs_sufile_block_get_segment_usage(
					sufile, segnum, su_bh, kaddr);
			}
···
				start = seg_start;
				nblocks = seg_end - seg_start + 1;
			}
		}
-		kunmap_atomic(kaddr);
+		kunmap_local(kaddr);
		put_bh(su_bh);
	}
···
		goto failed;

	sui = NILFS_SUI(sufile);
-	kaddr = kmap_atomic(header_bh->b_page);
+	kaddr = kmap_local_page(header_bh->b_page);
	header = kaddr + bh_offset(header_bh);
	sui->ncleansegs = le64_to_cpu(header->sh_ncleansegs);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
	brelse(header_bh);

	sui->allocmax = nilfs_sufile_get_nsegments(sufile) - 1;
+6 -27
fs/nilfs2/super.c
···
	sb2off = NILFS_SB2_OFFSET_BYTES(newsize);
	newnsegs = sb2off >> nilfs->ns_blocksize_bits;
-	do_div(newnsegs, nilfs->ns_blocks_per_segment);
+	newnsegs = div64_ul(newnsegs, nilfs->ns_blocks_per_segment);

	ret = nilfs_sufile_resize(nilfs->ns_sufile, newnsegs);
	up_write(&nilfs->ns_segctor_sem);
···
{
	struct the_nilfs *nilfs = sb->s_fs_info;
	struct nilfs_root *root;
-	struct nilfs_checkpoint *raw_cp;
-	struct buffer_head *bh_cp;
	int err = -ENOMEM;

	root = nilfs_find_or_create_root(
···
		goto reuse; /* already attached checkpoint */

	down_read(&nilfs->ns_segctor_sem);
-	err = nilfs_cpfile_get_checkpoint(nilfs->ns_cpfile, cno, 0, &raw_cp,
-					  &bh_cp);
+	err = nilfs_ifile_read(sb, root, cno, nilfs->ns_inode_size);
	up_read(&nilfs->ns_segctor_sem);
-	if (unlikely(err)) {
-		if (err == -ENOENT || err == -EINVAL) {
-			nilfs_err(sb,
-				  "Invalid checkpoint (checkpoint number=%llu)",
-				  (unsigned long long)cno);
-			err = -EINVAL;
-		}
+	if (unlikely(err))
		goto failed;
-	}
-
-	err = nilfs_ifile_read(sb, root, nilfs->ns_inode_size,
-			       &raw_cp->cp_ifile_inode, &root->ifile);
-	if (err)
-		goto failed_bh;
-
-	atomic64_set(&root->inodes_count,
-		     le64_to_cpu(raw_cp->cp_inodes_count));
-	atomic64_set(&root->blocks_count,
-		     le64_to_cpu(raw_cp->cp_blocks_count));
-
-	nilfs_cpfile_put_checkpoint(nilfs->ns_cpfile, cno, bh_cp);

 reuse:
	*rootp = root;
	return 0;

-failed_bh:
-	nilfs_cpfile_put_checkpoint(nilfs->ns_cpfile, cno, bh_cp);
 failed:
+	if (err == -EINVAL)
+		nilfs_err(sb, "Invalid checkpoint (checkpoint number=%llu)",
+			  (unsigned long long)cno);
	nilfs_put_root(root);

	return err;
+1 -1
fs/nilfs2/the_nilfs.c
···
{
	u64 max_count = U64_MAX;

-	do_div(max_count, nilfs->ns_blocks_per_segment);
+	max_count = div64_ul(max_count, nilfs->ns_blocks_per_segment);
	return min_t(u64, max_count, ULONG_MAX);
}
+1 -1
fs/ocfs2/dlmglue.c
···
unlock:
	lockres_clear_flags(lockres, OCFS2_LOCK_UPCONVERT_FINISHING);

-	/* ocfs2_unblock_lock reques on seeing OCFS2_LOCK_UPCONVERT_FINISHING */
+	/* ocfs2_unblock_lock request on seeing OCFS2_LOCK_UPCONVERT_FINISHING */
	kick_dc = (lockres->l_flags & OCFS2_LOCK_BLOCKED);

	spin_unlock_irqrestore(&lockres->l_lock, flags);
+1
fs/ocfs2/file.c
···
const struct inode_operations ocfs2_special_file_iops = {
	.setattr	= ocfs2_setattr,
	.getattr	= ocfs2_getattr,
+	.listxattr	= ocfs2_listxattr,
	.permission	= ocfs2_permission,
	.get_inode_acl	= ocfs2_iop_get_acl,
	.set_acl	= ocfs2_iop_set_acl,
+2 -2
fs/ocfs2/super.c
···
	ocfs2_dquot_cachep = kmem_cache_create("ocfs2_dquot_cache",
					sizeof(struct ocfs2_dquot),
					0,
-					(SLAB_HWCACHE_ALIGN|SLAB_RECLAIM_ACCOUNT),
+					SLAB_HWCACHE_ALIGN|SLAB_RECLAIM_ACCOUNT,
					NULL);
	ocfs2_qf_chunk_cachep = kmem_cache_create("ocfs2_qf_chunk_cache",
					sizeof(struct ocfs2_quota_chunk),
					0,
-					(SLAB_RECLAIM_ACCOUNT),
+					SLAB_RECLAIM_ACCOUNT,
					NULL);
	if (!ocfs2_inode_cachep || !ocfs2_dquot_cachep ||
	    !ocfs2_qf_chunk_cachep) {
+1 -1
include/asm-generic/vmlinux.lds.h
···
 * -fsanitize=thread produce unwanted sections (.eh_frame
 * and .init_array.*), but CONFIG_CONSTRUCTORS wants to
 * keep any .init_array.* sections.
- * https://bugs.llvm.org/show_bug.cgi?id=46478
+ * https://llvm.org/pr46478
 */
#ifdef CONFIG_UNWIND_TABLES
#define DISCARD_EH_FRAME
+3 -7
include/linux/compiler-clang.h
···
 * Clang prior to 17 is being silly and considers many __cleanup() variables
 * as unused (because they are, their sole purpose is to go out of scope).
 *
- * https://reviews.llvm.org/D152180
+ * https://github.com/llvm/llvm-project/commit/877210faa447f4cc7db87812f8ed80e398fedd61
 */
#undef __cleanup
#define __cleanup(func) __maybe_unused __attribute__((__cleanup__(func)))
···
#define __diag_str(s)		__diag_str1(s)
#define __diag(s)		_Pragma(__diag_str(clang diagnostic s))

-#if CONFIG_CLANG_VERSION >= 110000
-#define __diag_clang_11(s)	__diag(s)
-#else
-#define __diag_clang_11(s)
-#endif
+#define __diag_clang_13(s)	__diag(s)

#define __diag_ignore_all(option, comment) \
-	__diag_clang(11, ignore, option)
+	__diag_clang(13, ignore, option)
-32
include/linux/flex_proportions.h
···
bool fprop_new_period(struct fprop_global *p, int periods);

/*
- * ---- SINGLE ----
- */
-struct fprop_local_single {
-	/* the local events counter */
-	unsigned long events;
-	/* Period in which we last updated events */
-	unsigned int period;
-	raw_spinlock_t lock;	/* Protect period and numerator */
-};
-
-#define INIT_FPROP_LOCAL_SINGLE(name)	\
-{	.lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock),	\
-}
-
-int fprop_local_init_single(struct fprop_local_single *pl);
-void fprop_local_destroy_single(struct fprop_local_single *pl);
-void __fprop_inc_single(struct fprop_global *p, struct fprop_local_single *pl);
-void fprop_fraction_single(struct fprop_global *p,
-	struct fprop_local_single *pl, unsigned long *numerator,
-	unsigned long *denominator);
-
-static inline
-void fprop_inc_single(struct fprop_global *p, struct fprop_local_single *pl)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__fprop_inc_single(p, pl);
-	local_irq_restore(flags);
-}
-
-/*
 * ---- PERCPU ----
 */
struct fprop_local_percpu {
+16 -1
include/linux/list.h
···
 * @member:	the name of the list_head within the struct.
 */
#define list_entry_is_head(pos, head, member)				\
-	(&pos->member == (head))
+	list_is_head(&pos->member, (head))

/**
 * list_for_each_entry - iterate over list of given type
···
	for (pos = hlist_entry_safe((head)->first, typeof(*pos), member);\
	     pos && ({ n = pos->member.next; 1; });			\
	     pos = hlist_entry_safe(n, typeof(*pos), member))
+
+/**
+ * hlist_count_nodes - count nodes in the hlist
+ * @head:	the head for your hlist.
+ */
+static inline size_t hlist_count_nodes(struct hlist_head *head)
+{
+	struct hlist_node *pos;
+	size_t count = 0;
+
+	hlist_for_each(pos, head)
+		count++;
+
+	return count;
+}

#endif
+22 -20
include/linux/min_heap.h
··· 35 35 void min_heapify(struct min_heap *heap, int pos, 36 36 const struct min_heap_callbacks *func) 37 37 { 38 - void *left, *right, *parent, *smallest; 38 + void *left, *right; 39 39 void *data = heap->data; 40 + void *root = data + pos * func->elem_size; 41 + int i = pos, j; 40 42 43 + /* Find the sift-down path all the way to the leaves. */ 41 44 for (;;) { 42 - if (pos * 2 + 1 >= heap->nr) 45 + if (i * 2 + 2 >= heap->nr) 43 46 break; 47 + left = data + (i * 2 + 1) * func->elem_size; 48 + right = data + (i * 2 + 2) * func->elem_size; 49 + i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2; 50 + } 44 51 45 - left = data + ((pos * 2 + 1) * func->elem_size); 46 - parent = data + (pos * func->elem_size); 47 - smallest = parent; 48 - if (func->less(left, smallest)) 49 - smallest = left; 52 + /* Special case for the last leaf with no sibling. */ 53 + if (i * 2 + 2 == heap->nr) 54 + i = i * 2 + 1; 50 55 51 - if (pos * 2 + 2 < heap->nr) { 52 - right = data + ((pos * 2 + 2) * func->elem_size); 53 - if (func->less(right, smallest)) 54 - smallest = right; 55 - } 56 - if (smallest == parent) 57 - break; 58 - func->swp(smallest, parent); 59 - if (smallest == left) 60 - pos = (pos * 2) + 1; 61 - else 62 - pos = (pos * 2) + 2; 56 + /* Backtrack to the correct location. */ 57 + while (i != pos && func->less(root, data + i * func->elem_size)) 58 + i = (i - 1) / 2; 59 + 60 + /* Shift the element into its correct place. */ 61 + j = i; 62 + while (i != pos) { 63 + i = (i - 1) / 2; 64 + func->swp(data + i * func->elem_size, data + j * func->elem_size); 63 65 } 64 66 } 65 67 ··· 72 70 { 73 71 int i; 74 72 75 - for (i = heap->nr / 2; i >= 0; i--) 73 + for (i = heap->nr / 2 - 1; i >= 0; i--) 76 74 min_heapify(heap, i, func); 77 75 } 78 76
-7
include/linux/nmi.h
··· 216 216 static inline void watchdog_update_hrtimer_threshold(u64 period) { } 217 217 #endif 218 218 219 - struct ctl_table; 220 - int proc_watchdog(struct ctl_table *, int, void *, size_t *, loff_t *); 221 - int proc_nmi_watchdog(struct ctl_table *, int , void *, size_t *, loff_t *); 222 - int proc_soft_watchdog(struct ctl_table *, int , void *, size_t *, loff_t *); 223 - int proc_watchdog_thresh(struct ctl_table *, int , void *, size_t *, loff_t *); 224 - int proc_watchdog_cpumask(struct ctl_table *, int, void *, size_t *, loff_t *); 225 - 226 219 #ifdef CONFIG_HAVE_ACPI_APEI_NMI 227 220 #include <asm/nmi.h> 228 221 #endif
-2
include/linux/start_kernel.h
··· 9 9 up something else. */ 10 10 11 11 extern asmlinkage void __init __noreturn start_kernel(void); 12 - extern void __init __noreturn arch_call_rest_init(void); 13 - extern void __ref __noreturn rest_init(void); 14 12 15 13 #endif /* _LINUX_START_KERNEL_H */
+2 -2
include/linux/win_minmax.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 - /** 3 - * lib/minmax.c: windowed min/max tracker by Kathleen Nichols. 2 + /* 3 + * win_minmax.h: windowed min/max tracker by Kathleen Nichols. 4 4 * 5 5 */ 6 6 #ifndef MINMAX_H
+2 -7
init/main.c
··· 683 683 684 684 static __initdata DECLARE_COMPLETION(kthreadd_done); 685 685 686 - noinline void __ref __noreturn rest_init(void) 686 + static noinline void __ref __noreturn rest_init(void) 687 687 { 688 688 struct task_struct *tsk; 689 689 int pid; ··· 827 827 } 828 828 early_param("randomize_kstack_offset", early_randomize_kstack_offset); 829 829 #endif 830 - 831 - void __init __weak __noreturn arch_call_rest_init(void) 832 - { 833 - rest_init(); 834 - } 835 830 836 831 static void __init print_unknown_bootoptions(void) 837 832 { ··· 1071 1076 kcsan_init(); 1072 1077 1073 1078 /* Do the rest non-__init'ed, we're now alive */ 1074 - arch_call_rest_init(); 1079 + rest_init(); 1075 1080 1076 1081 /* 1077 1082 * Avoid stack canaries in callers of boot_init_stack_canary for gcc-10
+35 -2
ipc/ipc_sysctl.c
··· 14 14 #include <linux/ipc_namespace.h> 15 15 #include <linux/msg.h> 16 16 #include <linux/slab.h> 17 + #include <linux/cred.h> 17 18 #include "util.h" 18 19 19 20 static int proc_ipc_dointvec_minmax_orphans(struct ctl_table *table, int write, ··· 191 190 return &current->nsproxy->ipc_ns->ipc_set == set; 192 191 } 193 192 193 + static void ipc_set_ownership(struct ctl_table_header *head, 194 + struct ctl_table *table, 195 + kuid_t *uid, kgid_t *gid) 196 + { 197 + struct ipc_namespace *ns = 198 + container_of(head->set, struct ipc_namespace, ipc_set); 199 + 200 + kuid_t ns_root_uid = make_kuid(ns->user_ns, 0); 201 + kgid_t ns_root_gid = make_kgid(ns->user_ns, 0); 202 + 203 + *uid = uid_valid(ns_root_uid) ? ns_root_uid : GLOBAL_ROOT_UID; 204 + *gid = gid_valid(ns_root_gid) ? ns_root_gid : GLOBAL_ROOT_GID; 205 + } 206 + 194 207 static int ipc_permissions(struct ctl_table_header *head, struct ctl_table *table) 195 208 { 196 209 int mode = table->mode; 197 210 198 211 #ifdef CONFIG_CHECKPOINT_RESTORE 199 - struct ipc_namespace *ns = current->nsproxy->ipc_ns; 212 + struct ipc_namespace *ns = 213 + container_of(head->set, struct ipc_namespace, ipc_set); 200 214 201 215 if (((table->data == &ns->ids[IPC_SEM_IDS].next_id) || 202 216 (table->data == &ns->ids[IPC_MSG_IDS].next_id) || 203 217 (table->data == &ns->ids[IPC_SHM_IDS].next_id)) && 204 218 checkpoint_restore_ns_capable(ns->user_ns)) 205 219 mode = 0666; 220 + else 206 221 #endif 207 - return mode; 222 + { 223 + kuid_t ns_root_uid; 224 + kgid_t ns_root_gid; 225 + 226 + ipc_set_ownership(head, table, &ns_root_uid, &ns_root_gid); 227 + 228 + if (uid_eq(current_euid(), ns_root_uid)) 229 + mode >>= 6; 230 + 231 + else if (in_egroup_p(ns_root_gid)) 232 + mode >>= 3; 233 + } 234 + 235 + mode &= 7; 236 + 237 + return (mode << 6) | (mode << 3) | mode; 208 238 } 209 239 210 240 static struct ctl_table_root set_root = { 211 241 .lookup = set_lookup, 212 242 .permissions = ipc_permissions, 243 + .set_ownership = 
ipc_set_ownership, 213 244 }; 214 245 215 246 bool setup_ipc_sysctls(struct ipc_namespace *ns)
+36
ipc/mq_sysctl.c
··· 12 12 #include <linux/stat.h> 13 13 #include <linux/capability.h> 14 14 #include <linux/slab.h> 15 + #include <linux/cred.h> 15 16 16 17 static int msg_max_limit_min = MIN_MSGMAX; 17 18 static int msg_max_limit_max = HARD_MSGMAX; ··· 77 76 return &current->nsproxy->ipc_ns->mq_set == set; 78 77 } 79 78 79 + static void mq_set_ownership(struct ctl_table_header *head, 80 + struct ctl_table *table, 81 + kuid_t *uid, kgid_t *gid) 82 + { 83 + struct ipc_namespace *ns = 84 + container_of(head->set, struct ipc_namespace, mq_set); 85 + 86 + kuid_t ns_root_uid = make_kuid(ns->user_ns, 0); 87 + kgid_t ns_root_gid = make_kgid(ns->user_ns, 0); 88 + 89 + *uid = uid_valid(ns_root_uid) ? ns_root_uid : GLOBAL_ROOT_UID; 90 + *gid = gid_valid(ns_root_gid) ? ns_root_gid : GLOBAL_ROOT_GID; 91 + } 92 + 93 + static int mq_permissions(struct ctl_table_header *head, struct ctl_table *table) 94 + { 95 + int mode = table->mode; 96 + kuid_t ns_root_uid; 97 + kgid_t ns_root_gid; 98 + 99 + mq_set_ownership(head, table, &ns_root_uid, &ns_root_gid); 100 + 101 + if (uid_eq(current_euid(), ns_root_uid)) 102 + mode >>= 6; 103 + 104 + else if (in_egroup_p(ns_root_gid)) 105 + mode >>= 3; 106 + 107 + mode &= 7; 108 + 109 + return (mode << 6) | (mode << 3) | mode; 110 + } 111 + 80 112 static struct ctl_table_root set_root = { 81 113 .lookup = set_lookup, 114 + .permissions = mq_permissions, 115 + .set_ownership = mq_set_ownership, 82 116 }; 83 117 84 118 bool setup_mq_sysctls(struct ipc_namespace *ns)
+1 -1
kernel/bounds.c
··· 19 19 DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS); 20 20 DEFINE(MAX_NR_ZONES, __MAX_NR_ZONES); 21 21 #ifdef CONFIG_SMP 22 - DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS)); 22 + DEFINE(NR_CPUS_BITS, bits_per(CONFIG_NR_CPUS)); 23 23 #endif 24 24 DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t)); 25 25 #ifdef CONFIG_LRU_GEN
+24 -20
kernel/kexec_core.c
··· 750 750 PAGE_SIZE - (maddr & ~PAGE_MASK)); 751 751 uchunk = min(ubytes, mchunk); 752 752 753 - /* For file based kexec, source pages are in kernel memory */ 754 - if (image->file_mode) 755 - memcpy(ptr, kbuf, uchunk); 756 - else 757 - result = copy_from_user(ptr, buf, uchunk); 753 + if (uchunk) { 754 + /* For file based kexec, source pages are in kernel memory */ 755 + if (image->file_mode) 756 + memcpy(ptr, kbuf, uchunk); 757 + else 758 + result = copy_from_user(ptr, buf, uchunk); 759 + ubytes -= uchunk; 760 + if (image->file_mode) 761 + kbuf += uchunk; 762 + else 763 + buf += uchunk; 764 + } 758 765 kunmap_local(ptr); 759 766 if (result) { 760 767 result = -EFAULT; 761 768 goto out; 762 769 } 763 - ubytes -= uchunk; 764 770 maddr += mchunk; 765 - if (image->file_mode) 766 - kbuf += mchunk; 767 - else 768 - buf += mchunk; 769 771 mbytes -= mchunk; 770 772 771 773 cond_resched(); ··· 819 817 memset(ptr + uchunk, 0, mchunk - uchunk); 820 818 } 821 819 822 - /* For file based kexec, source pages are in kernel memory */ 823 - if (image->file_mode) 824 - memcpy(ptr, kbuf, uchunk); 825 - else 826 - result = copy_from_user(ptr, buf, uchunk); 820 + if (uchunk) { 821 + /* For file based kexec, source pages are in kernel memory */ 822 + if (image->file_mode) 823 + memcpy(ptr, kbuf, uchunk); 824 + else 825 + result = copy_from_user(ptr, buf, uchunk); 826 + ubytes -= uchunk; 827 + if (image->file_mode) 828 + kbuf += uchunk; 829 + else 830 + buf += uchunk; 831 + } 827 832 kexec_flush_icache_page(page); 828 833 kunmap_local(ptr); 829 834 arch_kexec_pre_free_pages(page_address(page), 1); ··· 838 829 result = -EFAULT; 839 830 goto out; 840 831 } 841 - ubytes -= uchunk; 842 832 maddr += mchunk; 843 - if (image->file_mode) 844 - kbuf += mchunk; 845 - else 846 - buf += mchunk; 847 833 mbytes -= mchunk; 848 834 849 835 cond_resched();
+9
kernel/panic.c
··· 73 73 #define PANIC_PRINT_FTRACE_INFO 0x00000010 74 74 #define PANIC_PRINT_ALL_PRINTK_MSG 0x00000020 75 75 #define PANIC_PRINT_ALL_CPU_BT 0x00000040 76 + #define PANIC_PRINT_BLOCKED_TASKS 0x00000080 76 77 unsigned long panic_print; 77 78 78 79 ATOMIC_NOTIFIER_HEAD(panic_notifier_list); ··· 228 227 229 228 if (panic_print & PANIC_PRINT_FTRACE_INFO) 230 229 ftrace_dump(DUMP_ALL); 230 + 231 + if (panic_print & PANIC_PRINT_BLOCKED_TASKS) 232 + show_state_filter(TASK_UNINTERRUPTIBLE); 231 233 } 232 234 233 235 void check_panic_on_warn(const char *origin) ··· 678 674 pr_warn("WARNING: CPU: %d PID: %d at %pS\n", 679 675 raw_smp_processor_id(), current->pid, caller); 680 676 677 + #pragma GCC diagnostic push 678 + #ifndef __clang__ 679 + #pragma GCC diagnostic ignored "-Wsuggest-attribute=format" 680 + #endif 681 681 if (args) 682 682 vprintk(args->fmt, args->args); 683 + #pragma GCC diagnostic pop 683 684 684 685 print_modules(); 685 686
+5 -8
kernel/ptrace.c
··· 375 375 return 0; 376 376 } 377 377 378 - static inline void ptrace_set_stopped(struct task_struct *task) 378 + static inline void ptrace_set_stopped(struct task_struct *task, bool seize) 379 379 { 380 380 guard(spinlock)(&task->sighand->siglock); 381 381 382 + /* SEIZE doesn't trap tracee on attach */ 383 + if (!seize) 384 + send_signal_locked(SIGSTOP, SEND_SIG_PRIV, task, PIDTYPE_PID); 382 385 /* 383 386 * If the task is already STOPPED, set JOBCTL_TRAP_STOP and 384 387 * TRAPPING, and kick it so that it transits to TRACED. TRAPPING ··· 460 457 return -EPERM; 461 458 462 459 task->ptrace = flags; 463 - 464 460 ptrace_link(task, current); 465 - 466 - /* SEIZE doesn't trap tracee on attach */ 467 - if (!seize) 468 - send_sig_info(SIGSTOP, SEND_SIG_PRIV, task); 469 - 470 - ptrace_set_stopped(task); 461 + ptrace_set_stopped(task, seize); 471 462 } 472 463 } 473 464
+16 -12
kernel/signal.c
··· 2741 2741 /* Has this task already been marked for death? */ 2742 2742 if ((signal->flags & SIGNAL_GROUP_EXIT) || 2743 2743 signal->group_exec_task) { 2744 - clear_siginfo(&ksig->info); 2745 - ksig->info.si_signo = signr = SIGKILL; 2744 + signr = SIGKILL; 2746 2745 sigdelset(&current->pending.signal, SIGKILL); 2747 2746 trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO, 2748 - &sighand->action[SIGKILL - 1]); 2747 + &sighand->action[SIGKILL-1]); 2749 2748 recalc_sigpending(); 2749 + /* 2750 + * implies do_group_exit() or return to PF_USER_WORKER, 2751 + * no need to initialize ksig->info/etc. 2752 + */ 2750 2753 goto fatal; 2751 2754 } 2752 2755 ··· 2859 2856 spin_lock_irq(&sighand->siglock); 2860 2857 } 2861 2858 2862 - if (likely(do_signal_stop(ksig->info.si_signo))) { 2859 + if (likely(do_signal_stop(signr))) { 2863 2860 /* It released the siglock. */ 2864 2861 goto relock; 2865 2862 } ··· 2883 2880 2884 2881 if (sig_kernel_coredump(signr)) { 2885 2882 if (print_fatal_signals) 2886 - print_fatal_signal(ksig->info.si_signo); 2883 + print_fatal_signal(signr); 2887 2884 proc_coredump_connector(current); 2888 2885 /* 2889 2886 * If it was able to dump core, this kills all ··· 2898 2895 2899 2896 /* 2900 2897 * PF_USER_WORKER threads will catch and exit on fatal signals 2901 - * themselves. They have cleanup that must be performed, so 2902 - * we cannot call do_exit() on their behalf. 2898 + * themselves. They have cleanup that must be performed, so we 2899 + * cannot call do_exit() on their behalf. Note that ksig won't 2900 + * be properly initialized, PF_USER_WORKER's shouldn't use it. 2903 2901 */ 2904 2902 if (current->flags & PF_USER_WORKER) 2905 2903 goto out; ··· 2908 2904 /* 2909 2905 * Death signals, no core dump. 
2910 2906 */ 2911 - do_group_exit(ksig->info.si_signo); 2907 + do_group_exit(signr); 2912 2908 /* NOTREACHED */ 2913 2909 } 2914 2910 spin_unlock_irq(&sighand->siglock); 2915 - out: 2911 + 2916 2912 ksig->sig = signr; 2917 2913 2918 - if (!(ksig->ka.sa.sa_flags & SA_EXPOSE_TAGBITS)) 2914 + if (signr && !(ksig->ka.sa.sa_flags & SA_EXPOSE_TAGBITS)) 2919 2915 hide_si_addr_tag_bits(ksig); 2920 - 2921 - return ksig->sig > 0; 2916 + out: 2917 + return signr > 0; 2922 2918 } 2923 2919 2924 2920 /**
+1 -1
kernel/user_namespace.c
··· 931 931 struct uid_gid_map new_map; 932 932 unsigned idx; 933 933 struct uid_gid_extent extent; 934 - char *kbuf = NULL, *pos, *next_line; 934 + char *kbuf, *pos, *next_line; 935 935 ssize_t ret; 936 936 937 937 /* Only allow < page size writes at the beginning of the file */
+12 -10
kernel/watchdog.c
··· 796 796 /* 797 797 * /proc/sys/kernel/watchdog 798 798 */ 799 - int proc_watchdog(struct ctl_table *table, int write, 800 - void *buffer, size_t *lenp, loff_t *ppos) 799 + static int proc_watchdog(struct ctl_table *table, int write, 800 + void *buffer, size_t *lenp, loff_t *ppos) 801 801 { 802 802 return proc_watchdog_common(WATCHDOG_HARDLOCKUP_ENABLED | 803 803 WATCHDOG_SOFTOCKUP_ENABLED, ··· 807 807 /* 808 808 * /proc/sys/kernel/nmi_watchdog 809 809 */ 810 - int proc_nmi_watchdog(struct ctl_table *table, int write, 811 - void *buffer, size_t *lenp, loff_t *ppos) 810 + static int proc_nmi_watchdog(struct ctl_table *table, int write, 811 + void *buffer, size_t *lenp, loff_t *ppos) 812 812 { 813 813 if (!watchdog_hardlockup_available && write) 814 814 return -ENOTSUPP; ··· 816 816 table, write, buffer, lenp, ppos); 817 817 } 818 818 819 + #ifdef CONFIG_SOFTLOCKUP_DETECTOR 819 820 /* 820 821 * /proc/sys/kernel/soft_watchdog 821 822 */ 822 - int proc_soft_watchdog(struct ctl_table *table, int write, 823 - void *buffer, size_t *lenp, loff_t *ppos) 823 + static int proc_soft_watchdog(struct ctl_table *table, int write, 824 + void *buffer, size_t *lenp, loff_t *ppos) 824 825 { 825 826 return proc_watchdog_common(WATCHDOG_SOFTOCKUP_ENABLED, 826 827 table, write, buffer, lenp, ppos); 827 828 } 829 + #endif 828 830 829 831 /* 830 832 * /proc/sys/kernel/watchdog_thresh 831 833 */ 832 - int proc_watchdog_thresh(struct ctl_table *table, int write, 833 - void *buffer, size_t *lenp, loff_t *ppos) 834 + static int proc_watchdog_thresh(struct ctl_table *table, int write, 835 + void *buffer, size_t *lenp, loff_t *ppos) 834 836 { 835 837 int err, old; 836 838 ··· 854 852 * user to specify a mask that will include cpus that have not yet 855 853 * been brought online, if desired. 
856 854 */ 857 - int proc_watchdog_cpumask(struct ctl_table *table, int write, 858 - void *buffer, size_t *lenp, loff_t *ppos) 855 + static int proc_watchdog_cpumask(struct ctl_table *table, int write, 856 + void *buffer, size_t *lenp, loff_t *ppos) 859 857 { 860 858 int err; 861 859
+2 -2
lib/Kconfig.debug
··· 2085 2085 depends on ARCH_HAS_KCOV 2086 2086 depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS 2087 2087 depends on !ARCH_WANTS_NO_INSTR || HAVE_NOINSTR_HACK || \ 2088 - GCC_VERSION >= 120000 || CLANG_VERSION >= 130000 2088 + GCC_VERSION >= 120000 || CC_IS_CLANG 2089 2089 select DEBUG_FS 2090 2090 select GCC_PLUGIN_SANCOV if !CC_HAS_SANCOV_TRACE_PC 2091 2091 select OBJTOOL if HAVE_NOINSTR_HACK ··· 2142 2142 2143 2143 To run the benchmark, it needs to be enabled explicitly, either from 2144 2144 the kernel command line (when built-in), or from userspace (when 2145 - built-in or modular. 2145 + built-in or modular). 2146 2146 2147 2147 Run once during kernel boot: 2148 2148
+1 -1
lib/Kconfig.kasan
··· 158 158 out-of-bounds bugs in stack variables. 159 159 160 160 With Clang, stack instrumentation has a problem that causes excessive 161 - stack usage, see https://bugs.llvm.org/show_bug.cgi?id=38809. Thus, 161 + stack usage, see https://llvm.org/pr38809. Thus, 162 162 with Clang, this option is deemed unsafe. 163 163 164 164 This option is always disabled when compile-testing with Clang to
+1 -1
lib/assoc_array.c
··· 938 938 edit->leaf_p = &new_n0->slots[0]; 939 939 940 940 pr_devel("<--%s() = ok [split shortcut]\n", __func__); 941 - return edit; 941 + return true; 942 942 } 943 943 944 944 /**
+2 -2
lib/buildid.c
··· 140 140 return -EFAULT; /* page not mapped */ 141 141 142 142 ret = -EINVAL; 143 - page_addr = kmap_atomic(page); 143 + page_addr = kmap_local_page(page); 144 144 ehdr = (Elf32_Ehdr *)page_addr; 145 145 146 146 /* compare magic x7f "ELF" */ ··· 156 156 else if (ehdr->e_ident[EI_CLASS] == ELFCLASS64) 157 157 ret = get_build_id_64(page_addr, build_id, size); 158 158 out: 159 - kunmap_atomic(page_addr); 159 + kunmap_local(page_addr); 160 160 put_page(page); 161 161 return ret; 162 162 }
+1 -1
lib/dhry_1.c
··· 277 277 dhry_assert_string_eq(Str_1_Loc, "DHRYSTONE PROGRAM, 1'ST STRING"); 278 278 dhry_assert_string_eq(Str_2_Loc, "DHRYSTONE PROGRAM, 2'ND STRING"); 279 279 280 - User_Time = ktime_to_ms(ktime_sub(End_Time, Begin_Time)); 280 + User_Time = ktime_ms_delta(End_Time, Begin_Time); 281 281 282 282 kfree(Ptr_Glob); 283 283 kfree(Next_Ptr_Glob);
-1
lib/dhry_run.c
··· 10 10 #include <linux/kernel.h> 11 11 #include <linux/module.h> 12 12 #include <linux/moduleparam.h> 13 - #include <linux/mutex.h> 14 13 #include <linux/smp.h> 15 14 16 15 #define DHRY_VAX 1757
+3 -4
lib/dynamic_debug.c
··· 640 640 int cls_id, totct = 0; 641 641 bool wanted; 642 642 643 - cl_str = tmp = kstrdup(instr, GFP_KERNEL); 644 - p = strchr(cl_str, '\n'); 645 - if (p) 646 - *p = '\0'; 643 + cl_str = tmp = kstrdup_and_replace(instr, '\n', '\0', GFP_KERNEL); 644 + if (!tmp) 645 + return -ENOMEM; 647 646 648 647 /* start with previously set state-bits, then modify */ 649 648 curr_bits = old_bits = *dcp->bits;
-77
lib/flex_proportions.c
··· 84 84 } 85 85 86 86 /* 87 - * ---- SINGLE ---- 88 - */ 89 - 90 - int fprop_local_init_single(struct fprop_local_single *pl) 91 - { 92 - pl->events = 0; 93 - pl->period = 0; 94 - raw_spin_lock_init(&pl->lock); 95 - return 0; 96 - } 97 - 98 - void fprop_local_destroy_single(struct fprop_local_single *pl) 99 - { 100 - } 101 - 102 - static void fprop_reflect_period_single(struct fprop_global *p, 103 - struct fprop_local_single *pl) 104 - { 105 - unsigned int period = p->period; 106 - unsigned long flags; 107 - 108 - /* Fast path - period didn't change */ 109 - if (pl->period == period) 110 - return; 111 - raw_spin_lock_irqsave(&pl->lock, flags); 112 - /* Someone updated pl->period while we were spinning? */ 113 - if (pl->period >= period) { 114 - raw_spin_unlock_irqrestore(&pl->lock, flags); 115 - return; 116 - } 117 - /* Aging zeroed our fraction? */ 118 - if (period - pl->period < BITS_PER_LONG) 119 - pl->events >>= period - pl->period; 120 - else 121 - pl->events = 0; 122 - pl->period = period; 123 - raw_spin_unlock_irqrestore(&pl->lock, flags); 124 - } 125 - 126 - /* Event of type pl happened */ 127 - void __fprop_inc_single(struct fprop_global *p, struct fprop_local_single *pl) 128 - { 129 - fprop_reflect_period_single(p, pl); 130 - pl->events++; 131 - percpu_counter_add(&p->events, 1); 132 - } 133 - 134 - /* Return fraction of events of type pl */ 135 - void fprop_fraction_single(struct fprop_global *p, 136 - struct fprop_local_single *pl, 137 - unsigned long *numerator, unsigned long *denominator) 138 - { 139 - unsigned int seq; 140 - s64 num, den; 141 - 142 - do { 143 - seq = read_seqcount_begin(&p->sequence); 144 - fprop_reflect_period_single(p, pl); 145 - num = pl->events; 146 - den = percpu_counter_read_positive(&p->events); 147 - } while (read_seqcount_retry(&p->sequence, seq)); 148 - 149 - /* 150 - * Make fraction <= 1 and denominator > 0 even in presence of percpu 151 - * counter errors 152 - */ 153 - if (den <= num) { 154 - if (num) 155 - den = num; 
156 - else 157 - den = 1; 158 - } 159 - *denominator = den; 160 - *numerator = num; 161 - } 162 - 163 - /* 164 87 * ---- PERCPU ---- 165 88 */ 166 89 #define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
+15
lib/math/div64.c
··· 22 22 #include <linux/export.h> 23 23 #include <linux/math.h> 24 24 #include <linux/math64.h> 25 + #include <linux/minmax.h> 25 26 #include <linux/log2.h> 26 27 27 28 /* Not needed on 64bit architectures */ ··· 191 190 192 191 /* can a * b overflow ? */ 193 192 if (ilog2(a) + ilog2(b) > 62) { 193 + /* 194 + * Note that the algorithm after the if block below might lose 195 + * some precision and the result is more exact for b > a. So 196 + * exchange a and b if a is bigger than b. 197 + * 198 + * For example with a = 43980465100800, b = 100000000, c = 1000000000 199 + * the below calculation doesn't modify b at all because div == 0 200 + * and then shift becomes 45 + 26 - 62 = 9 and so the result 201 + * becomes 4398035251080. However with a and b swapped the exact 202 + * result is calculated (i.e. 4398046510080). 203 + */ 204 + if (a > b) 205 + swap(a, b); 206 + 194 207 /* 195 208 * (b * a) / c is equal to 196 209 *
+1 -1
lib/raid6/Makefile
··· 21 21 ifdef CONFIG_CC_IS_CLANG 22 22 # clang ppc port does not yet support -maltivec when -msoft-float is 23 23 # enabled. A future release of clang will resolve this 24 - # https://bugs.llvm.org/show_bug.cgi?id=31177 24 + # https://llvm.org/pr31177 25 25 CFLAGS_REMOVE_altivec1.o += -msoft-float 26 26 CFLAGS_REMOVE_altivec2.o += -msoft-float 27 27 CFLAGS_REMOVE_altivec4.o += -msoft-float
+15 -5
lib/sort.c
··· 215 215 /* pre-scale counters for performance */ 216 216 size_t n = num * size, a = (num/2) * size; 217 217 const unsigned int lsbit = size & -size; /* Used to find parent */ 218 + size_t shift = 0; 218 219 219 220 if (!a) /* num < 2 || size == 0 */ 220 221 return; ··· 243 242 for (;;) { 244 243 size_t b, c, d; 245 244 246 - if (a) /* Building heap: sift down --a */ 247 - a -= size; 248 - else if (n -= size) /* Sorting: Extract root to --n */ 245 + if (a) /* Building heap: sift down a */ 246 + a -= size << shift; 247 + else if (n > 3 * size) { /* Sorting: Extract two largest elements */ 248 + n -= size; 249 249 do_swap(base, base + n, size, swap_func, priv); 250 - else /* Sort complete */ 250 + shift = do_cmp(base + size, base + 2 * size, cmp_func, priv) <= 0; 251 + a = size << shift; 252 + n -= size; 253 + do_swap(base + a, base + n, size, swap_func, priv); 254 + } else if (n > size) { /* Sorting: Extract root */ 255 + n -= size; 256 + do_swap(base, base + n, size, swap_func, priv); 257 + } else { /* Sort complete */ 251 258 break; 259 + } 252 260 253 261 /* 254 262 * Sift element at "a" down into heap. This is the ··· 272 262 * average, 3/4 worst-case.) 273 263 */ 274 264 for (b = a; c = 2*b + size, (d = c + size) < n;) 275 - b = do_cmp(base + c, base + d, cmp_func, priv) >= 0 ? c : d; 265 + b = do_cmp(base + c, base + d, cmp_func, priv) > 0 ? c : d; 276 266 if (d == n) /* Special case last leaf with no sibling */ 277 267 b = c; 278 268
+1 -1
lib/stackinit_kunit.c
··· 417 417 * These are expected to fail for most configurations because neither 418 418 * GCC nor Clang have a way to perform initialization of variables in 419 419 * non-code areas (i.e. in a switch statement before the first "case"). 420 - * https://bugs.llvm.org/show_bug.cgi?id=44916 420 + * https://llvm.org/pr44916 421 421 */ 422 422 DEFINE_TEST_DRIVER(switch_1_none, uint64_t, SCALAR, ALWAYS_FAIL); 423 423 DEFINE_TEST_DRIVER(switch_2_none, uint64_t, SCALAR, ALWAYS_FAIL);
+1 -1
mm/slab_common.c
··· 655 655 656 656 struct kmem_cache * 657 657 kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1] __ro_after_init = 658 - { /* initialization for https://bugs.llvm.org/show_bug.cgi?id=42570 */ }; 658 + { /* initialization for https://llvm.org/pr42570 */ }; 659 659 EXPORT_SYMBOL(kmalloc_caches); 660 660 661 661 #ifdef CONFIG_RANDOM_KMALLOC_CACHES
+1 -1
net/bridge/br_multicast.c
··· 5053 5053 free_percpu(br->mcast_stats); 5054 5054 } 5055 5055 5056 - /* noinline for https://bugs.llvm.org/show_bug.cgi?id=45802#c9 */ 5056 + /* noinline for https://llvm.org/pr45802#c9 */ 5057 5057 static noinline_for_stack void mcast_stats_add_dir(u64 *dst, u64 *src) 5058 5058 { 5059 5059 dst[BR_MCAST_DIR_RX] += src[BR_MCAST_DIR_RX];
+1 -1
net/ipv4/gre_demux.c
··· 217 217 module_exit(gre_exit); 218 218 219 219 MODULE_DESCRIPTION("GRE over IPv4 demultiplexer driver"); 220 - MODULE_AUTHOR("D. Kozlov (xeb@mail.ru)"); 220 + MODULE_AUTHOR("D. Kozlov <xeb@mail.ru>"); 221 221 MODULE_LICENSE("GPL");
+1 -1
net/ipv6/ip6_gre.c
··· 2405 2405 module_init(ip6gre_init); 2406 2406 module_exit(ip6gre_fini); 2407 2407 MODULE_LICENSE("GPL"); 2408 - MODULE_AUTHOR("D. Kozlov (xeb@mail.ru)"); 2408 + MODULE_AUTHOR("D. Kozlov <xeb@mail.ru>"); 2409 2409 MODULE_DESCRIPTION("GRE over IPv6 tunneling device"); 2410 2410 MODULE_ALIAS_RTNL_LINK("ip6gre"); 2411 2411 MODULE_ALIAS_RTNL_LINK("ip6gretap");
+1 -1
net/iucv/iucv.c
··· 1904 1904 subsys_initcall(iucv_init); 1905 1905 module_exit(iucv_exit); 1906 1906 1907 - MODULE_AUTHOR("(C) 2001 IBM Corp. by Fritz Elfert (felfert@millenux.com)"); 1907 + MODULE_AUTHOR("(C) 2001 IBM Corp. by Fritz Elfert <felfert@millenux.com>"); 1908 1908 MODULE_DESCRIPTION("Linux for S/390 IUCV lowlevel driver"); 1909 1909 MODULE_LICENSE("GPL");
+1 -1
net/mpls/mpls_gso.c
··· 109 109 module_exit(mpls_gso_exit); 110 110 111 111 MODULE_DESCRIPTION("MPLS GSO support"); 112 - MODULE_AUTHOR("Simon Horman (horms@verge.net.au)"); 112 + MODULE_AUTHOR("Simon Horman <horms@verge.net.au>"); 113 113 MODULE_LICENSE("GPL");
+2
scripts/const_structs.checkpatch
··· 2 2 address_space_operations 3 3 backlight_ops 4 4 block_device_operations 5 + bus_type 5 6 clk_ops 6 7 comedi_lrange 7 8 component_ops 8 9 dentry_operations 9 10 dev_pm_ops 11 + device_type 10 12 dma_map_ops 11 13 driver_info 12 14 drm_connector_funcs
+1 -1
scripts/min-tool-version.sh
··· 29 29 elif [ "$SRCARCH" = loongarch ]; then 30 30 echo 18.0.0 31 31 else 32 - echo 11.0.0 32 + echo 13.0.1 33 33 fi 34 34 ;; 35 35 rustc)
+1 -1
scripts/recordmcount.pl
··· 352 352 $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s_mcount\$"; 353 353 } elsif ($arch eq "riscv") { 354 354 $function_regex = "^([0-9a-fA-F]+)\\s+<([^.0-9][0-9a-zA-Z_\\.]+)>:"; 355 - $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL(_PLT)?\\s_?mcount\$"; 355 + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL(_PLT)?\\s_mcount\$"; 356 356 $type = ".quad"; 357 357 $alignment = 2; 358 358 } elsif ($arch eq "csky") {
-2
security/Kconfig
··· 142 142 config FORTIFY_SOURCE 143 143 bool "Harden common str/mem functions against buffer overflows" 144 144 depends on ARCH_HAS_FORTIFY_SOURCE 145 - # https://bugs.llvm.org/show_bug.cgi?id=41459 146 - depends on !CC_IS_CLANG || CLANG_VERSION >= 120001 147 145 # https://github.com/llvm/llvm-project/issues/53645 148 146 depends on !CC_IS_CLANG || !X86_32 149 147 help
-1
tools/objtool/noreturns.h
··· 13 13 NORETURN(__stack_chk_fail) 14 14 NORETURN(__tdx_hypercall_failed) 15 15 NORETURN(__ubsan_handle_builtin_unreachable) 16 - NORETURN(arch_call_rest_init) 17 16 NORETURN(arch_cpu_idle_dead) 18 17 NORETURN(bch2_trans_in_restart_error) 19 18 NORETURN(bch2_trans_restart_error)
+2
tools/testing/selftests/filesystems/eventfd/.gitignore
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + eventfd_test
+7
tools/testing/selftests/filesystems/eventfd/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + CFLAGS += $(KHDR_INCLUDES) 4 + LDLIBS += -lpthread 5 + TEST_GEN_PROGS := eventfd_test 6 + 7 + include ../../lib.mk
+186
tools/testing/selftests/filesystems/eventfd/eventfd_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define _GNU_SOURCE 4 + #include <errno.h> 5 + #include <fcntl.h> 6 + #include <asm/unistd.h> 7 + #include <linux/time_types.h> 8 + #include <unistd.h> 9 + #include <assert.h> 10 + #include <signal.h> 11 + #include <pthread.h> 12 + #include <sys/epoll.h> 13 + #include <sys/eventfd.h> 14 + #include "../../kselftest_harness.h" 15 + 16 + struct error { 17 + int code; 18 + char msg[512]; 19 + }; 20 + 21 + static int error_set(struct error *err, int code, const char *fmt, ...) 22 + { 23 + va_list args; 24 + int r; 25 + 26 + if (code == 0 || !err || err->code != 0) 27 + return code; 28 + 29 + err->code = code; 30 + va_start(args, fmt); 31 + r = vsnprintf(err->msg, sizeof(err->msg), fmt, args); 32 + assert((size_t)r < sizeof(err->msg)); 33 + va_end(args); 34 + 35 + return code; 36 + } 37 + 38 + static inline int sys_eventfd2(unsigned int count, int flags) 39 + { 40 + return syscall(__NR_eventfd2, count, flags); 41 + } 42 + 43 + TEST(eventfd01) 44 + { 45 + int fd, flags; 46 + 47 + fd = sys_eventfd2(0, 0); 48 + ASSERT_GE(fd, 0); 49 + 50 + flags = fcntl(fd, F_GETFL); 51 + // since the kernel automatically added O_RDWR. 
52 + EXPECT_EQ(flags, O_RDWR); 53 + 54 + close(fd); 55 + } 56 + 57 + TEST(eventfd02) 58 + { 59 + int fd, flags; 60 + 61 + fd = sys_eventfd2(0, EFD_CLOEXEC); 62 + ASSERT_GE(fd, 0); 63 + 64 + flags = fcntl(fd, F_GETFD); 65 + ASSERT_GT(flags, -1); 66 + EXPECT_EQ(flags, FD_CLOEXEC); 67 + 68 + close(fd); 69 + } 70 + 71 + TEST(eventfd03) 72 + { 73 + int fd, flags; 74 + 75 + fd = sys_eventfd2(0, EFD_NONBLOCK); 76 + ASSERT_GE(fd, 0); 77 + 78 + flags = fcntl(fd, F_GETFL); 79 + ASSERT_GT(flags, -1); 80 + EXPECT_EQ(flags & EFD_NONBLOCK, EFD_NONBLOCK); 81 + EXPECT_EQ(flags & O_RDWR, O_RDWR); 82 + 83 + close(fd); 84 + } 85 + 86 + TEST(eventfd04) 87 + { 88 + int fd, flags; 89 + 90 + fd = sys_eventfd2(0, EFD_CLOEXEC|EFD_NONBLOCK); 91 + ASSERT_GE(fd, 0); 92 + 93 + flags = fcntl(fd, F_GETFL); 94 + ASSERT_GT(flags, -1); 95 + EXPECT_EQ(flags & EFD_NONBLOCK, EFD_NONBLOCK); 96 + EXPECT_EQ(flags & O_RDWR, O_RDWR); 97 + 98 + flags = fcntl(fd, F_GETFD); 99 + ASSERT_GT(flags, -1); 100 + EXPECT_EQ(flags, FD_CLOEXEC); 101 + 102 + close(fd); 103 + } 104 + 105 + static inline void trim_newline(char *str) 106 + { 107 + char *pos = strrchr(str, '\n'); 108 + 109 + if (pos) 110 + *pos = '\0'; 111 + } 112 + 113 + static int verify_fdinfo(int fd, struct error *err, const char *prefix, 114 + size_t prefix_len, const char *expect, ...) 
115 + { 116 + char buffer[512] = {0, }; 117 + char path[512] = {0, }; 118 + va_list args; 119 + FILE *f; 120 + char *line = NULL; 121 + size_t n = 0; 122 + int found = 0; 123 + int r; 124 + 125 + va_start(args, expect); 126 + r = vsnprintf(buffer, sizeof(buffer), expect, args); 127 + assert((size_t)r < sizeof(buffer)); 128 + va_end(args); 129 + 130 + snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd); 131 + f = fopen(path, "re"); 132 + if (!f) 133 + return error_set(err, -1, "fdinfo open failed for %d", fd); 134 + 135 + while (getline(&line, &n, f) != -1) { 136 + char *val; 137 + 138 + if (strncmp(line, prefix, prefix_len)) 139 + continue; 140 + 141 + found = 1; 142 + 143 + val = line + prefix_len; 144 + r = strcmp(val, buffer); 145 + if (r != 0) { 146 + trim_newline(line); 147 + trim_newline(buffer); 148 + error_set(err, -1, "%s '%s' != '%s'", 149 + prefix, val, buffer); 150 + } 151 + break; 152 + } 153 + 154 + free(line); 155 + fclose(f); 156 + 157 + if (found == 0) 158 + return error_set(err, -1, "%s not found for fd %d", 159 + prefix, fd); 160 + 161 + return 0; 162 + } 163 + 164 + TEST(eventfd05) 165 + { 166 + struct error err = {0}; 167 + int fd, ret; 168 + 169 + fd = sys_eventfd2(0, EFD_SEMAPHORE); 170 + ASSERT_GE(fd, 0); 171 + 172 + ret = fcntl(fd, F_GETFL); 173 + ASSERT_GT(ret, -1); 174 + EXPECT_EQ(ret & O_RDWR, O_RDWR); 175 + 176 + // The semaphore could only be obtained from fdinfo. 177 + ret = verify_fdinfo(fd, &err, "eventfd-semaphore: ", 19, "1\n"); 178 + if (ret != 0) 179 + ksft_print_msg("eventfd-semaphore check failed, msg: %s\n", 180 + err.msg); 181 + EXPECT_EQ(ret, 0); 182 + 183 + close(fd); 184 + } 185 + 186 + TEST_HARNESS_MAIN
tools/testing/selftests/mm/Makefile (+5)
···
 TEST_FILES := test_vmalloc.sh
 TEST_FILES += test_hmm.sh
 TEST_FILES += va_high_addr_switch.sh
+TEST_FILES += charge_reserved_hugetlb.sh
+TEST_FILES += hugetlb_reparenting_test.sh
+
+# required by charge_reserved_hugetlb.sh
+TEST_FILES += write_hugetlb_memory.sh
 
 include ../lib.mk
 
tools/testing/selftests/mm/charge_reserved_hugetlb.sh (+4)
···
   exit $ksft_skip
 fi
 
+nr_hugepgs=$(cat /proc/sys/vm/nr_hugepages)
+
 fault_limit_file=limit_in_bytes
 reservation_limit_file=rsvd.limit_in_bytes
 fault_usage_file=usage_in_bytes
···
   umount $cgroup_path
   rmdir $cgroup_path
 fi
+
+echo "$nr_hugepgs" > /proc/sys/vm/nr_hugepages
tools/testing/selftests/mm/hugetlb_reparenting_test.sh (+7 -2)
···
   exit $ksft_skip
 fi
 
+nr_hugepgs=$(cat /proc/sys/vm/nr_hugepages)
 usage_file=usage_in_bytes
 
 if [[ "$1" == "-cgroup-v2" ]]; then
···
 
 echo ALL PASS
 
-umount $CGROUP_ROOT
-rm -rf $CGROUP_ROOT
+if [[ $do_umount ]]; then
+  umount $CGROUP_ROOT
+  rm -rf $CGROUP_ROOT
+fi
+
+echo "$nr_hugepgs" > /proc/sys/vm/nr_hugepages
tools/testing/selftests/mm/on-fault-limit.c (+18 -20)
···
 #include <string.h>
 #include <sys/time.h>
 #include <sys/resource.h>
+#include "../kselftest.h"
 
-static int test_limit(void)
+static void test_limit(void)
 {
-	int ret = 1;
 	struct rlimit lims;
 	void *map;
 
-	if (getrlimit(RLIMIT_MEMLOCK, &lims)) {
-		perror("getrlimit");
-		return ret;
-	}
+	if (getrlimit(RLIMIT_MEMLOCK, &lims))
+		ksft_exit_fail_msg("getrlimit: %s\n", strerror(errno));
 
-	if (mlockall(MCL_ONFAULT | MCL_FUTURE)) {
-		perror("mlockall");
-		return ret;
-	}
+	if (mlockall(MCL_ONFAULT | MCL_FUTURE))
+		ksft_exit_fail_msg("mlockall: %s\n", strerror(errno));
 
 	map = mmap(NULL, 2 * lims.rlim_max, PROT_READ | PROT_WRITE,
 		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
-	if (map != MAP_FAILED)
-		printf("mmap should have failed, but didn't\n");
-	else {
-		ret = 0;
-		munmap(map, 2 * lims.rlim_max);
-	}
 
+	ksft_test_result(map == MAP_FAILED, "The map failed respecting mlock limits\n");
+
+	if (map != MAP_FAILED)
+		munmap(map, 2 * lims.rlim_max);
 	munlockall();
-	return ret;
 }
 
 int main(int argc, char **argv)
 {
-	int ret = 0;
+	ksft_print_header();
+	ksft_set_plan(1);
 
-	ret += test_limit();
-	return ret;
+	if (!getuid())
+		ksft_test_result_skip("The test must be run from a normal user\n");
+	else
+		test_limit();
+
+	ksft_finished();
 }
tools/testing/selftests/mm/protection_keys.c (+34)
···
 u64 shadow_pkey_reg;
 int dprint_in_signal;
 char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
+char buf[256];
 
 void cat_into_file(char *str, char *file)
 {
···
 	shadow_pkey_reg = __read_pkey_reg();
 }
 
+void restore_settings_atexit(void)
+{
+	cat_into_file(buf, "/proc/sys/vm/nr_hugepages");
+}
+
+void save_settings(void)
+{
+	int fd;
+	int err;
+
+	if (geteuid())
+		return;
+
+	fd = open("/proc/sys/vm/nr_hugepages", O_RDONLY);
+	if (fd < 0) {
+		fprintf(stderr, "error opening\n");
+		perror("error: ");
+		exit(__LINE__);
+	}
+
+	/* -1 to guarantee leaving the trailing \0 */
+	err = read(fd, buf, sizeof(buf)-1);
+	if (err < 0) {
+		fprintf(stderr, "error reading\n");
+		perror("error: ");
+		exit(__LINE__);
+	}
+
+	atexit(restore_settings_atexit);
+	close(fd);
+}
+
 int main(void)
 {
 	int nr_iterations = 22;
···
 
 	srand((unsigned int)time(NULL));
 
+	save_settings();
 	setup_handlers();
 
 	printf("has pkeys: %d\n", pkeys_supported);
tools/testing/selftests/mm/run_vmtests.sh (+15 -2)
···
 cat <<EOF
 usage: ${BASH_SOURCE[0]:-$0} [ options ]
 
-	-a: run all tests, including extra ones
+	-a: run all tests, including extra ones (other than destructive ones)
 	-t: specify specific categories to tests to run
 	-h: display this message
 	-n: disable TAP output
+	-d: run destructive tests
 
 The default behavior is to run required tests only. If -a is specified,
 will run all tests.
···
 }
 
 RUN_ALL=false
+RUN_DESTRUCTIVE=false
 TAP_PREFIX="# "
 
 while getopts "aht:n" OPT; do
···
 		"h") usage ;;
 		"t") VM_SELFTEST_ITEMS=${OPTARG} ;;
 		"n") TAP_PREFIX= ;;
+		"d") RUN_DESTRUCTIVE=true ;;
 	esac
 done
 shift $((OPTIND -1))
···
 
 CATEGORY="compaction" run_test ./compaction_test
 
-CATEGORY="mlock" run_test sudo -u nobody ./on-fault-limit
+if command -v sudo &> /dev/null;
+then
+	CATEGORY="mlock" run_test sudo -u nobody ./on-fault-limit
+else
+	echo "# SKIP ./on-fault-limit"
+fi
 
 CATEGORY="mmap" run_test ./map_populate
···
 CATEGORY="mremap" run_test ./mremap_test
 
 CATEGORY="hugetlb" run_test ./thuge-gen
+CATEGORY="hugetlb" run_test ./charge_reserved_hugetlb.sh -cgroup-v2
+CATEGORY="hugetlb" run_test ./hugetlb_reparenting_test.sh -cgroup-v2
+if $RUN_DESTRUCTIVE; then
+	CATEGORY="hugetlb" run_test ./hugetlb-read-hwpoison
+fi
 
 if [ $VADDR64 -ne 0 ]; then
 