Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2025-01-24-23-16' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:
"Mainly individually changelogged singleton patches. The patch series
in this pull are:

- "lib min_heap: Improve min_heap safety, testing, and documentation"
from Kuan-Wei Chiu provides various tightenings to the min_heap
library code

- "xarray: extract __xa_cmpxchg_raw" from Tamir Duberstein performs
some cleanup and Rust preparation in the xarray library code

- "Update reference to include/asm-<arch>" from Geert Uytterhoeven
fixes pathnames in some code comments

- "Converge on using secs_to_jiffies()" from Easwar Hariharan uses
the new secs_to_jiffies() in various places where that is
appropriate

- "ocfs2, dlmfs: convert to the new mount API" from Eric Sandeen
switches two filesystems to the new mount API

- "Convert ocfs2 to use folios" from Matthew Wilcox does that

- "Remove get_task_comm() and print task comm directly" from Yafang
Shao removes now-unneeded calls to get_task_comm() in various
places

- "squashfs: reduce memory usage and update docs" from Phillip
Lougher implements some memory savings in squashfs and performs
some maintainability work

- "lib: clarify comparison function requirements" from Kuan-Wei Chiu
tightens the sort code's behaviour and adds some maintenance work

- "nilfs2: protect busy buffer heads from being force-cleared" from
Ryusuke Konishi fixes an issue in nilfs when the fs is presented
with a corrupted image

- "nilfs2: fix kernel-doc comments for function return values" from
Ryusuke Konishi fixes some nilfs kerneldoc

- "nilfs2: fix issues with rename operations" from Ryusuke Konishi
addresses some nilfs BUG_ONs which syzbot was able to trigger

- "minmax.h: Cleanups and minor optimisations" from David Laight does
some maintenance work on the min/max library code

- "Fixes and cleanups to xarray" from Kemeng Shi does maintenance
work on the xarray library code"

* tag 'mm-nonmm-stable-2025-01-24-23-16' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (131 commits)
ocfs2: use str_yes_no() and str_no_yes() helper functions
include/linux/lz4.h: add some missing macros
Xarray: use xa_mark_t in xas_squash_marks() to keep code consistent
Xarray: remove repeat check in xas_squash_marks()
Xarray: distinguish large entries correctly in xas_split_alloc()
Xarray: move forward index correctly in xas_pause()
Xarray: do not return sibling entries from xas_find_marked()
ipc/util.c: complete the kernel-doc function descriptions
gcov: clang: use correct function param names
latencytop: use correct kernel-doc format for func params
minmax.h: remove some #defines that are only expanded once
minmax.h: simplify the variants of clamp()
minmax.h: move all the clamp() definitions after the min/max() ones
minmax.h: use BUILD_BUG_ON_MSG() for the lo < hi test in clamp()
minmax.h: reduce the #define expansion of min(), max() and clamp()
minmax.h: update some comments
minmax.h: add whitespace around operators and after commas
nilfs2: do not update mtime of renamed directory that is not moved
nilfs2: handle errors that nilfs_prepare_chunk() may return
CREDITS: fix spelling mistake
...

+2494 -2086
+1
.mailmap
··· 415 415 Linas Vepstas <linas@austin.ibm.com> 416 416 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch> 417 417 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de> 418 + Linus Lüssing <linus.luessing@c0d3.blue> <ll@simonwunderlich.de> 418 419 <linux-hardening@vger.kernel.org> <kernel-hardening@lists.openwall.com> 419 420 Li Yang <leoyang.li@nxp.com> <leoli@freescale.com> 420 421 Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
+1 -1
CREDITS
··· 4339 4339 D: Freescale QE SoC support and Ethernet driver 4340 4340 S: B-1206 Jingmao Guojigongyu 4341 4341 S: 16 Baliqiao Nanjie, Beijing 101100 4342 - S: People's Repulic of China 4342 + S: People's Republic of China 4343 4343 4344 4344 N: Vlad Yasevich 4345 4345 E: vyasevich@gmail.com
+19 -19
Documentation/accounting/delay-accounting.rst
··· 100 100 # ./getdelays -d -p 10 101 101 (output similar to next case) 102 102 103 - Get sum of delays, since system boot, for all pids with tgid 5:: 103 + Get sum and peak of delays, since system boot, for all pids with tgid 242:: 104 104 105 - # ./getdelays -d -t 5 105 + bash-4.4# ./getdelays -d -t 242 106 106 print delayacct stats ON 107 - TGID 5 107 + TGID 242 108 108 109 109 110 - CPU count real total virtual total delay total delay average 111 - 8 7000000 6872122 3382277 0.423ms 112 - IO count delay total delay average 113 - 0 0 0.000ms 114 - SWAP count delay total delay average 115 - 0 0 0.000ms 116 - RECLAIM count delay total delay average 117 - 0 0 0.000ms 118 - THRASHING count delay total delay average 119 - 0 0 0.000ms 120 - COMPACT count delay total delay average 121 - 0 0 0.000ms 122 - WPCOPY count delay total delay average 123 - 0 0 0.000ms 124 - IRQ count delay total delay average 125 - 0 0 0.000ms 110 + CPU count real total virtual total delay total delay average delay max delay min 111 + 39 156000000 156576579 2111069 0.054ms 0.212296ms 0.031307ms 112 + IO count delay total delay average delay max delay min 113 + 0 0 0.000ms 0.000000ms 0.000000ms 114 + SWAP count delay total delay average delay max delay min 115 + 0 0 0.000ms 0.000000ms 0.000000ms 116 + RECLAIM count delay total delay average delay max delay min 117 + 0 0 0.000ms 0.000000ms 0.000000ms 118 + THRASHING count delay total delay average delay max delay min 119 + 0 0 0.000ms 0.000000ms 0.000000ms 120 + COMPACT count delay total delay average delay max delay min 121 + 0 0 0.000ms 0.000000ms 0.000000ms 122 + WPCOPY count delay total delay average delay max delay min 123 + 156 11215873 0.072ms 0.207403ms 0.033913ms 124 + IRQ count delay total delay average delay max delay min 125 + 0 0 0.000ms 0.000000ms 0.000000ms 126 126 127 127 Get IO accounting for pid 1, it works only with -p:: 128 128
+2
Documentation/core-api/min_heap.rst
··· 4 4 Min Heap API 5 5 ============ 6 6 7 + :Author: Kuan-Wei Chiu <visitorckw@gmail.com> 8 + 7 9 Introduction 8 10 ============ 9 11
+13 -11
Documentation/core-api/xarray.rst
··· 42 42 to turn a tagged entry back into an untagged pointer and xa_pointer_tag() 43 43 to retrieve the tag of an entry. Tagged pointers use the same bits that 44 44 are used to distinguish value entries from normal pointers, so you must 45 - decide whether they want to store value entries or tagged pointers in 46 - any particular XArray. 45 + decide whether you want to store value entries or tagged pointers in any 46 + particular XArray. 47 47 48 48 The XArray does not support storing IS_ERR() pointers as some 49 49 conflict with value entries or internal entries. ··· 52 52 occupy a range of indices. Once stored to, looking up any index in 53 53 the range will return the same entry as looking up any other index in 54 54 the range. Storing to any index will store to all of them. Multi-index 55 - entries can be explicitly split into smaller entries, or storing ``NULL`` 56 - into any entry will cause the XArray to forget about the range. 55 + entries can be explicitly split into smaller entries. Unsetting (using 56 + xa_erase() or xa_store() with ``NULL``) any entry will cause the XArray 57 + to forget about the range. 57 58 58 59 Normal API 59 60 ========== ··· 64 63 allocated ones. A freshly-initialised XArray contains a ``NULL`` 65 64 pointer at every index. 66 65 67 - You can then set entries using xa_store() and get entries 68 - using xa_load(). xa_store will overwrite any entry with the 69 - new entry and return the previous entry stored at that index. You can 70 - use xa_erase() instead of calling xa_store() with a 71 - ``NULL`` entry. There is no difference between an entry that has never 72 - been stored to, one that has been erased and one that has most recently 73 - had ``NULL`` stored to it. 66 + You can then set entries using xa_store() and get entries using 67 + xa_load(). xa_store() will overwrite any entry with the new entry and 68 + return the previous entry stored at that index. You can unset entries 69 + using xa_erase() or by setting the entry to ``NULL`` using xa_store(). 70 + There is no difference between an entry that has never been stored to 71 + and one that has been erased with xa_erase(); an entry that has most 72 + recently had ``NULL`` stored to it is also equivalent except if the 73 + XArray was initialized with ``XA_FLAGS_ALLOC``. 74 74 75 75 You can conditionally replace an entry at an index by using 76 76 xa_cmpxchg(). Like cmpxchg(), it will only succeed if
+6 -8
Documentation/filesystems/squashfs.rst
··· 6 6 7 7 Squashfs is a compressed read-only filesystem for Linux. 8 8 9 - It uses zlib, lz4, lzo, or xz compression to compress files, inodes and 9 + It uses zlib, lz4, lzo, xz or zstd compression to compress files, inodes and 10 10 directories. Inodes in the system are very small and all blocks are packed to 11 11 minimise data overhead. Block sizes greater than 4K are supported up to a 12 12 maximum of 1Mbytes (default block size 128K). ··· 16 16 block device/memory systems (e.g. embedded systems) where low overhead is 17 17 needed. 18 18 19 - Mailing list: squashfs-devel@lists.sourceforge.net 20 - Web site: www.squashfs.org 19 + Mailing list (kernel code): linux-fsdevel@vger.kernel.org 20 + Web site: github.com/plougher/squashfs-tools 21 21 22 22 1. Filesystem Features 23 23 ---------------------- ··· 58 58 59 59 As squashfs is a read-only filesystem, the mksquashfs program must be used to 60 60 create populated squashfs filesystems. This and other squashfs utilities 61 - can be obtained from http://www.squashfs.org. Usage instructions can be 62 - obtained from this site also. 63 - 64 - The squashfs-tools development tree is now located on kernel.org 65 - git://git.kernel.org/pub/scm/fs/squashfs/squashfs-tools.git 61 + are very likely packaged by your linux distribution (called squashfs-tools). 62 + The source code can be obtained from github.com/plougher/squashfs-tools. 63 + Usage instructions can also be obtained from this site. 66 64 67 65 2.1 Mount options 68 66 -----------------
+5 -5
MAINTAINERS
··· 2864 2864 R: Chester Lin <chester62515@gmail.com> 2865 2865 R: Matthias Brugger <mbrugger@suse.com> 2866 2866 R: Ghennadi Procopciuc <ghennadi.procopciuc@oss.nxp.com> 2867 - L: NXP S32 Linux Team <s32@nxp.com> 2867 + R: NXP S32 Linux Team <s32@nxp.com> 2868 2868 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2869 2869 S: Maintained 2870 2870 F: arch/arm64/boot/dts/freescale/s32g*.dts* ··· 16705 16705 16706 16706 NITRO ENCLAVES (NE) 16707 16707 M: Alexandru Ciobotaru <alcioa@amazon.com> 16708 + R: The AWS Nitro Enclaves Team <aws-nitro-enclaves-devel@amazon.com> 16708 16709 L: linux-kernel@vger.kernel.org 16709 - L: The AWS Nitro Enclaves Team <aws-nitro-enclaves-devel@amazon.com> 16710 16710 S: Supported 16711 16711 W: https://aws.amazon.com/ec2/nitro/nitro-enclaves/ 16712 16712 F: Documentation/virt/ne_overview.rst ··· 16717 16717 16718 16718 NITRO SECURE MODULE (NSM) 16719 16719 M: Alexander Graf <graf@amazon.com> 16720 + R: The AWS Nitro Enclaves Team <aws-nitro-enclaves-devel@amazon.com> 16720 16721 L: linux-kernel@vger.kernel.org 16721 - L: The AWS Nitro Enclaves Team <aws-nitro-enclaves-devel@amazon.com> 16722 16722 S: Supported 16723 16723 W: https://aws.amazon.com/ec2/nitro/nitro-enclaves/ 16724 16724 F: drivers/misc/nsm.c ··· 18530 18530 M: Shawn Guo <shawnguo@kernel.org> 18531 18531 M: Jacky Bai <ping.bai@nxp.com> 18532 18532 R: Pengutronix Kernel Team <kernel@pengutronix.de> 18533 + R: NXP S32 Linux Team <s32@nxp.com> 18533 18534 L: linux-gpio@vger.kernel.org 18534 - L: NXP S32 Linux Team <s32@nxp.com> 18535 18535 S: Maintained 18536 18536 F: Documentation/devicetree/bindings/pinctrl/fsl,* 18537 18537 F: Documentation/devicetree/bindings/pinctrl/nxp,s32* ··· 19675 19675 19676 19676 RASPBERRY PI PISP BACK END 19677 19677 M: Jacopo Mondi <jacopo.mondi@ideasonboard.com> 19678 - L: Raspberry Pi Kernel Maintenance <kernel-list@raspberrypi.com> 19678 + R: Raspberry Pi Kernel Maintenance <kernel-list@raspberrypi.com> 19679 19679 L: linux-media@vger.kernel.org 19680 19680 S: Maintained 19681 19681 F: Documentation/devicetree/bindings/media/raspberrypi,pispbe.yaml
-1
arch/alpha/lib/fpreg.c
··· 10 10 #include <linux/preempt.h> 11 11 #include <asm/fpu.h> 12 12 #include <asm/thread_info.h> 13 - #include <asm/fpu.h> 14 13 15 14 #if defined(CONFIG_ALPHA_EV6) || defined(CONFIG_ALPHA_EV67) 16 15 #define STT(reg,val) asm volatile ("ftoit $f"#reg",%0" : "=r"(val));
+2 -3
arch/arc/kernel/unaligned.c
··· 200 200 struct callee_regs *cregs) 201 201 { 202 202 struct disasm_state state; 203 - char buf[TASK_COMM_LEN]; 204 203 205 204 /* handle user mode only and only if enabled by sysadmin */ 206 205 if (!user_mode(regs) || !unaligned_enabled) ··· 211 212 " performance significantly\n. To enable further" 212 213 " logging of such instances, please \n" 213 214 " echo 0 > /proc/sys/kernel/ignore-unaligned-usertrap\n", 214 - get_task_comm(buf, current), task_pid_nr(current)); 215 + current->comm, task_pid_nr(current)); 215 216 } else { 216 217 /* Add rate limiting if it gets down to it */ 217 218 pr_warn("%s(%d): unaligned access to/from 0x%lx by PC: 0x%lx\n", 218 - get_task_comm(buf, current), task_pid_nr(current), 219 + current->comm, task_pid_nr(current), 219 220 address, regs->ret); 220 221 221 222 }
+4 -4
arch/arm/mach-pxa/sharpsl_pm.c
··· 31 31 /* 32 32 * Constants 33 33 */ 34 - #define SHARPSL_CHARGE_ON_TIME_INTERVAL (msecs_to_jiffies(1*60*1000)) /* 1 min */ 35 - #define SHARPSL_CHARGE_FINISH_TIME (msecs_to_jiffies(10*60*1000)) /* 10 min */ 36 - #define SHARPSL_BATCHK_TIME (msecs_to_jiffies(15*1000)) /* 15 sec */ 37 - #define SHARPSL_BATCHK_TIME_SUSPEND (60*10) /* 10 min */ 34 + #define SHARPSL_CHARGE_ON_TIME_INTERVAL (secs_to_jiffies(60)) 35 + #define SHARPSL_CHARGE_FINISH_TIME (secs_to_jiffies(10*60)) 36 + #define SHARPSL_BATCHK_TIME (secs_to_jiffies(15)) 37 + #define SHARPSL_BATCHK_TIME_SUSPEND (60*10) /* 10 min */ 38 38 39 39 #define SHARPSL_WAIT_CO_TIME 15 /* 15 sec */ 40 40 #define SHARPSL_WAIT_DISCHARGE_ON 100 /* 100 msec */
-1
arch/m68k/configs/amiga_defconfig
··· 626 626 CONFIG_TEST_SCANF=m 627 627 CONFIG_TEST_BITMAP=m 628 628 CONFIG_TEST_UUID=m 629 - CONFIG_TEST_XARRAY=m 630 629 CONFIG_TEST_MAPLE_TREE=m 631 630 CONFIG_TEST_RHASHTABLE=m 632 631 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/apollo_defconfig
··· 583 583 CONFIG_TEST_SCANF=m 584 584 CONFIG_TEST_BITMAP=m 585 585 CONFIG_TEST_UUID=m 586 - CONFIG_TEST_XARRAY=m 587 586 CONFIG_TEST_MAPLE_TREE=m 588 587 CONFIG_TEST_RHASHTABLE=m 589 588 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/atari_defconfig
··· 603 603 CONFIG_TEST_SCANF=m 604 604 CONFIG_TEST_BITMAP=m 605 605 CONFIG_TEST_UUID=m 606 - CONFIG_TEST_XARRAY=m 607 606 CONFIG_TEST_MAPLE_TREE=m 608 607 CONFIG_TEST_RHASHTABLE=m 609 608 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/bvme6000_defconfig
··· 575 575 CONFIG_TEST_SCANF=m 576 576 CONFIG_TEST_BITMAP=m 577 577 CONFIG_TEST_UUID=m 578 - CONFIG_TEST_XARRAY=m 579 578 CONFIG_TEST_MAPLE_TREE=m 580 579 CONFIG_TEST_RHASHTABLE=m 581 580 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/hp300_defconfig
··· 585 585 CONFIG_TEST_SCANF=m 586 586 CONFIG_TEST_BITMAP=m 587 587 CONFIG_TEST_UUID=m 588 - CONFIG_TEST_XARRAY=m 589 588 CONFIG_TEST_MAPLE_TREE=m 590 589 CONFIG_TEST_RHASHTABLE=m 591 590 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/mac_defconfig
··· 602 602 CONFIG_TEST_SCANF=m 603 603 CONFIG_TEST_BITMAP=m 604 604 CONFIG_TEST_UUID=m 605 - CONFIG_TEST_XARRAY=m 606 605 CONFIG_TEST_MAPLE_TREE=m 607 606 CONFIG_TEST_RHASHTABLE=m 608 607 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/multi_defconfig
··· 689 689 CONFIG_TEST_SCANF=m 690 690 CONFIG_TEST_BITMAP=m 691 691 CONFIG_TEST_UUID=m 692 - CONFIG_TEST_XARRAY=m 693 692 CONFIG_TEST_MAPLE_TREE=m 694 693 CONFIG_TEST_RHASHTABLE=m 695 694 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/mvme147_defconfig
··· 575 575 CONFIG_TEST_SCANF=m 576 576 CONFIG_TEST_BITMAP=m 577 577 CONFIG_TEST_UUID=m 578 - CONFIG_TEST_XARRAY=m 579 578 CONFIG_TEST_MAPLE_TREE=m 580 579 CONFIG_TEST_RHASHTABLE=m 581 580 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/mvme16x_defconfig
··· 576 576 CONFIG_TEST_SCANF=m 577 577 CONFIG_TEST_BITMAP=m 578 578 CONFIG_TEST_UUID=m 579 - CONFIG_TEST_XARRAY=m 580 579 CONFIG_TEST_MAPLE_TREE=m 581 580 CONFIG_TEST_RHASHTABLE=m 582 581 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/q40_defconfig
··· 592 592 CONFIG_TEST_SCANF=m 593 593 CONFIG_TEST_BITMAP=m 594 594 CONFIG_TEST_UUID=m 595 - CONFIG_TEST_XARRAY=m 596 595 CONFIG_TEST_MAPLE_TREE=m 597 596 CONFIG_TEST_RHASHTABLE=m 598 597 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/sun3_defconfig
··· 572 572 CONFIG_TEST_SCANF=m 573 573 CONFIG_TEST_BITMAP=m 574 574 CONFIG_TEST_UUID=m 575 - CONFIG_TEST_XARRAY=m 576 575 CONFIG_TEST_MAPLE_TREE=m 577 576 CONFIG_TEST_RHASHTABLE=m 578 577 CONFIG_TEST_IDA=m
-1
arch/m68k/configs/sun3x_defconfig
··· 573 573 CONFIG_TEST_SCANF=m 574 574 CONFIG_TEST_BITMAP=m 575 575 CONFIG_TEST_UUID=m 576 - CONFIG_TEST_XARRAY=m 577 576 CONFIG_TEST_MAPLE_TREE=m 578 577 CONFIG_TEST_RHASHTABLE=m 579 578 CONFIG_TEST_IDA=m
-1
arch/powerpc/configs/ppc64_defconfig
··· 448 448 CONFIG_TEST_SCANF=m 449 449 CONFIG_TEST_BITMAP=m 450 450 CONFIG_TEST_UUID=m 451 - CONFIG_TEST_XARRAY=m 452 451 CONFIG_TEST_MAPLE_TREE=m 453 452 CONFIG_TEST_RHASHTABLE=m 454 453 CONFIG_TEST_IDA=m
+1 -1
arch/powerpc/kvm/book3s_hv.c
··· 4957 4957 * states are synchronized from L0 to L1. L1 needs to inform L0 about 4958 4958 * MER=1 only when there are pending external interrupts. 4959 4959 * In the above if check, MER bit is set if there are pending 4960 - * external interrupts. Hence, explicity mask off MER bit 4960 + * external interrupts. Hence, explicitly mask off MER bit 4961 4961 * here as otherwise it may generate spurious interrupts in L2 KVM 4962 4962 * causing an endless loop, which results in L2 guest getting hung. 4963 4963 */
+1 -1
arch/powerpc/platforms/pseries/papr_scm.c
··· 544 544 545 545 /* Jiffies offset for which the health data is assumed to be same */ 546 546 cache_timeout = p->lasthealth_jiffies + 547 - msecs_to_jiffies(MIN_HEALTH_QUERY_INTERVAL * 1000); 547 + secs_to_jiffies(MIN_HEALTH_QUERY_INTERVAL); 548 548 549 549 /* Fetch new health info is its older than MIN_HEALTH_QUERY_INTERVAL */ 550 550 if (time_after(jiffies, cache_timeout))
+1 -1
arch/s390/kernel/lgr.c
··· 166 166 */ 167 167 static void lgr_timer_set(void) 168 168 { 169 - mod_timer(&lgr_timer, jiffies + msecs_to_jiffies(LGR_TIMER_INTERVAL_SECS * MSEC_PER_SEC)); 169 + mod_timer(&lgr_timer, jiffies + secs_to_jiffies(LGR_TIMER_INTERVAL_SECS)); 170 170 } 171 171 172 172 /*
+2 -2
arch/s390/kernel/time.c
··· 662 662 if (ret < 0) 663 663 pr_err("failed to set leap second flags\n"); 664 664 /* arm Timer to clear leap second flags */ 665 - mod_timer(&stp_timer, jiffies + msecs_to_jiffies(14400 * MSEC_PER_SEC)); 665 + mod_timer(&stp_timer, jiffies + secs_to_jiffies(14400)); 666 666 } else { 667 667 /* The day the leap second is scheduled for hasn't been reached. Retry 668 668 * in one hour. 669 669 */ 670 - mod_timer(&stp_timer, jiffies + msecs_to_jiffies(3600 * MSEC_PER_SEC)); 670 + mod_timer(&stp_timer, jiffies + secs_to_jiffies(3600)); 671 671 } 672 672 } 673 673
+1 -1
arch/s390/kernel/topology.c
··· 371 371 if (atomic_add_unless(&topology_poll, -1, 0)) 372 372 mod_timer(&topology_timer, jiffies + msecs_to_jiffies(100)); 373 373 else 374 - mod_timer(&topology_timer, jiffies + msecs_to_jiffies(60 * MSEC_PER_SEC)); 374 + mod_timer(&topology_timer, jiffies + secs_to_jiffies(60)); 375 375 } 376 376 377 377 void topology_expect_change(void)
+1 -1
arch/s390/mm/cmm.c
··· 204 204 del_timer(&cmm_timer); 205 205 return; 206 206 } 207 - mod_timer(&cmm_timer, jiffies + msecs_to_jiffies(cmm_timeout_seconds * MSEC_PER_SEC)); 207 + mod_timer(&cmm_timer, jiffies + secs_to_jiffies(cmm_timeout_seconds)); 208 208 } 209 209 210 210 static void cmm_timer_fn(struct timer_list *unused)
+2 -3
arch/x86/kernel/vm86_32.c
··· 246 246 247 247 /* VM86_SCREEN_BITMAP had numerous bugs and appears to have no users. */ 248 248 if (v.flags & VM86_SCREEN_BITMAP) { 249 - char comm[TASK_COMM_LEN]; 250 - 251 - pr_info_once("vm86: '%s' uses VM86_SCREEN_BITMAP, which is no longer supported\n", get_task_comm(comm, current)); 249 + pr_info_once("vm86: '%s' uses VM86_SCREEN_BITMAP, which is no longer supported\n", 250 + current->comm); 252 251 return -EINVAL; 253 252 } 254 253
+1 -2
drivers/accel/habanalabs/common/context.c
··· 199 199 200 200 int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx) 201 201 { 202 - char task_comm[TASK_COMM_LEN]; 203 202 int rc = 0, i; 204 203 205 204 ctx->hdev = hdev; ··· 271 272 mutex_init(&ctx->ts_reg_lock); 272 273 273 274 dev_dbg(hdev->dev, "create user context, comm=\"%s\", asid=%u\n", 274 - get_task_comm(task_comm, current), ctx->asid); 275 + current->comm, ctx->asid); 275 276 } 276 277 277 278 return 0;
+1 -1
drivers/accel/habanalabs/common/device.c
··· 817 817 } 818 818 819 819 queue_delayed_work(hdev->reset_wq, &device_reset_work->reset_work, 820 - msecs_to_jiffies(HL_PENDING_RESET_PER_SEC * 1000)); 820 + secs_to_jiffies(HL_PENDING_RESET_PER_SEC)); 821 821 } 822 822 } 823 823
+1 -2
drivers/accel/habanalabs/common/habanalabs_drv.c
··· 361 361 * a different default timeout for Gaudi 362 362 */ 363 363 if (timeout == HL_DEFAULT_TIMEOUT_LOCKED) 364 - hdev->timeout_jiffies = msecs_to_jiffies(GAUDI_DEFAULT_TIMEOUT_LOCKED * 365 - MSEC_PER_SEC); 364 + hdev->timeout_jiffies = secs_to_jiffies(GAUDI_DEFAULT_TIMEOUT_LOCKED); 366 365 367 366 hdev->reset_upon_device_release = 0; 368 367 break;
+3 -8
drivers/accel/habanalabs/common/habanalabs_ioctl.c
··· 1279 1279 retcode = -EFAULT; 1280 1280 1281 1281 out_err: 1282 - if (retcode) { 1283 - char task_comm[TASK_COMM_LEN]; 1284 - 1282 + if (retcode) 1285 1283 dev_dbg_ratelimited(dev, 1286 1284 "error in ioctl: pid=%d, comm=\"%s\", cmd=%#010x, nr=%#04x\n", 1287 - task_pid_nr(current), get_task_comm(task_comm, current), cmd, nr); 1288 - } 1285 + task_pid_nr(current), current->comm, cmd, nr); 1289 1286 1290 1287 if (kdata != stack_kdata) 1291 1288 kfree(kdata); ··· 1305 1308 if (nr == _IOC_NR(DRM_IOCTL_HL_INFO)) { 1306 1309 ioctl = &hl_ioctls_control[nr - HL_COMMAND_START]; 1307 1310 } else { 1308 - char task_comm[TASK_COMM_LEN]; 1309 - 1310 1311 dev_dbg_ratelimited(hdev->dev_ctrl, 1311 1312 "invalid ioctl: pid=%d, comm=\"%s\", cmd=%#010x, nr=%#04x\n", 1312 - task_pid_nr(current), get_task_comm(task_comm, current), cmd, nr); 1313 + task_pid_nr(current), current->comm, cmd, nr); 1313 1314 return -ENOTTY; 1314 1315 } 1315 1316
+1 -1
drivers/block/xen-blkback/blkback.c
··· 544 544 ring->st_rd_req, ring->st_wr_req, 545 545 ring->st_f_req, ring->st_ds_req, 546 546 ring->persistent_gnt_c, max_pgrants); 547 - ring->st_print = jiffies + msecs_to_jiffies(10 * 1000); 547 + ring->st_print = jiffies + secs_to_jiffies(10); 548 548 ring->st_rd_req = 0; 549 549 ring->st_wr_req = 0; 550 550 ring->st_oo_req = 0;
+2 -4
drivers/gpu/drm/i915/display/intel_display_driver.c
··· 397 397 */ 398 398 bool intel_display_driver_check_access(struct intel_display *display) 399 399 { 400 - char comm[TASK_COMM_LEN]; 401 400 char current_task[TASK_COMM_LEN + 16]; 402 401 char allowed_task[TASK_COMM_LEN + 16] = "none"; 403 402 ··· 405 406 return true; 406 407 407 408 snprintf(current_task, sizeof(current_task), "%s[%d]", 408 - get_task_comm(comm, current), 409 - task_pid_vnr(current)); 409 + current->comm, task_pid_vnr(current)); 410 410 411 411 if (display->access.allowed_task) 412 412 snprintf(allowed_task, sizeof(allowed_task), "%s[%d]", 413 - get_task_comm(comm, display->access.allowed_task), 413 + display->access.allowed_task->comm, 414 414 task_pid_vnr(display->access.allowed_task)); 415 415 416 416 drm_dbg_kms(display->drm,
+1 -3
drivers/gpu/drm/nouveau/nouveau_chan.c
··· 279 279 const u64 plength = 0x10000; 280 280 const u64 ioffset = plength; 281 281 const u64 ilength = 0x02000; 282 - char name[TASK_COMM_LEN]; 283 282 int cid, ret; 284 283 u64 size; 285 284 ··· 337 338 chan->userd = &chan->user; 338 339 } 339 340 340 - get_task_comm(name, current); 341 - snprintf(args.name, sizeof(args.name), "%s[%d]", name, task_pid_nr(current)); 341 + snprintf(args.name, sizeof(args.name), "%s[%d]", current->comm, task_pid_nr(current)); 342 342 343 343 ret = nvif_object_ctor(&device->object, "abi16ChanUser", 0, hosts[cid].oclass, 344 344 &args, sizeof(args), &chan->user);
+2 -3
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 1175 1175 { 1176 1176 struct nouveau_drm *drm = nouveau_drm(dev); 1177 1177 struct nouveau_cli *cli; 1178 - char name[32], tmpname[TASK_COMM_LEN]; 1178 + char name[32]; 1179 1179 int ret; 1180 1180 1181 1181 /* need to bring up power immediately if opening device */ ··· 1185 1185 return ret; 1186 1186 } 1187 1187 1188 - get_task_comm(tmpname, current); 1189 1188 rcu_read_lock(); 1190 1189 snprintf(name, sizeof(name), "%s[%d]", 1191 - tmpname, pid_nr(rcu_dereference(fpriv->pid))); 1190 + current->comm, pid_nr(rcu_dereference(fpriv->pid))); 1192 1191 rcu_read_unlock(); 1193 1192 1194 1193 if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL))) {
+1 -1
drivers/gpu/drm/xe/xe_device.c
··· 521 521 drm_dbg(&xe->drm, "Waiting for lmem initialization\n"); 522 522 523 523 start = jiffies; 524 - timeout = start + msecs_to_jiffies(60 * 1000); /* 60 sec! */ 524 + timeout = start + secs_to_jiffies(60); /* 60 sec! */ 525 525 526 526 do { 527 527 if (signal_pending(current))
+1 -1
drivers/infiniband/hw/hfi1/iowait.h
··· 92 92 * 93 93 * The lock field is used by waiters to record 94 94 * the seqlock_t that guards the list head. 95 - * Waiters explicity know that, but the destroy 95 + * Waiters explicitly know that, but the destroy 96 96 * code that unwaits QPs does not. 97 97 */ 98 98 struct iowait {
+1 -1
drivers/infiniband/hw/usnic/usnic_abi.h
··· 72 72 u64 bar_bus_addr; 73 73 u32 bar_len; 74 74 /* 75 - * WQ, RQ, CQ are explicity specified bc exposing a generic resources inteface 75 + * WQ, RQ, CQ are explicitly specified bc exposing a generic resources inteface 76 76 * expands the scope of ABI to many files. 77 77 */ 78 78 u32 wq_cnt;
+1 -1
drivers/net/wireless/ath/ath11k/debugfs.c
··· 178 178 * received 'update stats' event, we keep a 3 seconds timeout in case, 179 179 * fw_stats_done is not marked yet 180 180 */ 181 - timeout = jiffies + msecs_to_jiffies(3 * 1000); 181 + timeout = jiffies + secs_to_jiffies(3); 182 182 183 183 ath11k_debugfs_fw_stats_reset(ar); 184 184
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
··· 1810 1810 rfi->cur_idx = cur_idx; 1811 1811 } 1812 1812 } else { 1813 - /* explicity window move updating the expected index */ 1813 + /* explicitly window move updating the expected index */ 1814 1814 exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET]; 1815 1815 1816 1816 brcmf_dbg(DATA, "flow-%d (0x%x): change expected: %d -> %d\n",
+1 -1
drivers/scsi/arcmsr/arcmsr_hba.c
··· 1045 1045 static void arcmsr_init_set_datetime_timer(struct AdapterControlBlock *pacb) 1046 1046 { 1047 1047 timer_setup(&pacb->refresh_timer, arcmsr_set_iop_datetime, 0); 1048 - pacb->refresh_timer.expires = jiffies + msecs_to_jiffies(60 * 1000); 1048 + pacb->refresh_timer.expires = jiffies + secs_to_jiffies(60); 1049 1049 add_timer(&pacb->refresh_timer); 1050 1050 } 1051 1051
+1 -1
drivers/scsi/cxlflash/superpipe.c
··· 966 966 * 967 967 * This routine is the release handler for the fops registered with 968 968 * the CXL services on an initial attach for a context. It is called 969 - * when a close (explicity by the user or as part of a process tear 969 + * when a close (explicitly by the user or as part of a process tear 970 970 * down) is performed on the adapter file descriptor returned to the 971 971 * user. The user should be aware that explicitly performing a close 972 972 * considered catastrophic and subsequent usage of the superpipe API
+9 -9
drivers/scsi/lpfc/lpfc_init.c
··· 598 598 jiffies + msecs_to_jiffies(1000 * timeout)); 599 599 /* Set up heart beat (HB) timer */ 600 600 mod_timer(&phba->hb_tmofunc, 601 - jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL)); 601 + jiffies + secs_to_jiffies(LPFC_HB_MBOX_INTERVAL)); 602 602 clear_bit(HBA_HBEAT_INP, &phba->hba_flag); 603 603 clear_bit(HBA_HBEAT_TMO, &phba->hba_flag); 604 604 phba->last_completion_time = jiffies; ··· 1267 1267 !test_bit(FC_UNLOADING, &phba->pport->load_flag)) 1268 1268 mod_timer(&phba->hb_tmofunc, 1269 1269 jiffies + 1270 - msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL)); 1270 + secs_to_jiffies(LPFC_HB_MBOX_INTERVAL)); 1271 1271 return; 1272 1272 } 1273 1273 ··· 1555 1555 /* If IOs are completing, no need to issue a MBX_HEARTBEAT */ 1556 1556 spin_lock_irq(&phba->pport->work_port_lock); 1557 1557 if (time_after(phba->last_completion_time + 1558 - msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL), 1558 + secs_to_jiffies(LPFC_HB_MBOX_INTERVAL), 1559 1559 jiffies)) { 1560 1560 spin_unlock_irq(&phba->pport->work_port_lock); 1561 1561 if (test_bit(HBA_HBEAT_INP, &phba->hba_flag)) ··· 3354 3354 spin_unlock_irqrestore(&phba->hbalock, iflag); 3355 3355 if (mbx_action == LPFC_MBX_NO_WAIT) 3356 3356 return; 3357 - timeout = msecs_to_jiffies(LPFC_MBOX_TMO * 1000) + jiffies; 3357 + timeout = secs_to_jiffies(LPFC_MBOX_TMO) + jiffies; 3358 3358 spin_lock_irqsave(&phba->hbalock, iflag); 3359 3359 if (phba->sli.mbox_active) { 3360 3360 actcmd = phba->sli.mbox_active->u.mb.mbxCommand; ··· 4924 4924 stat = 1; 4925 4925 goto finished; 4926 4926 } 4927 - if (time >= msecs_to_jiffies(30 * 1000)) { 4927 + if (time >= secs_to_jiffies(30)) { 4928 4928 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 4929 4929 "0461 Scanning longer than 30 " 4930 4930 "seconds. Continuing initialization\n"); 4931 4931 stat = 1; 4932 4932 goto finished; 4933 4933 } 4934 - if (time >= msecs_to_jiffies(15 * 1000) && 4934 + if (time >= secs_to_jiffies(15) && 4935 4935 phba->link_state <= LPFC_LINK_DOWN) { 4936 4936 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 4937 4937 "0465 Link down longer than 15 " ··· 4945 4945 if (vport->num_disc_nodes || vport->fc_prli_sent) 4946 4946 goto finished; 4947 4947 if (!atomic_read(&vport->fc_map_cnt) && 4948 - time < msecs_to_jiffies(2 * 1000)) 4948 + time < secs_to_jiffies(2)) 4949 4949 goto finished; 4950 4950 if ((phba->sli.sli_flag & LPFC_SLI_MBOX_ACTIVE) != 0) 4951 4951 goto finished; ··· 5179 5179 lpfc_worker_wake_up(phba); 5180 5180 5181 5181 /* restart the timer for the next iteration */ 5182 - mod_timer(&phba->inactive_vmid_poll, jiffies + msecs_to_jiffies(1000 * 5183 - LPFC_VMID_TIMER)); 5182 + mod_timer(&phba->inactive_vmid_poll, 5183 + jiffies + secs_to_jiffies(LPFC_VMID_TIMER)); 5184 5184 } 5185 5185 5186 5186 /**
+4 -4
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 911 911 (ndlp->nlp_state >= NLP_STE_ADISC_ISSUE || 912 912 ndlp->nlp_state <= NLP_STE_PRLI_ISSUE)) { 913 913 mod_timer(&ndlp->nlp_delayfunc, 914 - jiffies + msecs_to_jiffies(1000 * 1)); 914 + jiffies + secs_to_jiffies(1)); 915 915 set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); 916 916 ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; 917 917 lpfc_printf_vlog(vport, KERN_INFO, ··· 1337 1337 } 1338 1338 1339 1339 /* Put ndlp in npr state set plogi timer for 1 sec */ 1340 - mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000 * 1)); 1340 + mod_timer(&ndlp->nlp_delayfunc, jiffies + secs_to_jiffies(1)); 1341 1341 set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); 1342 1342 ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; 1343 1343 ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE; ··· 1941 1941 1942 1942 /* Put ndlp in npr state set plogi timer for 1 sec */ 1943 1943 mod_timer(&ndlp->nlp_delayfunc, 1944 - jiffies + msecs_to_jiffies(1000 * 1)); 1944 + jiffies + secs_to_jiffies(1)); 1945 1945 set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); 1946 1946 ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; 1947 1947 ··· 2750 2750 2751 2751 if (!test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag)) { 2752 2752 mod_timer(&ndlp->nlp_delayfunc, 2753 - jiffies + msecs_to_jiffies(1000 * 1)); 2753 + jiffies + secs_to_jiffies(1)); 2754 2754 set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); 2755 2755 clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); 2756 2756 ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
+1 -1
drivers/scsi/lpfc/lpfc_nvme.c
···
 	 * wait. Print a message if a 10 second wait expires and renew the
 	 * wait. This is unexpected.
 	 */
-	wait_tmo = msecs_to_jiffies(LPFC_NVME_WAIT_TMO * 1000);
+	wait_tmo = secs_to_jiffies(LPFC_NVME_WAIT_TMO);
 	while (true) {
 		ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo);
 		if (unlikely(!ret)) {
+2 -2
drivers/scsi/lpfc/lpfc_sli.c
···
 	/* Start heart beat timer */
 	mod_timer(&phba->hb_tmofunc,
-		  jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL));
+		  jiffies + secs_to_jiffies(LPFC_HB_MBOX_INTERVAL));
 	clear_bit(HBA_HBEAT_INP, &phba->hba_flag);
 	clear_bit(HBA_HBEAT_TMO, &phba->hba_flag);
 	phba->last_completion_time = jiffies;
···
 		lpfc_sli_mbox_sys_flush(phba);
 		return;
 	}
-	timeout = msecs_to_jiffies(LPFC_MBOX_TMO * 1000) + jiffies;
+	timeout = secs_to_jiffies(LPFC_MBOX_TMO) + jiffies;

 	/* Disable softirqs, including timers from obtaining phba->hbalock */
 	local_bh_disable();
+1 -1
drivers/scsi/lpfc/lpfc_vmid.c
···
 	if (!(vport->phba->pport->vmid_flag & LPFC_VMID_TIMER_ENBLD)) {
 		mod_timer(&vport->phba->inactive_vmid_poll,
 			  jiffies +
-			  msecs_to_jiffies(1000 * LPFC_VMID_TIMER));
+			  secs_to_jiffies(LPFC_VMID_TIMER));
 		vport->phba->pport->vmid_flag |= LPFC_VMID_TIMER_ENBLD;
 	}
 }
+1 -1
drivers/scsi/pm8001/pm8001_init.c
···
 		return -EIO;
 	}
 	time_remaining = wait_for_completion_timeout(&completion,
-			msecs_to_jiffies(60*1000)); // 1 min
+			secs_to_jiffies(60)); // 1 min
 	if (!time_remaining) {
 		kfree(payload.func_specific);
 		pm8001_dbg(pm8001_ha, FAIL, "get_nvmd_req timeout\n");
+1 -1
drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
···

 	if (wait) {
 		if (!wait_for_completion_timeout(&instance->msg_avail_comp,
-						 msecs_to_jiffies(10 * 1000))) {
+						 secs_to_jiffies(10))) {
 			dev_err(instance->dev,
 				"vchi message timeout, msg=%d\n", m->type);
 			return -ETIMEDOUT;
+1 -2
drivers/tty/tty_io.c
···

 static int tty_set_serial(struct tty_struct *tty, struct serial_struct *ss)
 {
-	char comm[TASK_COMM_LEN];
 	int flags;

 	flags = ss->flags & ASYNC_DEPRECATED;

 	if (flags)
 		pr_warn_ratelimited("%s: '%s' is using deprecated serial flags (with no effect): %.8x\n",
-				    __func__, get_task_comm(comm, current), flags);
+				    __func__, current->comm, flags);

 	if (!tty->ops->set_serial)
 		return -ENOTTY;
+1 -1
fs/ceph/quota.c
···
 	if (IS_ERR(in)) {
 		doutc(cl, "Can't lookup inode %llx (err: %ld)\n", realm->ino,
 		      PTR_ERR(in));
-		qri->timeout = jiffies + msecs_to_jiffies(60 * 1000); /* XXX */
+		qri->timeout = jiffies + secs_to_jiffies(60); /* XXX */
 	} else {
 		qri->timeout = 0;
 		qri->inode = in;
-7
fs/erofs/decompressor.c
···
 #include "compress.h"
 #include <linux/lz4.h>

-#ifndef LZ4_DISTANCE_MAX	/* history window size */
-#define LZ4_DISTANCE_MAX	65535	/* set to maximum value by default */
-#endif
-
 #define LZ4_MAX_DISTANCE_PAGES	(DIV_ROUND_UP(LZ4_DISTANCE_MAX, PAGE_SIZE) + 1)
-#ifndef LZ4_DECOMPRESS_INPLACE_MARGIN
-#define LZ4_DECOMPRESS_INPLACE_MARGIN(srcsize)	(((srcsize) >> 8) + 32)
-#endif

 struct z_erofs_lz4_decompress_ctx {
 	struct z_erofs_decompress_req *rq;
+65 -2
fs/nilfs2/alloc.c
···
  * nilfs_palloc_groups_per_desc_block - get the number of groups that a group
  *					descriptor block can maintain
  * @inode: inode of metadata file using this allocator
+ *
+ * Return: Number of groups that a group descriptor block can maintain.
  */
 static inline unsigned long
 nilfs_palloc_groups_per_desc_block(const struct inode *inode)
···
 /**
  * nilfs_palloc_groups_count - get maximum number of groups
  * @inode: inode of metadata file using this allocator
+ *
+ * Return: Maximum number of groups.
  */
 static inline unsigned long
 nilfs_palloc_groups_count(const struct inode *inode)
···
  * nilfs_palloc_init_blockgroup - initialize private variables for allocator
  * @inode: inode of metadata file using this allocator
  * @entry_size: size of the persistent object
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_palloc_init_blockgroup(struct inode *inode, unsigned int entry_size)
 {
···
  * @inode: inode of metadata file using this allocator
  * @nr: serial number of the entry (e.g. inode number)
  * @offset: pointer to store offset number in the group
+ *
+ * Return: Number of the group that contains the entry with the index
+ * specified by @nr.
  */
 static unsigned long nilfs_palloc_group(const struct inode *inode, __u64 nr,
					unsigned long *offset)
···
  * @inode: inode of metadata file using this allocator
  * @group: group number
  *
- * nilfs_palloc_desc_blkoff() returns block offset of the descriptor
- * block which contains a descriptor of the specified group.
+ * Return: Index number in the metadata file of the descriptor block of
+ * the group specified by @group.
  */
 static unsigned long
 nilfs_palloc_desc_blkoff(const struct inode *inode, unsigned long group)
···
  *
  * nilfs_palloc_bitmap_blkoff() returns block offset of the bitmap
  * block used to allocate/deallocate entries in the specified group.
+ *
+ * Return: Index number in the metadata file of the bitmap block of
+ * the group specified by @group.
  */
 static unsigned long
 nilfs_palloc_bitmap_blkoff(const struct inode *inode, unsigned long group)
···
  * nilfs_palloc_group_desc_nfrees - get the number of free entries in a group
  * @desc: pointer to descriptor structure for the group
  * @lock: spin lock protecting @desc
+ *
+ * Return: Number of free entries written in the group descriptor @desc.
  */
 static unsigned long
 nilfs_palloc_group_desc_nfrees(const struct nilfs_palloc_group_desc *desc,
···
  * @desc: pointer to descriptor structure for the group
  * @lock: spin lock protecting @desc
  * @n: delta to be added
+ *
+ * Return: Number of free entries after adjusting the group descriptor
+ * @desc.
  */
 static u32
 nilfs_palloc_group_desc_add_entries(struct nilfs_palloc_group_desc *desc,
···
  * nilfs_palloc_entry_blkoff - get block offset of an entry block
  * @inode: inode of metadata file using this allocator
  * @nr: serial number of the entry (e.g. inode number)
+ *
+ * Return: Index number in the metadata file of the block containing
+ * the entry specified by @nr.
  */
 static unsigned long
 nilfs_palloc_entry_blkoff(const struct inode *inode, __u64 nr)
···
  * @blkoff: block offset
  * @prev: nilfs_bh_assoc struct of the last used buffer
  * @lock: spin lock protecting @prev
+ *
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- Non-existent block.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 static int nilfs_palloc_delete_block(struct inode *inode, unsigned long blkoff,
				     struct nilfs_bh_assoc *prev,
···
  * @group: group number
  * @create: create flag
  * @bhp: pointer to store the resultant buffer head
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 static int nilfs_palloc_get_desc_block(struct inode *inode,
				       unsigned long group,
···
  * @group: group number
  * @create: create flag
  * @bhp: pointer to store the resultant buffer head
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 static int nilfs_palloc_get_bitmap_block(struct inode *inode,
					 unsigned long group,
···
  * nilfs_palloc_delete_bitmap_block - delete a bitmap block
  * @inode: inode of metadata file using this allocator
  * @group: group number
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 static int nilfs_palloc_delete_bitmap_block(struct inode *inode,
					    unsigned long group)
···
  * @nr: serial number of the entry (e.g. inode number)
  * @create: create flag
  * @bhp: pointer to store the resultant buffer head
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_palloc_get_entry_block(struct inode *inode, __u64 nr,
				 int create, struct buffer_head **bhp)
···
  * nilfs_palloc_delete_entry_block - delete an entry block
  * @inode: inode of metadata file using this allocator
  * @nr: serial number of the entry
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 static int nilfs_palloc_delete_entry_block(struct inode *inode, __u64 nr)
 {
···
  * @bsize: size in bits
  * @lock: spin lock protecting @bitmap
  * @wrap: whether to wrap around
+ *
+ * Return: Offset number within the group of the found free entry, or
+ * %-ENOSPC if not found.
  */
 static int nilfs_palloc_find_available_slot(unsigned char *bitmap,
					    unsigned long target,
···
  * @inode: inode of metadata file using this allocator
  * @curr: current group number
  * @max: maximum number of groups
+ *
+ * Return: Number of remaining descriptors (= groups) managed by the descriptor
+ * block.
  */
 static unsigned long
 nilfs_palloc_rest_groups_in_desc_block(const struct inode *inode,
···
  * nilfs_palloc_count_desc_blocks - count descriptor blocks number
  * @inode: inode of metadata file using this allocator
  * @desc_blocks: descriptor blocks number [out]
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 static int nilfs_palloc_count_desc_blocks(struct inode *inode,
					  unsigned long *desc_blocks)
···
  *				 MDT file growing
  * @inode: inode of metadata file using this allocator
  * @desc_blocks: known current descriptor blocks count
+ *
+ * Return: true if a group can be added in the metadata file, false if not.
  */
 static inline bool nilfs_palloc_mdt_file_can_grow(struct inode *inode,
						  unsigned long desc_blocks)
···
  * @inode: inode of metadata file using this allocator
  * @nused: current number of used entries
  * @nmaxp: max number of entries [out]
+ *
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
+ * * %-ERANGE	- Number of entries in use is out of range.
  */
 int nilfs_palloc_count_max_entries(struct inode *inode, u64 nused, u64 *nmaxp)
 {
···
  * @inode: inode of metadata file using this allocator
  * @req: nilfs_palloc_req structure exchanged for the allocation
  * @wrap: whether to wrap around
+ *
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
+ * * %-ENOSPC	- Entries exhausted (No entries available for allocation).
+ * * %-EROFS	- Read only filesystem
  */
 int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
				     struct nilfs_palloc_req *req, bool wrap)
···
  * nilfs_palloc_prepare_free_entry - prepare to deallocate a persistent object
  * @inode: inode of metadata file using this allocator
  * @req: nilfs_palloc_req structure exchanged for the removal
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_palloc_prepare_free_entry(struct inode *inode,
				    struct nilfs_palloc_req *req)
···
  * @inode: inode of metadata file using this allocator
  * @entry_nrs: array of entry numbers to be deallocated
  * @nitems: number of entries stored in @entry_nrs
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems)
 {
+2
fs/nilfs2/alloc.h
···
  *
  * The number of entries per group is defined by the number of bits
  * that a bitmap block can maintain.
+ *
+ * Return: Number of entries per group.
  */
 static inline unsigned long
 nilfs_palloc_entries_per_group(const struct inode *inode)
+51 -69
fs/nilfs2/bmap.c
···
  * @ptrp: place to store the value associated to @key
  *
  * Description: nilfs_bmap_lookup_at_level() finds a record whose key
- * matches @key in the block at @level of the bmap.
+ * matches @key in the block at @level of the bmap. The record associated
+ * with @key is stored in the place pointed to by @ptrp.
  *
- * Return Value: On success, 0 is returned and the record associated with @key
- * is stored in the place pointed by @ptrp. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - A record associated with @key does not exist.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- A record associated with @key does not exist.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_lookup_at_level(struct nilfs_bmap *bmap, __u64 key, int level,
			       __u64 *ptrp)
···
  * Description: nilfs_bmap_insert() inserts the new key-record pair specified
  * by @key and @rec into @bmap.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-EEXIST - A record associated with @key already exist.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EEXIST	- A record associated with @key already exists.
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_insert(struct nilfs_bmap *bmap, __u64 key, unsigned long rec)
 {
···
  * Description: nilfs_bmap_seek_key() seeks a valid key on @bmap
  * starting from @start, and stores it to @keyp if found.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - No valid entry was found
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- No valid entry was found.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_seek_key(struct nilfs_bmap *bmap, __u64 start, __u64 *keyp)
 {
···
  * Description: nilfs_bmap_delete() deletes the key-record pair specified by
  * @key from @bmap.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - A record associated with @key does not exist.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- A record associated with @key does not exist.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_delete(struct nilfs_bmap *bmap, __u64 key)
 {
···
  * Description: nilfs_bmap_truncate() removes key-record pairs whose keys are
  * greater than or equal to @key from @bmap.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_truncate(struct nilfs_bmap *bmap, __u64 key)
 {
···
  * Description: nilfs_bmap_propagate() marks the buffers that directly or
  * indirectly refer to the block specified by @bh dirty.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_propagate(struct nilfs_bmap *bmap, struct buffer_head *bh)
 {
···

 /**
  * nilfs_bmap_assign - assign a new block number to a block
- * @bmap: bmap
- * @bh: pointer to buffer head
+ * @bmap:	bmap
+ * @bh:		place to store a pointer to the buffer head to which a block
+ *		address is assigned (in/out)
  * @blocknr: block number
- * @binfo: block information
+ * @binfo:	block information
  *
  * Description: nilfs_bmap_assign() assigns the block number @blocknr to the
- * buffer specified by @bh.
+ * buffer specified by @bh. The block information is stored in the memory
+ * pointed to by @binfo, and the buffer head may be replaced as a block
+ * address is assigned, in which case a pointer to the new buffer head is
+ * stored in the memory pointed to by @bh.
  *
- * Return Value: On success, 0 is returned and the buffer head of a newly
- * create buffer and the block information associated with the buffer are
- * stored in the place pointed by @bh and @binfo, respectively. On error, one
- * of the following negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_assign(struct nilfs_bmap *bmap,
		      struct buffer_head **bh,
···
  * Description: nilfs_bmap_mark() marks the block specified by @key and @level
  * as dirty.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_mark(struct nilfs_bmap *bmap, __u64 key, int level)
 {
···
  * Description: nilfs_test_and_clear() is the atomic operation to test and
  * clear the dirty state of @bmap.
  *
- * Return Value: 1 is returned if @bmap is dirty, or 0 if clear.
+ * Return: 1 if @bmap is dirty, or 0 if clear.
  */
 int nilfs_bmap_test_and_clear_dirty(struct nilfs_bmap *bmap)
 {
···
  *
  * Description: nilfs_bmap_read() initializes the bmap @bmap.
  *
- * Return Value: On success, 0 is returned. On error, the following negative
- * error code is returned.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (corrupted bmap).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_bmap_read(struct nilfs_bmap *bmap, struct nilfs_inode *raw_inode)
 {
+2 -1
fs/nilfs2/btnode.c
···
  * Note that the current implementation does not support folio sizes larger
  * than the page size.
  *
- * Return: 0 on success, or the following negative error code on failure.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
  * * %-EIO	- I/O error (metadata corruption).
  * * %-ENOMEM	- Insufficient memory available.
  */
+3 -4
fs/nilfs2/btree.c
···
  * @inode: host inode of btree
  * @blocknr: block number
  *
- * Return Value: If node is broken, 1 is returned. Otherwise, 0 is returned.
+ * Return: 0 if normal, 1 if the node is broken.
  */
 static int nilfs_btree_node_broken(const struct nilfs_btree_node *node,
				   size_t size, struct inode *inode,
···
  * @node: btree root node to be examined
  * @inode: host inode of btree
  *
- * Return Value: If node is broken, 1 is returned. Otherwise, 0 is returned.
+ * Return: 0 if normal, 1 if the root node is broken.
  */
 static int nilfs_btree_root_broken(const struct nilfs_btree_node *node,
				   struct inode *inode)
···
  * @minlevel: start level
  * @nextkey: place to store the next valid key
  *
- * Return Value: If a next key was found, 0 is returned. Otherwise,
- * -ENOENT is returned.
+ * Return: 0 if the next key was found, %-ENOENT if not found.
  */
 static int nilfs_btree_get_next_key(const struct nilfs_bmap *btree,
				    const struct nilfs_btree_path *path,
+32 -37
fs/nilfs2/cpfile.c
···
  * @cnop: place to store the next checkpoint number
  * @bhp: place to store a pointer to buffer_head struct
  *
- * Return Value: On success, it returns 0. On error, the following negative
- * error code is returned.
- *
- * %-ENOMEM - Insufficient memory available.
- *
- * %-EIO - I/O error
- *
- * %-ENOENT - no block exists in the range.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- no block exists in the range.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 static int nilfs_cpfile_find_checkpoint_block(struct inode *cpfile,
					      __u64 start_cno, __u64 end_cno,
···
  * stores it to the inode file given by @ifile and the nilfs root object
  * given by @root.
  *
- * Return: 0 on success, or the following negative error code on failure.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
  * * %-EINVAL	- Invalid checkpoint.
  * * %-ENOMEM	- Insufficient memory available.
  * * %-EIO	- I/O error (including metadata corruption).
···
  * In either case, the buffer of the block containing the checkpoint entry
  * and the cpfile inode are made dirty for inclusion in the write log.
  *
- * Return: 0 on success, or the following negative error code on failure.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
  * * %-ENOMEM	- Insufficient memory available.
  * * %-EIO	- I/O error (including metadata corruption).
  * * %-EROFS	- Read only filesystem
···
  * cpfile with the data given by the arguments @root, @blkinc, @ctime, and
  * @minor.
  *
- * Return: 0 on success, or the following negative error code on failure.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
  * * %-ENOMEM	- Insufficient memory available.
  * * %-EIO	- I/O error (including metadata corruption).
  */
···
  * the period from @start to @end, excluding @end itself. The checkpoints
  * which have been already deleted are ignored.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-EINVAL - invalid checkpoints.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EINVAL	- Invalid checkpoints.
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_cpfile_delete_checkpoints(struct inode *cpfile,
				    __u64 start,
···
  * number to continue searching.
  *
  * Return: Count of checkpoint info items stored in the output buffer on
- * success, or the following negative error code on failure.
+ * success, or one of the following negative error codes on failure:
  * * %-EINVAL	- Invalid checkpoint mode.
  * * %-ENOMEM	- Insufficient memory available.
  * * %-EIO	- I/O error (including metadata corruption).
···
  * @cpfile: checkpoint file inode
  * @cno: checkpoint number to delete
  *
- * Return: 0 on success, or the following negative error code on failure.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
  * * %-EBUSY	- Checkpoint in use (snapshot specified).
  * * %-EIO	- I/O error (including metadata corruption).
  * * %-ENOENT	- No valid checkpoint found.
···
  * @cno: checkpoint number
  *
  * Return: 1 if the checkpoint specified by @cno is a snapshot, 0 if not, or
- * the following negative error code on failure.
+ * one of the following negative error codes on failure:
  * * %-EIO	- I/O error (including metadata corruption).
  * * %-ENOENT	- No such checkpoint.
  * * %-ENOMEM	- Insufficient memory available.
···
  * Description: nilfs_change_cpmode() changes the mode of the checkpoint
  * specified by @cno. The mode @mode is NILFS_CHECKPOINT or NILFS_SNAPSHOT.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - No such checkpoint.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- No such checkpoint.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_cpfile_change_cpmode(struct inode *cpfile, __u64 cno, int mode)
 {
···
  * @cpstat: pointer to a structure of checkpoint statistics
  *
  * Description: nilfs_cpfile_get_stat() returns information about checkpoints.
+ * The checkpoint statistics are stored in the location pointed to by @cpstat.
  *
- * Return Value: On success, 0 is returned, and checkpoints information is
- * stored in the place pointed by @cpstat. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_cpfile_get_stat(struct inode *cpfile, struct nilfs_cpstat *cpstat)
 {
···
  * @cpsize: size of a checkpoint entry
  * @raw_inode: on-disk cpfile inode
  * @inodep: buffer to store the inode
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_cpfile_read(struct super_block *sb, size_t cpsize,
		      struct nilfs_inode *raw_inode, struct inode **inodep)
+20 -25
fs/nilfs2/dat.c
···
  * @dat: DAT file inode
  * @vblocknr: virtual block number
  *
- * Return: 0 on success, or the following negative error code on failure.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
  * * %-EINVAL	- Invalid DAT entry (internal code).
  * * %-EIO	- I/O error (including metadata corruption).
  * * %-ENOMEM	- Insufficient memory available.
···
  * Description: nilfs_dat_freev() frees the virtual block numbers specified by
  * @vblocknrs and @nitems.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - The virtual block number have not been allocated.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- The virtual block number have not been allocated.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_dat_freev(struct inode *dat, __u64 *vblocknrs, size_t nitems)
 {
···
  * Description: nilfs_dat_move() changes the block number associated with
  * @vblocknr to @blocknr.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_dat_move(struct inode *dat, __u64 vblocknr, sector_t blocknr)
 {
···
  * @blocknrp: pointer to a block number
  *
  * Description: nilfs_dat_translate() maps the virtual block number @vblocknr
- * to the corresponding block number.
+ * to the corresponding block number. The block number associated with
+ * @vblocknr is stored in the place pointed to by @blocknrp.
  *
- * Return Value: On success, 0 is returned and the block number associated
- * with @vblocknr is stored in the place pointed by @blocknrp. On error, one
- * of the following negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - A block number associated with @vblocknr does not exist.
+ * Return: 0 on success, or one of the following negative error codes on
+ * failure:
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- A block number associated with @vblocknr does not exist.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_dat_translate(struct inode *dat, __u64 vblocknr, sector_t *blocknrp)
 {
···
  * @entry_size: size of a dat entry
  * @raw_inode: on-disk dat inode
  * @inodep: buffer to store the inode
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_dat_read(struct super_block *sb, size_t entry_size,
		   struct nilfs_inode *raw_inode, struct inode **inodep)
+10 -3
fs/nilfs2/dir.c
···
400 400 	return 0;
401 401 }
402 402 
403 - void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
403 + int nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
404 404 		    struct folio *folio, struct inode *inode)
405 405 {
406 406 	size_t from = offset_in_folio(folio, de);
···
410 410 
411 411 	folio_lock(folio);
412 412 	err = nilfs_prepare_chunk(folio, from, to);
413 - 	BUG_ON(err);
413 + 	if (unlikely(err)) {
414 + 		folio_unlock(folio);
415 + 		return err;
416 + 	}
414 417 	de->inode = cpu_to_le64(inode->i_ino);
415 418 	de->file_type = fs_umode_to_ftype(inode->i_mode);
416 419 	nilfs_commit_chunk(folio, mapping, from, to);
417 420 	inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
421 + 	return 0;
418 422 }
419 423 
420 424 /*
···
547 543 	from = (char *)pde - kaddr;
548 544 	folio_lock(folio);
549 545 	err = nilfs_prepare_chunk(folio, from, to);
550 - 	BUG_ON(err);
546 + 	if (unlikely(err)) {
547 + 		folio_unlock(folio);
548 + 		goto out;
549 + 	}
551 550 	if (pde)
552 551 		pde->rec_len = nilfs_rec_len_to_disk(to - from);
553 552 	dir->inode = 0;
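Beyond the doc cleanup, these dir.c hunks turn a `BUG_ON(err)` crash into a path that unlocks and propagates the error to the caller. A minimal userspace sketch of that same shape, assuming a fabricated `prepare_chunk()`/lock pair standing in for `nilfs_prepare_chunk()` and `folio_lock()` (all names here are hypothetical, not nilfs2 API):

```c
#include <errno.h>
#include <stdbool.h>

static bool chunk_locked;	/* stand-in for the folio lock state */

/* Hypothetical preparation step that may fail, e.g. under memory pressure. */
static int prepare_chunk(int simulate_failure)
{
	return simulate_failure ? -ENOMEM : 0;
}

/*
 * Old shape: BUG_ON(err) brought the kernel down on failure.
 * New shape: drop the lock and return the error to the caller instead.
 */
int set_link(int simulate_failure)
{
	int err;

	chunk_locked = true;		/* folio_lock() */
	err = prepare_chunk(simulate_failure);
	if (err) {			/* was: BUG_ON(err) */
		chunk_locked = false;	/* folio_unlock() */
		return err;
	}
	/* ... modify the directory entry and commit the chunk ... */
	chunk_locked = false;		/* released by the commit path */
	return 0;
}
```

Note the signature change that goes with it: a function that can now fail must return `int`, which is why the `nilfs_set_link()` prototype changes from `void` to `int` here and in nilfs.h below.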
+10 -14
fs/nilfs2/gcinode.c
···
46 46  * specified by @pbn to the GC pagecache with the key @blkoff.
47 47  * This function sets @vbn (@pbn if @vbn is zero) in b_blocknr of the buffer.
48 48  *
49 -  * Return Value: On success, 0 is returned. On Error, one of the following
50 -  * negative error code is returned.
51 -  *
52 -  * %-EIO - I/O error.
53 -  *
54 -  * %-ENOMEM - Insufficient amount of memory available.
55 -  *
56 -  * %-ENOENT - The block specified with @pbn does not exist.
49 +  * Return: 0 on success, or one of the following negative error codes on
50 +  * failure:
51 +  * * %-EIO - I/O error (including metadata corruption).
52 +  * * %-ENOENT - The block specified with @pbn does not exist.
53 +  * * %-ENOMEM - Insufficient memory available.
57 54  */
58 55 int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
59 56 				   sector_t pbn, __u64 vbn,
···
111 114  * specified by @vbn to the GC pagecache. @pbn can be supplied by the
112 115  * caller to avoid translation of the disk block address.
113 116  *
114 -  * Return Value: On success, 0 is returned. On Error, one of the following
115 -  * negative error code is returned.
116 -  *
117 -  * %-EIO - I/O error.
118 -  *
119 -  * %-ENOMEM - Insufficient amount of memory available.
117 +  * Return: 0 on success, or one of the following negative error codes on
118 +  * failure:
119 +  * * %-EIO - I/O error (including metadata corruption).
120 +  * * %-ENOENT - Invalid virtual block address.
121 +  * * %-ENOMEM - Insufficient memory available.
120 122  */
121 123 int nilfs_gccache_submit_read_node(struct inode *inode, sector_t pbn,
122 124 				   __u64 vbn, struct buffer_head **out_bh)
+18 -19
fs/nilfs2/ifile.c
···
38 38  * @out_ino: pointer to a variable to store inode number
39 39  * @out_bh: buffer_head contains newly allocated disk inode
40 40  *
41 -  * Return Value: On success, 0 is returned and the newly allocated inode
42 -  * number is stored in the place pointed by @ino, and buffer_head pointer
43 -  * that contains newly allocated disk inode structure is stored in the
44 -  * place pointed by @out_bh
45 -  * On error, one of the following negative error codes is returned.
41 +  * nilfs_ifile_create_inode() allocates a new inode in the ifile metadata
42 +  * file and stores the inode number in the variable pointed to by @out_ino,
43 +  * as well as storing the ifile's buffer with the disk inode in the location
44 +  * pointed to by @out_bh.
46 45  *
47 -  * %-EIO - I/O error.
48 -  *
49 -  * %-ENOMEM - Insufficient amount of memory available.
50 -  *
51 -  * %-ENOSPC - No inode left.
46 +  * Return: 0 on success, or one of the following negative error codes on
47 +  * failure:
48 +  * * %-EIO - I/O error (including metadata corruption).
49 +  * * %-ENOMEM - Insufficient memory available.
50 +  * * %-ENOSPC - No inode left.
52 51  */
53 52 int nilfs_ifile_create_inode(struct inode *ifile, ino_t *out_ino,
54 53 			     struct buffer_head **out_bh)
···
82 83  * @ifile: ifile inode
83 84  * @ino: inode number
84 85  *
85 -  * Return Value: On success, 0 is returned. On error, one of the following
86 -  * negative error codes is returned.
87 -  *
88 -  * %-EIO - I/O error.
89 -  *
90 -  * %-ENOMEM - Insufficient amount of memory available.
91 -  *
92 -  * %-ENOENT - The inode number @ino have not been allocated.
86 +  * Return: 0 on success, or one of the following negative error codes on
87 +  * failure:
88 +  * * %-EIO - I/O error (including metadata corruption).
89 +  * * %-ENOENT - Inode number unallocated.
90 +  * * %-ENOMEM - Insufficient memory available.
93 91  */
94 92 int nilfs_ifile_delete_inode(struct inode *ifile, ino_t ino)
95 93 {
···
146 150  * @ifile: ifile inode
147 151  * @nmaxinodes: current maximum of available inodes count [out]
148 152  * @nfreeinodes: free inodes count [out]
153 +  *
154 +  * Return: 0 on success, or a negative error code on failure.
149 155  */
150 156 int nilfs_ifile_count_free_inodes(struct inode *ifile,
151 157 				  u64 *nmaxinodes, u64 *nfreeinodes)
···
172 174  * @cno: number of checkpoint entry to read
173 175  * @inode_size: size of an inode
174 176  *
175 -  * Return: 0 on success, or the following negative error code on failure.
177 +  * Return: 0 on success, or one of the following negative error codes on
178 +  * failure:
176 179  * * %-EINVAL - Invalid checkpoint.
177 180  * * %-ENOMEM - Insufficient memory available.
178 181  * * %-EIO - I/O error (including metadata corruption).
+7 -9
fs/nilfs2/inode.c
···
68 68  *
69 69  * This function does not issue actual read request of the specified data
70 70  * block. It is done by VFS.
71 +  *
72 +  * Return: 0 on success, or a negative error code on failure.
71 73  */
72 74 int nilfs_get_block(struct inode *inode, sector_t blkoff,
73 75 		    struct buffer_head *bh_result, int create)
···
143 141  * address_space_operations.
144 142  * @file: file struct of the file to be read
145 143  * @folio: the folio to be read
144 +  *
145 +  * Return: 0 on success, or a negative error code on failure.
146 146  */
147 147 static int nilfs_read_folio(struct file *file, struct folio *folio)
148 148 {
···
602 598  * or does nothing if the inode already has it. This function allocates
603 599  * an additional inode to maintain page cache of B-tree nodes one-on-one.
604 600  *
605 -  * Return Value: On success, 0 is returned. On errors, one of the following
606 -  * negative error code is returned.
607 -  *
608 -  * %-ENOMEM - Insufficient memory available.
601 +  * Return: 0 on success, or %-ENOMEM if memory is insufficient.
609 602  */
610 603 int nilfs_attach_btree_node_cache(struct inode *inode)
611 604 {
···
661 660  * in one inode and the one for b-tree node pages is set up in the
662 661  * other inode, which is attached to the former inode.
663 662  *
664 -  * Return Value: On success, a pointer to the inode for data pages is
665 -  * returned. On errors, one of the following negative error code is returned
666 -  * in a pointer type.
667 -  *
668 -  * %-ENOMEM - Insufficient memory available.
663 +  * Return: a pointer to the inode for data pages on success, or %-ENOMEM
664 +  * if memory is insufficient.
669 665  */
670 666 struct inode *nilfs_iget_for_shadow(struct inode *inode)
671 667 {
+97 -137
fs/nilfs2/ioctl.c
···
33 33  * @dofunc: concrete function of get/set metadata info
34 34  *
35 35  * Description: nilfs_ioctl_wrap_copy() gets/sets metadata info by means of
36 -  * calling dofunc() function on the basis of @argv argument.
36 +  * calling dofunc() function on the basis of @argv argument. If successful,
37 +  * the requested metadata information is copied to userspace memory.
37 38  *
38 -  * Return Value: On success, 0 is returned and requested metadata info
39 -  * is copied into userspace. On error, one of the following
40 -  * negative error codes is returned.
41 -  *
42 -  * %-EINVAL - Invalid arguments from userspace.
43 -  *
44 -  * %-ENOMEM - Insufficient amount of memory available.
45 -  *
46 -  * %-EFAULT - Failure during execution of requested operation.
39 +  * Return: 0 on success, or one of the following negative error codes on
40 +  * failure:
41 +  * * %-EFAULT - Failure during execution of requested operation.
42 +  * * %-EINVAL - Invalid arguments from userspace.
43 +  * * %-ENOMEM - Insufficient memory available.
47 44  */
48 45 static int nilfs_ioctl_wrap_copy(struct the_nilfs *nilfs,
49 46 				 struct nilfs_argv *argv, int dir,
···
187 190  * given checkpoint between checkpoint and snapshot state. This ioctl
188 191  * is used in chcp and mkcp utilities.
189 192  *
190 -  * Return Value: On success, 0 is returned and mode of a checkpoint is
191 -  * changed. On error, one of the following negative error codes
192 -  * is returned.
193 -  *
194 -  * %-EPERM - Operation not permitted.
195 -  *
196 -  * %-EFAULT - Failure during checkpoint mode changing.
193 +  * Return: 0 on success, or one of the following negative error codes on
194 +  * failure:
195 +  * %-EFAULT - Failure during checkpoint mode changing.
196 +  * %-EPERM - Operation not permitted.
197 197  */
198 198 static int nilfs_ioctl_change_cpmode(struct inode *inode, struct file *filp,
199 199 				     unsigned int cmd, void __user *argp)
···
238 244  * checkpoint from NILFS2 file system. This ioctl is used in rmcp
239 245  * utility.
240 246  *
241 -  * Return Value: On success, 0 is returned and a checkpoint is
242 -  * removed. On error, one of the following negative error codes
243 -  * is returned.
244 -  *
245 -  * %-EPERM - Operation not permitted.
246 -  *
247 -  * %-EFAULT - Failure during checkpoint removing.
247 +  * Return: 0 on success, or one of the following negative error codes on
248 +  * failure:
249 +  * %-EFAULT - Failure during checkpoint removing.
250 +  * %-EPERM - Operation not permitted.
248 251  */
249 252 static int
250 253 nilfs_ioctl_delete_checkpoint(struct inode *inode, struct file *filp,
···
287 296  * requested checkpoints. The NILFS_IOCTL_GET_CPINFO ioctl is used in
288 297  * lscp utility and by nilfs_cleanerd daemon.
289 298  *
290 -  * Return value: count of nilfs_cpinfo structures in output buffer.
299 +  * Return: Count of nilfs_cpinfo structures in output buffer.
291 300  */
292 301 static ssize_t
293 302 nilfs_ioctl_do_get_cpinfo(struct the_nilfs *nilfs, __u64 *posp, int flags,
···
311 320  *
312 321  * Description: nilfs_ioctl_get_cpstat() returns information about checkpoints.
313 322  * The NILFS_IOCTL_GET_CPSTAT ioctl is used by lscp, rmcp utilities
314 -  * and by nilfs_cleanerd daemon.
323 +  * and by nilfs_cleanerd daemon. The checkpoint statistics are copied to
324 +  * the userspace memory pointed to by @argp.
315 325  *
316 -  * Return Value: On success, 0 is returned, and checkpoints information is
317 -  * copied into userspace pointer @argp. On error, one of the following
318 -  * negative error codes is returned.
319 -  *
320 -  * %-EIO - I/O error.
321 -  *
322 -  * %-ENOMEM - Insufficient amount of memory available.
323 -  *
324 -  * %-EFAULT - Failure during getting checkpoints statistics.
326 +  * Return: 0 on success, or one of the following negative error codes on
327 +  * failure:
328 +  * * %-EFAULT - Failure during getting checkpoints statistics.
329 +  * * %-EIO - I/O error.
330 +  * * %-ENOMEM - Insufficient memory available.
325 331  */
326 332 static int nilfs_ioctl_get_cpstat(struct inode *inode, struct file *filp,
327 333 				  unsigned int cmd, void __user *argp)
···
351 363  * info about requested segments. The NILFS_IOCTL_GET_SUINFO ioctl is used
352 364  * in lssu, nilfs_resize utilities and by nilfs_cleanerd daemon.
353 365  *
354 -  * Return value: count of nilfs_suinfo structures in output buffer.
366 +  * Return: Count of nilfs_suinfo structures in output buffer on success,
367 +  * or a negative error code on failure.
355 368  */
356 369 static ssize_t
357 370 nilfs_ioctl_do_get_suinfo(struct the_nilfs *nilfs, __u64 *posp, int flags,
···
376 387  *
377 388  * Description: nilfs_ioctl_get_sustat() returns segment usage statistics.
378 389  * The NILFS_IOCTL_GET_SUSTAT ioctl is used in lssu, nilfs_resize utilities
379 -  * and by nilfs_cleanerd daemon.
390 +  * and by nilfs_cleanerd daemon. The requested segment usage information is
391 +  * copied to the userspace memory pointed to by @argp.
380 392  *
381 -  * Return Value: On success, 0 is returned, and segment usage information is
382 -  * copied into userspace pointer @argp. On error, one of the following
383 -  * negative error codes is returned.
384 -  *
385 -  * %-EIO - I/O error.
386 -  *
387 -  * %-ENOMEM - Insufficient amount of memory available.
388 -  *
389 -  * %-EFAULT - Failure during getting segment usage statistics.
393 +  * Return: 0 on success, or one of the following negative error codes on
394 +  * failure:
395 +  * * %-EFAULT - Failure during getting segment usage statistics.
396 +  * * %-EIO - I/O error.
397 +  * * %-ENOMEM - Insufficient memory available.
390 398  */
391 399 static int nilfs_ioctl_get_sustat(struct inode *inode, struct file *filp,
392 400 				  unsigned int cmd, void __user *argp)
···
416 430  * on virtual block addresses. The NILFS_IOCTL_GET_VINFO ioctl is used
417 431  * by nilfs_cleanerd daemon.
418 432  *
419 -  * Return value: count of nilfs_vinfo structures in output buffer.
433 +  * Return: Count of nilfs_vinfo structures in output buffer on success, or
434 +  * a negative error code on failure.
420 435  */
421 436 static ssize_t
422 437 nilfs_ioctl_do_get_vinfo(struct the_nilfs *nilfs, __u64 *posp, int flags,
···
444 457  * about descriptors of disk block numbers. The NILFS_IOCTL_GET_BDESCS ioctl
445 458  * is used by nilfs_cleanerd daemon.
446 459  *
447 -  * Return value: count of nilfs_bdescs structures in output buffer.
460 +  * Return: Count of nilfs_bdescs structures in output buffer on success, or
461 +  * a negative error code on failure.
448 462  */
449 463 static ssize_t
450 464 nilfs_ioctl_do_get_bdescs(struct the_nilfs *nilfs, __u64 *posp, int flags,
···
482 494  *
483 495  * Description: nilfs_ioctl_do_get_bdescs() function returns information
484 496  * about descriptors of disk block numbers. The NILFS_IOCTL_GET_BDESCS ioctl
485 -  * is used by nilfs_cleanerd daemon.
497 +  * is used by nilfs_cleanerd daemon. If successful, disk block descriptors
498 +  * are copied to userspace pointer @argp.
486 499  *
487 -  * Return Value: On success, 0 is returned, and disk block descriptors are
488 -  * copied into userspace pointer @argp. On error, one of the following
489 -  * negative error codes is returned.
490 -  *
491 -  * %-EINVAL - Invalid arguments from userspace.
492 -  *
493 -  * %-EIO - I/O error.
494 -  *
495 -  * %-ENOMEM - Insufficient amount of memory available.
496 -  *
497 -  * %-EFAULT - Failure during getting disk block descriptors.
500 +  * Return: 0 on success, or one of the following negative error codes on
501 +  * failure:
502 +  * * %-EFAULT - Failure during getting disk block descriptors.
503 +  * * %-EINVAL - Invalid arguments from userspace.
504 +  * * %-EIO - I/O error.
505 +  * * %-ENOMEM - Insufficient memory available.
498 506  */
499 507 static int nilfs_ioctl_get_bdescs(struct inode *inode, struct file *filp,
500 508 				  unsigned int cmd, void __user *argp)
···
524 540  * Description: nilfs_ioctl_move_inode_block() function registers data/node
525 541  * buffer in the GC pagecache and submit read request.
526 542  *
527 -  * Return Value: On success, 0 is returned. On error, one of the following
528 -  * negative error codes is returned.
529 -  *
530 -  * %-EIO - I/O error.
531 -  *
532 -  * %-ENOMEM - Insufficient amount of memory available.
533 -  *
534 -  * %-ENOENT - Requested block doesn't exist.
535 -  *
536 -  * %-EEXIST - Blocks conflict is detected.
543 +  * Return: 0 on success, or one of the following negative error codes on
544 +  * failure:
545 +  * * %-EEXIST - Block conflict detected.
546 +  * * %-EIO - I/O error.
547 +  * * %-ENOENT - Requested block doesn't exist.
548 +  * * %-ENOMEM - Insufficient memory available.
537 549  */
538 550 static int nilfs_ioctl_move_inode_block(struct inode *inode,
539 551 					struct nilfs_vdesc *vdesc,
···
584 604  * blocks that garbage collector specified with the array of nilfs_vdesc
585 605  * structures and stores them into page caches of GC inodes.
586 606  *
587 -  * Return Value: Number of processed nilfs_vdesc structures or
588 -  * error code, otherwise.
607 +  * Return: Number of processed nilfs_vdesc structures on success, or
608 +  * a negative error code on failure.
589 609  */
590 610 static int nilfs_ioctl_move_blocks(struct super_block *sb,
591 611 				   struct nilfs_argv *argv, void *buf)
···
662 682  * in the period from p_start to p_end, excluding p_end itself. The checkpoints
663 683  * which have been already deleted are ignored.
664 684  *
665 -  * Return Value: Number of processed nilfs_period structures or
666 -  * error code, otherwise.
667 -  *
668 -  * %-EIO - I/O error.
669 -  *
670 -  * %-ENOMEM - Insufficient amount of memory available.
671 -  *
672 -  * %-EINVAL - invalid checkpoints.
685 +  * Return: Number of processed nilfs_period structures on success, or one of
686 +  * the following negative error codes on failure:
687 +  * * %-EINVAL - invalid checkpoints.
688 +  * * %-EIO - I/O error.
689 +  * * %-ENOMEM - Insufficient memory available.
673 690  */
674 691 static int nilfs_ioctl_delete_checkpoints(struct the_nilfs *nilfs,
675 692 					  struct nilfs_argv *argv, void *buf)
···
694 717  * Description: nilfs_ioctl_free_vblocknrs() function frees
695 718  * the virtual block numbers specified by @buf and @argv->v_nmembs.
696 719  *
697 -  * Return Value: Number of processed virtual block numbers or
698 -  * error code, otherwise.
699 -  *
700 -  * %-EIO - I/O error.
701 -  *
702 -  * %-ENOMEM - Insufficient amount of memory available.
703 -  *
704 -  * %-ENOENT - The virtual block number have not been allocated.
720 +  * Return: Number of processed virtual block numbers on success, or one of the
721 +  * following negative error codes on failure:
722 +  * * %-EIO - I/O error.
723 +  * * %-ENOENT - Unallocated virtual block number.
724 +  * * %-ENOMEM - Insufficient memory available.
705 725  */
706 726 static int nilfs_ioctl_free_vblocknrs(struct the_nilfs *nilfs,
707 727 				      struct nilfs_argv *argv, void *buf)
···
720 746  * Description: nilfs_ioctl_mark_blocks_dirty() function marks
721 747  * metadata file or data blocks as dirty.
722 748  *
723 -  * Return Value: Number of processed block descriptors or
724 -  * error code, otherwise.
725 -  *
726 -  * %-ENOMEM - Insufficient memory available.
727 -  *
728 -  * %-EIO - I/O error
729 -  *
730 -  * %-ENOENT - the specified block does not exist (hole block)
749 +  * Return: Number of processed block descriptors on success, or one of the
750 +  * following negative error codes on failure:
751 +  * * %-EIO - I/O error.
752 +  * * %-ENOENT - Non-existent block (hole block).
753 +  * * %-ENOMEM - Insufficient memory available.
731 754  */
732 755 static int nilfs_ioctl_mark_blocks_dirty(struct the_nilfs *nilfs,
733 756 					 struct nilfs_argv *argv, void *buf)
···
823 852  * from userspace. The NILFS_IOCTL_CLEAN_SEGMENTS ioctl is used by
824 853  * nilfs_cleanerd daemon.
825 854  *
826 -  * Return Value: On success, 0 is returned or error code, otherwise.
855 +  * Return: 0 on success, or a negative error code on failure.
827 856  */
828 857 static int nilfs_ioctl_clean_segments(struct inode *inode, struct file *filp,
829 858 				      unsigned int cmd, void __user *argp)
···
947 976  * and metadata are written out to the device when it successfully
948 977  * returned.
949 978  *
950 -  * Return Value: On success, 0 is retured. On errors, one of the following
951 -  * negative error code is returned.
952 -  *
953 -  * %-EROFS - Read only filesystem.
954 -  *
955 -  * %-EIO - I/O error
956 -  *
957 -  * %-ENOSPC - No space left on device (only in a panic state).
958 -  *
959 -  * %-ERESTARTSYS - Interrupted.
960 -  *
961 -  * %-ENOMEM - Insufficient memory available.
962 -  *
963 -  * %-EFAULT - Failure during execution of requested operation.
979 +  * Return: 0 on success, or one of the following negative error codes on
980 +  * failure:
981 +  * * %-EFAULT - Failure during execution of requested operation.
982 +  * * %-EIO - I/O error.
983 +  * * %-ENOMEM - Insufficient memory available.
984 +  * * %-ENOSPC - No space left on device (only in a panic state).
985 +  * * %-ERESTARTSYS - Interrupted.
986 +  * * %-EROFS - Read only filesystem.
964 987  */
965 988 static int nilfs_ioctl_sync(struct inode *inode, struct file *filp,
966 989 			    unsigned int cmd, void __user *argp)
···
988 1023  * @filp: file object
989 1024  * @argp: pointer on argument from userspace
990 1025  *
991 -  * Return Value: On success, 0 is returned or error code, otherwise.
1026 +  * Return: 0 on success, or a negative error code on failure.
992 1027  */
993 1028 static int nilfs_ioctl_resize(struct inode *inode, struct file *filp,
994 1029 			      void __user *argp)
···
1024 1059  * checks the arguments from userspace and calls nilfs_sufile_trim_fs, which
1025 1060  * performs the actual trim operation.
1026 1061  *
1027 -  * Return Value: On success, 0 is returned or negative error code, otherwise.
1062 +  * Return: 0 on success, or a negative error code on failure.
1028 1063  */
1029 1064 static int nilfs_ioctl_trim_fs(struct inode *inode, void __user *argp)
1030 1065 {
···
1066 1101  * of segments in bytes and upper limit of segments in bytes.
1067 1102  * The NILFS_IOCTL_SET_ALLOC_RANGE is used by nilfs_resize utility.
1068 1103  *
1069 -  * Return Value: On success, 0 is returned or error code, otherwise.
1104 +  * Return: 0 on success, or a negative error code on failure.
1070 1105  */
1071 1106 static int nilfs_ioctl_set_alloc_range(struct inode *inode, void __user *argp)
1072 1107 {
···
1117 1152  * @dofunc: concrete function of getting metadata info
1118 1153  *
1119 1154  * Description: nilfs_ioctl_get_info() gets metadata info by means of
1120 -  * calling dofunc() function.
1155 +  * calling dofunc() function. The requested metadata information is copied
1156 +  * to userspace memory @argp.
1121 1157  *
1122 -  * Return Value: On success, 0 is returned and requested metadata info
1123 -  * is copied into userspace. On error, one of the following
1124 -  * negative error codes is returned.
1125 -  *
1126 -  * %-EINVAL - Invalid arguments from userspace.
1127 -  *
1128 -  * %-ENOMEM - Insufficient amount of memory available.
1129 -  *
1130 -  * %-EFAULT - Failure during execution of requested operation.
1158 +  * Return: 0 on success, or one of the following negative error codes on
1159 +  * failure:
1160 +  * * %-EFAULT - Failure during execution of requested operation.
1161 +  * * %-EINVAL - Invalid arguments from userspace.
1162 +  * * %-EIO - I/O error.
1163 +  * * %-ENOMEM - Insufficient memory available.
1131 1164  */
1132 1165 static int nilfs_ioctl_get_info(struct inode *inode, struct file *filp,
1133 1166 				unsigned int cmd, void __user *argp,
···
1165 1202  * encapsulated in nilfs_argv and updates the segment usage info
1166 1203  * according to the flags in nilfs_suinfo_update.
1167 1204  *
1168 -  * Return Value: On success, 0 is returned. On error, one of the
1169 -  * following negative error codes is returned.
1170 -  *
1171 -  * %-EPERM - Not enough permissions
1172 -  *
1173 -  * %-EFAULT - Error copying input data
1174 -  *
1175 -  * %-EIO - I/O error.
1176 -  *
1177 -  * %-ENOMEM - Insufficient amount of memory available.
1178 -  *
1179 -  * %-EINVAL - Invalid values in input (segment number, flags or nblocks)
1205 +  * Return: 0 on success, or one of the following negative error codes on
1206 +  * failure:
1207 +  * * %-EEXIST - Block conflict detected.
1208 +  * * %-EFAULT - Error copying input data.
1209 +  * * %-EINVAL - Invalid values in input (segment number, flags or nblocks).
1210 +  * * %-EIO - I/O error.
1211 +  * * %-ENOMEM - Insufficient memory available.
1212 +  * * %-EPERM - Not enough permissions.
1180 1213  */
1181 1214 static int nilfs_ioctl_set_suinfo(struct inode *inode, struct file *filp,
1182 1215 				  unsigned int cmd, void __user *argp)
···
1268 1309  * @filp: file object
1269 1310  * @argp: pointer to userspace memory that contains the volume name
1270 1311  *
1271 -  * Return: 0 on success, or the following negative error code on failure.
1312 +  * Return: 0 on success, or one of the following negative error codes on
1313 +  * failure:
1272 1314  * * %-EFAULT - Error copying input data.
1273 1315  * * %-EINVAL - Label length exceeds record size in superblock.
1274 1316  * * %-EIO - I/O error.
+31 -32
fs/nilfs2/mdt.c
···
226 226  * @out_bh: output of a pointer to the buffer_head
227 227  *
228 228  * nilfs_mdt_get_block() looks up the specified buffer and tries to create
229 -  * a new buffer if @create is not zero. On success, the returned buffer is
230 -  * assured to be either existing or formatted using a buffer lock on success.
231 -  * @out_bh is substituted only when zero is returned.
229 +  * a new buffer if @create is not zero. If (and only if) this function
230 +  * succeeds, it stores a pointer to the retrieved buffer head in the location
231 +  * pointed to by @out_bh.
232 232  *
233 -  * Return Value: On success, it returns 0. On error, the following negative
234 -  * error code is returned.
233 +  * The retrieved buffer may be either an existing one or a newly allocated one.
234 +  * For a newly created buffer, if the callback function argument @init_block
235 +  * is non-NULL, the callback will be called with the buffer locked to format
236 +  * the block.
235 237  *
236 -  * %-ENOMEM - Insufficient memory available.
237 -  *
238 -  * %-EIO - I/O error
239 -  *
240 -  * %-ENOENT - the specified block does not exist (hole block)
241 -  *
242 -  * %-EROFS - Read only filesystem (for create mode)
238 +  * Return: 0 on success, or one of the following negative error codes on
239 +  * failure:
240 +  * * %-EIO - I/O error (including metadata corruption).
241 +  * * %-ENOENT - The specified block does not exist (hole block).
242 +  * * %-ENOMEM - Insufficient memory available.
243 +  * * %-EROFS - Read only filesystem (for create mode).
243 244  */
244 245 int nilfs_mdt_get_block(struct inode *inode, unsigned long blkoff, int create,
245 246 			void (*init_block)(struct inode *,
···
276 275  * @out_bh, and block offset to @blkoff, respectively. @out_bh and
277 276  * @blkoff are substituted only when zero is returned.
278 277  *
279 -  * Return Value: On success, it returns 0. On error, the following negative
280 -  * error code is returned.
281 -  *
282 -  * %-ENOMEM - Insufficient memory available.
283 -  *
284 -  * %-EIO - I/O error
285 -  *
286 -  * %-ENOENT - no block was found in the range
278 +  * Return: 0 on success, or one of the following negative error codes on
279 +  * failure:
280 +  * * %-EIO - I/O error (including metadata corruption).
281 +  * * %-ENOENT - No block was found in the range.
282 +  * * %-ENOMEM - Insufficient memory available.
287 283  */
288 284 int nilfs_mdt_find_block(struct inode *inode, unsigned long start,
289 285 			 unsigned long end, unsigned long *blkoff,
···
319 321  * @inode: inode of the meta data file
320 322  * @block: block offset
321 323  *
322 -  * Return Value: On success, zero is returned.
323 -  * On error, one of the following negative error code is returned.
324 -  *
325 -  * %-ENOMEM - Insufficient memory available.
326 -  *
327 -  * %-EIO - I/O error
324 +  * Return: 0 on success, or one of the following negative error codes on
325 +  * failure:
326 +  * * %-EIO - I/O error (including metadata corruption).
327 +  * * %-ENOENT - Non-existent block.
328 +  * * %-ENOMEM - Insufficient memory available.
328 329  */
329 330 int nilfs_mdt_delete_block(struct inode *inode, unsigned long block)
330 331 {
···
346 349  * nilfs_mdt_forget_block() clears a dirty flag of the specified buffer, and
347 350  * tries to release the page including the buffer from a page cache.
348 351  *
349 -  * Return Value: On success, 0 is returned. On error, one of the following
350 -  * negative error code is returned.
351 -  *
352 -  * %-EBUSY - page has an active buffer.
353 -  *
354 -  * %-ENOENT - page cache has no page addressed by the offset.
352 +  * Return: 0 on success, or one of the following negative error codes on
353 +  * failure:
354 +  * * %-EBUSY - Page has an active buffer.
355 +  * * %-ENOENT - Page cache has no page addressed by the offset.
355 356  */
356 357 int nilfs_mdt_forget_block(struct inode *inode, unsigned long block)
357 358 {
···
519 524  * nilfs_mdt_setup_shadow_map - setup shadow map and bind it to metadata file
520 525  * @inode: inode of the metadata file
521 526  * @shadow: shadow mapping
527 +  *
528 +  * Return: 0 on success, or a negative error code on failure.
522 529  */
523 530 int nilfs_mdt_setup_shadow_map(struct inode *inode,
524 531 			       struct nilfs_shadow_map *shadow)
···
542 545 /**
543 546  * nilfs_mdt_save_to_shadow_map - copy bmap and dirty pages to shadow map
544 547  * @inode: inode of the metadata file
548 +  *
549 +  * Return: 0 on success, or a negative error code on failure.
545 550  */
546 551 int nilfs_mdt_save_to_shadow_map(struct inode *inode)
547 552 {
+21 -18
fs/nilfs2/namei.c
···
370 370 	struct folio *old_folio;
371 371 	struct nilfs_dir_entry *old_de;
372 372 	struct nilfs_transaction_info ti;
373 + 	bool old_is_dir = S_ISDIR(old_inode->i_mode);
373 374 	int err;
374 375 
375 376 	if (flags & ~RENAME_NOREPLACE)
···
386 385 		goto out;
387 386 	}
388 387 
389 - 	if (S_ISDIR(old_inode->i_mode)) {
388 + 	if (old_is_dir && old_dir != new_dir) {
390 389 		err = -EIO;
391 390 		dir_de = nilfs_dotdot(old_inode, &dir_folio);
392 391 		if (!dir_de)
···
398 397 		struct nilfs_dir_entry *new_de;
399 398 
400 399 		err = -ENOTEMPTY;
401 - 		if (dir_de && !nilfs_empty_dir(new_inode))
400 + 		if (old_is_dir && !nilfs_empty_dir(new_inode))
402 401 			goto out_dir;
403 402 
404 403 		new_de = nilfs_find_entry(new_dir, &new_dentry->d_name,
···
407 406 			err = PTR_ERR(new_de);
408 407 			goto out_dir;
409 408 		}
410 - 		nilfs_set_link(new_dir, new_de, new_folio, old_inode);
409 + 		err = nilfs_set_link(new_dir, new_de, new_folio, old_inode);
411 410 		folio_release_kmap(new_folio, new_de);
411 + 		if (unlikely(err))
412 + 			goto out_dir;
412 413 		nilfs_mark_inode_dirty(new_dir);
413 414 		inode_set_ctime_current(new_inode);
414 - 		if (dir_de)
415 + 		if (old_is_dir)
415 416 			drop_nlink(new_inode);
416 417 		drop_nlink(new_inode);
417 418 		nilfs_mark_inode_dirty(new_inode);
···
421 418 		err = nilfs_add_link(new_dentry, old_inode);
422 419 		if (err)
423 420 			goto out_dir;
424 - 		if (dir_de) {
421 + 		if (old_is_dir) {
425 422 			inc_nlink(new_dir);
426 423 			nilfs_mark_inode_dirty(new_dir);
427 424 		}
···
433 430 	 */
434 431 	inode_set_ctime_current(old_inode);
435 432 
436 - 	nilfs_delete_entry(old_de, old_folio);
437 - 
438 - 	if (dir_de) {
439 - 		nilfs_set_link(old_inode, dir_de, dir_folio, new_dir);
440 - 		folio_release_kmap(dir_folio, dir_de);
441 - 		drop_nlink(old_dir);
433 + 	err = nilfs_delete_entry(old_de, old_folio);
434 + 	if (likely(!err)) {
435 + 		if (old_is_dir) {
436 + 			if (old_dir != new_dir)
437 + 				err = nilfs_set_link(old_inode, dir_de,
438 + 						     dir_folio, new_dir);
439 + 			drop_nlink(old_dir);
440 + 		}
441 + 		nilfs_mark_inode_dirty(old_dir);
442 442 	}
443 - 	folio_release_kmap(old_folio, old_de);
444 - 
445 - 	nilfs_mark_inode_dirty(old_dir);
446 443 	nilfs_mark_inode_dirty(old_inode);
447 - 
448 - 	err = nilfs_transaction_commit(old_dir->i_sb);
449 - 	return err;
450 444 
451 445 out_dir:
452 446 	if (dir_de)
···
451 451 out_old:
452 452 	folio_release_kmap(old_folio, old_de);
453 453 out:
454 - 	nilfs_transaction_abort(old_dir->i_sb);
454 + 	if (likely(!err))
455 + 		err = nilfs_transaction_commit(old_dir->i_sb);
456 + 	else
457 + 		nilfs_transaction_abort(old_dir->i_sb);
455 458 	return err;
456 459 }
457 460 
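The rename rework above funnels every path, success or failure, to a single exit that commits the transaction only when `err` is still zero and aborts it otherwise. A small userspace sketch of that commit-or-abort epilogue (all names here are stand-ins, not the nilfs2 API):

```c
#include <errno.h>

enum txn_state { TXN_OPEN, TXN_COMMITTED, TXN_ABORTED };

static enum txn_state state;

static int transaction_begin(void)  { state = TXN_OPEN; return 0; }
static int transaction_commit(void) { state = TXN_COMMITTED; return 0; }
static void transaction_abort(void) { state = TXN_ABORTED; }

/*
 * All exit paths converge on one epilogue that commits when err == 0 and
 * aborts otherwise, mirroring the reworked nilfs_rename() exit above.
 * fail_step simulates a mid-operation failure such as nilfs_delete_entry().
 */
int do_rename(int fail_step)
{
	int err = transaction_begin();

	if (err)
		goto out;
	if (fail_step == 1) {	/* e.g. a directory chunk update failed */
		err = -EIO;
		goto out;
	}
	/* ... link/unlink and timestamp work ... */
out:
	if (!err)
		err = transaction_commit();
	else
		transaction_abort();
	return err;
}
```

Centralizing the decision removes the duplicated commit/abort calls and guarantees no path can leave the transaction open, which is what made the old `BUG_ON`-based flow fragile once intermediate steps could fail.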
+2 -2
fs/nilfs2/nilfs.h
···
261 261 int nilfs_delete_entry(struct nilfs_dir_entry *, struct folio *);
262 262 int nilfs_empty_dir(struct inode *);
263 263 struct nilfs_dir_entry *nilfs_dotdot(struct inode *, struct folio **);
264 - void nilfs_set_link(struct inode *, struct nilfs_dir_entry *,
265 - 		    struct folio *, struct inode *);
264 + int nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
265 + 		   struct folio *folio, struct inode *inode);
266 266 
267 267 /* file.c */
268 268 extern int nilfs_sync_file(struct file *, loff_t, loff_t, int);
+31 -8
fs/nilfs2/page.c
··· 135 135 * nilfs_folio_buffers_clean - Check if a folio has dirty buffers or not. 136 136 * @folio: Folio to be checked. 137 137 * 138 - * nilfs_folio_buffers_clean() returns false if the folio has dirty buffers. 139 - * Otherwise, it returns true. 138 + * Return: false if the folio has dirty buffers, true otherwise. 140 139 */ 141 140 bool nilfs_folio_buffers_clean(struct folio *folio) 142 141 { ··· 391 392 /** 392 393 * nilfs_clear_folio_dirty - discard dirty folio 393 394 * @folio: dirty folio that will be discarded 395 + * 396 + * nilfs_clear_folio_dirty() clears working states including dirty state for 397 + * the folio and its buffers. If the folio has buffers, clear only if it is 398 + * confirmed that none of the buffer heads are busy (none have valid 399 + * references and none are locked). 394 400 */ 395 401 void nilfs_clear_folio_dirty(struct folio *folio) 396 402 { 397 403 struct buffer_head *bh, *head; 398 404 399 405 BUG_ON(!folio_test_locked(folio)); 400 - 401 - folio_clear_uptodate(folio); 402 - folio_clear_mappedtodisk(folio); 403 - folio_clear_checked(folio); 404 406 405 407 head = folio_buffers(folio); 406 408 if (head) { ··· 410 410 BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) | 411 411 BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) | 412 412 BIT(BH_Delay)); 413 + bool busy, invalidated = false; 414 + 415 + recheck_buffers: 416 + busy = false; 417 + bh = head; 418 + do { 419 + if (atomic_read(&bh->b_count) | buffer_locked(bh)) { 420 + busy = true; 421 + break; 422 + } 423 + } while (bh = bh->b_this_page, bh != head); 424 + 425 + if (busy) { 426 + if (invalidated) 427 + return; 428 + invalidate_bh_lrus(); 429 + invalidated = true; 430 + goto recheck_buffers; 431 + } 413 432 414 433 bh = head; 415 434 do { ··· 438 419 } while (bh = bh->b_this_page, bh != head); 439 420 } 440 421 422 + folio_clear_uptodate(folio); 423 + folio_clear_mappedtodisk(folio); 424 + folio_clear_checked(folio); 441 425 __nilfs_clear_folio_dirty(folio); 442 426 } 443 
427 ··· 499 477 * This function searches an extent of buffers marked "delayed" which 500 478 * starts from a block offset equal to or larger than @start_blk. If 501 479 * such an extent was found, this will store the start offset in 502 - * @blkoff and return its length in blocks. Otherwise, zero is 503 - * returned. 480 + * @blkoff and return its length in blocks. 481 + * 482 + * Return: Length in blocks of found extent, 0 otherwise. 504 483 */ 505 484 unsigned long nilfs_find_uncommitted_extent(struct inode *inode, 506 485 sector_t start_blk,
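The new code in `nilfs_clear_folio_dirty()` above scans the folio's buffers for busy ones, and if any are found, flushes the buffer-head LRUs once (`invalidate_bh_lrus()`) and re-scans before giving up. A minimal sketch of that "retry once after cache invalidation" loop, with a plain refcount array standing in for buffer heads:

```c
#include <assert.h>
#include <stdbool.h>

#define NBUF 4

/* Hypothetical stand-in for invalidate_bh_lrus(): drops one cached
 * reference per buffer, as flushing the per-CPU LRUs would. */
static void invalidate_refs(int refcount[NBUF])
{
	for (int i = 0; i < NBUF; i++)
		if (refcount[i] > 0)
			refcount[i]--;
}

/* Returns true if it is safe to clear state (no buffer still busy). */
static bool can_clear(int refcount[NBUF])
{
	bool invalidated = false;

recheck:
	for (int i = 0; i < NBUF; i++) {
		if (refcount[i] > 0) {
			if (invalidated)
				return false;	/* still busy: bail out */
			invalidate_refs(refcount);
			invalidated = true;
			goto recheck;		/* one retry only */
		}
	}
	return true;
}
```

As in the diff, a reference held only by a cache is released by the single invalidation pass, while a genuinely pinned buffer still fails the recheck.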
+42 -20
fs/nilfs2/recovery.c
··· 88 88 * @check_bytes: number of bytes to be checked 89 89 * @start: DBN of start block 90 90 * @nblock: number of blocks to be checked 91 + * 92 + * Return: 0 on success, or %-EIO if an I/O error occurs. 91 93 */ 92 94 static int nilfs_compute_checksum(struct the_nilfs *nilfs, 93 95 struct buffer_head *bhs, u32 *sum, ··· 128 126 * @sr_block: disk block number of the super root block 129 127 * @pbh: address of a buffer_head pointer to return super root buffer 130 128 * @check: CRC check flag 129 + * 130 + * Return: 0 on success, or one of the following negative error codes on 131 + * failure: 132 + * * %-EINVAL - Super root block corrupted. 133 + * * %-EIO - I/O error. 131 134 */ 132 135 int nilfs_read_super_root_block(struct the_nilfs *nilfs, sector_t sr_block, 133 136 struct buffer_head **pbh, int check) ··· 183 176 * @nilfs: nilfs object 184 177 * @start_blocknr: start block number of the log 185 178 * @sum: pointer to return segment summary structure 179 + * 180 + * Return: Buffer head pointer, or NULL if an I/O error occurs. 186 181 */ 187 182 static struct buffer_head * 188 183 nilfs_read_log_header(struct the_nilfs *nilfs, sector_t start_blocknr, ··· 204 195 * @seg_seq: sequence number of segment 205 196 * @bh_sum: buffer head of summary block 206 197 * @sum: segment summary struct 198 + * 199 + * Return: 0 on success, or one of the following internal codes on failure: 200 + * * %NILFS_SEG_FAIL_MAGIC - Magic number mismatch. 201 + * * %NILFS_SEG_FAIL_SEQ - Sequence number mismatch. 202 + * * %NILFS_SEG_FAIL_CONSISTENCY - Block count out of range. 203 + * * %NILFS_SEG_FAIL_IO - I/O error. 204 + * * %NILFS_SEG_FAIL_CHECKSUM_FULL - Full log checksum verification failed. 
207 205 */ 208 206 static int nilfs_validate_log(struct the_nilfs *nilfs, u64 seg_seq, 209 207 struct buffer_head *bh_sum, ··· 254 238 * @pbh: the current buffer head on summary blocks [in, out] 255 239 * @offset: the current byte offset on summary blocks [in, out] 256 240 * @bytes: byte size of the item to be read 241 + * 242 + * Return: Kernel space address of current segment summary entry, or 243 + * NULL if an I/O error occurs. 257 244 */ 258 245 static void *nilfs_read_summary_info(struct the_nilfs *nilfs, 259 246 struct buffer_head **pbh, ··· 319 300 * @start_blocknr: start block number of the log 320 301 * @sum: log summary information 321 302 * @head: list head to add nilfs_recovery_block struct 303 + * 304 + * Return: 0 on success, or one of the following negative error codes on 305 + * failure: 306 + * * %-EIO - I/O error. 307 + * * %-ENOMEM - Insufficient memory available. 322 308 */ 323 309 static int nilfs_scan_dsync_log(struct the_nilfs *nilfs, sector_t start_blocknr, 324 310 struct nilfs_segment_summary *sum, ··· 595 571 * @sb: super block instance 596 572 * @root: NILFS root instance 597 573 * @ri: pointer to a nilfs_recovery_info 574 + * 575 + * Return: 0 on success, or one of the following negative error codes on 576 + * failure: 577 + * * %-EINVAL - Log format error. 578 + * * %-EIO - I/O error. 579 + * * %-ENOMEM - Insufficient memory available. 598 580 */ 599 581 static int nilfs_do_roll_forward(struct the_nilfs *nilfs, 600 582 struct super_block *sb, ··· 784 754 * @sb: super block instance 785 755 * @ri: pointer to a nilfs_recovery_info struct to store search results. 786 756 * 787 - * Return Value: On success, 0 is returned. On error, one of the following 788 - * negative error code is returned. 789 - * 790 - * %-EINVAL - Inconsistent filesystem state. 791 - * 792 - * %-EIO - I/O error 793 - * 794 - * %-ENOSPC - No space left on device (only in a panic state). 795 - * 796 - * %-ERESTARTSYS - Interrupted. 
797 - * 798 - * %-ENOMEM - Insufficient memory available. 757 + * Return: 0 on success, or one of the following negative error codes on 758 + * failure: 759 + * * %-EINVAL - Inconsistent filesystem state. 760 + * * %-EIO - I/O error. 761 + * * %-ENOMEM - Insufficient memory available. 762 + * * %-ENOSPC - No space left on device (only in a panic state). 763 + * * %-ERESTARTSYS - Interrupted. 799 764 */ 800 765 int nilfs_salvage_orphan_logs(struct the_nilfs *nilfs, 801 766 struct super_block *sb, ··· 855 830 * segment pointed by the superblock. It sets up struct the_nilfs through 856 831 * this search. It fills nilfs_recovery_info (ri) required for recovery. 857 832 * 858 - * Return Value: On success, 0 is returned. On error, one of the following 859 - * negative error code is returned. 860 - * 861 - * %-EINVAL - No valid segment found 862 - * 863 - * %-EIO - I/O error 864 - * 865 - * %-ENOMEM - Insufficient memory available. 833 + * Return: 0 on success, or one of the following negative error codes on 834 + * failure: 835 + * * %-EINVAL - No valid segment found. 836 + * * %-EIO - I/O error. 837 + * * %-ENOMEM - Insufficient memory available. 866 838 */ 867 839 int nilfs_search_super_root(struct the_nilfs *nilfs, 868 840 struct nilfs_recovery_info *ri)
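The recovery.c hunks above are mostly conversions from the old free-form "Return Value: On success, 0 is returned..." wording to the structured kernel-doc `Return:` section with a bulleted `%-ERRNO` list. A sketch of the target style on a hypothetical helper (the function and its table are illustrative, not nilfs2 code):

```c
#include <assert.h>
#include <errno.h>

/**
 * demo_lookup - look up a slot value by slot number
 * @slot: slot number to look up
 *
 * Return: the slot value on success, or one of the following negative
 * error codes on failure:
 * * %-EINVAL - Invalid slot number.
 * * %-ENOENT - Slot exists but holds no value.
 */
static int demo_lookup(int slot)
{
	static const int table[] = { 7, 0, 42 };

	if (slot < 0 || slot >= 3)
		return -EINVAL;
	if (table[slot] == 0)
		return -ENOENT;
	return table[slot];
}
```

The `Return:` keyword and the `* %-ERRNO - description` bullets are the forms scripts/kernel-doc renders as a proper section, which is why the series converges on them.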
+2 -10
fs/nilfs2/segbuf.c
··· 406 406 * @segbuf: buffer storing a log to be written 407 407 * @nilfs: nilfs object 408 408 * 409 - * Return Value: On Success, 0 is returned. On Error, one of the following 410 - * negative error code is returned. 411 - * 412 - * %-EIO - I/O error 413 - * 414 - * %-ENOMEM - Insufficient memory available. 409 + * Return: Always 0. 415 410 */ 416 411 static int nilfs_segbuf_write(struct nilfs_segment_buffer *segbuf, 417 412 struct the_nilfs *nilfs) ··· 447 452 * nilfs_segbuf_wait - wait for completion of requested BIOs 448 453 * @segbuf: segment buffer 449 454 * 450 - * Return Value: On Success, 0 is returned. On Error, one of the following 451 - * negative error code is returned. 452 - * 453 - * %-EIO - I/O error 455 + * Return: 0 on success, or %-EIO if I/O error is detected. 454 456 */ 455 457 static int nilfs_segbuf_wait(struct nilfs_segment_buffer *segbuf) 456 458 {
+33 -33
fs/nilfs2/segment.c
··· 191 191 * When @vacancy_check flag is set, this function will check the amount of 192 192 * free space, and will wait for the GC to reclaim disk space if low capacity. 193 193 * 194 - * Return Value: On success, 0 is returned. On error, one of the following 195 - * negative error code is returned. 196 - * 197 - * %-ENOMEM - Insufficient memory available. 198 - * 199 - * %-ENOSPC - No space left on device 194 + * Return: 0 on success, or one of the following negative error codes on 195 + * failure: 196 + * * %-ENOMEM - Insufficient memory available. 197 + * * %-ENOSPC - No space left on device (if checking free space). 200 198 */ 201 199 int nilfs_transaction_begin(struct super_block *sb, 202 200 struct nilfs_transaction_info *ti, ··· 250 252 * nilfs_transaction_commit() sets a timer to start the segment 251 253 * constructor. If a sync flag is set, it starts construction 252 254 * directly. 255 + * 256 + * Return: 0 on success, or a negative error code on failure. 253 257 */ 254 258 int nilfs_transaction_commit(struct super_block *sb) 255 259 { ··· 407 407 /** 408 408 * nilfs_segctor_reset_segment_buffer - reset the current segment buffer 409 409 * @sci: nilfs_sc_info 410 + * 411 + * Return: 0 on success, or a negative error code on failure. 
410 412 */ 411 413 static int nilfs_segctor_reset_segment_buffer(struct nilfs_sc_info *sci) 412 414 { ··· 736 734 if (!head) 737 735 head = create_empty_buffers(folio, 738 736 i_blocksize(inode), 0); 739 - folio_unlock(folio); 740 737 741 738 bh = head; 742 739 do { ··· 745 744 list_add_tail(&bh->b_assoc_buffers, listp); 746 745 ndirties++; 747 746 if (unlikely(ndirties >= nlimit)) { 747 + folio_unlock(folio); 748 748 folio_batch_release(&fbatch); 749 749 cond_resched(); 750 750 return ndirties; 751 751 } 752 752 } while (bh = bh->b_this_page, bh != head); 753 + 754 + folio_unlock(folio); 753 755 } 754 756 folio_batch_release(&fbatch); 755 757 cond_resched(); ··· 1122 1118 * a super root block containing this sufile change is complete, and it can 1123 1119 * be canceled with nilfs_sufile_cancel_freev() until then. 1124 1120 * 1125 - * Return: 0 on success, or the following negative error code on failure. 1121 + * Return: 0 on success, or one of the following negative error codes on 1122 + * failure: 1126 1123 * * %-EINVAL - Invalid segment number. 1127 1124 * * %-EIO - I/O error (including metadata corruption). 1128 1125 * * %-ENOMEM - Insufficient memory available. ··· 1320 1315 * nilfs_segctor_begin_construction - setup segment buffer to make a new log 1321 1316 * @sci: nilfs_sc_info 1322 1317 * @nilfs: nilfs object 1318 + * 1319 + * Return: 0 on success, or a negative error code on failure. 1323 1320 */ 1324 1321 static int nilfs_segctor_begin_construction(struct nilfs_sc_info *sci, 1325 1322 struct the_nilfs *nilfs) ··· 2319 2312 * nilfs_construct_segment - construct a logical segment 2320 2313 * @sb: super block 2321 2314 * 2322 - * Return Value: On success, 0 is returned. On errors, one of the following 2323 - * negative error code is returned. 2324 - * 2325 - * %-EROFS - Read only filesystem. 2326 - * 2327 - * %-EIO - I/O error 2328 - * 2329 - * %-ENOSPC - No space left on device (only in a panic state). 2330 - * 2331 - * %-ERESTARTSYS - Interrupted. 
2332 - * 2333 - * %-ENOMEM - Insufficient memory available. 2315 + * Return: 0 on success, or one of the following negative error codes on 2316 + * failure: 2317 + * * %-EIO - I/O error (including metadata corruption). 2318 + * * %-ENOMEM - Insufficient memory available. 2319 + * * %-ENOSPC - No space left on device (only in a panic state). 2320 + * * %-ERESTARTSYS - Interrupted. 2321 + * * %-EROFS - Read only filesystem. 2334 2322 */ 2335 2323 int nilfs_construct_segment(struct super_block *sb) 2336 2324 { ··· 2349 2347 * @start: start byte offset 2350 2348 * @end: end byte offset (inclusive) 2351 2349 * 2352 - * Return Value: On success, 0 is returned. On errors, one of the following 2353 - * negative error code is returned. 2354 - * 2355 - * %-EROFS - Read only filesystem. 2356 - * 2357 - * %-EIO - I/O error 2358 - * 2359 - * %-ENOSPC - No space left on device (only in a panic state). 2360 - * 2361 - * %-ERESTARTSYS - Interrupted. 2362 - * 2363 - * %-ENOMEM - Insufficient memory available. 2350 + * Return: 0 on success, or one of the following negative error codes on 2351 + * failure: 2352 + * * %-EIO - I/O error (including metadata corruption). 2353 + * * %-ENOMEM - Insufficient memory available. 2354 + * * %-ENOSPC - No space left on device (only in a panic state). 2355 + * * %-ERESTARTSYS - Interrupted. 2356 + * * %-EROFS - Read only filesystem. 2364 2357 */ 2365 2358 int nilfs_construct_dsync_segment(struct super_block *sb, struct inode *inode, 2366 2359 loff_t start, loff_t end) ··· 2461 2464 * nilfs_segctor_construct - form logs and write them to disk 2462 2465 * @sci: segment constructor object 2463 2466 * @mode: mode of log forming 2467 + * 2468 + * Return: 0 on success, or a negative error code on failure. 2464 2469 */ 2465 2470 static int nilfs_segctor_construct(struct nilfs_sc_info *sci, int mode) 2466 2471 { ··· 2835 2836 * This allocates a log writer object, initializes it, and starts the 2836 2837 * log writer. 
2837 2838 * 2838 - * Return: 0 on success, or the following negative error code on failure. 2839 + * Return: 0 on success, or one of the following negative error codes on 2840 + * failure: 2839 2841 * * %-EINTR - Log writer thread creation failed due to interruption. 2840 2842 * * %-ENOMEM - Insufficient memory available. 2841 2843 */
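Besides the kernel-doc updates, the segment.c diff above moves `folio_unlock()` so the folio stays locked for the whole buffer-list walk, and adds an unlock on the early "dirty limit reached" return. A minimal sketch of that lock discipline, with a boolean flag standing in for the folio lock:

```c
#include <assert.h>
#include <stdbool.h>

static bool folio_locked;	/* stand-in for the folio lock bit */

static void folio_lock_stub(void)   { folio_locked = true; }
static void folio_unlock_stub(void) { folio_locked = false; }

/* Walk a "folio"'s buffers under the lock, collecting up to nlimit
 * dirty entries. Both the normal path and the early "limit reached"
 * path must leave with the lock dropped. */
static int collect_dirty(const bool *dirty, int nbufs, int nlimit)
{
	int ndirties = 0;

	folio_lock_stub();
	for (int i = 0; i < nbufs; i++) {
		if (!dirty[i])
			continue;
		ndirties++;
		if (ndirties >= nlimit) {
			folio_unlock_stub();	/* early exit: unlock first */
			return ndirties;
		}
	}
	folio_unlock_stub();	/* unlock only after the whole walk */
	return ndirties;
}
```

The original code unlocked before iterating the buffers, leaving the walk unprotected; the fix is to hold the lock across the loop and pair every exit with an unlock.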
+48 -64
fs/nilfs2/sufile.c
··· 133 133 /** 134 134 * nilfs_sufile_get_ncleansegs - return the number of clean segments 135 135 * @sufile: inode of segment usage file 136 + * 137 + * Return: Number of clean segments. 136 138 */ 137 139 unsigned long nilfs_sufile_get_ncleansegs(struct inode *sufile) 138 140 { ··· 157 155 * of successfully modified segments from the head is stored in the 158 156 * place @ndone points to. 159 157 * 160 - * Return Value: On success, zero is returned. On error, one of the 161 - * following negative error codes is returned. 162 - * 163 - * %-EIO - I/O error. 164 - * 165 - * %-ENOMEM - Insufficient amount of memory available. 166 - * 167 - * %-ENOENT - Given segment usage is in hole block (may be returned if 168 - * @create is zero) 169 - * 170 - * %-EINVAL - Invalid segment usage number 158 + * Return: 0 on success, or one of the following negative error codes on 159 + * failure: 160 + * * %-EINVAL - Invalid segment usage number 161 + * * %-EIO - I/O error (including metadata corruption). 162 + * * %-ENOENT - Given segment usage is in hole block (may be returned if 163 + * @create is zero) 164 + * * %-ENOMEM - Insufficient memory available. 171 165 */ 172 166 int nilfs_sufile_updatev(struct inode *sufile, __u64 *segnumv, size_t nsegs, 173 167 int create, size_t *ndone, ··· 270 272 * @start: minimum segment number of allocatable region (inclusive) 271 273 * @end: maximum segment number of allocatable region (inclusive) 272 274 * 273 - * Return Value: On success, 0 is returned. On error, one of the 274 - * following negative error codes is returned. 275 - * 276 - * %-ERANGE - invalid segment region 275 + * Return: 0 on success, or %-ERANGE if segment range is invalid. 277 276 */ 278 277 int nilfs_sufile_set_alloc_range(struct inode *sufile, __u64 start, __u64 end) 279 278 { ··· 295 300 * @sufile: inode of segment usage file 296 301 * @segnump: pointer to segment number 297 302 * 298 - * Description: nilfs_sufile_alloc() allocates a clean segment. 
303 + * Description: nilfs_sufile_alloc() allocates a clean segment, and stores 304 + * its segment number in the place pointed to by @segnump. 299 305 * 300 - * Return Value: On success, 0 is returned and the segment number of the 301 - * allocated segment is stored in the place pointed by @segnump. On error, one 302 - * of the following negative error codes is returned. 303 - * 304 - * %-EIO - I/O error. 305 - * 306 - * %-ENOMEM - Insufficient amount of memory available. 307 - * 308 - * %-ENOSPC - No clean segment left. 306 + * Return: 0 on success, or one of the following negative error codes on 307 + * failure: 308 + * * %-EIO - I/O error (including metadata corruption). 309 + * * %-ENOMEM - Insufficient memory available. 310 + * * %-ENOSPC - No clean segment left. 309 311 */ 310 312 int nilfs_sufile_alloc(struct inode *sufile, __u64 *segnump) 311 313 { ··· 502 510 * nilfs_sufile_mark_dirty - mark the buffer having a segment usage dirty 503 511 * @sufile: inode of segment usage file 504 512 * @segnum: segment number 513 + * 514 + * Return: 0 on success, or a negative error code on failure. 505 515 */ 506 516 int nilfs_sufile_mark_dirty(struct inode *sufile, __u64 segnum) 507 517 { ··· 563 569 * @segnum: segment number 564 570 * @nblocks: number of live blocks in the segment 565 571 * @modtime: modification time (option) 572 + * 573 + * Return: 0 on success, or a negative error code on failure. 566 574 */ 567 575 int nilfs_sufile_set_segment_usage(struct inode *sufile, __u64 segnum, 568 576 unsigned long nblocks, time64_t modtime) ··· 606 610 * @sufile: inode of segment usage file 607 611 * @sustat: pointer to a structure of segment usage statistics 608 612 * 609 - * Description: nilfs_sufile_get_stat() returns information about segment 610 - * usage. 613 + * Description: nilfs_sufile_get_stat() retrieves segment usage statistics 614 + * and stores them in the location pointed to by @sustat. 
611 615 * 612 - * Return Value: On success, 0 is returned, and segment usage information is 613 - * stored in the place pointed by @sustat. On error, one of the following 614 - * negative error codes is returned. 615 - * 616 - * %-EIO - I/O error. 617 - * 618 - * %-ENOMEM - Insufficient amount of memory available. 616 + * Return: 0 on success, or one of the following negative error codes on 617 + * failure: 618 + * * %-EIO - I/O error (including metadata corruption). 619 + * * %-ENOMEM - Insufficient memory available. 619 620 */ 620 621 int nilfs_sufile_get_stat(struct inode *sufile, struct nilfs_sustat *sustat) 621 622 { ··· 676 683 * @start: start segment number (inclusive) 677 684 * @end: end segment number (inclusive) 678 685 * 679 - * Return Value: On success, 0 is returned. On error, one of the 680 - * following negative error codes is returned. 681 - * 682 - * %-EIO - I/O error. 683 - * 684 - * %-ENOMEM - Insufficient amount of memory available. 685 - * 686 - * %-EINVAL - Invalid number of segments specified 687 - * 688 - * %-EBUSY - Dirty or active segments are present in the range 686 + * Return: 0 on success, or one of the following negative error codes on 687 + * failure: 688 + * * %-EBUSY - Dirty or active segments are present in the range. 689 + * * %-EINVAL - Invalid number of segments specified. 690 + * * %-EIO - I/O error (including metadata corruption). 691 + * * %-ENOMEM - Insufficient memory available. 689 692 */ 690 693 static int nilfs_sufile_truncate_range(struct inode *sufile, 691 694 __u64 start, __u64 end) ··· 776 787 * @sufile: inode of segment usage file 777 788 * @newnsegs: new number of segments 778 789 * 779 - * Return Value: On success, 0 is returned. On error, one of the 780 - * following negative error codes is returned. 781 - * 782 - * %-EIO - I/O error. 783 - * 784 - * %-ENOMEM - Insufficient amount of memory available. 
785 - * 786 - * %-ENOSPC - Enough free space is not left for shrinking 787 - * 788 - * %-EBUSY - Dirty or active segments exist in the region to be truncated 790 + * Return: 0 on success, or one of the following negative error codes on 791 + * failure: 792 + * * %-EBUSY - Dirty or active segments exist in the region to be truncated. 793 + * * %-EIO - I/O error (including metadata corruption). 794 + * * %-ENOMEM - Insufficient memory available. 795 + * * %-ENOSPC - Enough free space is not left for shrinking. 789 796 */ 790 797 int nilfs_sufile_resize(struct inode *sufile, __u64 newnsegs) 791 798 { ··· 850 865 * @nsi: size of suinfo array 851 866 * 852 867 * Return: Count of segment usage info items stored in the output buffer on 853 - * success, or the following negative error code on failure. 868 + * success, or one of the following negative error codes on failure: 854 869 * * %-EIO - I/O error (including metadata corruption). 855 870 * * %-ENOMEM - Insufficient memory available. 856 871 */ ··· 924 939 * segment usage accordingly. Only the fields indicated by the sup_flags 925 940 * are updated. 926 941 * 927 - * Return Value: On success, 0 is returned. On error, one of the 928 - * following negative error codes is returned. 929 - * 930 - * %-EIO - I/O error. 931 - * 932 - * %-ENOMEM - Insufficient amount of memory available. 933 - * 934 - * %-EINVAL - Invalid values in input (segment number, flags or nblocks) 942 + * Return: 0 on success, or one of the following negative error codes on 943 + * failure: 944 + * * %-EINVAL - Invalid values in input (segment number, flags or nblocks). 945 + * * %-EIO - I/O error (including metadata corruption). 946 + * * %-ENOMEM - Insufficient memory available. 935 947 */ 936 948 ssize_t nilfs_sufile_set_suinfo(struct inode *sufile, void *buf, 937 949 unsigned int supsz, size_t nsup) ··· 1055 1073 * and start+len is rounded down. For each clean segment blkdev_issue_discard 1056 1074 * function is invoked. 
1057 1075 * 1058 - * Return Value: On success, 0 is returned or negative error code, otherwise. 1076 + * Return: 0 on success, or a negative error code on failure. 1059 1077 */ 1060 1078 int nilfs_sufile_trim_fs(struct inode *sufile, struct fstrim_range *range) 1061 1079 { ··· 1201 1219 * @susize: size of a segment usage entry 1202 1220 * @raw_inode: on-disk sufile inode 1203 1221 * @inodep: buffer to store the inode 1222 + * 1223 + * Return: 0 on success, or a negative error code on failure. 1204 1224 */ 1205 1225 int nilfs_sufile_read(struct super_block *sb, size_t susize, 1206 1226 struct nilfs_inode *raw_inode, struct inode **inodep)
+12 -10
fs/nilfs2/sufile.h
··· 58 58 * nilfs_sufile_scrap - make a segment garbage 59 59 * @sufile: inode of segment usage file 60 60 * @segnum: segment number to be freed 61 + * 62 + * Return: 0 on success, or a negative error code on failure. 61 63 */ 62 64 static inline int nilfs_sufile_scrap(struct inode *sufile, __u64 segnum) 63 65 { ··· 70 68 * nilfs_sufile_free - free segment 71 69 * @sufile: inode of segment usage file 72 70 * @segnum: segment number to be freed 71 + * 72 + * Return: 0 on success, or a negative error code on failure. 73 73 */ 74 74 static inline int nilfs_sufile_free(struct inode *sufile, __u64 segnum) 75 75 { ··· 84 80 * @segnumv: array of segment numbers 85 81 * @nsegs: size of @segnumv array 86 82 * @ndone: place to store the number of freed segments 83 + * 84 + * Return: 0 on success, or a negative error code on failure. 87 85 */ 88 86 static inline int nilfs_sufile_freev(struct inode *sufile, __u64 *segnumv, 89 87 size_t nsegs, size_t *ndone) ··· 101 95 * @nsegs: size of @segnumv array 102 96 * @ndone: place to store the number of cancelled segments 103 97 * 104 - * Return Value: On success, 0 is returned. On error, a negative error codes 105 - * is returned. 98 + * Return: 0 on success, or a negative error code on failure. 106 99 */ 107 100 static inline int nilfs_sufile_cancel_freev(struct inode *sufile, 108 101 __u64 *segnumv, size_t nsegs, ··· 119 114 * Description: nilfs_sufile_set_error() marks the segment specified by 120 115 * @segnum as erroneous. The error segment will never be used again. 121 116 * 122 - * Return Value: On success, 0 is returned. On error, one of the following 123 - * negative error codes is returned. 124 - * 125 - * %-EIO - I/O error. 126 - * 127 - * %-ENOMEM - Insufficient amount of memory available. 128 - * 129 - * %-EINVAL - Invalid segment usage number. 117 + * Return: 0 on success, or one of the following negative error codes on 118 + * failure: 119 + * * %-EINVAL - Invalid segment usage number. 
120 + * * %-EIO - I/O error (including metadata corruption). 121 + * * %-ENOMEM - Insufficient memory available. 130 122 */ 131 123 static inline int nilfs_sufile_set_error(struct inode *sufile, __u64 segnum) 132 124 {
+9 -1
fs/nilfs2/super.c
··· 309 309 * This function restores state flags in the on-disk super block. 310 310 * This will set "clean" flag (i.e. NILFS_VALID_FS) unless the 311 311 * filesystem was not clean previously. 312 + * 313 + * Return: 0 on success, %-EIO if I/O error or superblock is corrupted. 312 314 */ 313 315 int nilfs_cleanup_super(struct super_block *sb) 314 316 { ··· 341 339 * nilfs_move_2nd_super - relocate secondary super block 342 340 * @sb: super block instance 343 341 * @sb2off: new offset of the secondary super block (in bytes) 342 + * 343 + * Return: 0 on success, or a negative error code on failure. 344 344 */ 345 345 static int nilfs_move_2nd_super(struct super_block *sb, loff_t sb2off) 346 346 { ··· 424 420 * nilfs_resize_fs - resize the filesystem 425 421 * @sb: super block instance 426 422 * @newsize: new size of the filesystem (in bytes) 423 + * 424 + * Return: 0 on success, or a negative error code on failure. 427 425 */ 428 426 int nilfs_resize_fs(struct super_block *sb, __u64 newsize) 429 427 { ··· 993 987 * nilfs_tree_is_busy() - try to shrink dentries of a checkpoint 994 988 * @root_dentry: root dentry of the tree to be shrunk 995 989 * 996 - * This function returns true if the tree was in-use. 990 + * Return: true if the tree was in-use, false otherwise. 997 991 */ 998 992 static bool nilfs_tree_is_busy(struct dentry *root_dentry) 999 993 { ··· 1039 1033 * 1040 1034 * This function is called exclusively by nilfs->ns_mount_mutex. 1041 1035 * So, the recovery process is protected from other simultaneous mounts. 1036 + * 1037 + * Return: 0 on success, or a negative error code on failure. 1042 1038 */ 1043 1039 static int 1044 1040 nilfs_fill_super(struct super_block *sb, struct fs_context *fc)
+19 -7
fs/nilfs2/the_nilfs.c
··· 49 49 * alloc_nilfs - allocate a nilfs object 50 50 * @sb: super block instance 51 51 * 52 - * Return Value: On success, pointer to the_nilfs is returned. 53 - * On error, NULL is returned. 52 + * Return: a pointer to the allocated nilfs object on success, or NULL on 53 + * failure. 54 54 */ 55 55 struct the_nilfs *alloc_nilfs(struct super_block *sb) 56 56 { ··· 165 165 * containing a super root from a given super block, and initializes 166 166 * relevant information on the nilfs object preparatory for log 167 167 * scanning and recovery. 168 + * 169 + * Return: 0 on success, or %-EINVAL if current segment number is out 170 + * of range. 168 171 */ 169 172 static int nilfs_store_log_cursor(struct the_nilfs *nilfs, 170 173 struct nilfs_super_block *sbp) ··· 203 200 * exponent information written in @sbp and stores it in @blocksize, 204 201 * or aborts with an error message if it's too large. 205 202 * 206 - * Return Value: On success, 0 is returned. If the block size is too 207 - * large, -EINVAL is returned. 203 + * Return: 0 on success, or %-EINVAL if the block size is too large. 208 204 */ 209 205 static int nilfs_get_blocksize(struct super_block *sb, 210 206 struct nilfs_super_block *sbp, int *blocksize) ··· 228 226 * load_nilfs() searches and load the latest super root, 229 227 * attaches the last segment, and does recovery if needed. 230 228 * The caller must call this exclusively for simultaneous mounts. 229 + * 230 + * Return: 0 on success, or one of the following negative error codes on 231 + * failure: 232 + * * %-EINVAL - No valid segment found. 233 + * * %-EIO - I/O error. 234 + * * %-ENOMEM - Insufficient memory available. 
235 + * * %-EROFS - Read only device or RO compat mode (if recovery is required) 231 236 */ 232 237 int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb) 233 238 { ··· 404 395 * nilfs_nrsvsegs - calculate the number of reserved segments 405 396 * @nilfs: nilfs object 406 397 * @nsegs: total number of segments 398 + * 399 + * Return: Number of reserved segments. 407 400 */ 408 401 unsigned long nilfs_nrsvsegs(struct the_nilfs *nilfs, unsigned long nsegs) 409 402 { ··· 417 406 /** 418 407 * nilfs_max_segment_count - calculate the maximum number of segments 419 408 * @nilfs: nilfs object 409 + * 410 + * Return: Maximum number of segments 420 411 */ 421 412 static u64 nilfs_max_segment_count(struct the_nilfs *nilfs) 422 413 { ··· 551 538 * area, or if the parameters themselves are not normal, it is 552 539 * determined to be invalid. 553 540 * 554 - * Return Value: true if invalid, false if valid. 541 + * Return: true if invalid, false if valid. 555 542 */ 556 543 static bool nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset) 557 544 { ··· 697 684 * reading the super block, getting disk layout information, initializing 698 685 * shared fields in the_nilfs). 699 686 * 700 - * Return Value: On success, 0 is returned. On error, a negative error 701 - * code is returned. 687 + * Return: 0 on success, or a negative error code on failure. 702 688 */ 703 689 int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb) 704 690 {
+84 -73
fs/ocfs2/alloc.c
··· 566 566 struct ocfs2_path *path, 567 567 struct ocfs2_extent_rec *insert_rec); 568 568 /* 569 - * Reset the actual path elements so that we can re-use the structure 569 + * Reset the actual path elements so that we can reuse the structure 570 570 * to build another path. Generally, this involves freeing the buffer 571 571 * heads. 572 572 */ ··· 1182 1182 1183 1183 /* 1184 1184 * If there is a gap before the root end and the real end 1185 - * of the righmost leaf block, we need to remove the gap 1185 + * of the rightmost leaf block, we need to remove the gap 1186 1186 * between new_cpos and root_end first so that the tree 1187 1187 * is consistent after we add a new branch(it will start 1188 1188 * from new_cpos). ··· 1238 1238 1239 1239 /* Note: new_eb_bhs[new_blocks - 1] is the guy which will be 1240 1240 * linked with the rest of the tree. 1241 - * conversly, new_eb_bhs[0] is the new bottommost leaf. 1241 + * conversely, new_eb_bhs[0] is the new bottommost leaf. 1242 1242 * 1243 1243 * when we leave the loop, new_last_eb_blk will point to the 1244 1244 * newest leaf, and next_blkno will point to the topmost extent ··· 3712 3712 * update split_index here. 3713 3713 * 3714 3714 * When the split_index is zero, we need to merge it to the 3715 - * prevoius extent block. It is more efficient and easier 3715 + * previous extent block. It is more efficient and easier 3716 3716 * if we do merge_right first and merge_left later. 3717 3717 */ 3718 3718 ret = ocfs2_merge_rec_right(path, handle, et, split_rec, ··· 4517 4517 } 4518 4518 4519 4519 /* 4520 - * This should only be called against the righmost leaf extent list. 4520 + * This should only be called against the rightmost leaf extent list. 4521 4521 * 4522 4522 * ocfs2_figure_appending_type() will figure out whether we'll have to 4523 4523 * insert at the tail of the rightmost leaf. 
··· 6154 6154 int status; 6155 6155 struct inode *inode = NULL; 6156 6156 struct buffer_head *bh = NULL; 6157 + struct ocfs2_dinode *di; 6158 + struct ocfs2_truncate_log *tl; 6159 + unsigned int tl_count; 6157 6160 6158 6161 inode = ocfs2_get_system_file_inode(osb, 6159 6162 TRUNCATE_LOG_SYSTEM_INODE, ··· 6170 6167 status = ocfs2_read_inode_block(inode, &bh); 6171 6168 if (status < 0) { 6172 6169 iput(inode); 6170 + mlog_errno(status); 6171 + goto bail; 6172 + } 6173 + 6174 + di = (struct ocfs2_dinode *)bh->b_data; 6175 + tl = &di->id2.i_dealloc; 6176 + tl_count = le16_to_cpu(tl->tl_count); 6177 + if (unlikely(tl_count > ocfs2_truncate_recs_per_inode(osb->sb) || 6178 + tl_count == 0)) { 6179 + status = -EFSCORRUPTED; 6180 + iput(inode); 6181 + brelse(bh); 6173 6182 mlog_errno(status); 6174 6183 goto bail; 6175 6184 } ··· 6823 6808 return 0; 6824 6809 } 6825 6810 6826 - void ocfs2_map_and_dirty_page(struct inode *inode, handle_t *handle, 6827 - unsigned int from, unsigned int to, 6828 - struct page *page, int zero, u64 *phys) 6811 + void ocfs2_map_and_dirty_folio(struct inode *inode, handle_t *handle, 6812 + size_t from, size_t to, struct folio *folio, int zero, 6813 + u64 *phys) 6829 6814 { 6830 6815 int ret, partial = 0; 6831 - loff_t start_byte = ((loff_t)page->index << PAGE_SHIFT) + from; 6816 + loff_t start_byte = folio_pos(folio) + from; 6832 6817 loff_t length = to - from; 6833 6818 6834 - ret = ocfs2_map_page_blocks(page, phys, inode, from, to, 0); 6819 + ret = ocfs2_map_folio_blocks(folio, phys, inode, from, to, 0); 6835 6820 if (ret) 6836 6821 mlog_errno(ret); 6837 6822 6838 6823 if (zero) 6839 - zero_user_segment(page, from, to); 6824 + folio_zero_segment(folio, from, to); 6840 6825 6841 6826 /* 6842 6827 * Need to set the buffers we zero'd into uptodate 6843 6828 * here if they aren't - ocfs2_map_page_blocks() 6844 6829 * might've skipped some 6845 6830 */ 6846 - ret = walk_page_buffers(handle, page_buffers(page), 6831 + ret = walk_page_buffers(handle, 
folio_buffers(folio), 6847 6832 from, to, &partial, 6848 6833 ocfs2_zero_func); 6849 6834 if (ret < 0) ··· 6856 6841 } 6857 6842 6858 6843 if (!partial) 6859 - SetPageUptodate(page); 6844 + folio_mark_uptodate(folio); 6860 6845 6861 - flush_dcache_page(page); 6846 + flush_dcache_folio(folio); 6862 6847 } 6863 6848 6864 - static void ocfs2_zero_cluster_pages(struct inode *inode, loff_t start, 6865 - loff_t end, struct page **pages, 6866 - int numpages, u64 phys, handle_t *handle) 6849 + static void ocfs2_zero_cluster_folios(struct inode *inode, loff_t start, 6850 + loff_t end, struct folio **folios, int numfolios, 6851 + u64 phys, handle_t *handle) 6867 6852 { 6868 6853 int i; 6869 - struct page *page; 6870 - unsigned int from, to = PAGE_SIZE; 6871 6854 struct super_block *sb = inode->i_sb; 6872 6855 6873 6856 BUG_ON(!ocfs2_sparse_alloc(OCFS2_SB(sb))); 6874 6857 6875 - if (numpages == 0) 6858 + if (numfolios == 0) 6876 6859 goto out; 6877 6860 6878 - to = PAGE_SIZE; 6879 - for(i = 0; i < numpages; i++) { 6880 - page = pages[i]; 6861 + for (i = 0; i < numfolios; i++) { 6862 + struct folio *folio = folios[i]; 6863 + size_t to = folio_size(folio); 6864 + size_t from = offset_in_folio(folio, start); 6881 6865 6882 - from = start & (PAGE_SIZE - 1); 6883 - if ((end >> PAGE_SHIFT) == page->index) 6884 - to = end & (PAGE_SIZE - 1); 6866 + if (to > end - folio_pos(folio)) 6867 + to = end - folio_pos(folio); 6885 6868 6886 - BUG_ON(from > PAGE_SIZE); 6887 - BUG_ON(to > PAGE_SIZE); 6869 + ocfs2_map_and_dirty_folio(inode, handle, from, to, folio, 1, 6870 + &phys); 6888 6871 6889 - ocfs2_map_and_dirty_page(inode, handle, from, to, page, 1, 6890 - &phys); 6891 - 6892 - start = (page->index + 1) << PAGE_SHIFT; 6872 + start = folio_next_index(folio) << PAGE_SHIFT; 6893 6873 } 6894 6874 out: 6895 - if (pages) 6896 - ocfs2_unlock_and_free_pages(pages, numpages); 6875 + if (folios) 6876 + ocfs2_unlock_and_free_folios(folios, numfolios); 6897 6877 } 6898 6878 6899 - int 
ocfs2_grab_pages(struct inode *inode, loff_t start, loff_t end, 6900 - struct page **pages, int *num) 6879 + static int ocfs2_grab_folios(struct inode *inode, loff_t start, loff_t end, 6880 + struct folio **folios, int *num) 6901 6881 { 6902 - int numpages, ret = 0; 6882 + int numfolios, ret = 0; 6903 6883 struct address_space *mapping = inode->i_mapping; 6904 6884 unsigned long index; 6905 6885 loff_t last_page_bytes; 6906 6886 6907 6887 BUG_ON(start > end); 6908 6888 6909 - numpages = 0; 6889 + numfolios = 0; 6910 6890 last_page_bytes = PAGE_ALIGN(end); 6911 6891 index = start >> PAGE_SHIFT; 6912 6892 do { 6913 - pages[numpages] = find_or_create_page(mapping, index, GFP_NOFS); 6914 - if (!pages[numpages]) { 6915 - ret = -ENOMEM; 6893 + folios[numfolios] = __filemap_get_folio(mapping, index, 6894 + FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS); 6895 + if (IS_ERR(folios[numfolios])) { 6896 + ret = PTR_ERR(folios[numfolios]); 6916 6897 mlog_errno(ret); 6917 6898 goto out; 6918 6899 } 6919 6900 6920 - numpages++; 6921 - index++; 6901 + index = folio_next_index(folios[numfolios]); 6902 + numfolios++; 6922 6903 } while (index < (last_page_bytes >> PAGE_SHIFT)); 6923 6904 6924 6905 out: 6925 6906 if (ret != 0) { 6926 - if (pages) 6927 - ocfs2_unlock_and_free_pages(pages, numpages); 6928 - numpages = 0; 6907 + if (folios) 6908 + ocfs2_unlock_and_free_folios(folios, numfolios); 6909 + numfolios = 0; 6929 6910 } 6930 6911 6931 - *num = numpages; 6912 + *num = numfolios; 6932 6913 6933 6914 return ret; 6934 6915 } 6935 6916 6936 - static int ocfs2_grab_eof_pages(struct inode *inode, loff_t start, loff_t end, 6937 - struct page **pages, int *num) 6917 + static int ocfs2_grab_eof_folios(struct inode *inode, loff_t start, loff_t end, 6918 + struct folio **folios, int *num) 6938 6919 { 6939 6920 struct super_block *sb = inode->i_sb; 6940 6921 6941 6922 BUG_ON(start >> OCFS2_SB(sb)->s_clustersize_bits != 6942 6923 (end - 1) >> OCFS2_SB(sb)->s_clustersize_bits); 6943 6924 6944 - 
return ocfs2_grab_pages(inode, start, end, pages, num); 6925 + return ocfs2_grab_folios(inode, start, end, folios, num); 6945 6926 } 6946 6927 6947 6928 /* ··· 6951 6940 int ocfs2_zero_range_for_truncate(struct inode *inode, handle_t *handle, 6952 6941 u64 range_start, u64 range_end) 6953 6942 { 6954 - int ret = 0, numpages; 6955 - struct page **pages = NULL; 6943 + int ret = 0, numfolios; 6944 + struct folio **folios = NULL; 6956 6945 u64 phys; 6957 6946 unsigned int ext_flags; 6958 6947 struct super_block *sb = inode->i_sb; ··· 6965 6954 return 0; 6966 6955 6967 6956 /* 6968 - * Avoid zeroing pages fully beyond current i_size. It is pointless as 6969 - * underlying blocks of those pages should be already zeroed out and 6957 + * Avoid zeroing folios fully beyond current i_size. It is pointless as 6958 + * underlying blocks of those folios should be already zeroed out and 6970 6959 * page writeback will skip them anyway. 6971 6960 */ 6972 6961 range_end = min_t(u64, range_end, i_size_read(inode)); 6973 6962 if (range_start >= range_end) 6974 6963 return 0; 6975 6964 6976 - pages = kcalloc(ocfs2_pages_per_cluster(sb), 6977 - sizeof(struct page *), GFP_NOFS); 6978 - if (pages == NULL) { 6965 + folios = kcalloc(ocfs2_pages_per_cluster(sb), 6966 + sizeof(struct folio *), GFP_NOFS); 6967 + if (folios == NULL) { 6979 6968 ret = -ENOMEM; 6980 6969 mlog_errno(ret); 6981 6970 goto out; ··· 6996 6985 if (phys == 0 || ext_flags & OCFS2_EXT_UNWRITTEN) 6997 6986 goto out; 6998 6987 6999 - ret = ocfs2_grab_eof_pages(inode, range_start, range_end, pages, 7000 - &numpages); 6988 + ret = ocfs2_grab_eof_folios(inode, range_start, range_end, folios, 6989 + &numfolios); 7001 6990 if (ret) { 7002 6991 mlog_errno(ret); 7003 6992 goto out; 7004 6993 } 7005 6994 7006 - ocfs2_zero_cluster_pages(inode, range_start, range_end, pages, 7007 - numpages, phys, handle); 6995 + ocfs2_zero_cluster_folios(inode, range_start, range_end, folios, 6996 + numfolios, phys, handle); 7008 6997 7009 6998 /* 
7010 - * Initiate writeout of the pages we zero'd here. We don't 6999 + * Initiate writeout of the folios we zero'd here. We don't 7011 7000 * wait on them - the truncate_inode_pages() call later will 7012 7001 * do that for us. 7013 7002 */ ··· 7017 7006 mlog_errno(ret); 7018 7007 7019 7008 out: 7020 - kfree(pages); 7009 + kfree(folios); 7021 7010 7022 7011 return ret; 7023 7012 } ··· 7070 7059 int ocfs2_convert_inline_data_to_extents(struct inode *inode, 7071 7060 struct buffer_head *di_bh) 7072 7061 { 7073 - int ret, has_data, num_pages = 0; 7062 + int ret, has_data, num_folios = 0; 7074 7063 int need_free = 0; 7075 7064 u32 bit_off, num; 7076 7065 handle_t *handle; ··· 7079 7068 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 7080 7069 struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data; 7081 7070 struct ocfs2_alloc_context *data_ac = NULL; 7082 - struct page *page = NULL; 7071 + struct folio *folio = NULL; 7083 7072 struct ocfs2_extent_tree et; 7084 7073 int did_quota = 0; 7085 7074 ··· 7130 7119 7131 7120 /* 7132 7121 * Save two copies, one for insert, and one that can 7133 - * be changed by ocfs2_map_and_dirty_page() below. 7122 + * be changed by ocfs2_map_and_dirty_folio() below. 7134 7123 */ 7135 7124 block = phys = ocfs2_clusters_to_blocks(inode->i_sb, bit_off); 7136 7125 7137 - ret = ocfs2_grab_eof_pages(inode, 0, page_end, &page, 7138 - &num_pages); 7126 + ret = ocfs2_grab_eof_folios(inode, 0, page_end, &folio, 7127 + &num_folios); 7139 7128 if (ret) { 7140 7129 mlog_errno(ret); 7141 7130 need_free = 1; ··· 7146 7135 * This should populate the 1st page for us and mark 7147 7136 * it up to date. 
7148 7137 */ 7149 - ret = ocfs2_read_inline_data(inode, page, di_bh); 7138 + ret = ocfs2_read_inline_data(inode, folio, di_bh); 7150 7139 if (ret) { 7151 7140 mlog_errno(ret); 7152 7141 need_free = 1; 7153 7142 goto out_unlock; 7154 7143 } 7155 7144 7156 - ocfs2_map_and_dirty_page(inode, handle, 0, page_end, page, 0, 7157 - &phys); 7145 + ocfs2_map_and_dirty_folio(inode, handle, 0, page_end, folio, 0, 7146 + &phys); 7158 7147 } 7159 7148 7160 7149 spin_lock(&oi->ip_lock); ··· 7185 7174 } 7186 7175 7187 7176 out_unlock: 7188 - if (page) 7189 - ocfs2_unlock_and_free_pages(&page, num_pages); 7177 + if (folio) 7178 + ocfs2_unlock_and_free_folios(&folio, num_folios); 7190 7179 7191 7180 out_commit: 7192 7181 if (ret < 0 && did_quota)
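The alloc.c hunks above convert `ocfs2_zero_cluster_pages()` to folios: the old per-page `from = start & (PAGE_SIZE - 1)` / `to = end & (PAGE_SIZE - 1)` masking becomes folio-relative arithmetic via `offset_in_folio()`, `folio_size()` and `folio_pos()`, which works for folios larger than one page. A minimal userspace sketch of that range computation (not kernel code; `model_folio` and the helper are hypothetical stand-ins for the real `struct folio` accessors, and `start` is assumed to lie inside the folio, as it does in the loop above):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the per-folio zero range computed in
 * ocfs2_zero_cluster_folios(): 'from' is the offset of 'start'
 * within the folio (offset_in_folio), and 'to' starts at the
 * folio size and is clamped so zeroing never runs past 'end'. */
struct model_folio {
	long long pos;	/* byte offset in the file, stands in for folio_pos() */
	size_t size;	/* stands in for folio_size(); multiple of PAGE_SIZE */
};

static void zero_range_in_folio(const struct model_folio *folio,
				long long start, long long end,
				size_t *from, size_t *to)
{
	*from = (size_t)(start - folio->pos);
	*to = folio->size;
	if (*to > (size_t)(end - folio->pos))
		*to = (size_t)(end - folio->pos);
}
```

For a 16K folio at file offset 0 with start=4096 and end=12288 this yields from=4096, to=12288; a subsequent folio at offset 16384 with end=20000 yields from=0, to=3616, matching the clamping the new kernel loop performs.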
fs/ocfs2/alloc.h (+3 -5)
··· 254 254 return !rec->e_leaf_clusters; 255 255 } 256 256 257 - int ocfs2_grab_pages(struct inode *inode, loff_t start, loff_t end, 258 - struct page **pages, int *num); 259 - void ocfs2_map_and_dirty_page(struct inode *inode, handle_t *handle, 260 - unsigned int from, unsigned int to, 261 - struct page *page, int zero, u64 *phys); 257 + void ocfs2_map_and_dirty_folio(struct inode *inode, handle_t *handle, 258 + size_t from, size_t to, struct folio *folio, int zero, 259 + u64 *phys); 262 260 /* 263 261 * Structures which describe a path through a btree, and functions to 264 262 * manipulate them.
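A related change in `ocfs2_grab_folios()` above: `find_or_create_page()` becomes `__filemap_get_folio(mapping, index, FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS)`, and the loop advances with `index = folio_next_index(...)` instead of `index++`, since a returned folio may span several pages. A small userspace model of that loop-advance change (a sketch only; `next_index` mirrors what `folio_next_index()` computes, assuming same-order folios throughout the range):

```c
#include <assert.h>

/* Model of the index advance in ocfs2_grab_folios(): with struct
 * page the index moved by 1 per iteration; with folios it must move
 * by the number of pages the folio spans (folio_next_index()). */
static unsigned long next_index(unsigned long folio_index,
				unsigned long folio_order)
{
	return folio_index + (1UL << folio_order);
}

/* Count how many folios of a given order cover [start_index, end_index). */
static int grab_range(unsigned long start_index, unsigned long end_index,
		      unsigned long folio_order)
{
	int numfolios = 0;
	unsigned long index = start_index;

	do {
		index = next_index(index, folio_order);	/* was: index++ */
		numfolios++;
	} while (index < end_index);
	return numfolios;
}
```

Covering four page indices takes four order-0 folios but a single order-2 folio, which is why the old `index++` would have re-requested pages already inside a large folio.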
fs/ocfs2/aops.c (+159 -178)
··· 215 215 return err; 216 216 } 217 217 218 - int ocfs2_read_inline_data(struct inode *inode, struct page *page, 218 + int ocfs2_read_inline_data(struct inode *inode, struct folio *folio, 219 219 struct buffer_head *di_bh) 220 220 { 221 - void *kaddr; 222 221 loff_t size; 223 222 struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data; 224 223 ··· 229 230 230 231 size = i_size_read(inode); 231 232 232 - if (size > PAGE_SIZE || 233 + if (size > folio_size(folio) || 233 234 size > ocfs2_max_inline_data_with_xattr(inode->i_sb, di)) { 234 235 ocfs2_error(inode->i_sb, 235 236 "Inode %llu has with inline data has bad size: %Lu\n", ··· 238 239 return -EROFS; 239 240 } 240 241 241 - kaddr = kmap_atomic(page); 242 - if (size) 243 - memcpy(kaddr, di->id2.i_data.id_data, size); 244 - /* Clear the remaining part of the page */ 245 - memset(kaddr + size, 0, PAGE_SIZE - size); 246 - flush_dcache_page(page); 247 - kunmap_atomic(kaddr); 248 - 249 - SetPageUptodate(page); 242 + folio_fill_tail(folio, 0, di->id2.i_data.id_data, size); 243 + folio_mark_uptodate(folio); 250 244 251 245 return 0; 252 246 } 253 247 254 - static int ocfs2_readpage_inline(struct inode *inode, struct page *page) 248 + static int ocfs2_readpage_inline(struct inode *inode, struct folio *folio) 255 249 { 256 250 int ret; 257 251 struct buffer_head *di_bh = NULL; 258 252 259 - BUG_ON(!PageLocked(page)); 253 + BUG_ON(!folio_test_locked(folio)); 260 254 BUG_ON(!(OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL)); 261 255 262 256 ret = ocfs2_read_inode_block(inode, &di_bh); ··· 258 266 goto out; 259 267 } 260 268 261 - ret = ocfs2_read_inline_data(inode, page, di_bh); 269 + ret = ocfs2_read_inline_data(inode, folio, di_bh); 262 270 out: 263 - unlock_page(page); 271 + folio_unlock(folio); 264 272 265 273 brelse(di_bh); 266 274 return ret; ··· 275 283 276 284 trace_ocfs2_readpage((unsigned long long)oi->ip_blkno, folio->index); 277 285 278 - ret = ocfs2_inode_lock_with_page(inode, NULL, 0, 
&folio->page); 286 + ret = ocfs2_inode_lock_with_folio(inode, NULL, 0, folio); 279 287 if (ret != 0) { 280 288 if (ret == AOP_TRUNCATED_PAGE) 281 289 unlock = 0; ··· 297 305 } 298 306 299 307 /* 300 - * i_size might have just been updated as we grabed the meta lock. We 308 + * i_size might have just been updated as we grabbed the meta lock. We 301 309 * might now be discovering a truncate that hit on another node. 302 310 * block_read_full_folio->get_block freaks out if it is asked to read 303 311 * beyond the end of a file, so we check here. Callers ··· 314 322 } 315 323 316 324 if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) 317 - ret = ocfs2_readpage_inline(inode, &folio->page); 325 + ret = ocfs2_readpage_inline(inode, folio); 318 326 else 319 327 ret = block_read_full_folio(folio, ocfs2_get_block); 320 328 unlock = 0; ··· 526 534 * 527 535 * from == to == 0 is code for "zero the entire cluster region" 528 536 */ 529 - static void ocfs2_clear_page_regions(struct page *page, 537 + static void ocfs2_clear_folio_regions(struct folio *folio, 530 538 struct ocfs2_super *osb, u32 cpos, 531 539 unsigned from, unsigned to) 532 540 { ··· 535 543 536 544 ocfs2_figure_cluster_boundaries(osb, cpos, &cluster_start, &cluster_end); 537 545 538 - kaddr = kmap_atomic(page); 546 + kaddr = kmap_local_folio(folio, 0); 539 547 540 548 if (from || to) { 541 549 if (from > cluster_start) ··· 546 554 memset(kaddr + cluster_start, 0, cluster_end - cluster_start); 547 555 } 548 556 549 - kunmap_atomic(kaddr); 557 + kunmap_local(kaddr); 550 558 } 551 559 552 560 /* 553 561 * Nonsparse file systems fully allocate before we get to the write 554 562 * code. This prevents ocfs2_write() from tagging the write as an 555 - * allocating one, which means ocfs2_map_page_blocks() might try to 563 + * allocating one, which means ocfs2_map_folio_blocks() might try to 556 564 * read-in the blocks at the tail of our file. Avoid reading them by 557 565 * testing i_size against each block offset. 
558 566 */ ··· 577 585 * 578 586 * This will also skip zeroing, which is handled externally. 579 587 */ 580 - int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno, 588 + int ocfs2_map_folio_blocks(struct folio *folio, u64 *p_blkno, 581 589 struct inode *inode, unsigned int from, 582 590 unsigned int to, int new) 583 591 { 584 - struct folio *folio = page_folio(page); 585 592 int ret = 0; 586 593 struct buffer_head *head, *bh, *wait[2], **wait_bh = wait; 587 594 unsigned int block_end, block_start; ··· 720 729 unsigned int w_large_pages; 721 730 722 731 /* 723 - * Pages involved in this write. 732 + * Folios involved in this write. 724 733 * 725 - * w_target_page is the page being written to by the user. 734 + * w_target_folio is the folio being written to by the user. 726 735 * 727 - * w_pages is an array of pages which always contains 728 - * w_target_page, and in the case of an allocating write with 736 + * w_folios is an array of folios which always contains 737 + * w_target_folio, and in the case of an allocating write with 729 738 * page_size < cluster size, it will contain zero'd and mapped 730 - * pages adjacent to w_target_page which need to be written 739 + * pages adjacent to w_target_folio which need to be written 731 740 * out in so that future reads from that region will get 732 741 * zero's. 733 742 */ 734 - unsigned int w_num_pages; 735 - struct page *w_pages[OCFS2_MAX_CTXT_PAGES]; 736 - struct page *w_target_page; 743 + unsigned int w_num_folios; 744 + struct folio *w_folios[OCFS2_MAX_CTXT_PAGES]; 745 + struct folio *w_target_folio; 737 746 738 747 /* 739 748 * w_target_locked is used for page_mkwrite path indicating no unlocking 740 - * against w_target_page in ocfs2_write_end_nolock. 749 + * against w_target_folio in ocfs2_write_end_nolock. 
741 750 */ 742 751 unsigned int w_target_locked:1; 743 752 ··· 762 771 unsigned int w_unwritten_count; 763 772 }; 764 773 765 - void ocfs2_unlock_and_free_pages(struct page **pages, int num_pages) 774 + void ocfs2_unlock_and_free_folios(struct folio **folios, int num_folios) 766 775 { 767 776 int i; 768 777 769 - for(i = 0; i < num_pages; i++) { 770 - if (pages[i]) { 771 - unlock_page(pages[i]); 772 - mark_page_accessed(pages[i]); 773 - put_page(pages[i]); 774 - } 778 + for(i = 0; i < num_folios; i++) { 779 + if (!folios[i]) 780 + continue; 781 + folio_unlock(folios[i]); 782 + folio_mark_accessed(folios[i]); 783 + folio_put(folios[i]); 775 784 } 776 785 } 777 786 778 - static void ocfs2_unlock_pages(struct ocfs2_write_ctxt *wc) 787 + static void ocfs2_unlock_folios(struct ocfs2_write_ctxt *wc) 779 788 { 780 789 int i; 781 790 782 791 /* 783 792 * w_target_locked is only set to true in the page_mkwrite() case. 784 793 * The intent is to allow us to lock the target page from write_begin() 785 - * to write_end(). The caller must hold a ref on w_target_page. 794 + * to write_end(). The caller must hold a ref on w_target_folio. 
786 795 */ 787 796 if (wc->w_target_locked) { 788 - BUG_ON(!wc->w_target_page); 789 - for (i = 0; i < wc->w_num_pages; i++) { 790 - if (wc->w_target_page == wc->w_pages[i]) { 791 - wc->w_pages[i] = NULL; 797 + BUG_ON(!wc->w_target_folio); 798 + for (i = 0; i < wc->w_num_folios; i++) { 799 + if (wc->w_target_folio == wc->w_folios[i]) { 800 + wc->w_folios[i] = NULL; 792 801 break; 793 802 } 794 803 } 795 - mark_page_accessed(wc->w_target_page); 796 - put_page(wc->w_target_page); 804 + folio_mark_accessed(wc->w_target_folio); 805 + folio_put(wc->w_target_folio); 797 806 } 798 - ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages); 807 + ocfs2_unlock_and_free_folios(wc->w_folios, wc->w_num_folios); 799 808 } 800 809 801 810 static void ocfs2_free_unwritten_list(struct inode *inode, ··· 817 826 struct ocfs2_write_ctxt *wc) 818 827 { 819 828 ocfs2_free_unwritten_list(inode, &wc->w_unwritten_list); 820 - ocfs2_unlock_pages(wc); 829 + ocfs2_unlock_folios(wc); 821 830 brelse(wc->w_di_bh); 822 831 kfree(wc); 823 832 } ··· 860 869 * and dirty so they'll be written out (in order to prevent uninitialised 861 870 * block data from leaking). And clear the new bit. 
862 871 */ 863 - static void ocfs2_zero_new_buffers(struct page *page, unsigned from, unsigned to) 872 + static void ocfs2_zero_new_buffers(struct folio *folio, size_t from, size_t to) 864 873 { 865 874 unsigned int block_start, block_end; 866 875 struct buffer_head *head, *bh; 867 876 868 - BUG_ON(!PageLocked(page)); 869 - if (!page_has_buffers(page)) 877 + BUG_ON(!folio_test_locked(folio)); 878 + head = folio_buffers(folio); 879 + if (!head) 870 880 return; 871 881 872 - bh = head = page_buffers(page); 882 + bh = head; 873 883 block_start = 0; 874 884 do { 875 885 block_end = block_start + bh->b_size; 876 886 877 887 if (buffer_new(bh)) { 878 888 if (block_end > from && block_start < to) { 879 - if (!PageUptodate(page)) { 889 + if (!folio_test_uptodate(folio)) { 880 890 unsigned start, end; 881 891 882 892 start = max(from, block_start); 883 893 end = min(to, block_end); 884 894 885 - zero_user_segment(page, start, end); 895 + folio_zero_segment(folio, start, end); 886 896 set_buffer_uptodate(bh); 887 897 } 888 898 ··· 908 916 int i; 909 917 unsigned from = user_pos & (PAGE_SIZE - 1), 910 918 to = user_pos + user_len; 911 - struct page *tmppage; 912 919 913 - if (wc->w_target_page) 914 - ocfs2_zero_new_buffers(wc->w_target_page, from, to); 920 + if (wc->w_target_folio) 921 + ocfs2_zero_new_buffers(wc->w_target_folio, from, to); 915 922 916 - for(i = 0; i < wc->w_num_pages; i++) { 917 - tmppage = wc->w_pages[i]; 923 + for (i = 0; i < wc->w_num_folios; i++) { 924 + struct folio *folio = wc->w_folios[i]; 918 925 919 - if (tmppage && page_has_buffers(tmppage)) { 926 + if (folio && folio_buffers(folio)) { 920 927 if (ocfs2_should_order_data(inode)) 921 928 ocfs2_jbd2_inode_add_write(wc->w_handle, inode, 922 929 user_pos, user_len); 923 930 924 - block_commit_write(tmppage, from, to); 931 + block_commit_write(&folio->page, from, to); 925 932 } 926 933 } 927 934 } 928 935 929 - static int ocfs2_prepare_page_for_write(struct inode *inode, u64 *p_blkno, 930 - struct 
ocfs2_write_ctxt *wc, 931 - struct page *page, u32 cpos, 932 - loff_t user_pos, unsigned user_len, 933 - int new) 936 + static int ocfs2_prepare_folio_for_write(struct inode *inode, u64 *p_blkno, 937 + struct ocfs2_write_ctxt *wc, struct folio *folio, u32 cpos, 938 + loff_t user_pos, unsigned user_len, int new) 934 939 { 935 940 int ret; 936 941 unsigned int map_from = 0, map_to = 0; ··· 940 951 /* treat the write as new if the a hole/lseek spanned across 941 952 * the page boundary. 942 953 */ 943 - new = new | ((i_size_read(inode) <= page_offset(page)) && 944 - (page_offset(page) <= user_pos)); 954 + new = new | ((i_size_read(inode) <= folio_pos(folio)) && 955 + (folio_pos(folio) <= user_pos)); 945 956 946 - if (page == wc->w_target_page) { 957 + if (folio == wc->w_target_folio) { 947 958 map_from = user_pos & (PAGE_SIZE - 1); 948 959 map_to = map_from + user_len; 949 960 950 961 if (new) 951 - ret = ocfs2_map_page_blocks(page, p_blkno, inode, 952 - cluster_start, cluster_end, 953 - new); 962 + ret = ocfs2_map_folio_blocks(folio, p_blkno, inode, 963 + cluster_start, cluster_end, new); 954 964 else 955 - ret = ocfs2_map_page_blocks(page, p_blkno, inode, 956 - map_from, map_to, new); 965 + ret = ocfs2_map_folio_blocks(folio, p_blkno, inode, 966 + map_from, map_to, new); 957 967 if (ret) { 958 968 mlog_errno(ret); 959 969 goto out; ··· 966 978 } 967 979 } else { 968 980 /* 969 - * If we haven't allocated the new page yet, we 981 + * If we haven't allocated the new folio yet, we 970 982 * shouldn't be writing it out without copying user 971 983 * data. This is likely a math error from the caller. 
972 984 */ ··· 975 987 map_from = cluster_start; 976 988 map_to = cluster_end; 977 989 978 - ret = ocfs2_map_page_blocks(page, p_blkno, inode, 979 - cluster_start, cluster_end, new); 990 + ret = ocfs2_map_folio_blocks(folio, p_blkno, inode, 991 + cluster_start, cluster_end, new); 980 992 if (ret) { 981 993 mlog_errno(ret); 982 994 goto out; ··· 984 996 } 985 997 986 998 /* 987 - * Parts of newly allocated pages need to be zero'd. 999 + * Parts of newly allocated folios need to be zero'd. 988 1000 * 989 1001 * Above, we have also rewritten 'to' and 'from' - as far as 990 1002 * the rest of the function is concerned, the entire cluster 991 - * range inside of a page needs to be written. 1003 + * range inside of a folio needs to be written. 992 1004 * 993 - * We can skip this if the page is up to date - it's already 1005 + * We can skip this if the folio is uptodate - it's already 994 1006 * been zero'd from being read in as a hole. 995 1007 */ 996 - if (new && !PageUptodate(page)) 997 - ocfs2_clear_page_regions(page, OCFS2_SB(inode->i_sb), 1008 + if (new && !folio_test_uptodate(folio)) 1009 + ocfs2_clear_folio_regions(folio, OCFS2_SB(inode->i_sb), 998 1010 cpos, user_data_from, user_data_to); 999 1011 1000 - flush_dcache_page(page); 1012 + flush_dcache_folio(folio); 1001 1013 1002 1014 out: 1003 1015 return ret; ··· 1006 1018 /* 1007 1019 * This function will only grab one clusters worth of pages. 1008 1020 */ 1009 - static int ocfs2_grab_pages_for_write(struct address_space *mapping, 1010 - struct ocfs2_write_ctxt *wc, 1011 - u32 cpos, loff_t user_pos, 1012 - unsigned user_len, int new, 1013 - struct page *mmap_page) 1021 + static int ocfs2_grab_folios_for_write(struct address_space *mapping, 1022 + struct ocfs2_write_ctxt *wc, u32 cpos, loff_t user_pos, 1023 + unsigned user_len, int new, struct folio *mmap_folio) 1014 1024 { 1015 1025 int ret = 0, i; 1016 1026 unsigned long start, target_index, end_index, index; ··· 1025 1039 * last page of the write. 
1026 1040 */ 1027 1041 if (new) { 1028 - wc->w_num_pages = ocfs2_pages_per_cluster(inode->i_sb); 1042 + wc->w_num_folios = ocfs2_pages_per_cluster(inode->i_sb); 1029 1043 start = ocfs2_align_clusters_to_page_index(inode->i_sb, cpos); 1030 1044 /* 1031 1045 * We need the index *past* the last page we could possibly ··· 1035 1049 last_byte = max(user_pos + user_len, i_size_read(inode)); 1036 1050 BUG_ON(last_byte < 1); 1037 1051 end_index = ((last_byte - 1) >> PAGE_SHIFT) + 1; 1038 - if ((start + wc->w_num_pages) > end_index) 1039 - wc->w_num_pages = end_index - start; 1052 + if ((start + wc->w_num_folios) > end_index) 1053 + wc->w_num_folios = end_index - start; 1040 1054 } else { 1041 - wc->w_num_pages = 1; 1055 + wc->w_num_folios = 1; 1042 1056 start = target_index; 1043 1057 } 1044 1058 end_index = (user_pos + user_len - 1) >> PAGE_SHIFT; 1045 1059 1046 - for(i = 0; i < wc->w_num_pages; i++) { 1060 + for(i = 0; i < wc->w_num_folios; i++) { 1047 1061 index = start + i; 1048 1062 1049 1063 if (index >= target_index && index <= end_index && ··· 1053 1067 * and wants us to directly use the page 1054 1068 * passed in. 1055 1069 */ 1056 - lock_page(mmap_page); 1070 + folio_lock(mmap_folio); 1057 1071 1058 1072 /* Exit and let the caller retry */ 1059 - if (mmap_page->mapping != mapping) { 1060 - WARN_ON(mmap_page->mapping); 1061 - unlock_page(mmap_page); 1073 + if (mmap_folio->mapping != mapping) { 1074 + WARN_ON(mmap_folio->mapping); 1075 + folio_unlock(mmap_folio); 1062 1076 ret = -EAGAIN; 1063 1077 goto out; 1064 1078 } 1065 1079 1066 - get_page(mmap_page); 1067 - wc->w_pages[i] = mmap_page; 1080 + folio_get(mmap_folio); 1081 + wc->w_folios[i] = mmap_folio; 1068 1082 wc->w_target_locked = true; 1069 1083 } else if (index >= target_index && index <= end_index && 1070 1084 wc->w_type == OCFS2_WRITE_DIRECT) { 1071 1085 /* Direct write has no mapping page. 
*/ 1072 - wc->w_pages[i] = NULL; 1086 + wc->w_folios[i] = NULL; 1073 1087 continue; 1074 1088 } else { 1075 - wc->w_pages[i] = find_or_create_page(mapping, index, 1076 - GFP_NOFS); 1077 - if (!wc->w_pages[i]) { 1078 - ret = -ENOMEM; 1089 + wc->w_folios[i] = __filemap_get_folio(mapping, index, 1090 + FGP_LOCK | FGP_ACCESSED | FGP_CREAT, 1091 + GFP_NOFS); 1092 + if (IS_ERR(wc->w_folios[i])) { 1093 + ret = PTR_ERR(wc->w_folios[i]); 1079 1094 mlog_errno(ret); 1080 1095 goto out; 1081 1096 } 1082 1097 } 1083 - wait_for_stable_page(wc->w_pages[i]); 1098 + folio_wait_stable(wc->w_folios[i]); 1084 1099 1085 1100 if (index == target_index) 1086 - wc->w_target_page = wc->w_pages[i]; 1101 + wc->w_target_folio = wc->w_folios[i]; 1087 1102 } 1088 1103 out: 1089 1104 if (ret) ··· 1168 1181 if (!should_zero) 1169 1182 p_blkno += (user_pos >> inode->i_sb->s_blocksize_bits) & (u64)(bpc - 1); 1170 1183 1171 - for(i = 0; i < wc->w_num_pages; i++) { 1184 + for (i = 0; i < wc->w_num_folios; i++) { 1172 1185 int tmpret; 1173 1186 1174 1187 /* This is the direct io target page. 
*/ 1175 - if (wc->w_pages[i] == NULL) { 1188 + if (wc->w_folios[i] == NULL) { 1176 1189 p_blkno += (1 << (PAGE_SHIFT - inode->i_sb->s_blocksize_bits)); 1177 1190 continue; 1178 1191 } 1179 1192 1180 - tmpret = ocfs2_prepare_page_for_write(inode, &p_blkno, wc, 1181 - wc->w_pages[i], cpos, 1182 - user_pos, user_len, 1183 - should_zero); 1193 + tmpret = ocfs2_prepare_folio_for_write(inode, &p_blkno, wc, 1194 + wc->w_folios[i], cpos, user_pos, user_len, 1195 + should_zero); 1184 1196 if (tmpret) { 1185 1197 mlog_errno(tmpret); 1186 1198 if (ret == 0) ··· 1458 1472 { 1459 1473 int ret; 1460 1474 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 1461 - struct page *page; 1475 + struct folio *folio; 1462 1476 handle_t *handle; 1463 1477 struct ocfs2_dinode *di = (struct ocfs2_dinode *)wc->w_di_bh->b_data; 1464 1478 ··· 1469 1483 goto out; 1470 1484 } 1471 1485 1472 - page = find_or_create_page(mapping, 0, GFP_NOFS); 1473 - if (!page) { 1486 + folio = __filemap_get_folio(mapping, 0, 1487 + FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS); 1488 + if (IS_ERR(folio)) { 1474 1489 ocfs2_commit_trans(osb, handle); 1475 - ret = -ENOMEM; 1490 + ret = PTR_ERR(folio); 1476 1491 mlog_errno(ret); 1477 1492 goto out; 1478 1493 } 1479 1494 /* 1480 - * If we don't set w_num_pages then this page won't get unlocked 1495 + * If we don't set w_num_folios then this folio won't get unlocked 1481 1496 * and freed on cleanup of the write context. 
1482 1497 */ 1483 - wc->w_pages[0] = wc->w_target_page = page; 1484 - wc->w_num_pages = 1; 1498 + wc->w_target_folio = folio; 1499 + wc->w_folios[0] = folio; 1500 + wc->w_num_folios = 1; 1485 1501 1486 1502 ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), wc->w_di_bh, 1487 1503 OCFS2_JOURNAL_ACCESS_WRITE); ··· 1497 1509 if (!(OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL)) 1498 1510 ocfs2_set_inode_data_inline(inode, di); 1499 1511 1500 - if (!PageUptodate(page)) { 1501 - ret = ocfs2_read_inline_data(inode, page, wc->w_di_bh); 1512 + if (!folio_test_uptodate(folio)) { 1513 + ret = ocfs2_read_inline_data(inode, folio, wc->w_di_bh); 1502 1514 if (ret) { 1503 1515 ocfs2_commit_trans(osb, handle); 1504 1516 ··· 1521 1533 } 1522 1534 1523 1535 static int ocfs2_try_to_write_inline_data(struct address_space *mapping, 1524 - struct inode *inode, loff_t pos, 1525 - unsigned len, struct page *mmap_page, 1526 - struct ocfs2_write_ctxt *wc) 1536 + struct inode *inode, loff_t pos, size_t len, 1537 + struct folio *mmap_folio, struct ocfs2_write_ctxt *wc) 1527 1538 { 1528 1539 int ret, written = 0; 1529 1540 loff_t end = pos + len; ··· 1537 1550 * Handle inodes which already have inline data 1st. 1538 1551 */ 1539 1552 if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) { 1540 - if (mmap_page == NULL && 1553 + if (mmap_folio == NULL && 1541 1554 ocfs2_size_fits_inline_data(wc->w_di_bh, end)) 1542 1555 goto do_inline_write; 1543 1556 ··· 1561 1574 * Check whether the write can fit. 
1562 1575 */ 1563 1576 di = (struct ocfs2_dinode *)wc->w_di_bh->b_data; 1564 - if (mmap_page || 1577 + if (mmap_folio || 1565 1578 end > ocfs2_max_inline_data_with_xattr(inode->i_sb, di)) 1566 1579 return 0; 1567 1580 ··· 1628 1641 } 1629 1642 1630 1643 int ocfs2_write_begin_nolock(struct address_space *mapping, 1631 - loff_t pos, unsigned len, ocfs2_write_type_t type, 1632 - struct folio **foliop, void **fsdata, 1633 - struct buffer_head *di_bh, struct page *mmap_page) 1644 + loff_t pos, unsigned len, ocfs2_write_type_t type, 1645 + struct folio **foliop, void **fsdata, 1646 + struct buffer_head *di_bh, struct folio *mmap_folio) 1634 1647 { 1635 1648 int ret, cluster_of_pages, credits = OCFS2_INODE_UPDATE_CREDITS; 1636 1649 unsigned int clusters_to_alloc, extents_to_split, clusters_need = 0; ··· 1653 1666 1654 1667 if (ocfs2_supports_inline_data(osb)) { 1655 1668 ret = ocfs2_try_to_write_inline_data(mapping, inode, pos, len, 1656 - mmap_page, wc); 1669 + mmap_folio, wc); 1657 1670 if (ret == 1) { 1658 1671 ret = 0; 1659 1672 goto success; ··· 1705 1718 (unsigned long long)OCFS2_I(inode)->ip_blkno, 1706 1719 (long long)i_size_read(inode), 1707 1720 le32_to_cpu(di->i_clusters), 1708 - pos, len, type, mmap_page, 1721 + pos, len, type, mmap_folio, 1709 1722 clusters_to_alloc, extents_to_split); 1710 1723 1711 1724 /* ··· 1776 1789 } 1777 1790 1778 1791 /* 1779 - * Fill our page array first. That way we've grabbed enough so 1792 + * Fill our folio array first. That way we've grabbed enough so 1780 1793 * that we can zero and flush if we error after adding the 1781 1794 * extent. 1782 1795 */ 1783 - ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos, len, 1784 - cluster_of_pages, mmap_page); 1796 + ret = ocfs2_grab_folios_for_write(mapping, wc, wc->w_cpos, pos, len, 1797 + cluster_of_pages, mmap_folio); 1785 1798 if (ret) { 1786 1799 /* 1787 - * ocfs2_grab_pages_for_write() returns -EAGAIN if it could not lock 1788 - * the target page. 
In this case, we exit with no error and no target 1789 - * page. This will trigger the caller, page_mkwrite(), to re-try 1790 - * the operation. 1800 + * ocfs2_grab_folios_for_write() returns -EAGAIN if it 1801 + * could not lock the target folio. In this case, we exit 1802 + * with no error and no target folio. This will trigger 1803 + * the caller, page_mkwrite(), to re-try the operation. 1791 1804 */ 1792 1805 if (type == OCFS2_WRITE_MMAP && ret == -EAGAIN) { 1793 - BUG_ON(wc->w_target_page); 1806 + BUG_ON(wc->w_target_folio); 1794 1807 ret = 0; 1795 1808 goto out_quota; 1796 1809 } ··· 1813 1826 1814 1827 success: 1815 1828 if (foliop) 1816 - *foliop = page_folio(wc->w_target_page); 1829 + *foliop = wc->w_target_folio; 1817 1830 *fsdata = wc; 1818 1831 return 0; 1819 1832 out_quota: ··· 1832 1845 * to VM code. 1833 1846 */ 1834 1847 if (wc->w_target_locked) 1835 - unlock_page(mmap_page); 1848 + folio_unlock(mmap_folio); 1836 1849 1837 1850 ocfs2_free_write_ctxt(inode, wc); 1838 1851 ··· 1911 1924 struct ocfs2_dinode *di, 1912 1925 struct ocfs2_write_ctxt *wc) 1913 1926 { 1914 - void *kaddr; 1915 - 1916 1927 if (unlikely(*copied < len)) { 1917 - if (!PageUptodate(wc->w_target_page)) { 1928 + if (!folio_test_uptodate(wc->w_target_folio)) { 1918 1929 *copied = 0; 1919 1930 return; 1920 1931 } 1921 1932 } 1922 1933 1923 - kaddr = kmap_atomic(wc->w_target_page); 1924 - memcpy(di->id2.i_data.id_data + pos, kaddr + pos, *copied); 1925 - kunmap_atomic(kaddr); 1934 + memcpy_from_folio(di->id2.i_data.id_data + pos, wc->w_target_folio, 1935 + pos, *copied); 1926 1936 1927 1937 trace_ocfs2_write_end_inline( 1928 1938 (unsigned long long)OCFS2_I(inode)->ip_blkno, ··· 1928 1944 le16_to_cpu(di->i_dyn_features)); 1929 1945 } 1930 1946 1931 - int ocfs2_write_end_nolock(struct address_space *mapping, 1932 - loff_t pos, unsigned len, unsigned copied, void *fsdata) 1947 + int ocfs2_write_end_nolock(struct address_space *mapping, loff_t pos, 1948 + unsigned len, unsigned copied, 
void *fsdata) 1933 1949 { 1934 1950 int i, ret; 1935 - unsigned from, to, start = pos & (PAGE_SIZE - 1); 1951 + size_t from, to, start = pos & (PAGE_SIZE - 1); 1936 1952 struct inode *inode = mapping->host; 1937 1953 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 1938 1954 struct ocfs2_write_ctxt *wc = fsdata; 1939 1955 struct ocfs2_dinode *di = (struct ocfs2_dinode *)wc->w_di_bh->b_data; 1940 1956 handle_t *handle = wc->w_handle; 1941 - struct page *tmppage; 1942 1957 1943 1958 BUG_ON(!list_empty(&wc->w_unwritten_list)); 1944 1959 ··· 1956 1973 goto out_write_size; 1957 1974 } 1958 1975 1959 - if (unlikely(copied < len) && wc->w_target_page) { 1976 + if (unlikely(copied < len) && wc->w_target_folio) { 1960 1977 loff_t new_isize; 1961 1978 1962 - if (!PageUptodate(wc->w_target_page)) 1979 + if (!folio_test_uptodate(wc->w_target_folio)) 1963 1980 copied = 0; 1964 1981 1965 1982 new_isize = max_t(loff_t, i_size_read(inode), pos + copied); 1966 - if (new_isize > page_offset(wc->w_target_page)) 1967 - ocfs2_zero_new_buffers(wc->w_target_page, start+copied, 1983 + if (new_isize > folio_pos(wc->w_target_folio)) 1984 + ocfs2_zero_new_buffers(wc->w_target_folio, start+copied, 1968 1985 start+len); 1969 1986 else { 1970 1987 /* 1971 - * When page is fully beyond new isize (data copy 1972 - * failed), do not bother zeroing the page. Invalidate 1988 + * When folio is fully beyond new isize (data copy 1989 + * failed), do not bother zeroing the folio. Invalidate 1973 1990 * it instead so that writeback does not get confused 1974 1991 * put page & buffer dirty bits into inconsistent 1975 1992 * state. 
1976 1993 */ 1977 - block_invalidate_folio(page_folio(wc->w_target_page), 1978 - 0, PAGE_SIZE); 1994 + block_invalidate_folio(wc->w_target_folio, 0, 1995 + folio_size(wc->w_target_folio)); 1979 1996 } 1980 1997 } 1981 - if (wc->w_target_page) 1982 - flush_dcache_page(wc->w_target_page); 1998 + if (wc->w_target_folio) 1999 + flush_dcache_folio(wc->w_target_folio); 1983 2000 1984 - for(i = 0; i < wc->w_num_pages; i++) { 1985 - tmppage = wc->w_pages[i]; 2001 + for (i = 0; i < wc->w_num_folios; i++) { 2002 + struct folio *folio = wc->w_folios[i]; 1986 2003 1987 - /* This is the direct io target page. */ 1988 - if (tmppage == NULL) 2004 + /* This is the direct io target folio */ 2005 + if (folio == NULL) 1989 2006 continue; 1990 2007 1991 - if (tmppage == wc->w_target_page) { 2008 + if (folio == wc->w_target_folio) { 1992 2009 from = wc->w_target_from; 1993 2010 to = wc->w_target_to; 1994 2011 1995 - BUG_ON(from > PAGE_SIZE || 1996 - to > PAGE_SIZE || 2012 + BUG_ON(from > folio_size(folio) || 2013 + to > folio_size(folio) || 1997 2014 to < from); 1998 2015 } else { 1999 2016 /* ··· 2002 2019 * to flush their entire range. 2003 2020 */ 2004 2021 from = 0; 2005 - to = PAGE_SIZE; 2022 + to = folio_size(folio); 2006 2023 } 2007 2024 2008 - if (page_has_buffers(tmppage)) { 2025 + if (folio_buffers(folio)) { 2009 2026 if (handle && ocfs2_should_order_data(inode)) { 2010 - loff_t start_byte = 2011 - ((loff_t)tmppage->index << PAGE_SHIFT) + 2012 - from; 2027 + loff_t start_byte = folio_pos(folio) + from; 2013 2028 loff_t length = to - from; 2014 2029 ocfs2_jbd2_inode_add_write(handle, inode, 2015 2030 start_byte, length); 2016 2031 } 2017 - block_commit_write(tmppage, from, to); 2032 + block_commit_write(&folio->page, from, to); 2018 2033 } 2019 2034 } 2020 2035 ··· 2041 2060 * this lock and will ask for the page lock when flushing the data. 2042 2061 * put it here to preserve the unlock order. 
2043 2062 */ 2044 - ocfs2_unlock_pages(wc); 2063 + ocfs2_unlock_folios(wc); 2045 2064 2046 2065 if (handle) 2047 2066 ocfs2_commit_trans(osb, handle);
+6 -11
fs/ocfs2/aops.h
··· 8 8 9 9 #include <linux/fs.h> 10 10 11 - handle_t *ocfs2_start_walk_page_trans(struct inode *inode, 12 - struct page *page, 13 - unsigned from, 14 - unsigned to); 15 - 16 - int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno, 11 + int ocfs2_map_folio_blocks(struct folio *folio, u64 *p_blkno, 17 12 struct inode *inode, unsigned int from, 18 13 unsigned int to, int new); 19 14 20 - void ocfs2_unlock_and_free_pages(struct page **pages, int num_pages); 15 + void ocfs2_unlock_and_free_folios(struct folio **folios, int num_folios); 21 16 22 17 int walk_page_buffers( handle_t *handle, 23 18 struct buffer_head *head, ··· 32 37 } ocfs2_write_type_t; 33 38 34 39 int ocfs2_write_begin_nolock(struct address_space *mapping, 35 - loff_t pos, unsigned len, ocfs2_write_type_t type, 36 - struct folio **foliop, void **fsdata, 37 - struct buffer_head *di_bh, struct page *mmap_page); 40 + loff_t pos, unsigned len, ocfs2_write_type_t type, 41 + struct folio **foliop, void **fsdata, 42 + struct buffer_head *di_bh, struct folio *mmap_folio); 38 43 39 - int ocfs2_read_inline_data(struct inode *inode, struct page *page, 44 + int ocfs2_read_inline_data(struct inode *inode, struct folio *folio, 40 45 struct buffer_head *di_bh); 41 46 int ocfs2_size_fits_inline_data(struct buffer_head *di_bh, u64 new_size); 42 47
+16 -12
fs/ocfs2/cluster/heartbeat.c
··· 3 3 * Copyright (C) 2004, 2005 Oracle. All rights reserved. 4 4 */ 5 5 6 + #include "linux/kstrtox.h" 6 7 #include <linux/kernel.h> 7 8 #include <linux/sched.h> 8 9 #include <linux/jiffies.h> ··· 1021 1020 if (list_empty(&slot->ds_live_item)) 1022 1021 goto out; 1023 1022 1024 - /* live nodes only go dead after enough consequtive missed 1023 + /* live nodes only go dead after enough consecutive missed 1025 1024 * samples.. reset the missed counter whenever we see 1026 1025 * activity */ 1027 1026 if (slot->ds_equal_samples >= o2hb_dead_threshold || gen_changed) { ··· 1536 1535 { 1537 1536 unsigned long bytes; 1538 1537 char *p = (char *)page; 1538 + int ret; 1539 1539 1540 - bytes = simple_strtoul(p, &p, 0); 1541 - if (!p || (*p && (*p != '\n'))) 1542 - return -EINVAL; 1540 + ret = kstrtoul(p, 0, &bytes); 1541 + if (ret) 1542 + return ret; 1543 1543 1544 1544 /* Heartbeat and fs min / max block sizes are the same. */ 1545 1545 if (bytes > 4096 || bytes < 512) ··· 1624 1622 struct o2hb_region *reg = to_o2hb_region(item); 1625 1623 unsigned long tmp; 1626 1624 char *p = (char *)page; 1625 + int ret; 1627 1626 1628 1627 if (reg->hr_bdev_file) 1629 1628 return -EINVAL; 1630 1629 1631 - tmp = simple_strtoul(p, &p, 0); 1632 - if (!p || (*p && (*p != '\n'))) 1633 - return -EINVAL; 1630 + ret = kstrtoul(p, 0, &tmp); 1631 + if (ret) 1632 + return ret; 1634 1633 1635 1634 if (tmp > O2NM_MAX_NODES || tmp == 0) 1636 1635 return -ERANGE; ··· 1779 1776 if (o2nm_this_node() == O2NM_MAX_NODES) 1780 1777 return -EINVAL; 1781 1778 1782 - fd = simple_strtol(p, &p, 0); 1783 - if (!p || (*p && (*p != '\n'))) 1779 + ret = kstrtol(p, 0, &fd); 1780 + if (ret < 0) 1784 1781 return -EINVAL; 1785 1782 1786 1783 if (fd < 0 || fd >= INT_MAX) ··· 2139 2136 { 2140 2137 unsigned long tmp; 2141 2138 char *p = (char *)page; 2139 + int ret; 2142 2140 2143 - tmp = simple_strtoul(p, &p, 10); 2144 - if (!p || (*p && (*p != '\n'))) 2145 - return -EINVAL; 2141 + ret = kstrtoul(p, 10, &tmp); 2142 + if (ret) 2143 + return ret; 2146 2144 2147 2145 /* this will validate ranges for us. */ 2148 2146 o2hb_dead_threshold_set((unsigned int) tmp);
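The heartbeat.c hunks replace open-coded `simple_strtoul()` parsing plus a manual trailing-character check with `kstrtoul()`, which fails on garbage and reports overflow. A userspace sketch of why the strict parser is preferable (`kstrtoul_demo` is an invented stand-in modeled loosely on the kernel helper's documented behavior: reject trailing junk, tolerate one trailing newline as sysfs/configfs writes often carry):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Userspace model of kernel kstrtoul(): parse an unsigned long in the
 * given base; -EINVAL on empty input or trailing garbage, -ERANGE on
 * overflow, but accept a single trailing '\n'. */
static int kstrtoul_demo(const char *s, unsigned int base, unsigned long *res)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(s, &end, base);
	if (errno == ERANGE)
		return -ERANGE;		/* overflow is an error, not a wrap */
	if (end == s)
		return -EINVAL;		/* nothing was parsed */
	if (*end == '\n')
		end++;			/* allow one trailing newline */
	if (*end != '\0')
		return -EINVAL;		/* trailing junk */
	*res = val;
	return 0;
}
```

With this shape, callers drop the fragile `if (!p || (*p && (*p != '\n')))` dance seen on the `-` side of the hunks and just test the return value.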
+1 -1
fs/ocfs2/cluster/masklog.h
··· 29 29 * just calling printk() so that this can eventually make its way through 30 30 * relayfs along with the debugging messages. Everything else gets KERN_DEBUG. 31 31 * The inline tests and macro dance give GCC the opportunity to quite cleverly 32 - * only emit the appropriage printk() when the caller passes in a constant 32 + * only emit the appropriate printk() when the caller passes in a constant 33 33 * mask, as is almost always the case. 34 34 * 35 35 * All this bitmask nonsense is managed from the files under
+3 -3
fs/ocfs2/cluster/quorum.c
··· 23 23 * race between when we see a node start heartbeating and when we connect 24 24 * to it. 25 25 * 26 - * So nodes that are in this transtion put a hold on the quorum decision 26 + * So nodes that are in this transition put a hold on the quorum decision 27 27 * with a counter. As they fall out of this transition they drop the count 28 28 * and if they're the last, they fire off the decision. 29 29 */ ··· 189 189 } 190 190 191 191 /* as a node comes up we delay the quorum decision until we know the fate of 192 - * the connection. the hold will be droped in conn_up or hb_down. it might be 192 + * the connection. the hold will be dropped in conn_up or hb_down. it might be 193 193 * perpetuated by con_err until hb_down. if we already have a conn, we might 194 194 * be dropping a hold that conn_up got. */ 195 195 void o2quo_hb_up(u8 node) ··· 256 256 } 257 257 258 258 /* This is analogous to hb_up. as a node's connection comes up we delay the 259 - * quorum decision until we see it heartbeating. the hold will be droped in 259 + * quorum decision until we see it heartbeating. the hold will be dropped in 260 260 * hb_up or hb_down. it might be perpetuated by con_err until hb_down. if 261 261 * it's already heartbeating we might be dropping a hold that conn_up got. 262 262 * */
+4 -4
fs/ocfs2/cluster/tcp.c
··· 5 5 * 6 6 * ---- 7 7 * 8 - * Callers for this were originally written against a very simple synchronus 8 + * Callers for this were originally written against a very simple synchronous 9 9 * API. This implementation reflects those simple callers. Some day I'm sure 10 10 * we'll need to move to a more robust posting/callback mechanism. 11 11 * 12 12 * Transmit calls pass in kernel virtual addresses and block copying this into 13 13 * the socket's tx buffers via a usual blocking sendmsg. They'll block waiting 14 - * for a failed socket to timeout. TX callers can also pass in a poniter to an 14 + * for a failed socket to timeout. TX callers can also pass in a pointer to an 15 15 * 'int' which gets filled with an errno off the wire in response to the 16 16 * message they send. 17 17 * ··· 101 101 * o2net_wq. teardown detaches the callbacks before destroying the workqueue. 102 102 * quorum work is queued as sock containers are shutdown.. stop_listening 103 103 * tears down all the node's sock containers, preventing future shutdowns 104 - * and queued quroum work, before canceling delayed quorum work and 104 + * and queued quorum work, before canceling delayed quorum work and 105 105 * destroying the work queue. 106 106 */ 107 107 static struct workqueue_struct *o2net_wq; ··· 1419 1419 return ret; 1420 1420 } 1421 1421 1422 - /* this work func is triggerd by data ready. it reads until it can read no 1422 + /* this work func is triggered by data ready. it reads until it can read no 1423 1423 * more. it interprets 0, eof, as fatal. if data_ready hits while we're doing 1424 1424 * our work the work struct will be marked and we'll be called again. */ 1425 1425 static void o2net_rx_until_empty(struct work_struct *work)
+1 -1
fs/ocfs2/dlm/dlmapi.h
··· 118 118 #define LKM_VALBLK 0x00000100 /* lock value block request */ 119 119 #define LKM_NOQUEUE 0x00000200 /* non blocking request */ 120 120 #define LKM_CONVERT 0x00000400 /* conversion request */ 121 - #define LKM_NODLCKWT 0x00000800 /* this lock wont deadlock (U) */ 121 + #define LKM_NODLCKWT 0x00000800 /* this lock won't deadlock (U) */ 122 122 #define LKM_UNLOCK 0x00001000 /* deallocate this lock */ 123 123 #define LKM_CANCEL 0x00002000 /* cancel conversion request */ 124 124 #define LKM_DEQALL 0x00004000 /* remove all locks held by proc (U) */
+5 -4
fs/ocfs2/dlm/dlmdebug.c
··· 14 14 #include <linux/spinlock.h> 15 15 #include <linux/debugfs.h> 16 16 #include <linux/export.h> 17 + #include <linux/string_choices.h> 17 18 18 19 #include "../cluster/heartbeat.h" 19 20 #include "../cluster/nodemanager.h" ··· 91 90 buf, res->owner, res->state); 92 91 printk(" last used: %lu, refcnt: %u, on purge list: %s\n", 93 92 res->last_used, kref_read(&res->refs), 94 - list_empty(&res->purge) ? "no" : "yes"); 93 + str_no_yes(list_empty(&res->purge))); 95 94 printk(" on dirty list: %s, on reco list: %s, " 96 95 "migrating pending: %s\n", 97 - list_empty(&res->dirty) ? "no" : "yes", 98 - list_empty(&res->recovering) ? "no" : "yes", 99 - res->migration_pending ? "yes" : "no"); 96 + str_no_yes(list_empty(&res->dirty)), 97 + str_no_yes(list_empty(&res->recovering)), 98 + str_yes_no(res->migration_pending)); 100 99 printk(" inflight locks: %d, asts reserved: %d\n", 101 100 res->inflight_locks, atomic_read(&res->asts_reserved)); 102 101 dlm_print_lockres_refmap(res);
+6 -6
fs/ocfs2/dlm/dlmmaster.c
··· 21 21 #include <linux/inet.h> 22 22 #include <linux/spinlock.h> 23 23 #include <linux/delay.h> 24 - 24 + #include <linux/string_choices.h> 25 25 26 26 #include "../cluster/heartbeat.h" 27 27 #include "../cluster/nodemanager.h" ··· 2859 2859 dlm_lockres_release_ast(dlm, res); 2860 2860 2861 2861 mlog(0, "about to wait on migration_wq, dirty=%s\n", 2862 - res->state & DLM_LOCK_RES_DIRTY ? "yes" : "no"); 2862 + str_yes_no(res->state & DLM_LOCK_RES_DIRTY)); 2863 2863 /* if the extra ref we just put was the final one, this 2864 2864 * will pass thru immediately. otherwise, we need to wait 2865 2865 * for the last ast to finish. */ ··· 2869 2869 msecs_to_jiffies(1000)); 2870 2870 if (ret < 0) { 2871 2871 mlog(0, "woken again: migrating? %s, dead? %s\n", 2872 - res->state & DLM_LOCK_RES_MIGRATING ? "yes":"no", 2873 - test_bit(target, dlm->domain_map) ? "no":"yes"); 2872 + str_yes_no(res->state & DLM_LOCK_RES_MIGRATING), 2873 + str_no_yes(test_bit(target, dlm->domain_map))); 2874 2874 } else { 2875 2875 mlog(0, "all is well: migrating? %s, dead? %s\n", 2876 - res->state & DLM_LOCK_RES_MIGRATING ? "yes":"no", 2877 - test_bit(target, dlm->domain_map) ? "no":"yes"); 2876 + str_yes_no(res->state & DLM_LOCK_RES_MIGRATING), 2877 + str_no_yes(test_bit(target, dlm->domain_map))); 2878 2878 } 2879 2879 if (!dlm_migration_can_proceed(dlm, res, target)) { 2880 2880 mlog(0, "trying again...\n");
+6 -7
fs/ocfs2/dlm/dlmrecovery.c
··· 22 22 #include <linux/timer.h> 23 23 #include <linux/kthread.h> 24 24 #include <linux/delay.h> 25 - 25 + #include <linux/string_choices.h> 26 26 27 27 #include "../cluster/heartbeat.h" 28 28 #include "../cluster/nodemanager.h" ··· 207 207 * 1) all recovery threads cluster wide will work on recovering 208 208 * ONE node at a time 209 209 * 2) negotiate who will take over all the locks for the dead node. 210 - * thats right... ALL the locks. 210 + * that's right... ALL the locks. 211 211 * 3) once a new master is chosen, everyone scans all locks 212 212 * and moves aside those mastered by the dead guy 213 213 * 4) each of these locks should be locked until recovery is done ··· 581 581 msecs_to_jiffies(1000)); 582 582 mlog(0, "waited 1 sec for %u, " 583 583 "dead? %s\n", ndata->node_num, 584 - dlm_is_node_dead(dlm, ndata->node_num) ? 585 - "yes" : "no"); 584 + str_yes_no(dlm_is_node_dead(dlm, ndata->node_num))); 586 585 } else { 587 586 /* -ENOMEM on the other node */ 588 587 mlog(0, "%s: node %u returned " ··· 676 677 spin_unlock(&dlm_reco_state_lock); 677 678 678 679 mlog(0, "pass #%d, all_nodes_done?: %s\n", ++pass, 679 - all_nodes_done?"yes":"no"); 680 + str_yes_no(all_nodes_done)); 680 681 if (all_nodes_done) { 681 682 int ret; 682 683 ··· 1468 1469 * The first one is handled at the end of this function. The 1469 1470 * other two are handled in the worker thread after locks have 1470 1471 * been attached. Yes, we don't wait for purge time to match 1471 - * kref_init. The lockres will still have atleast one ref 1472 + * kref_init. The lockres will still have at least one ref 1472 1473 * added because it is in the hash __dlm_insert_lockres() */ 1473 1474 extra_refs++; 1474 1475 ··· 1734 1735 spin_unlock(&res->spinlock); 1735 1736 } 1736 1737 } else { 1737 - /* put.. incase we are not the master */ 1738 + /* put.. in case we are not the master */ 1738 1739 spin_unlock(&res->spinlock); 1739 1740 dlm_lockres_put(res); 1740 1741 }
+16 -7
fs/ocfs2/dlmfs/dlmfs.c
··· 20 20 21 21 #include <linux/module.h> 22 22 #include <linux/fs.h> 23 + #include <linux/fs_context.h> 23 24 #include <linux/pagemap.h> 24 25 #include <linux/types.h> 25 26 #include <linux/slab.h> ··· 507 506 return status; 508 507 } 509 508 510 - static int dlmfs_fill_super(struct super_block * sb, 511 - void * data, 512 - int silent) 509 + static int dlmfs_fill_super(struct super_block *sb, struct fs_context *fc) 513 510 { 514 511 sb->s_maxbytes = MAX_LFS_FILESIZE; 515 512 sb->s_blocksize = PAGE_SIZE; ··· 555 556 .setattr = dlmfs_file_setattr, 556 557 }; 557 558 558 - static struct dentry *dlmfs_mount(struct file_system_type *fs_type, 559 - int flags, const char *dev_name, void *data) 559 + static int dlmfs_get_tree(struct fs_context *fc) 560 560 { 561 - return mount_nodev(fs_type, flags, data, dlmfs_fill_super); 561 + return get_tree_nodev(fc, dlmfs_fill_super); 562 + } 563 + 564 + static const struct fs_context_operations dlmfs_context_ops = { 565 + .get_tree = dlmfs_get_tree, 566 + }; 567 + 568 + static int dlmfs_init_fs_context(struct fs_context *fc) 569 + { 570 + fc->ops = &dlmfs_context_ops; 571 + 572 + return 0; 562 573 } 563 574 564 575 static struct file_system_type dlmfs_fs_type = { 565 576 .owner = THIS_MODULE, 566 577 .name = "ocfs2_dlmfs", 567 - .mount = dlmfs_mount, 568 578 .kill_sb = kill_litter_super, 579 + .init_fs_context = dlmfs_init_fs_context, 569 580 }; 570 581 MODULE_ALIAS_FS("ocfs2_dlmfs"); 571 582
+15 -16
fs/ocfs2/dlmglue.c
··· 19 19 #include <linux/delay.h> 20 20 #include <linux/quotaops.h> 21 21 #include <linux/sched/signal.h> 22 + #include <linux/string_choices.h> 22 23 23 24 #define MLOG_MASK_PREFIX ML_DLM_GLUE 24 25 #include <cluster/masklog.h> ··· 795 794 796 795 /* 797 796 * Keep a list of processes who have interest in a lockres. 798 - * Note: this is now only uesed for check recursive cluster locking. 797 + * Note: this is now only used for check recursive cluster locking. 799 798 */ 800 799 static inline void ocfs2_add_holder(struct ocfs2_lock_res *lockres, 801 800 struct ocfs2_lock_holder *oh) ··· 2530 2529 2531 2530 /* 2532 2531 * This is working around a lock inversion between tasks acquiring DLM 2533 - * locks while holding a page lock and the downconvert thread which 2534 - * blocks dlm lock acquiry while acquiring page locks. 2532 + * locks while holding a folio lock and the downconvert thread which 2533 + * blocks dlm lock acquiry while acquiring folio locks. 2535 2534 * 2536 - * ** These _with_page variantes are only intended to be called from aop 2537 - * methods that hold page locks and return a very specific *positive* error 2535 + * ** These _with_folio variants are only intended to be called from aop 2536 + * methods that hold folio locks and return a very specific *positive* error 2538 2537 * code that aop methods pass up to the VFS -- test for errors with != 0. ** 2539 2538 * 2540 2539 * The DLM is called such that it returns -EAGAIN if it would have 2541 2540 * blocked waiting for the downconvert thread. In that case we unlock 2542 - * our page so the downconvert thread can make progress. Once we've 2541 + * our folio so the downconvert thread can make progress. Once we've 2543 2542 * done this we have to return AOP_TRUNCATED_PAGE so the aop method 2544 2543 * that called us can bubble that back up into the VFS who will then 2545 2544 * immediately retry the aop call. 
2546 2545 */ 2547 - int ocfs2_inode_lock_with_page(struct inode *inode, 2548 - struct buffer_head **ret_bh, 2549 - int ex, 2550 - struct page *page) 2546 + int ocfs2_inode_lock_with_folio(struct inode *inode, 2547 + struct buffer_head **ret_bh, int ex, struct folio *folio) 2551 2548 { 2552 2549 int ret; 2553 2550 2554 2551 ret = ocfs2_inode_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK); 2555 2552 if (ret == -EAGAIN) { 2556 - unlock_page(page); 2553 + folio_unlock(folio); 2557 2554 /* 2558 2555 * If we can't get inode lock immediately, we should not return 2559 2556 * directly here, since this will lead to a softlockup problem. ··· 2629 2630 } 2630 2631 2631 2632 /* 2632 - * This _tracker variantes are introduced to deal with the recursive cluster 2633 + * This _tracker variants are introduced to deal with the recursive cluster 2633 2634 * locking issue. The idea is to keep track of a lock holder on the stack of 2634 2635 * the current process. If there's a lock holder on the stack, we know the 2635 2636 * task context is already protected by cluster locking. Currently, they're ··· 2734 2735 struct ocfs2_lock_res *lockres; 2735 2736 2736 2737 lockres = &OCFS2_I(inode)->ip_inode_lockres; 2737 - /* had_lock means that the currect process already takes the cluster 2738 + /* had_lock means that the current process already takes the cluster 2738 2739 * lock previously. 2739 2740 * If had_lock is 1, we have nothing to do here. 2740 2741 * If had_lock is 0, we will release the lock. ··· 3801 3802 * set when the ast is received for an upconvert just before the 3802 3803 * OCFS2_LOCK_BUSY flag is cleared. Now if the fs received a bast 3803 3804 * on the heels of the ast, we want to delay the downconvert just 3804 - * enough to allow the up requestor to do its task. Because this 3805 + * enough to allow the up requester to do its task. 
Because this 3805 3806 * lock is in the blocked queue, the lock will be downconverted 3806 - * as soon as the requestor is done with the lock. 3807 + * as soon as the requester is done with the lock. 3807 3808 */ 3808 3809 if (lockres->l_flags & OCFS2_LOCK_UPCONVERT_FINISHING) 3809 3810 goto leave_requeue; ··· 4338 4339 ocfs2_schedule_blocked_lock(osb, lockres); 4339 4340 4340 4341 mlog(ML_BASTS, "lockres %s, requeue = %s.\n", lockres->l_name, 4341 - ctl.requeue ? "yes" : "no"); 4342 + str_yes_no(ctl.requeue)); 4342 4343 spin_unlock_irqrestore(&lockres->l_lock, flags); 4343 4344 4344 4345 if (ctl.unblock_action != UNBLOCK_CONTINUE
+2 -4
fs/ocfs2/dlmglue.h
··· 137 137 int ex, 138 138 int arg_flags, 139 139 int subclass); 140 - int ocfs2_inode_lock_with_page(struct inode *inode, 141 - struct buffer_head **ret_bh, 142 - int ex, 143 - struct page *page); 140 + int ocfs2_inode_lock_with_folio(struct inode *inode, 141 + struct buffer_head **ret_bh, int ex, struct folio *folio); 144 142 /* Variants without special locking class or flags */ 145 143 #define ocfs2_inode_lock_full(i, r, e, f)\ 146 144 ocfs2_inode_lock_full_nested(i, r, e, f, OI_LS_NORMAL)
+10
fs/ocfs2/extent_map.c
··· 435 435 } 436 436 } 437 437 438 + if (le16_to_cpu(el->l_next_free_rec) > le16_to_cpu(el->l_count)) { 439 + ocfs2_error(inode->i_sb, 440 + "Inode %lu has an invalid extent (next_free_rec %u, count %u)\n", 441 + inode->i_ino, 442 + le16_to_cpu(el->l_next_free_rec), 443 + le16_to_cpu(el->l_count)); 444 + ret = -EROFS; 445 + goto out; 446 + } 447 + 438 448 i = ocfs2_search_extent_list(el, v_cluster); 439 449 if (i == -1) { 440 450 /*
+4 -4
fs/ocfs2/file.c
··· 782 782 goto out_commit_trans; 783 783 } 784 784 785 - /* Get the offsets within the page that we want to zero */ 786 - zero_from = abs_from & (PAGE_SIZE - 1); 787 - zero_to = abs_to & (PAGE_SIZE - 1); 785 + /* Get the offsets within the folio that we want to zero */ 786 + zero_from = offset_in_folio(folio, abs_from); 787 + zero_to = offset_in_folio(folio, abs_to); 788 788 if (!zero_to) 789 - zero_to = PAGE_SIZE; 789 + zero_to = folio_size(folio); 790 790 791 791 trace_ocfs2_write_zero_page( 792 792 (unsigned long long)OCFS2_I(inode)->ip_blkno,
+26 -2
fs/ocfs2/inode.c
··· 200 200 return inode; 201 201 } 202 202 203 + static int ocfs2_dinode_has_extents(struct ocfs2_dinode *di) 204 + { 205 + /* inodes flagged with other stuff in id2 */ 206 + if (di->i_flags & (OCFS2_SUPER_BLOCK_FL | OCFS2_LOCAL_ALLOC_FL | 207 + OCFS2_CHAIN_FL | OCFS2_DEALLOC_FL)) 208 + return 0; 209 + /* i_flags doesn't indicate when id2 is a fast symlink */ 210 + if (S_ISLNK(di->i_mode) && di->i_size && di->i_clusters == 0) 211 + return 0; 212 + if (di->i_dyn_features & OCFS2_INLINE_DATA_FL) 213 + return 0; 214 + 215 + return 1; 216 + } 203 217 204 218 /* 205 219 * here's how inodes get read from disk: ··· 1136 1122 1137 1123 dquot_drop(inode); 1138 1124 1139 - /* To preven remote deletes we hold open lock before, now it 1125 + /* To prevent remote deletes we hold open lock before, now it 1140 1126 * is time to unlock PR and EX open locks. */ 1141 1127 ocfs2_open_unlock(inode); 1142 1128 ··· 1451 1437 * Call ocfs2_validate_meta_ecc() first since it has ecc repair 1452 1438 * function, but we should not return error immediately when ecc 1453 1439 * validation fails, because the reason is quite likely the invalid 1454 - * inode number inputed. 1440 + * inode number inputted. 1455 1441 */ 1456 1442 rc = ocfs2_validate_meta_ecc(sb, bh->b_data, &di->i_check); 1457 1443 if (rc) { ··· 1559 1545 "Filecheck: reset dinode #%llu: fs_generation to %u\n", 1560 1546 (unsigned long long)bh->b_blocknr, 1561 1547 le32_to_cpu(di->i_fs_generation)); 1548 + } 1549 + 1550 + if (ocfs2_dinode_has_extents(di) && 1551 + le16_to_cpu(di->id2.i_list.l_next_free_rec) > le16_to_cpu(di->id2.i_list.l_count)) { 1552 + di->id2.i_list.l_next_free_rec = di->id2.i_list.l_count; 1553 + changed = 1; 1554 + mlog(ML_ERROR, 1555 + "Filecheck: reset dinode #%llu: l_next_free_rec to %u\n", 1556 + (unsigned long long)bh->b_blocknr, 1557 + le16_to_cpu(di->id2.i_list.l_next_free_rec)); 1562 1558 } 1563 1559 1564 1560 if (changed || ocfs2_validate_meta_ecc(sb, bh->b_data, &di->i_check)) {
+1 -1
fs/ocfs2/ioctl.c
··· 796 796 /* 797 797 * OCFS2_IOC_INFO handles an array of requests passed from userspace. 798 798 * 799 - * ocfs2_info_handle() recevies a large info aggregation, grab and 799 + * ocfs2_info_handle() receives a large info aggregation, grab and 800 800 * validate the request count from header, then break it into small 801 801 * pieces, later specific handlers can handle them one by one. 802 802 *
+1 -1
fs/ocfs2/journal.c
··· 1956 1956 1957 1957 /* 1958 1958 * Scan timer should get fired every ORPHAN_SCAN_SCHEDULE_TIMEOUT. Add some 1959 - * randomness to the timeout to minimize multple nodes firing the timer at the 1959 + * randomness to the timeout to minimize multiple nodes firing the timer at the 1960 1960 * same time. 1961 1961 */ 1962 1962 static inline unsigned long ocfs2_orphan_scan_timeout(void)
+9 -9
fs/ocfs2/mmap.c
··· 44 44 } 45 45 46 46 static vm_fault_t __ocfs2_page_mkwrite(struct file *file, 47 - struct buffer_head *di_bh, struct page *page) 47 + struct buffer_head *di_bh, struct folio *folio) 48 48 { 49 49 int err; 50 50 vm_fault_t ret = VM_FAULT_NOPAGE; 51 51 struct inode *inode = file_inode(file); 52 52 struct address_space *mapping = inode->i_mapping; 53 - loff_t pos = page_offset(page); 53 + loff_t pos = folio_pos(folio); 54 54 unsigned int len = PAGE_SIZE; 55 55 pgoff_t last_index; 56 56 struct folio *locked_folio = NULL; ··· 72 72 * 73 73 * Let VM retry with these cases. 74 74 */ 75 - if ((page->mapping != inode->i_mapping) || 76 - (!PageUptodate(page)) || 77 - (page_offset(page) >= size)) 75 + if ((folio->mapping != inode->i_mapping) || 76 + !folio_test_uptodate(folio) || 77 + (pos >= size)) 78 78 goto out; 79 79 80 80 /* ··· 87 87 * worry about ocfs2_write_begin() skipping some buffer reads 88 88 * because the "write" would invalidate their data. 89 89 */ 90 - if (page->index == last_index) 90 + if (folio->index == last_index) 91 91 len = ((size - 1) & ~PAGE_MASK) + 1; 92 92 93 93 err = ocfs2_write_begin_nolock(mapping, pos, len, OCFS2_WRITE_MMAP, 94 - &locked_folio, &fsdata, di_bh, page); 94 + &locked_folio, &fsdata, di_bh, folio); 95 95 if (err) { 96 96 if (err != -ENOSPC) 97 97 mlog_errno(err); ··· 112 112 113 113 static vm_fault_t ocfs2_page_mkwrite(struct vm_fault *vmf) 114 114 { 115 - struct page *page = vmf->page; 115 + struct folio *folio = page_folio(vmf->page); 116 116 struct inode *inode = file_inode(vmf->vma->vm_file); 117 117 struct buffer_head *di_bh = NULL; 118 118 sigset_t oldset; ··· 141 141 */ 142 142 down_write(&OCFS2_I(inode)->ip_alloc_sem); 143 143 144 - ret = __ocfs2_page_mkwrite(vmf->vma->vm_file, di_bh, page); 144 + ret = __ocfs2_page_mkwrite(vmf->vma->vm_file, di_bh, folio); 145 145 146 146 up_write(&OCFS2_I(inode)->ip_alloc_sem); 147 147
+4 -4
fs/ocfs2/move_extents.c
··· 492 492 bg = (struct ocfs2_group_desc *)gd_bh->b_data; 493 493 494 494 /* 495 - * moving goal is not allowd to start with a group desc blok(#0 blk) 495 + * moving goal is not allowed to start with a group desc blok(#0 blk) 496 496 * let's compromise to the latter cluster. 497 497 */ 498 498 if (range->me_goal == le64_to_cpu(bg->bg_blkno)) ··· 658 658 659 659 /* 660 660 * probe the victim cluster group to find a proper 661 - * region to fit wanted movement, it even will perfrom 661 + * region to fit wanted movement, it even will perform 662 662 * a best-effort attempt by compromising to a threshold 663 663 * around the goal. 664 664 */ ··· 920 920 } 921 921 922 922 /* 923 - * rememer ip_xattr_sem also needs to be held if necessary 923 + * remember ip_xattr_sem also needs to be held if necessary 924 924 */ 925 925 down_write(&OCFS2_I(inode)->ip_alloc_sem); 926 926 ··· 1022 1022 context->range = &range; 1023 1023 1024 1024 /* 1025 - * ok, the default theshold for the defragmentation 1025 + * ok, the default threshold for the defragmentation 1026 1026 * is 1M, since our maximum clustersize was 1M also. 1027 1027 * any thought? 1028 1028 */
+3 -4
fs/ocfs2/namei.c
··· 508 508 struct inode *inode, 509 509 dev_t dev, 510 510 struct buffer_head **new_fe_bh, 511 - struct buffer_head *parent_fe_bh, 512 511 handle_t *handle, 513 512 struct ocfs2_alloc_context *inode_ac, 514 513 u64 fe_blkno, u64 suballoc_loc, u16 suballoc_bit) ··· 640 641 } 641 642 642 643 return __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh, 643 - parent_fe_bh, handle, inode_ac, 644 - fe_blkno, suballoc_loc, suballoc_bit); 644 + handle, inode_ac, fe_blkno, 645 + suballoc_loc, suballoc_bit); 645 646 } 646 647 647 648 static int ocfs2_mkdir(struct mnt_idmap *idmap, ··· 2575 2576 clear_nlink(inode); 2576 2577 /* do the real work now. */ 2577 2578 status = __ocfs2_mknod_locked(dir, inode, 2578 - 0, &new_di_bh, parent_di_bh, handle, 2579 + 0, &new_di_bh, handle, 2579 2580 inode_ac, di_blkno, suballoc_loc, 2580 2581 suballoc_bit); 2581 2582 if (status < 0) {
+4 -4
fs/ocfs2/ocfs2_fs.h
··· 132 132 * well as the name of the cluster being joined. 133 133 * mount.ocfs2 must pass in a matching stack name. 134 134 * 135 - * If not set, the classic stack will be used. This is compatbile with 135 + * If not set, the classic stack will be used. This is compatible with 136 136 * all older versions. 137 137 */ 138 138 #define OCFS2_FEATURE_INCOMPAT_USERSPACE_STACK 0x0080 ··· 143 143 /* Support for extended attributes */ 144 144 #define OCFS2_FEATURE_INCOMPAT_XATTR 0x0200 145 145 146 - /* Support for indexed directores */ 146 + /* Support for indexed directories */ 147 147 #define OCFS2_FEATURE_INCOMPAT_INDEXED_DIRS 0x0400 148 148 149 149 /* Metadata checksum and error correction */ ··· 156 156 #define OCFS2_FEATURE_INCOMPAT_DISCONTIG_BG 0x2000 157 157 158 158 /* 159 - * Incompat bit to indicate useable clusterinfo with stackflags for all 159 + * Incompat bit to indicate usable clusterinfo with stackflags for all 160 160 * cluster stacks (userspace adnd o2cb). If this bit is set, 161 161 * INCOMPAT_USERSPACE_STACK becomes superfluous and thus should not be set. 162 162 */ ··· 1083 1083 struct ocfs2_xattr_header xb_header; /* xattr header if this 1084 1084 block contains xattr */ 1085 1085 struct ocfs2_xattr_tree_root xb_root;/* xattr tree root if this 1086 - block cotains xattr 1086 + block contains xattr 1087 1087 tree. */ 1088 1088 } xb_attrs; 1089 1089 };
+1 -1
fs/ocfs2/ocfs2_ioctl.h
··· 215 215 movement less likely 216 216 to fail, may make fs 217 217 even more fragmented */ 218 - #define OCFS2_MOVE_EXT_FL_COMPLETE (0x00000004) /* Move or defragmenation 218 + #define OCFS2_MOVE_EXT_FL_COMPLETE (0x00000004) /* Move or defragmentation 219 219 completely gets done. 220 220 */ 221 221
+1 -1
fs/ocfs2/ocfs2_lockid.h
···
 	[OCFS2_LOCK_TYPE_DATA] = "Data",
 	[OCFS2_LOCK_TYPE_SUPER] = "Super",
 	[OCFS2_LOCK_TYPE_RENAME] = "Rename",
-	/* Need to differntiate from [R]ename.. serializing writes is the
+	/* Need to differentiate from [R]ename.. serializing writes is the
 	 * important job it does, anyway. */
 	[OCFS2_LOCK_TYPE_RW] = "Write/Read",
 	[OCFS2_LOCK_TYPE_DENTRY] = "Dentry",
+10 -10
fs/ocfs2/ocfs2_trace.h
···
 );

 TRACE_EVENT(ocfs2_fill_super,
-	TP_PROTO(void *sb, void *data, int silent),
-	TP_ARGS(sb, data, silent),
+	TP_PROTO(void *sb, void *fc, int silent),
+	TP_ARGS(sb, fc, silent),
 	TP_STRUCT__entry(
 		__field(void *, sb)
-		__field(void *, data)
+		__field(void *, fc)
 		__field(int, silent)
 	),
 	TP_fast_assign(
 		__entry->sb = sb;
-		__entry->data = data;
+		__entry->fc = fc;
 		__entry->silent = silent;
 	),
 	TP_printk("%p %p %d", __entry->sb,
-		  __entry->data, __entry->silent)
+		  __entry->fc, __entry->silent)
 );

 TRACE_EVENT(ocfs2_parse_options,
-	TP_PROTO(int is_remount, char *options),
-	TP_ARGS(is_remount, options),
+	TP_PROTO(int is_remount, const char *option),
+	TP_ARGS(is_remount, option),
 	TP_STRUCT__entry(
 		__field(int, is_remount)
-		__string(options, options)
+		__string(option, option)
 	),
 	TP_fast_assign(
 		__entry->is_remount = is_remount;
-		__assign_str(options);
+		__assign_str(option);
 	),
-	TP_printk("%d %s", __entry->is_remount, __get_str(options))
+	TP_printk("%d %s", __entry->is_remount, __get_str(option))
 );

 DEFINE_OCFS2_POINTER_EVENT(ocfs2_put_super);
+5
fs/ocfs2/quota_global.c
···
 	handle = ocfs2_start_trans(osb,
 		ocfs2_calc_qdel_credits(dquot->dq_sb, dquot->dq_id.type));
 	if (IS_ERR(handle)) {
+		/*
+		 * Mark dquot as inactive to avoid endless cycle in
+		 * quota_release_workfn().
+		 */
+		clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 		status = PTR_ERR(handle);
 		mlog_errno(status);
 		goto out_ilock;
+19 -22
fs/ocfs2/refcounttree.c
···
 	 *
 	 * If we will insert a new one, this is easy and only happens
 	 * during adding refcounted flag to the extent, so we don't
-	 * have a chance of spliting. We just need one record.
+	 * have a chance of splitting. We just need one record.
 	 *
 	 * If the refcount rec already exists, that would be a little
 	 * complicated. we may have to:
···
 /*
  * Calculate out the start and number of virtual clusters we need to CoW.
  *
- * cpos is vitual start cluster position we want to do CoW in a
+ * cpos is virtual start cluster position we want to do CoW in a
  * file and write_len is the cluster length.
  * max_cpos is the place where we want to stop CoW intentionally.
  *
- * Normal we will start CoW from the beginning of extent record cotaining cpos.
+ * Normal we will start CoW from the beginning of extent record containing cpos.
  * We try to break up extents on boundaries of MAX_CONTIG_BYTES so that we
  * get good I/O from the resulting extent tree.
  */
···
 	int ret = 0, partial;
 	struct super_block *sb = inode->i_sb;
 	u64 new_block = ocfs2_clusters_to_blocks(sb, new_cluster);
-	struct page *page;
 	pgoff_t page_index;
 	unsigned int from, to;
 	loff_t offset, end, map_end;
···
 		end = i_size_read(inode);

 	while (offset < end) {
+		struct folio *folio;
 		page_index = offset >> PAGE_SHIFT;
 		map_end = ((loff_t)page_index + 1) << PAGE_SHIFT;
 		if (map_end > end)
···
 		to = map_end & (PAGE_SIZE - 1);

retry:
-		page = find_or_create_page(mapping, page_index, GFP_NOFS);
-		if (!page) {
-			ret = -ENOMEM;
+		folio = __filemap_get_folio(mapping, page_index,
+				FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS);
+		if (IS_ERR(folio)) {
+			ret = PTR_ERR(folio);
 			mlog_errno(ret);
 			break;
 		}
···
 		 * page, so write it back.
 		 */
 		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
-			if (PageDirty(page)) {
-				unlock_page(page);
-				put_page(page);
+			if (folio_test_dirty(folio)) {
+				folio_unlock(folio);
+				folio_put(folio);

 				ret = filemap_write_and_wait_range(mapping,
 						offset, map_end - 1);
···
 			}
 		}

-		if (!PageUptodate(page)) {
-			struct folio *folio = page_folio(page);
-
+		if (!folio_test_uptodate(folio)) {
 			ret = block_read_full_folio(folio, ocfs2_get_block);
 			if (ret) {
 				mlog_errno(ret);
···
 			folio_lock(folio);
 		}

-		if (page_has_buffers(page)) {
-			ret = walk_page_buffers(handle, page_buffers(page),
+		if (folio_buffers(folio)) {
+			ret = walk_page_buffers(handle, folio_buffers(folio),
 						from, to, &partial,
 						ocfs2_clear_cow_buffer);
 			if (ret) {
···
 			}
 		}

-		ocfs2_map_and_dirty_page(inode,
-					 handle, from, to,
-					 page, 0, &new_block);
-		mark_page_accessed(page);
+		ocfs2_map_and_dirty_folio(inode, handle, from, to,
+					  folio, 0, &new_block);
+		folio_mark_accessed(folio);
unlock:
-		unlock_page(page);
-		put_page(page);
-		page = NULL;
+		folio_unlock(folio);
+		folio_put(folio);
 		offset = map_end;
 		if (ret)
 			break;
+2 -2
fs/ocfs2/reservations.h
···

 #define	OCFS2_RESV_FLAG_INUSE	0x01	/* Set when r_node is part of a btree */
 #define	OCFS2_RESV_FLAG_TMP	0x02	/* Temporary reservation, will be
-					 * destroyed immedately after use */
+					 * destroyed immediately after use */
 #define	OCFS2_RESV_FLAG_DIR	0x04	/* Reservation is for an unindexed
 					 * directory btree */

···
 /**
  * ocfs2_resmap_claimed_bits() - Tell the reservation code that bits were used.
  * @resmap: reservations bitmap
- * @resv: optional reservation to recalulate based on new bitmap
+ * @resv: optional reservation to recalculate based on new bitmap
  * @cstart: start of allocation in clusters
  * @clen: end of allocation in clusters.
  *
+1 -1
fs/ocfs2/stack_o2cb.c
···
 }

 /*
- * o2dlm aways has a "valid" LVB. If the dlm loses track of the LVB
+ * o2dlm always has a "valid" LVB. If the dlm loses track of the LVB
  * contents, it will zero out the LVB. Thus the caller can always trust
  * the contents.
  */
+1 -1
fs/ocfs2/stackglue.h
···
 			    struct file_lock *fl);

 	/*
-	 * This is an optoinal debugging hook. If provided, the
+	 * This is an optional debugging hook. If provided, the
 	 * stack can dump debugging information about this lock.
 	 */
 	void (*dump_lksb)(struct ocfs2_dlm_lksb *lksb);
+266 -321
fs/ocfs2/super.c
···
 #include <linux/blkdev.h>
 #include <linux/socket.h>
 #include <linux/inet.h>
-#include <linux/parser.h>
+#include <linux/fs_parser.h>
+#include <linux/fs_context.h>
 #include <linux/crc32.h>
 #include <linux/debugfs.h>
-#include <linux/mount.h>
 #include <linux/seq_file.h>
 #include <linux/quotaops.h>
 #include <linux/signal.h>
···
 	unsigned int	resv_level;
 	int		dir_resv_level;
 	char	cluster_stack[OCFS2_STACK_LABEL_LEN + 1];
+	bool	user_stack;
 };

-static int ocfs2_parse_options(struct super_block *sb, char *options,
-			       struct mount_options *mopt,
-			       int is_remount);
+static int ocfs2_parse_param(struct fs_context *fc, struct fs_parameter *param);
 static int ocfs2_check_set_options(struct super_block *sb,
 				   struct mount_options *options);
 static int ocfs2_show_options(struct seq_file *s, struct dentry *root);
 static void ocfs2_put_super(struct super_block *sb);
 static int ocfs2_mount_volume(struct super_block *sb);
-static int ocfs2_remount(struct super_block *sb, int *flags, char *data);
 static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err);
 static int ocfs2_initialize_mem_caches(void);
 static void ocfs2_free_mem_caches(void);
···
 	.evict_inode	= ocfs2_evict_inode,
 	.sync_fs	= ocfs2_sync_fs,
 	.put_super	= ocfs2_put_super,
-	.remount_fs	= ocfs2_remount,
 	.show_options	= ocfs2_show_options,
 	.quota_read	= ocfs2_quota_read,
 	.quota_write	= ocfs2_quota_write,
···

 enum {
 	Opt_barrier,
-	Opt_err_panic,
-	Opt_err_ro,
+	Opt_errors,
 	Opt_intr,
-	Opt_nointr,
-	Opt_hb_none,
-	Opt_hb_local,
-	Opt_hb_global,
-	Opt_data_ordered,
-	Opt_data_writeback,
+	Opt_heartbeat,
+	Opt_data,
 	Opt_atime_quantum,
 	Opt_slot,
 	Opt_commit,
···
 	Opt_localflocks,
 	Opt_stack,
 	Opt_user_xattr,
-	Opt_nouser_xattr,
 	Opt_inode64,
 	Opt_acl,
-	Opt_noacl,
 	Opt_usrquota,
 	Opt_grpquota,
-	Opt_coherency_buffered,
-	Opt_coherency_full,
+	Opt_coherency,
 	Opt_resv_level,
 	Opt_dir_resv_level,
 	Opt_journal_async_commit,
-	Opt_err_cont,
-	Opt_err,
 };

-static const match_table_t tokens = {
-	{Opt_barrier, "barrier=%u"},
-	{Opt_err_panic, "errors=panic"},
-	{Opt_err_ro, "errors=remount-ro"},
-	{Opt_intr, "intr"},
-	{Opt_nointr, "nointr"},
-	{Opt_hb_none, OCFS2_HB_NONE},
-	{Opt_hb_local, OCFS2_HB_LOCAL},
-	{Opt_hb_global, OCFS2_HB_GLOBAL},
-	{Opt_data_ordered, "data=ordered"},
-	{Opt_data_writeback, "data=writeback"},
-	{Opt_atime_quantum, "atime_quantum=%u"},
-	{Opt_slot, "preferred_slot=%u"},
-	{Opt_commit, "commit=%u"},
-	{Opt_localalloc, "localalloc=%d"},
-	{Opt_localflocks, "localflocks"},
-	{Opt_stack, "cluster_stack=%s"},
-	{Opt_user_xattr, "user_xattr"},
-	{Opt_nouser_xattr, "nouser_xattr"},
-	{Opt_inode64, "inode64"},
-	{Opt_acl, "acl"},
-	{Opt_noacl, "noacl"},
-	{Opt_usrquota, "usrquota"},
-	{Opt_grpquota, "grpquota"},
-	{Opt_coherency_buffered, "coherency=buffered"},
-	{Opt_coherency_full, "coherency=full"},
-	{Opt_resv_level, "resv_level=%u"},
-	{Opt_dir_resv_level, "dir_resv_level=%u"},
-	{Opt_journal_async_commit, "journal_async_commit"},
-	{Opt_err_cont, "errors=continue"},
-	{Opt_err, NULL}
+static const struct constant_table ocfs2_param_errors[] = {
+	{"panic",	OCFS2_MOUNT_ERRORS_PANIC},
+	{"remount-ro",	OCFS2_MOUNT_ERRORS_ROFS},
+	{"continue",	OCFS2_MOUNT_ERRORS_CONT},
+	{}
+};
+
+static const struct constant_table ocfs2_param_heartbeat[] = {
+	{"local",	OCFS2_MOUNT_HB_LOCAL},
+	{"none",	OCFS2_MOUNT_HB_NONE},
+	{"global",	OCFS2_MOUNT_HB_GLOBAL},
+	{}
+};
+
+static const struct constant_table ocfs2_param_data[] = {
+	{"writeback",	OCFS2_MOUNT_DATA_WRITEBACK},
+	{"ordered",	0},
+	{}
+};
+
+static const struct constant_table ocfs2_param_coherency[] = {
+	{"buffered",	OCFS2_MOUNT_COHERENCY_BUFFERED},
+	{"full",	0},
+	{}
+};
+
+static const struct fs_parameter_spec ocfs2_param_spec[] = {
+	fsparam_u32	("barrier",		Opt_barrier),
+	fsparam_enum	("errors",		Opt_errors, ocfs2_param_errors),
+	fsparam_flag_no	("intr",		Opt_intr),
+	fsparam_enum	("heartbeat",		Opt_heartbeat, ocfs2_param_heartbeat),
+	fsparam_enum	("data",		Opt_data, ocfs2_param_data),
+	fsparam_u32	("atime_quantum",	Opt_atime_quantum),
+	fsparam_u32	("preferred_slot",	Opt_slot),
+	fsparam_u32	("commit",		Opt_commit),
+	fsparam_s32	("localalloc",		Opt_localalloc),
+	fsparam_flag	("localflocks",		Opt_localflocks),
+	fsparam_string	("cluster_stack",	Opt_stack),
+	fsparam_flag_no	("user_xattr",		Opt_user_xattr),
+	fsparam_flag	("inode64",		Opt_inode64),
+	fsparam_flag_no	("acl",			Opt_acl),
+	fsparam_flag	("usrquota",		Opt_usrquota),
+	fsparam_flag	("grpquota",		Opt_grpquota),
+	fsparam_enum	("coherency",		Opt_coherency, ocfs2_param_coherency),
+	fsparam_u32	("resv_level",		Opt_resv_level),
+	fsparam_u32	("dir_resv_level",	Opt_dir_resv_level),
+	fsparam_flag	("journal_async_commit", Opt_journal_async_commit),
+	{}
 };

 #ifdef CONFIG_DEBUG_FS
···
 	return (((unsigned long long)bytes) << bitshift) - trim;
 }

-static int ocfs2_remount(struct super_block *sb, int *flags, char *data)
+static int ocfs2_reconfigure(struct fs_context *fc)
 {
 	int incompat_features;
 	int ret = 0;
-	struct mount_options parsed_options;
+	struct mount_options *parsed_options = fc->fs_private;
+	struct super_block *sb = fc->root->d_sb;
 	struct ocfs2_super *osb = OCFS2_SB(sb);
 	u32 tmp;

 	sync_filesystem(sb);

-	if (!ocfs2_parse_options(sb, data, &parsed_options, 1) ||
-	    !ocfs2_check_set_options(sb, &parsed_options)) {
+	if (!ocfs2_check_set_options(sb, parsed_options)) {
 		ret = -EINVAL;
 		goto out;
 	}

 	tmp = OCFS2_MOUNT_HB_LOCAL | OCFS2_MOUNT_HB_GLOBAL |
 		OCFS2_MOUNT_HB_NONE;
-	if ((osb->s_mount_opt & tmp) != (parsed_options.mount_opt & tmp)) {
+	if ((osb->s_mount_opt & tmp) != (parsed_options->mount_opt & tmp)) {
 		ret = -EINVAL;
 		mlog(ML_ERROR, "Cannot change heartbeat mode on remount\n");
 		goto out;
 	}

 	if ((osb->s_mount_opt & OCFS2_MOUNT_DATA_WRITEBACK) !=
-	    (parsed_options.mount_opt & OCFS2_MOUNT_DATA_WRITEBACK)) {
+	    (parsed_options->mount_opt & OCFS2_MOUNT_DATA_WRITEBACK)) {
 		ret = -EINVAL;
 		mlog(ML_ERROR, "Cannot change data mode on remount\n");
 		goto out;
···
 	/* Probably don't want this on remount; it might
 	 * mess with other nodes */
 	if (!(osb->s_mount_opt & OCFS2_MOUNT_INODE64) &&
-	    (parsed_options.mount_opt & OCFS2_MOUNT_INODE64)) {
+	    (parsed_options->mount_opt & OCFS2_MOUNT_INODE64)) {
 		ret = -EINVAL;
 		mlog(ML_ERROR, "Cannot enable inode64 on remount\n");
 		goto out;
 	}

 	/* We're going to/from readonly mode. */
-	if ((bool)(*flags & SB_RDONLY) != sb_rdonly(sb)) {
+	if ((bool)(fc->sb_flags & SB_RDONLY) != sb_rdonly(sb)) {
 		/* Disable quota accounting before remounting RO */
-		if (*flags & SB_RDONLY) {
+		if (fc->sb_flags & SB_RDONLY) {
 			ret = ocfs2_susp_quotas(osb, 0);
 			if (ret < 0)
 				goto out;
···
 			goto unlock_osb;
 		}

-		if (*flags & SB_RDONLY) {
+		if (fc->sb_flags & SB_RDONLY) {
 			sb->s_flags |= SB_RDONLY;
 			osb->osb_flags |= OCFS2_OSB_SOFT_RO;
 		} else {
···
 			sb->s_flags &= ~SB_RDONLY;
 			osb->osb_flags &= ~OCFS2_OSB_SOFT_RO;
 		}
-		trace_ocfs2_remount(sb->s_flags, osb->osb_flags, *flags);
+		trace_ocfs2_remount(sb->s_flags, osb->osb_flags, fc->sb_flags);
unlock_osb:
 		spin_unlock(&osb->osb_lock);
 		/* Enable quota accounting after remounting RW */
-		if (!ret && !(*flags & SB_RDONLY)) {
+		if (!ret && !(fc->sb_flags & SB_RDONLY)) {
 			if (sb_any_quota_suspended(sb))
 				ret = ocfs2_susp_quotas(osb, 1);
 			else
···
 	if (!ret) {
 		/* Only save off the new mount options in case of a successful
 		 * remount. */
-		osb->s_mount_opt = parsed_options.mount_opt;
-		osb->s_atime_quantum = parsed_options.atime_quantum;
-		osb->preferred_slot = parsed_options.slot;
-		if (parsed_options.commit_interval)
-			osb->osb_commit_interval = parsed_options.commit_interval;
+		osb->s_mount_opt = parsed_options->mount_opt;
+		osb->s_atime_quantum = parsed_options->atime_quantum;
+		osb->preferred_slot = parsed_options->slot;
+		if (parsed_options->commit_interval)
+			osb->osb_commit_interval = parsed_options->commit_interval;

 		if (!ocfs2_is_hard_readonly(osb))
 			ocfs2_set_journal_params(osb);
···
 	}
 }

-static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+static int ocfs2_fill_super(struct super_block *sb, struct fs_context *fc)
 {
 	struct dentry *root;
 	int status, sector_size;
-	struct mount_options parsed_options;
+	struct mount_options *parsed_options = fc->fs_private;
 	struct inode *inode = NULL;
 	struct ocfs2_super *osb = NULL;
 	struct buffer_head *bh = NULL;
 	char nodestr[12];
 	struct ocfs2_blockcheck_stats stats;

-	trace_ocfs2_fill_super(sb, data, silent);
-
-	if (!ocfs2_parse_options(sb, data, &parsed_options, 0)) {
-		status = -EINVAL;
-		goto out;
-	}
+	trace_ocfs2_fill_super(sb, fc, fc->sb_flags & SB_SILENT);

 	/* probe for superblock */
 	status = ocfs2_sb_probe(sb, &bh, &sector_size, &stats);
···

 	osb = OCFS2_SB(sb);

-	if (!ocfs2_check_set_options(sb, &parsed_options)) {
+	if (!ocfs2_check_set_options(sb, parsed_options)) {
 		status = -EINVAL;
 		goto out_super;
 	}
-	osb->s_mount_opt = parsed_options.mount_opt;
-	osb->s_atime_quantum = parsed_options.atime_quantum;
-	osb->preferred_slot = parsed_options.slot;
-	osb->osb_commit_interval = parsed_options.commit_interval;
+	osb->s_mount_opt = parsed_options->mount_opt;
+	osb->s_atime_quantum = parsed_options->atime_quantum;
+	osb->preferred_slot = parsed_options->slot;
+	osb->osb_commit_interval = parsed_options->commit_interval;

-	ocfs2_la_set_sizes(osb, parsed_options.localalloc_opt);
-	osb->osb_resv_level = parsed_options.resv_level;
-	osb->osb_dir_resv_level = parsed_options.resv_level;
-	if (parsed_options.dir_resv_level == -1)
-		osb->osb_dir_resv_level = parsed_options.resv_level;
+	ocfs2_la_set_sizes(osb, parsed_options->localalloc_opt);
+	osb->osb_resv_level = parsed_options->resv_level;
+	osb->osb_dir_resv_level = parsed_options->resv_level;
+	if (parsed_options->dir_resv_level == -1)
+		osb->osb_dir_resv_level = parsed_options->resv_level;
 	else
-		osb->osb_dir_resv_level = parsed_options.dir_resv_level;
+		osb->osb_dir_resv_level = parsed_options->dir_resv_level;

-	status = ocfs2_verify_userspace_stack(osb, &parsed_options);
+	status = ocfs2_verify_userspace_stack(osb, parsed_options);
 	if (status)
 		goto out_super;
···
 	return status;
 }

-static struct dentry *ocfs2_mount(struct file_system_type *fs_type,
-			int flags,
-			const char *dev_name,
-			void *data)
+static int ocfs2_get_tree(struct fs_context *fc)
 {
-	return mount_bdev(fs_type, flags, dev_name, data, ocfs2_fill_super);
+	return get_tree_bdev(fc, ocfs2_fill_super);
+}
+
+static void ocfs2_free_fc(struct fs_context *fc)
+{
+	kfree(fc->fs_private);
+}
+
+static const struct fs_context_operations ocfs2_context_ops = {
+	.parse_param	= ocfs2_parse_param,
+	.get_tree	= ocfs2_get_tree,
+	.reconfigure	= ocfs2_reconfigure,
+	.free		= ocfs2_free_fc,
+};
+
+static int ocfs2_init_fs_context(struct fs_context *fc)
+{
+	struct mount_options *mopt;
+
+	mopt = kzalloc(sizeof(struct mount_options), GFP_KERNEL);
+	if (!mopt)
+		return -EINVAL;
+
+	mopt->commit_interval = 0;
+	mopt->mount_opt = OCFS2_MOUNT_NOINTR;
+	mopt->atime_quantum = OCFS2_DEFAULT_ATIME_QUANTUM;
+	mopt->slot = OCFS2_INVALID_SLOT;
+	mopt->localalloc_opt = -1;
+	mopt->cluster_stack[0] = '\0';
+	mopt->resv_level = OCFS2_DEFAULT_RESV_LEVEL;
+	mopt->dir_resv_level = -1;
+
+	fc->fs_private = mopt;
+	fc->ops = &ocfs2_context_ops;
+
+	return 0;
 }

 static struct file_system_type ocfs2_fs_type = {
 	.owner		= THIS_MODULE,
 	.name		= "ocfs2",
-	.mount		= ocfs2_mount,
 	.kill_sb	= kill_block_super,
 	.fs_flags	= FS_REQUIRES_DEV|FS_RENAME_DOES_D_MOVE,
-	.next		= NULL
+	.next		= NULL,
+	.init_fs_context = ocfs2_init_fs_context,
+	.parameters	= ocfs2_param_spec,
 };
 MODULE_ALIAS_FS("ocfs2");

 static int ocfs2_check_set_options(struct super_block *sb,
 				   struct mount_options *options)
 {
+	if (options->user_stack == 0) {
+		u32 tmp;
+
+		/* Ensure only one heartbeat mode */
+		tmp = options->mount_opt & (OCFS2_MOUNT_HB_LOCAL |
+					    OCFS2_MOUNT_HB_GLOBAL |
+					    OCFS2_MOUNT_HB_NONE);
+		if (hweight32(tmp) != 1) {
+			mlog(ML_ERROR, "Invalid heartbeat mount options\n");
+			return 0;
+		}
+	}
 	if (options->mount_opt & OCFS2_MOUNT_USRQUOTA &&
 	    !OCFS2_HAS_RO_COMPAT_FEATURE(sb,
 					 OCFS2_FEATURE_RO_COMPAT_USRQUOTA)) {
···
 	return 1;
 }

-static int ocfs2_parse_options(struct super_block *sb,
-			       char *options,
-			       struct mount_options *mopt,
-			       int is_remount)
+static int ocfs2_parse_param(struct fs_context *fc, struct fs_parameter *param)
 {
-	int status, user_stack = 0;
-	char *p;
-	u32 tmp;
-	int token, option;
-	substring_t args[MAX_OPT_ARGS];
+	struct fs_parse_result result;
+	int opt;
+	struct mount_options *mopt = fc->fs_private;
+	bool is_remount = (fc->purpose & FS_CONTEXT_FOR_RECONFIGURE);

-	trace_ocfs2_parse_options(is_remount, options ? options : "(none)");
+	trace_ocfs2_parse_options(is_remount, param->key);

-	mopt->commit_interval = 0;
-	mopt->mount_opt = OCFS2_MOUNT_NOINTR;
-	mopt->atime_quantum = OCFS2_DEFAULT_ATIME_QUANTUM;
-	mopt->slot = OCFS2_INVALID_SLOT;
-	mopt->localalloc_opt = -1;
-	mopt->cluster_stack[0] = '\0';
-	mopt->resv_level = OCFS2_DEFAULT_RESV_LEVEL;
-	mopt->dir_resv_level = -1;
+	opt = fs_parse(fc, ocfs2_param_spec, param, &result);
+	if (opt < 0)
+		return opt;

-	if (!options) {
-		status = 1;
-		goto bail;
-	}
-
-	while ((p = strsep(&options, ",")) != NULL) {
-		if (!*p)
-			continue;
-
-		token = match_token(p, tokens, args);
-		switch (token) {
-		case Opt_hb_local:
-			mopt->mount_opt |= OCFS2_MOUNT_HB_LOCAL;
-			break;
-		case Opt_hb_none:
-			mopt->mount_opt |= OCFS2_MOUNT_HB_NONE;
-			break;
-		case Opt_hb_global:
-			mopt->mount_opt |= OCFS2_MOUNT_HB_GLOBAL;
-			break;
-		case Opt_barrier:
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option)
-				mopt->mount_opt |= OCFS2_MOUNT_BARRIER;
-			else
-				mopt->mount_opt &= ~OCFS2_MOUNT_BARRIER;
-			break;
-		case Opt_intr:
-			mopt->mount_opt &= ~OCFS2_MOUNT_NOINTR;
-			break;
-		case Opt_nointr:
+	switch (opt) {
+	case Opt_heartbeat:
+		mopt->mount_opt |= result.uint_32;
+		break;
+	case Opt_barrier:
+		if (result.uint_32)
+			mopt->mount_opt |= OCFS2_MOUNT_BARRIER;
+		else
+			mopt->mount_opt &= ~OCFS2_MOUNT_BARRIER;
+		break;
+	case Opt_intr:
+		if (result.negated)
 			mopt->mount_opt |= OCFS2_MOUNT_NOINTR;
-			break;
-		case Opt_err_panic:
-			mopt->mount_opt &= ~OCFS2_MOUNT_ERRORS_CONT;
-			mopt->mount_opt &= ~OCFS2_MOUNT_ERRORS_ROFS;
-			mopt->mount_opt |= OCFS2_MOUNT_ERRORS_PANIC;
-			break;
-		case Opt_err_ro:
-			mopt->mount_opt &= ~OCFS2_MOUNT_ERRORS_CONT;
-			mopt->mount_opt &= ~OCFS2_MOUNT_ERRORS_PANIC;
-			mopt->mount_opt |= OCFS2_MOUNT_ERRORS_ROFS;
-			break;
-		case Opt_err_cont:
-			mopt->mount_opt &= ~OCFS2_MOUNT_ERRORS_ROFS;
-			mopt->mount_opt &= ~OCFS2_MOUNT_ERRORS_PANIC;
-			mopt->mount_opt |= OCFS2_MOUNT_ERRORS_CONT;
-			break;
-		case Opt_data_ordered:
-			mopt->mount_opt &= ~OCFS2_MOUNT_DATA_WRITEBACK;
-			break;
-		case Opt_data_writeback:
-			mopt->mount_opt |= OCFS2_MOUNT_DATA_WRITEBACK;
-			break;
-		case Opt_user_xattr:
-			mopt->mount_opt &= ~OCFS2_MOUNT_NOUSERXATTR;
-			break;
-		case Opt_nouser_xattr:
+		else
+			mopt->mount_opt &= ~OCFS2_MOUNT_NOINTR;
+		break;
+	case Opt_errors:
+		mopt->mount_opt &= ~(OCFS2_MOUNT_ERRORS_CONT |
+				     OCFS2_MOUNT_ERRORS_ROFS |
+				     OCFS2_MOUNT_ERRORS_PANIC);
+		mopt->mount_opt |= result.uint_32;
+		break;
+	case Opt_data:
+		mopt->mount_opt &= ~OCFS2_MOUNT_DATA_WRITEBACK;
+		mopt->mount_opt |= result.uint_32;
+		break;
+	case Opt_user_xattr:
+		if (result.negated)
 			mopt->mount_opt |= OCFS2_MOUNT_NOUSERXATTR;
-			break;
-		case Opt_atime_quantum:
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option >= 0)
-				mopt->atime_quantum = option;
-			break;
-		case Opt_slot:
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option)
-				mopt->slot = (u16)option;
-			break;
-		case Opt_commit:
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option < 0)
-				return 0;
-			if (option == 0)
-				option = JBD2_DEFAULT_MAX_COMMIT_AGE;
-			mopt->commit_interval = HZ * option;
-			break;
-		case Opt_localalloc:
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option >= 0)
-				mopt->localalloc_opt = option;
-			break;
-		case Opt_localflocks:
-			/*
-			 * Changing this during remount could race
-			 * flock() requests, or "unbalance" existing
-			 * ones (e.g., a lock is taken in one mode but
-			 * dropped in the other). If users care enough
-			 * to flip locking modes during remount, we
-			 * could add a "local" flag to individual
-			 * flock structures for proper tracking of
-			 * state.
-			 */
-			if (!is_remount)
-				mopt->mount_opt |= OCFS2_MOUNT_LOCALFLOCKS;
-			break;
-		case Opt_stack:
-			/* Check both that the option we were passed
-			 * is of the right length and that it is a proper
-			 * string of the right length.
-			 */
-			if (((args[0].to - args[0].from) !=
-			     OCFS2_STACK_LABEL_LEN) ||
-			    (strnlen(args[0].from,
-				     OCFS2_STACK_LABEL_LEN) !=
-			     OCFS2_STACK_LABEL_LEN)) {
-				mlog(ML_ERROR,
-				     "Invalid cluster_stack option\n");
-				status = 0;
-				goto bail;
-			}
-			memcpy(mopt->cluster_stack, args[0].from,
-			       OCFS2_STACK_LABEL_LEN);
-			mopt->cluster_stack[OCFS2_STACK_LABEL_LEN] = '\0';
-			/*
-			 * Open code the memcmp here as we don't have
-			 * an osb to pass to
-			 * ocfs2_userspace_stack().
-			 */
-			if (memcmp(mopt->cluster_stack,
-				   OCFS2_CLASSIC_CLUSTER_STACK,
-				   OCFS2_STACK_LABEL_LEN))
-				user_stack = 1;
-			break;
-		case Opt_inode64:
-			mopt->mount_opt |= OCFS2_MOUNT_INODE64;
-			break;
-		case Opt_usrquota:
-			mopt->mount_opt |= OCFS2_MOUNT_USRQUOTA;
-			break;
-		case Opt_grpquota:
-			mopt->mount_opt |= OCFS2_MOUNT_GRPQUOTA;
-			break;
-		case Opt_coherency_buffered:
-			mopt->mount_opt |= OCFS2_MOUNT_COHERENCY_BUFFERED;
-			break;
-		case Opt_coherency_full:
-			mopt->mount_opt &= ~OCFS2_MOUNT_COHERENCY_BUFFERED;
-			break;
-		case Opt_acl:
-			mopt->mount_opt |= OCFS2_MOUNT_POSIX_ACL;
-			mopt->mount_opt &= ~OCFS2_MOUNT_NO_POSIX_ACL;
-			break;
-		case Opt_noacl:
+		else
+			mopt->mount_opt &= ~OCFS2_MOUNT_NOUSERXATTR;
+		break;
+	case Opt_atime_quantum:
+		mopt->atime_quantum = result.uint_32;
+		break;
+	case Opt_slot:
+		if (result.uint_32)
+			mopt->slot = (u16)result.uint_32;
+		break;
+	case Opt_commit:
+		if (result.uint_32 == 0)
+			mopt->commit_interval = HZ * JBD2_DEFAULT_MAX_COMMIT_AGE;
+		else
+			mopt->commit_interval = HZ * result.uint_32;
+		break;
+	case Opt_localalloc:
+		if (result.int_32 >= 0)
+			mopt->localalloc_opt = result.int_32;
+		break;
+	case Opt_localflocks:
+		/*
+		 * Changing this during remount could race flock() requests, or
+		 * "unbalance" existing ones (e.g., a lock is taken in one mode
+		 * but dropped in the other). If users care enough to flip
+		 * locking modes during remount, we could add a "local" flag to
+		 * individual flock structures for proper tracking of state.
+		 */
+		if (!is_remount)
+			mopt->mount_opt |= OCFS2_MOUNT_LOCALFLOCKS;
+		break;
+	case Opt_stack:
+		/* Check both that the option we were passed is of the right
+		 * length and that it is a proper string of the right length.
+		 */
+		if (strlen(param->string) != OCFS2_STACK_LABEL_LEN) {
+			mlog(ML_ERROR, "Invalid cluster_stack option\n");
+			return -EINVAL;
+		}
+		memcpy(mopt->cluster_stack, param->string, OCFS2_STACK_LABEL_LEN);
+		mopt->cluster_stack[OCFS2_STACK_LABEL_LEN] = '\0';
+		/*
+		 * Open code the memcmp here as we don't have an osb to pass
+		 * to ocfs2_userspace_stack().
+		 */
+		if (memcmp(mopt->cluster_stack,
+			   OCFS2_CLASSIC_CLUSTER_STACK,
+			   OCFS2_STACK_LABEL_LEN))
+			mopt->user_stack = 1;
+		break;
+	case Opt_inode64:
+		mopt->mount_opt |= OCFS2_MOUNT_INODE64;
+		break;
+	case Opt_usrquota:
+		mopt->mount_opt |= OCFS2_MOUNT_USRQUOTA;
+		break;
+	case Opt_grpquota:
+		mopt->mount_opt |= OCFS2_MOUNT_GRPQUOTA;
+		break;
+	case Opt_coherency:
+		mopt->mount_opt &= ~OCFS2_MOUNT_COHERENCY_BUFFERED;
+		mopt->mount_opt |= result.uint_32;
+		break;
+	case Opt_acl:
+		if (result.negated) {
 			mopt->mount_opt |= OCFS2_MOUNT_NO_POSIX_ACL;
 			mopt->mount_opt &= ~OCFS2_MOUNT_POSIX_ACL;
-			break;
-		case Opt_resv_level:
-			if (is_remount)
-				break;
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option >= OCFS2_MIN_RESV_LEVEL &&
-			    option < OCFS2_MAX_RESV_LEVEL)
-				mopt->resv_level = option;
-			break;
-		case Opt_dir_resv_level:
-			if (is_remount)
-				break;
-			if (match_int(&args[0], &option)) {
-				status = 0;
-				goto bail;
-			}
-			if (option >= OCFS2_MIN_RESV_LEVEL &&
-			    option < OCFS2_MAX_RESV_LEVEL)
-				mopt->dir_resv_level = option;
-			break;
-		case Opt_journal_async_commit:
-			mopt->mount_opt |= OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT;
-			break;
-		default:
-			mlog(ML_ERROR,
-			     "Unrecognized mount option \"%s\" "
-			     "or missing value\n", p);
-			status = 0;
-			goto bail;
+		} else {
+			mopt->mount_opt |= OCFS2_MOUNT_POSIX_ACL;
+			mopt->mount_opt &= ~OCFS2_MOUNT_NO_POSIX_ACL;
 		}
+		break;
+	case Opt_resv_level:
+		if (is_remount)
+			break;
+		if (result.uint_32 >= OCFS2_MIN_RESV_LEVEL &&
+		    result.uint_32 < OCFS2_MAX_RESV_LEVEL)
+			mopt->resv_level = result.uint_32;
+		break;
+	case Opt_dir_resv_level:
+		if (is_remount)
+			break;
+		if (result.uint_32 >= OCFS2_MIN_RESV_LEVEL &&
+		    result.uint_32 < OCFS2_MAX_RESV_LEVEL)
+			mopt->dir_resv_level = result.uint_32;
+		break;
+	case Opt_journal_async_commit:
+		mopt->mount_opt |= OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT;
+		break;
+	default:
+		return -EINVAL;
 	}

-	if (user_stack == 0) {
-		/* Ensure only one heartbeat mode */
-		tmp = mopt->mount_opt & (OCFS2_MOUNT_HB_LOCAL |
-					 OCFS2_MOUNT_HB_GLOBAL |
-					 OCFS2_MOUNT_HB_NONE);
-		if (hweight32(tmp) != 1) {
-			mlog(ML_ERROR, "Invalid heartbeat mount options\n");
-			status = 0;
-			goto bail;
-		}
-	}
-
-	status = 1;
-
-bail:
-	return status;
+	return 0;
 }

 static int ocfs2_show_options(struct seq_file *s, struct dentry *root)
···
 	osb = OCFS2_SB(sb);
 	BUG_ON(!osb);

-	/* Remove file check sysfs related directores/files,
+	/* Remove file check sysfs related directories/files,
 	 * and wait for the pending file check operations */
 	ocfs2_filecheck_remove_sysfs(osb);

+6 -10
fs/ocfs2/symlink.c
···
 
 static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio)
 {
-	struct page *page = &folio->page;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct buffer_head *bh = NULL;
 	int status = ocfs2_read_inode_block(inode, &bh);
 	struct ocfs2_dinode *fe;
 	const char *link;
-	void *kaddr;
 	size_t len;
 
 	if (status < 0) {
 		mlog_errno(status);
-		return status;
+		goto out;
 	}
 
 	fe = (struct ocfs2_dinode *) bh->b_data;
 	link = (char *) fe->id2.i_symlink;
 	/* will be less than a page size */
 	len = strnlen(link, ocfs2_fast_symlink_chars(inode->i_sb));
-	kaddr = kmap_atomic(page);
-	memcpy(kaddr, link, len + 1);
-	kunmap_atomic(kaddr);
-	SetPageUptodate(page);
-	unlock_page(page);
+	memcpy_to_folio(folio, 0, link, len + 1);
+out:
+	folio_end_read(folio, status == 0);
 	brelse(bh);
-	return 0;
+	return status;
 }
 
 const struct address_space_operations ocfs2_fast_symlink_aops = {
+5 -5
fs/ocfs2/xattr.c
···
 	 * 256(name) + 80(value) + 16(entry) = 352 bytes,
 	 * The max space of acl xattr taken inline is
 	 * 80(value) + 16(entry) * 2(if directory) = 192 bytes,
-	 * when blocksize = 512, may reserve one more cluser for
+	 * when blocksize = 512, may reserve one more cluster for
 	 * xattr bucket, otherwise reserve one metadata block
 	 * for them is ok.
 	 * If this is a new directory with inline data,
···
 
 /*
  * defrag a xattr bucket if we find that the bucket has some
- * holes beteen name/value pairs.
+ * holes between name/value pairs.
  * We will move all the name/value pairs to the end of the bucket
  * so that we can spare some space for insertion.
  */
···
  * 2. If cluster_size == bucket_size:
  *    a) If the previous extent rec has more than one cluster and the insert
  *       place isn't in the last cluster, copy the entire last cluster to the
- *       new one. This time, we don't need to upate the first_bh and header_bh
+ *       new one. This time, we don't need to update the first_bh and header_bh
  *       since they will not be moved into the new cluster.
  *    b) Otherwise, move the bottom half of the xattrs in the last cluster into
  *       the new one. And we set the extend flag to zero if the insert place is
···
 /*
  * Given a xattr header and xe offset,
  * return the proper xv and the corresponding bh.
- * xattr in inode, block and xattr tree have different implementaions.
+ * xattr in inode, block and xattr tree have different implementations.
  */
 typedef int (get_xattr_value_root)(struct super_block *sb,
 				   struct buffer_head *bh,
···
 }
 
 /*
- * Lock the meta_ac and caculate how much credits we need for reflink xattrs.
+ * Lock the meta_ac and calculate how much credits we need for reflink xattrs.
  * It is only used for inline xattr and xattr block.
  */
 static int ocfs2_reflink_lock_xattr_allocators(struct ocfs2_super *osb,
+3 -3
fs/squashfs/Kconfig
···
 	help
 	  Saying Y here includes support for SquashFS 4.0 (a Compressed
 	  Read-Only File System).  Squashfs is a highly compressed read-only
-	  filesystem for Linux.  It uses zlib, lzo or xz compression to
-	  compress both files, inodes and directories.  Inodes in the system
+	  filesystem for Linux.  It uses zlib, lz4, lzo, xz or zstd compression
+	  to compress both files, inodes and directories.  Inodes in the system
	  are very small and all blocks are packed to minimise data overhead.
 	  Block sizes greater than 4K are supported up to a maximum of 1 Mbytes
 	  (default block size 128K).  SquashFS 4.0 supports 64 bit filesystems
···
 	  Squashfs is intended for general read-only filesystem use, for
 	  archival use (i.e. in cases where a .tar.gz file may be used), and in
 	  embedded systems where low overhead is needed.  Further information
-	  and tools are available from http://squashfs.sourceforge.net.
+	  and tools are available from github.com/plougher/squashfs-tools.
 
 	  If you want to compile this as a module ( = code which can be
 	  inserted in and removed from the running kernel whenever you want),
+7 -3
fs/squashfs/cache.c
···
 		int block_size)
 {
 	int i, j;
-	struct squashfs_cache *cache = kzalloc(sizeof(*cache), GFP_KERNEL);
+	struct squashfs_cache *cache;
 
+	if (entries == 0)
+		return NULL;
+
+	cache = kzalloc(sizeof(*cache), GFP_KERNEL);
 	if (cache == NULL) {
 		ERROR("Failed to allocate %s cache\n", name);
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 	}
 
 	cache->entry = kcalloc(entries, sizeof(*(cache->entry)), GFP_KERNEL);
···
 
 cleanup:
 	squashfs_cache_delete(cache);
-	return NULL;
+	return ERR_PTR(-ENOMEM);
 }
 
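After this change squashfs_cache_init() can distinguish "no cache configured" (NULL, for entries == 0) from a genuine allocation failure (ERR_PTR(-ENOMEM)). For readers unfamiliar with the kernel's ERR_PTR convention, here is a simplified userspace imitation of the idea from include/linux/err.h: error codes are encoded in the pointer value itself, in the top page of the address space, so NULL stays available as a valid non-error value:

```c
#include <errno.h>
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Simplified imitation of the kernel's ERR_PTR/PTR_ERR/IS_ERR helpers.
 * The kernel reserves the last 4095 pointer values for errno codes. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;     /* e.g. -ENOMEM -> 0xffff...fff4 */
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;         /* recover the negative errno */
}

static inline int IS_ERR(const void *ptr)
{
	/* True only for the top MAX_ERRNO values; NULL is NOT an error. */
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

This is why the super.c hunk below can test `IS_ERR(msblk->block_cache)` and propagate `PTR_ERR(...)` instead of hard-coding -ENOMEM, while a NULL fragment cache simply means "none".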
+45 -45
fs/squashfs/file.c
···
 		return squashfs_block_size(size);
 }
 
-void squashfs_fill_page(struct page *page, struct squashfs_cache_entry *buffer, int offset, int avail)
+static bool squashfs_fill_page(struct folio *folio,
+		struct squashfs_cache_entry *buffer, size_t offset,
+		size_t avail)
 {
-	int copied;
+	size_t copied;
 	void *pageaddr;
 
-	pageaddr = kmap_atomic(page);
+	pageaddr = kmap_local_folio(folio, 0);
 	copied = squashfs_copy_data(pageaddr, buffer, offset, avail);
 	memset(pageaddr + copied, 0, PAGE_SIZE - copied);
-	kunmap_atomic(pageaddr);
+	kunmap_local(pageaddr);
 
-	flush_dcache_page(page);
-	if (copied == avail)
-		SetPageUptodate(page);
+	flush_dcache_folio(folio);
+
+	return copied == avail;
 }
 
 /* Copy data into page cache  */
-void squashfs_copy_cache(struct page *page, struct squashfs_cache_entry *buffer,
-	int bytes, int offset)
+void squashfs_copy_cache(struct folio *folio,
+		struct squashfs_cache_entry *buffer, size_t bytes,
+		size_t offset)
 {
-	struct inode *inode = page->mapping->host;
+	struct address_space *mapping = folio->mapping;
+	struct inode *inode = mapping->host;
 	struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
 	int i, mask = (1 << (msblk->block_log - PAGE_SHIFT)) - 1;
-	int start_index = page->index & ~mask, end_index = start_index | mask;
+	int start_index = folio->index & ~mask, end_index = start_index | mask;
 
 	/*
 	 * Loop copying datablock into pages.  As the datablock likely covers
···
 	 */
 	for (i = start_index; i <= end_index && bytes > 0; i++,
 			bytes -= PAGE_SIZE, offset += PAGE_SIZE) {
-		struct page *push_page;
-		int avail = buffer ? min_t(int, bytes, PAGE_SIZE) : 0;
+		struct folio *push_folio;
+		size_t avail = buffer ? min(bytes, PAGE_SIZE) : 0;
+		bool updated = false;
 
-		TRACE("bytes %d, i %d, available_bytes %d\n", bytes, i, avail);
+		TRACE("bytes %zu, i %d, available_bytes %zu\n", bytes, i, avail);
 
-		push_page = (i == page->index) ? page :
-			grab_cache_page_nowait(page->mapping, i);
+		push_folio = (i == folio->index) ? folio :
+			__filemap_get_folio(mapping, i,
+					FGP_LOCK|FGP_CREAT|FGP_NOFS|FGP_NOWAIT,
+					mapping_gfp_mask(mapping));
 
-		if (!push_page)
+		if (IS_ERR(push_folio))
 			continue;
 
-		if (PageUptodate(push_page))
-			goto skip_page;
+		if (folio_test_uptodate(push_folio))
+			goto skip_folio;
 
-		squashfs_fill_page(push_page, buffer, offset, avail);
-skip_page:
-		unlock_page(push_page);
-		if (i != page->index)
-			put_page(push_page);
+		updated = squashfs_fill_page(push_folio, buffer, offset, avail);
+skip_folio:
+		folio_end_read(push_folio, updated);
+		if (i != folio->index)
+			folio_put(push_folio);
 	}
 }
 
 /* Read datablock stored packed inside a fragment (tail-end packed block) */
-static int squashfs_readpage_fragment(struct page *page, int expected)
+static int squashfs_readpage_fragment(struct folio *folio, int expected)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct squashfs_cache_entry *buffer = squashfs_get_fragment(inode->i_sb,
 		squashfs_i(inode)->fragment_block,
 		squashfs_i(inode)->fragment_size);
···
 			squashfs_i(inode)->fragment_block,
 			squashfs_i(inode)->fragment_size);
 	else
-		squashfs_copy_cache(page, buffer, expected,
+		squashfs_copy_cache(folio, buffer, expected,
 			squashfs_i(inode)->fragment_offset);
 
 	squashfs_cache_put(buffer);
 	return res;
 }
 
-static int squashfs_readpage_sparse(struct page *page, int expected)
+static int squashfs_readpage_sparse(struct folio *folio, int expected)
 {
-	squashfs_copy_cache(page, NULL, expected, 0);
+	squashfs_copy_cache(folio, NULL, expected, 0);
 	return 0;
 }
 
 static int squashfs_read_folio(struct file *file, struct folio *folio)
 {
-	struct page *page = &folio->page;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
-	int index = page->index >> (msblk->block_log - PAGE_SHIFT);
+	int index = folio->index >> (msblk->block_log - PAGE_SHIFT);
 	int file_end = i_size_read(inode) >> msblk->block_log;
 	int expected = index == file_end ?
 			(i_size_read(inode) & (msblk->block_size - 1)) :
 			 msblk->block_size;
 	int res = 0;
-	void *pageaddr;
 
 	TRACE("Entered squashfs_readpage, page index %lx, start block %llx\n",
-				page->index, squashfs_i(inode)->start);
+				folio->index, squashfs_i(inode)->start);
 
-	if (page->index >= ((i_size_read(inode) + PAGE_SIZE - 1) >>
+	if (folio->index >= ((i_size_read(inode) + PAGE_SIZE - 1) >>
 					PAGE_SHIFT))
 		goto out;
 
···
 			goto out;
 
 		if (res == 0)
-			res = squashfs_readpage_sparse(page, expected);
+			res = squashfs_readpage_sparse(folio, expected);
 		else
-			res = squashfs_readpage_block(page, block, res, expected);
+			res = squashfs_readpage_block(folio, block, res, expected);
 	} else
-		res = squashfs_readpage_fragment(page, expected);
+		res = squashfs_readpage_fragment(folio, expected);
 
 	if (!res)
 		return 0;
 
 out:
-	pageaddr = kmap_atomic(page);
-	memset(pageaddr, 0, PAGE_SIZE);
-	kunmap_atomic(pageaddr);
-	flush_dcache_page(page);
-	if (res == 0)
-		SetPageUptodate(page);
-	unlock_page(page);
+	folio_zero_segment(folio, 0, folio_size(folio));
+	folio_end_read(folio, res == 0);
 
 	return res;
 }
+3 -3
fs/squashfs/file_cache.c
···
 #include "squashfs.h"
 
 /* Read separately compressed datablock and memcopy into page cache */
-int squashfs_readpage_block(struct page *page, u64 block, int bsize, int expected)
+int squashfs_readpage_block(struct folio *folio, u64 block, int bsize, int expected)
 {
-	struct inode *i = page->mapping->host;
+	struct inode *i = folio->mapping->host;
 	struct squashfs_cache_entry *buffer = squashfs_get_datablock(i->i_sb,
 		block, bsize);
 	int res = buffer->error;
···
 		ERROR("Unable to read page, block %llx, size %x\n", block,
 			bsize);
 	else
-		squashfs_copy_cache(page, buffer, expected, 0);
+		squashfs_copy_cache(folio, buffer, expected, 0);
 
 	squashfs_cache_put(buffer);
 	return res;
+5 -6
fs/squashfs/file_direct.c
···
 #include "page_actor.h"
 
 /* Read separately compressed datablock directly into page cache */
-int squashfs_readpage_block(struct page *target_page, u64 block, int bsize,
-	int expected)
-
+int squashfs_readpage_block(struct folio *folio, u64 block, int bsize,
+	int expected)
 {
-	struct folio *folio = page_folio(target_page);
-	struct inode *inode = target_page->mapping->host;
+	struct page *target_page = &folio->page;
+	struct inode *inode = folio->mapping->host;
 	struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
 	loff_t file_end = (i_size_read(inode) - 1) >> PAGE_SHIFT;
 	int mask = (1 << (msblk->block_log - PAGE_SHIFT)) - 1;
···
 	/* Try to grab all the pages covered by the Squashfs block */
 	for (i = 0, index = start_index; index <= end_index; index++) {
 		page[i] = (index == folio->index) ? target_page :
-			grab_cache_page_nowait(target_page->mapping, index);
+			grab_cache_page_nowait(folio->mapping, index);
 
 		if (page[i] == NULL)
 			continue;
+9 -4
fs/squashfs/squashfs.h
···
 
 #define WARNING(s, args...)	pr_warn("SQUASHFS: "s, ## args)
 
+#ifdef CONFIG_SQUASHFS_FILE_CACHE
+#define SQUASHFS_READ_PAGES msblk->max_thread_num
+#else
+#define SQUASHFS_READ_PAGES 0
+#endif
+
 /* block.c */
 extern int squashfs_read_data(struct super_block *, u64, int, u64 *,
 				struct squashfs_page_actor *);
···
 				u64, u64, unsigned int);
 
 /* file.c */
-void squashfs_fill_page(struct page *, struct squashfs_cache_entry *, int, int);
-void squashfs_copy_cache(struct page *, struct squashfs_cache_entry *, int,
-	int);
+void squashfs_copy_cache(struct folio *, struct squashfs_cache_entry *,
+		size_t bytes, size_t offset);
 
 /* file_xxx.c */
-extern int squashfs_readpage_block(struct page *, u64, int, int);
+int squashfs_readpage_block(struct folio *, u64 block, int bsize, int expected);
 
 /* id.c */
 extern int squashfs_get_id(struct super_block *, unsigned int, unsigned int *);
+12 -9
fs/squashfs/super.c
···
 	sb->s_flags |= SB_RDONLY;
 	sb->s_op = &squashfs_super_ops;
 
-	err = -ENOMEM;
-
 	msblk->block_cache = squashfs_cache_init("metadata",
 		SQUASHFS_CACHED_BLKS, SQUASHFS_METADATA_SIZE);
-	if (msblk->block_cache == NULL)
+	if (IS_ERR(msblk->block_cache)) {
+		err = PTR_ERR(msblk->block_cache);
 		goto failed_mount;
+	}
 
 	/* Allocate read_page block */
 	msblk->read_page = squashfs_cache_init("data",
-		msblk->max_thread_num, msblk->block_size);
-	if (msblk->read_page == NULL) {
+		SQUASHFS_READ_PAGES, msblk->block_size);
+	if (IS_ERR(msblk->read_page)) {
 		errorf(fc, "Failed to allocate read_page block");
+		err = PTR_ERR(msblk->read_page);
 		goto failed_mount;
 	}
 
 	if (msblk->devblksize == PAGE_SIZE) {
 		struct inode *cache = new_inode(sb);
 
-		if (cache == NULL)
+		if (cache == NULL) {
+			err = -ENOMEM;
 			goto failed_mount;
+		}
 
 		set_nlink(cache, 1);
 		cache->i_size = OFFSET_MAX;
···
 		goto check_directory_table;
 
 	msblk->fragment_cache = squashfs_cache_init("fragment",
-		SQUASHFS_CACHED_FRAGMENTS, msblk->block_size);
-	if (msblk->fragment_cache == NULL) {
-		err = -ENOMEM;
+		min(SQUASHFS_CACHED_FRAGMENTS, fragments), msblk->block_size);
+	if (IS_ERR(msblk->fragment_cache)) {
+		err = PTR_ERR(msblk->fragment_cache);
 		goto failed_mount;
 	}
 
+1 -1
include/asm-generic/syscall.h
···
  * Copyright (C) 2008-2009 Red Hat, Inc.  All rights reserved.
  *
  * This file is a stub providing documentation for what functions
- * asm-ARCH/syscall.h files need to define.  Most arch definitions
+ * arch/ARCH/include/asm/syscall.h files need to define.  Most arch definitions
  * will be simple inlines.
  *
  * All of these functions expect to be called with no locks,
+1 -1
include/linux/bitmap.h
···
  *
  * Function implementations generic to all architectures are in
  * lib/bitmap.c.  Functions implementations that are architecture
- * specific are in various include/asm-<arch>/bitops.h headers
+ * specific are in various arch/<arch>/include/asm/bitops.h headers
  * and other arch/<arch> specific files.
  *
  * See lib/bitmap.c for more details.
+14
include/linux/delayacct.h
···
 	 * XXX_delay contains the accumulated delay time in nanoseconds.
 	 */
 	u64 blkio_start;
+	u64 blkio_delay_max;
+	u64 blkio_delay_min;
 	u64 blkio_delay;	/* wait for sync block io completion */
 	u64 swapin_start;
+	u64 swapin_delay_max;
+	u64 swapin_delay_min;
 	u64 swapin_delay;	/* wait for swapin */
 	u32 blkio_count;	/* total count of the number of sync block */
 				/* io operations performed */
 	u32 swapin_count;	/* total count of swapin */
 
 	u64 freepages_start;
+	u64 freepages_delay_max;
+	u64 freepages_delay_min;
 	u64 freepages_delay;	/* wait for memory reclaim */
 
 	u64 thrashing_start;
+	u64 thrashing_delay_max;
+	u64 thrashing_delay_min;
 	u64 thrashing_delay;	/* wait for thrashing page */
 
 	u64 compact_start;
+	u64 compact_delay_max;
+	u64 compact_delay_min;
 	u64 compact_delay;	/* wait for memory compact */
 
 	u64 wpcopy_start;
+	u64 wpcopy_delay_max;
+	u64 wpcopy_delay_min;
 	u64 wpcopy_delay;	/* wait for write-protect copy */
 
+	u64 irq_delay_max;
+	u64 irq_delay_min;
 	u64 irq_delay;	/* wait for IRQ/SOFTIRQ */
 
 	u32 freepages_count;	/* total count of memory reclaim */
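The new *_delay_max/*_delay_min fields track per-sample extremes alongside the accumulated delay. A sketch of how such fields would typically be maintained (the struct and helper names here are illustrative, not the actual delayacct update code):

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative per-delay-type accounting record. */
struct delay_stat {
	uint64_t delay;		/* accumulated delay, ns */
	uint64_t delay_max;	/* largest single delay seen */
	uint64_t delay_min;	/* smallest single delay seen */
	uint32_t count;		/* number of samples */
};

static void record_delay(struct delay_stat *s, uint64_t ns)
{
	s->delay += ns;
	if (ns > s->delay_max)
		s->delay_max = ns;
	/* First sample initialises the minimum; 0 would otherwise be sticky. */
	if (s->count == 0 || ns < s->delay_min)
		s->delay_min = ns;
	s->count++;
}

/* Feed three samples (5, 2, 9 ns) and report the extremes. */
static uint64_t demo_delay_min(void)
{
	struct delay_stat s = {0};
	record_delay(&s, 5);
	record_delay(&s, 2);
	record_delay(&s, 9);
	return s.delay_min;
}

static uint64_t demo_delay_max(void)
{
	struct delay_stat s = {0};
	record_delay(&s, 5);
	record_delay(&s, 2);
	record_delay(&s, 9);
	return s.delay_max;
}
```

Note the count-guarded min update: without it, a zero-initialised minimum could never be raised to the first real sample.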
+1 -1
include/linux/kasan.h
···
 
 void __kasan_poison_new_object(struct kmem_cache *cache, void *object);
 /**
- * kasan_unpoison_new_object - Repoison a new slab object.
+ * kasan_poison_new_object - Repoison a new slab object.
  * @cache: Cache the object belong to.
  * @object: Pointer to the object.
  *
+6
include/linux/lz4.h
···
 int LZ4_decompress_fast_usingDict(const char *source, char *dest,
 	int originalSize, const char *dictStart, int dictSize);
 
+#define LZ4_DECOMPRESS_INPLACE_MARGIN(compressedSize) (((compressedSize) >> 8) + 32)
+
+#ifndef LZ4_DISTANCE_MAX	/* history window size; can be user-defined at compile time */
+#define LZ4_DISTANCE_MAX 65535	/* set to maximum value by default */
+#endif
+
 #endif
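The new LZ4_DECOMPRESS_INPLACE_MARGIN macro sizes the safety margin for in-place decompression: placing the compressed data at the end of a buffer of uncompressedSize + margin bytes lets the decompressor write output from the front without overrunning its own unread input. The margin grows with the compressed size, at compressedSize/256 + 32 bytes:

```c
#include <assert.h>

/* Same expression as the header addition above. */
#define LZ4_DECOMPRESS_INPLACE_MARGIN(compressedSize) (((compressedSize) >> 8) + 32)

/* Total buffer needed to decompress in place: output size plus margin
 * (a convenience helper of this sketch, not part of the lz4.h API). */
static long inplace_buffer_size(long uncompressed, long compressed)
{
	return uncompressed + LZ4_DECOMPRESS_INPLACE_MARGIN(compressed);
}
```

For example, a 512-byte compressed block expanding to 1000 bytes needs a 1034-byte buffer (512 >> 8 is 2, plus the 32-byte floor).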
+46 -26
include/linux/min_heap.h
···
 #include <linux/string.h>
 #include <linux/types.h>
 
+/*
+ * The Min Heap API provides utilities for managing min-heaps, a binary tree
+ * structure where each node's value is less than or equal to its children's
+ * values, ensuring the smallest element is at the root.
+ *
+ * Users should avoid directly calling functions prefixed with __min_heap_*().
+ * Instead, use the provided macro wrappers.
+ *
+ * For further details and examples, refer to Documentation/core-api/min_heap.rst.
+ */
+
 /**
  * Data structure to hold a min-heap.
  * @nr: Number of elements currently in the heap.
···
 }
 
 #define min_heap_init_inline(_heap, _data, _size)	\
-	__min_heap_init_inline((min_heap_char *)_heap, _data, _size)
+	__min_heap_init_inline(container_of(&(_heap)->nr, min_heap_char, nr), _data, _size)
 
 /* Get the minimum element from the heap. */
 static __always_inline
···
 }
 
 #define min_heap_peek_inline(_heap)	\
-	(__minheap_cast(_heap) __min_heap_peek_inline((min_heap_char *)_heap))
+	(__minheap_cast(_heap)	\
+	 __min_heap_peek_inline(container_of(&(_heap)->nr, min_heap_char, nr)))
 
 /* Check if the heap is full. */
 static __always_inline
···
 }
 
 #define min_heap_full_inline(_heap)	\
-	__min_heap_full_inline((min_heap_char *)_heap)
+	__min_heap_full_inline(container_of(&(_heap)->nr, min_heap_char, nr))
 
 /* Sift the element at pos down the heap. */
 static __always_inline
···
 }
 
 #define min_heap_sift_down_inline(_heap, _pos, _func, _args)	\
-	__min_heap_sift_down_inline((min_heap_char *)_heap, _pos, __minheap_obj_size(_heap), \
-				    _func, _args)
+	__min_heap_sift_down_inline(container_of(&(_heap)->nr, min_heap_char, nr), _pos, \
+				    __minheap_obj_size(_heap), _func, _args)
 
 /* Sift up ith element from the heap, O(log2(nr)). */
 static __always_inline
···
 }
 
 #define min_heap_sift_up_inline(_heap, _idx, _func, _args)	\
-	__min_heap_sift_up_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, \
-				  _func, _args)
+	__min_heap_sift_up_inline(container_of(&(_heap)->nr, min_heap_char, nr),	\
+				  __minheap_obj_size(_heap), _idx, _func, _args)
 
 /* Floyd's approach to heapification that is O(nr). */
 static __always_inline
···
 }
 
 #define min_heapify_all_inline(_heap, _func, _args)	\
-	__min_heapify_all_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+	__min_heapify_all_inline(container_of(&(_heap)->nr, min_heap_char, nr),	\
+				 __minheap_obj_size(_heap), _func, _args)
 
 /* Remove minimum element from the heap, O(log2(nr)). */
 static __always_inline
···
 }
 
 #define min_heap_pop_inline(_heap, _func, _args)	\
-	__min_heap_pop_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+	__min_heap_pop_inline(container_of(&(_heap)->nr, min_heap_char, nr),	\
+			      __minheap_obj_size(_heap), _func, _args)
 
 /*
  * Remove the minimum element and then push the given element. The
···
 }
 
 #define min_heap_pop_push_inline(_heap, _element, _func, _args)	\
-	__min_heap_pop_push_inline((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), \
-				   _func, _args)
+	__min_heap_pop_push_inline(container_of(&(_heap)->nr, min_heap_char, nr), _element, \
+				   __minheap_obj_size(_heap), _func, _args)
 
 /* Push an element on to the heap, O(log2(nr)). */
 static __always_inline
···
 }
 
 #define min_heap_push_inline(_heap, _element, _func, _args)	\
-	__min_heap_push_inline((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), \
-			       _func, _args)
+	__min_heap_push_inline(container_of(&(_heap)->nr, min_heap_char, nr), _element,	\
+			       __minheap_obj_size(_heap), _func, _args)
 
 /* Remove ith element from the heap, O(log2(nr)). */
 static __always_inline
···
 }
 
 #define min_heap_del_inline(_heap, _idx, _func, _args)	\
-	__min_heap_del_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, \
-			      _func, _args)
+	__min_heap_del_inline(container_of(&(_heap)->nr, min_heap_char, nr),	\
+			      __minheap_obj_size(_heap), _idx, _func, _args)
 
 void __min_heap_init(min_heap_char *heap, void *data, int size);
 void *__min_heap_peek(struct min_heap_char *heap);
···
 		   const struct min_heap_callbacks *func, void *args);
 
 #define min_heap_init(_heap, _data, _size)	\
-	__min_heap_init((min_heap_char *)_heap, _data, _size)
+	__min_heap_init(container_of(&(_heap)->nr, min_heap_char, nr), _data, _size)
 #define min_heap_peek(_heap)	\
-	(__minheap_cast(_heap) __min_heap_peek((min_heap_char *)_heap))
+	(__minheap_cast(_heap) __min_heap_peek(container_of(&(_heap)->nr, min_heap_char, nr)))
 #define min_heap_full(_heap)	\
-	__min_heap_full((min_heap_char *)_heap)
+	__min_heap_full(container_of(&(_heap)->nr, min_heap_char, nr))
 #define min_heap_sift_down(_heap, _pos, _func, _args)	\
-	__min_heap_sift_down((min_heap_char *)_heap, _pos, __minheap_obj_size(_heap), _func, _args)
+	__min_heap_sift_down(container_of(&(_heap)->nr, min_heap_char, nr), _pos,	\
+			     __minheap_obj_size(_heap), _func, _args)
 #define min_heap_sift_up(_heap, _idx, _func, _args)	\
-	__min_heap_sift_up((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, _func, _args)
+	__min_heap_sift_up(container_of(&(_heap)->nr, min_heap_char, nr),	\
+			   __minheap_obj_size(_heap), _idx, _func, _args)
 #define min_heapify_all(_heap, _func, _args)	\
-	__min_heapify_all((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+	__min_heapify_all(container_of(&(_heap)->nr, min_heap_char, nr),	\
+			  __minheap_obj_size(_heap), _func, _args)
 #define min_heap_pop(_heap, _func, _args)	\
-	__min_heap_pop((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+	__min_heap_pop(container_of(&(_heap)->nr, min_heap_char, nr),	\
+		       __minheap_obj_size(_heap), _func, _args)
 #define min_heap_pop_push(_heap, _element, _func, _args)	\
-	__min_heap_pop_push((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), \
-			    _func, _args)
+	__min_heap_pop_push(container_of(&(_heap)->nr, min_heap_char, nr), _element,	\
+			    __minheap_obj_size(_heap), _func, _args)
 #define min_heap_push(_heap, _element, _func, _args)	\
-	__min_heap_push((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), _func, _args)
+	__min_heap_push(container_of(&(_heap)->nr, min_heap_char, nr), _element,	\
+			__minheap_obj_size(_heap), _func, _args)
 #define min_heap_del(_heap, _idx, _func, _args)	\
-	__min_heap_del((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, _func, _args)
+	__min_heap_del(container_of(&(_heap)->nr, min_heap_char, nr),	\
+		       __minheap_obj_size(_heap), _idx, _func, _args)
 
 #endif /* _LINUX_MIN_HEAP_H */
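The recurring change above replaces a blind `(min_heap_char *)_heap` cast with `container_of(&(_heap)->nr, min_heap_char, nr)`. The cast compiles for any pointer; container_of only compiles when the argument actually has an `nr` member, so passing the wrong object becomes a build error. A userspace sketch of the mechanism (simplified struct layouts; the real min_heap types carry a flexible preallocated array):

```c
#include <stddef.h>
#include <assert.h>

/* Userspace definition of the kernel's container_of idiom. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* All min_heap variants share this common prefix, so a pointer to any of
 * them can be reinterpreted as the "char" base type. */
struct min_heap_char { int nr; int size; char *data; };
struct min_heap_int  { int nr; int size; int *data;  };

static int heap_count(struct min_heap_int *heap)
{
	/* Compiles only because min_heap_int really has an 'nr' member;
	 * handing in an unrelated pointer would fail at build time,
	 * where the old cast would silently accept it. */
	return container_of(&heap->nr, struct min_heap_char, nr)->nr;
}

static int demo_heap_count(void)
{
	struct min_heap_int h = { .nr = 3, .size = 8, .data = 0 };
	return heap_count(&h);
}
```

Since `nr` sits at offset 0 of every variant, the generated code is identical to the old cast; only the compile-time checking changes.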
+102 -117
include/linux/minmax.h
···
 #include <linux/types.h>
 
 /*
- * min()/max()/clamp() macros must accomplish three things:
+ * min()/max()/clamp() macros must accomplish several things:
  *
  * - Avoid multiple evaluations of the arguments (so side-effects like
  *   "x++" happen only once) when non-constant.
- * - Retain result as a constant expressions when called with only
- *   constant expressions (to avoid tripping VLA warnings in stack
- *   allocation usage).
  * - Perform signed v unsigned type-checking (to generate compile
  *   errors instead of nasty runtime surprises).
  * - Unsigned char/short are always promoted to signed int and can be
···
  *   bit #0 set if ok for unsigned comparisons
  *   bit #1 set if ok for signed comparisons
  *
- * In particular, statically non-negative signed integer
- * expressions are ok for both.
+ * In particular, statically non-negative signed integer expressions
+ * are ok for both.
  *
- * NOTE! Unsigned types smaller than 'int' are implicitly
- * converted to 'int' in expressions, and are accepted for
- * signed conversions for now. This is debatable.
+ * NOTE! Unsigned types smaller than 'int' are implicitly converted to 'int'
+ * in expressions, and are accepted for signed conversions for now.
+ * This is debatable.
  *
- * Note that 'x' is the original expression, and 'ux' is
- * the unique variable that contains the value.
+ * Note that 'x' is the original expression, and 'ux' is the unique variable
+ * that contains the value.
  *
- * We use 'ux' for pure type checking, and 'x' for when
- * we need to look at the value (but without evaluating
- * it for side effects! Careful to only ever evaluate it
- * with sizeof() or __builtin_constant_p() etc).
+ * We use 'ux' for pure type checking, and 'x' for when we need to look at the
+ * value (but without evaluating it for side effects!
+ * Careful to only ever evaluate it with sizeof() or __builtin_constant_p() etc).
  *
- * Pointers end up being checked by the normal C type
- * rules at the actual comparison, and these expressions
- * only need to be careful to not cause warnings for
- * pointer use.
+ * Pointers end up being checked by the normal C type rules at the actual
+ * comparison, and these expressions only need to be careful to not cause
+ * warnings for pointer use.
  */
-#define __signed_type_use(x,ux) (2+__is_nonneg(x,ux))
-#define __unsigned_type_use(x,ux) (1+2*(sizeof(ux)<4))
-#define __sign_use(x,ux) (is_signed_type(typeof(ux))? \
-	__signed_type_use(x,ux):__unsigned_type_use(x,ux))
+#define __sign_use(ux) (is_signed_type(typeof(ux)) ? \
+	(2 + __is_nonneg(ux)) : (1 + 2 * (sizeof(ux) < 4)))
 
 /*
- * To avoid warnings about casting pointers to integers
- * of different sizes, we need that special sign type.
+ * Check whether a signed value is always non-negative.
  *
- * On 64-bit we can just always use 'long', since any
- * integer or pointer type can just be cast to that.
+ * A cast is needed to avoid any warnings from values that aren't signed
+ * integer types (in which case the result doesn't matter).
  *
- * This does not work for 128-bit signed integers since
- * the cast would truncate them, but we do not use s128
- * types in the kernel (we do use 'u128', but they will
- * be handled by the !is_signed_type() case).
+ * On 64-bit any integer or pointer type can safely be cast to 'long long'.
+ * But on 32-bit we need to avoid warnings about casting pointers to integers
+ * of different sizes without truncating 64-bit values so 'long' or 'long long'
+ * must be used depending on the size of the value.
  *
- * NOTE! The cast is there only to avoid any warnings
- * from when values that aren't signed integer types.
+ * This does not work for 128-bit signed integers since the cast would truncate
+ * them, but we do not use s128 types in the kernel (we do use 'u128',
+ * but they are handled by the !is_signed_type() case).
  */
-#ifdef CONFIG_64BIT
-#define __signed_type(ux) long
+#if __SIZEOF_POINTER__ == __SIZEOF_LONG_LONG__
+#define __is_nonneg(ux) statically_true((long long)(ux) >= 0)
 #else
-#define __signed_type(ux) typeof(__builtin_choose_expr(sizeof(ux)>4,1LL,1L))
+#define __is_nonneg(ux) statically_true( \
+	(typeof(__builtin_choose_expr(sizeof(ux) > 4, 1LL, 1L)))(ux) >= 0)
 #endif
-#define __is_nonneg(x,ux) statically_true((__signed_type(ux))(x)>=0)
 
-#define __types_ok(x,y,ux,uy) \
-	(__sign_use(x,ux) & __sign_use(y,uy))
+#define __types_ok(ux, uy) \
+	(__sign_use(ux) & __sign_use(uy))
 
-#define __types_ok3(x,y,z,ux,uy,uz) \
-	(__sign_use(x,ux) & __sign_use(y,uy) & __sign_use(z,uz))
+#define __types_ok3(ux, uy, uz) \
+	(__sign_use(ux) & __sign_use(uy) & __sign_use(uz))
 
 #define __cmp_op_min <
 #define __cmp_op_max >
···
 
 #define __careful_cmp_once(op, x, y, ux, uy) ({		\
 	__auto_type ux = (x); __auto_type uy = (y);	\
-	BUILD_BUG_ON_MSG(!__types_ok(x,y,ux,uy),	\
+	BUILD_BUG_ON_MSG(!__types_ok(ux, uy),		\
 		#op"("#x", "#y") signedness error");	\
 	__cmp(op, ux, uy); })
 
 #define __careful_cmp(op, x, y)	\
 	__careful_cmp_once(op, x, y, __UNIQUE_ID(x_), __UNIQUE_ID(y_))
-
-#define __clamp(val, lo, hi)	\
-	((val) >= (hi) ? (hi) : ((val) <= (lo) ? (lo) : (val)))
-
-#define __clamp_once(val, lo, hi, uval, ulo, uhi) ({				\
-	__auto_type uval = (val);						\
-	__auto_type ulo = (lo);							\
-	__auto_type uhi = (hi);							\
-	static_assert(__builtin_choose_expr(__is_constexpr((lo) > (hi)),	\
-			(lo) <= (hi), true),					\
-		"clamp() low limit " #lo " greater than high limit " #hi);	\
-	BUILD_BUG_ON_MSG(!__types_ok3(val,lo,hi,uval,ulo,uhi),			\
-		"clamp("#val", "#lo", "#hi") signedness error");		\
-	__clamp(uval, ulo, uhi); })
-
-#define __careful_clamp(val, lo, hi) \
-	__clamp_once(val, lo, hi, __UNIQUE_ID(v_), __UNIQUE_ID(l_), __UNIQUE_ID(h_))
 
 /**
  * min - return minimum of two values of the same or compatible types
···
 
 #define __careful_op3(op, x, y, z, ux, uy, uz) ({			\
 	__auto_type ux = (x); __auto_type uy = (y);__auto_type uz = (z);\
-	BUILD_BUG_ON_MSG(!__types_ok3(x,y,z,ux,uy,uz),			\
+	BUILD_BUG_ON_MSG(!__types_ok3(ux, uy, uz),			\
 		#op"3("#x", "#y", "#z") signedness error");		\
 	__cmp(op, ux, __cmp(op, uy, uz)); })
 
···
 	__careful_op3(max, x, y, z, __UNIQUE_ID(x_), __UNIQUE_ID(y_), __UNIQUE_ID(z_))
 
 /**
- * min_not_zero - return the minimum that is _not_ zero, unless both are zero
- * @x: value1
- * @y: value2
- */
-#define min_not_zero(x, y) ({			\
-	typeof(x) __x = (x);			\
-	typeof(y) __y = (y);			\
-	__x == 0 ? __y : ((__y == 0) ? __x : min(__x, __y)); })
-
-/**
- * clamp - return a value clamped to a given range with strict typechecking
- * @val: current value
- * @lo: lowest allowable value
- * @hi: highest allowable value
- *
- * This macro does strict typechecking of @lo/@hi to make sure they are of the
- * same type as @val.  See the unnecessary pointer comparisons.
- */
-#define clamp(val, lo, hi) __careful_clamp(val, lo, hi)
-
-/*
- * ..and if you can't take the strict
- * types, you can specify one yourself.
- *
- * Or not use min/max/clamp at all, of course.
- */
-
-/**
  * min_t - return minimum of two values, using the specified type
  * @type: data type to use
  * @x: first value
···
  * @y: second value
  */
 #define max_t(type, x, y) __cmp_once(max, type, x, y)
+
+/**
+ * min_not_zero - return the minimum that is _not_ zero, unless both are zero
+ * @x: value1
+ * @y: value2
+ */
+#define min_not_zero(x, y) ({			\
+	typeof(x) __x = (x);			\
+	typeof(y) __y = (y);			\
+	__x == 0 ? __y : ((__y == 0) ? __x : min(__x, __y)); })
+
+#define __clamp(val, lo, hi)	\
+	((val) >= (hi) ? (hi) : ((val) <= (lo) ? (lo) : (val)))
+
+#define __clamp_once(type, val, lo, hi, uval, ulo, uhi) ({		\
+	type uval = (val);						\
+	type ulo = (lo);						\
+	type uhi = (hi);						\
+	BUILD_BUG_ON_MSG(statically_true(ulo > uhi),			\
+		"clamp() low limit " #lo " greater than high limit " #hi); \
+	BUILD_BUG_ON_MSG(!__types_ok3(uval, ulo, uhi),			\
+		"clamp("#val", "#lo", "#hi") signedness error");	\
+	__clamp(uval, ulo, uhi); })
+
+#define __careful_clamp(type, val, lo, hi) \
+	__clamp_once(type, val, lo, hi, __UNIQUE_ID(v_), __UNIQUE_ID(l_), __UNIQUE_ID(h_))
+
+/**
+ * clamp - return a value clamped to a given range with typechecking
+ * @val: current value
+ * @lo: lowest allowable value
+ * @hi: highest allowable value
+ *
+ * This macro checks @val/@lo/@hi to make sure they have compatible
+ * signedness.
+ */
+#define clamp(val, lo, hi) __careful_clamp(__auto_type, val, lo, hi)
+
+/**
+ * clamp_t - return a value clamped to a given range using a given type
+ * @type: the type of variable to use
+ * @val: current value
+ * @lo: minimum allowable value
+ * @hi: maximum allowable value
+ *
+ * This macro does no typechecking and uses temporary variables of type
+ * @type to make all the comparisons.
+ */
+#define clamp_t(type, val, lo, hi) __careful_clamp(type, val, lo, hi)
+
+/**
+ * clamp_val - return a value clamped to a given range using val's type
+ * @val: current value
+ * @lo: minimum allowable value
+ * @hi: maximum allowable value
+ *
+ * This macro does no typechecking and uses temporary variables of whatever
+ * type the input argument @val is.  This is useful when @val is an unsigned
+ * type and @lo and @hi are literals that will otherwise be assigned a signed
+ * integer type.
+ */
+#define clamp_val(val, lo, hi) __careful_clamp(typeof(val), val, lo, hi)
 
 /*
  * Do not check the array parameter using __must_be_array().
···
  */
 #define max_array(array, len) __minmax_array(max, array, len)
 
-/**
- * clamp_t - return a value clamped to a given range using a given type
- * @type: the type of variable to use
- * @val: current value
- * @lo: minimum allowable value
- * @hi: maximum allowable value
- *
- * This macro does no typechecking and uses temporary variables of type
- * @type to make all the comparisons.
285 - */ 286 - #define clamp_t(type, val, lo, hi) __careful_clamp((type)(val), (type)(lo), (type)(hi)) 287 - 288 - /** 289 - * clamp_val - return a value clamped to a given range using val's type 290 - * @val: current value 291 - * @lo: minimum allowable value 292 - * @hi: maximum allowable value 293 - * 294 - * This macro does no typechecking and uses temporary variables of whatever 295 - * type the input argument @val is. This is useful when @val is an unsigned 296 - * type and @lo and @hi are literals that will otherwise be assigned a signed 297 - * integer type. 298 - */ 299 - #define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi) 300 - 301 266 static inline bool in_range64(u64 val, u64 start, u64 len) 302 267 { 303 268 return (val - start) < len; ··· 311 326 * Use these carefully: no type checking, and uses the arguments 312 327 * multiple times. Use for obvious constants only. 313 328 */ 314 - #define MIN(a,b) __cmp(min,a,b) 315 - #define MAX(a,b) __cmp(max,a,b) 316 - #define MIN_T(type,a,b) __cmp(min,(type)(a),(type)(b)) 317 - #define MAX_T(type,a,b) __cmp(max,(type)(a),(type)(b)) 329 + #define MIN(a, b) __cmp(min, a, b) 330 + #define MAX(a, b) __cmp(max, a, b) 331 + #define MIN_T(type, a, b) __cmp(min, (type)(a), (type)(b)) 332 + #define MAX_T(type, a, b) __cmp(max, (type)(a), (type)(b)) 318 333 319 334 #endif /* _LINUX_MINMAX_H */
+6
include/linux/sched.h
··· 398 398 /* Time spent waiting on a runqueue: */ 399 399 unsigned long long run_delay; 400 400 401 + /* Max time spent waiting on a runqueue: */ 402 + unsigned long long max_run_delay; 403 + 404 + /* Min time spent waiting on a runqueue: */ 405 + unsigned long long min_run_delay; 406 + 401 407 /* Timestamps: */ 402 408 403 409 /* When did we last run on a CPU? */
+1 -1
include/linux/types.h
··· 43 43 typedef long intptr_t; 44 44 45 45 #ifdef CONFIG_HAVE_UID16 46 - /* This is defined by include/asm-{arch}/posix_types.h */ 46 + /* This is defined by arch/{arch}/include/asm/posix_types.h */ 47 47 typedef __kernel_old_uid_t old_uid_t; 48 48 typedef __kernel_old_gid_t old_gid_t; 49 49 #endif /* CONFIG_UID16 */
+17
include/uapi/linux/taskstats.h
··· 72 72 */ 73 73 __u64 cpu_count __attribute__((aligned(8))); 74 74 __u64 cpu_delay_total; 75 + __u64 cpu_delay_max; 76 + __u64 cpu_delay_min; 75 77 76 78 /* Following four fields atomically updated using task->delays->lock */ 77 79 ··· 82 80 */ 83 81 __u64 blkio_count; 84 82 __u64 blkio_delay_total; 83 + __u64 blkio_delay_max; 84 + __u64 blkio_delay_min; 85 85 86 86 /* Delay waiting for page fault I/O (swap in only) */ 87 87 __u64 swapin_count; 88 88 __u64 swapin_delay_total; 89 + __u64 swapin_delay_max; 90 + __u64 swapin_delay_min; 89 91 90 92 /* cpu "wall-clock" running time 91 93 * On some architectures, value will adjust for cpu time stolen ··· 172 166 /* Delay waiting for memory reclaim */ 173 167 __u64 freepages_count; 174 168 __u64 freepages_delay_total; 169 + __u64 freepages_delay_max; 170 + __u64 freepages_delay_min; 175 171 176 172 /* Delay waiting for thrashing page */ 177 173 __u64 thrashing_count; 178 174 __u64 thrashing_delay_total; 175 + __u64 thrashing_delay_max; 176 + __u64 thrashing_delay_min; 179 177 180 178 /* v10: 64-bit btime to avoid overflow */ 181 179 __u64 ac_btime64; /* 64-bit begin time */ ··· 187 177 /* v11: Delay waiting for memory compact */ 188 178 __u64 compact_count; 189 179 __u64 compact_delay_total; 180 + __u64 compact_delay_max; 181 + __u64 compact_delay_min; 190 182 191 183 /* v12 begin */ 192 184 __u32 ac_tgid; /* thread group ID */ ··· 210 198 /* v13: Delay waiting for write-protect copy */ 211 199 __u64 wpcopy_count; 212 200 __u64 wpcopy_delay_total; 201 + __u64 wpcopy_delay_max; 202 + __u64 wpcopy_delay_min; 213 203 214 204 /* v14: Delay waiting for IRQ/SOFTIRQ */ 215 205 __u64 irq_count; 216 206 __u64 irq_delay_total; 207 + __u64 irq_delay_max; 208 + __u64 irq_delay_min; 209 + /* v15: add Delay max */ 217 210 }; 218 211 219 212
+1 -1
init/do_mounts_initrd.c
··· 89 89 extern char *envp_init[]; 90 90 int error; 91 91 92 - pr_warn("using deprecated initrd support, will be removed in 2021.\n"); 92 + pr_warn("using deprecated initrd support, will be removed soon.\n"); 93 93 94 94 real_root_dev = new_encode_dev(ROOT_DEV); 95 95 create_dev("/dev/root.old", Root_RAM0);
+4 -7
ipc/util.c
··· 615 615 } 616 616 617 617 /** 618 - * ipc_obtain_object_idr 618 + * ipc_obtain_object_idr - Look for an id in the ipc ids idr and 619 + * return associated ipc object. 619 620 * @ids: ipc identifier set 620 621 * @id: ipc id to look for 621 - * 622 - * Look for an id in the ipc ids idr and return associated ipc object. 623 622 * 624 623 * Call inside the RCU critical section. 625 624 * The ipc object is *not* locked on exit. ··· 636 637 } 637 638 638 639 /** 639 - * ipc_obtain_object_check 640 + * ipc_obtain_object_check - Similar to ipc_obtain_object_idr() but 641 + * also checks the ipc object sequence number. 640 642 * @ids: ipc identifier set 641 643 * @id: ipc id to look for 642 - * 643 - * Similar to ipc_obtain_object_idr() but also checks the ipc object 644 - * sequence number. 645 644 * 646 645 * Call inside the RCU critical section. 647 646 * The ipc object is *not* locked on exit.
+2 -6
kernel/capability.c
··· 38 38 39 39 static void warn_legacy_capability_use(void) 40 40 { 41 - char name[sizeof(current->comm)]; 42 - 43 41 pr_info_once("warning: `%s' uses 32-bit capabilities (legacy support in use)\n", 44 - get_task_comm(name, current)); 42 + current->comm); 45 43 } 46 44 47 45 /* ··· 60 62 61 63 static void warn_deprecated_v2(void) 62 64 { 63 - char name[sizeof(current->comm)]; 64 - 65 65 pr_info_once("warning: `%s' uses deprecated v2 capabilities in a way that may be insecure\n", 66 - get_task_comm(name, current)); 66 + current->comm); 67 67 } 68 68 69 69 /*
+45 -10
kernel/delayacct.c
··· 93 93 94 94 /* 95 95 * Finish delay accounting for a statistic using its timestamps (@start), 96 - * accumalator (@total) and @count 96 + * accumulator (@total) and @count 97 97 */ 98 - static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total, u32 *count) 98 + static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total, u32 *count, u64 *max, u64 *min) 99 99 { 100 100 s64 ns = local_clock() - *start; 101 101 unsigned long flags; ··· 104 104 raw_spin_lock_irqsave(lock, flags); 105 105 *total += ns; 106 106 (*count)++; 107 + if (ns > *max) 108 + *max = ns; 109 + if (*min == 0 || ns < *min) 110 + *min = ns; 107 111 raw_spin_unlock_irqrestore(lock, flags); 108 112 } 109 113 } ··· 126 122 delayacct_end(&p->delays->lock, 127 123 &p->delays->blkio_start, 128 124 &p->delays->blkio_delay, 129 - &p->delays->blkio_count); 125 + &p->delays->blkio_count, 126 + &p->delays->blkio_delay_max, 127 + &p->delays->blkio_delay_min); 130 128 } 131 129 132 130 int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk) ··· 159 153 160 154 d->cpu_count += t1; 161 155 156 + d->cpu_delay_max = tsk->sched_info.max_run_delay; 157 + d->cpu_delay_min = tsk->sched_info.min_run_delay; 162 158 tmp = (s64)d->cpu_delay_total + t2; 163 159 d->cpu_delay_total = (tmp < (s64)d->cpu_delay_total) ? 0 : tmp; 164 - 165 160 tmp = (s64)d->cpu_run_virtual_total + t3; 161 + 166 162 d->cpu_run_virtual_total = 167 163 (tmp < (s64)d->cpu_run_virtual_total) ? 0 : tmp; 168 164 ··· 172 164 return 0; 173 165 174 166 /* zero XXX_total, non-zero XXX_count implies XXX stat overflowed */ 175 - 176 167 raw_spin_lock_irqsave(&tsk->delays->lock, flags); 168 + d->blkio_delay_max = tsk->delays->blkio_delay_max; 169 + d->blkio_delay_min = tsk->delays->blkio_delay_min; 177 170 tmp = d->blkio_delay_total + tsk->delays->blkio_delay; 178 171 d->blkio_delay_total = (tmp < d->blkio_delay_total) ? 
0 : tmp; 172 + d->swapin_delay_max = tsk->delays->swapin_delay_max; 173 + d->swapin_delay_min = tsk->delays->swapin_delay_min; 179 174 tmp = d->swapin_delay_total + tsk->delays->swapin_delay; 180 175 d->swapin_delay_total = (tmp < d->swapin_delay_total) ? 0 : tmp; 176 + d->freepages_delay_max = tsk->delays->freepages_delay_max; 177 + d->freepages_delay_min = tsk->delays->freepages_delay_min; 181 178 tmp = d->freepages_delay_total + tsk->delays->freepages_delay; 182 179 d->freepages_delay_total = (tmp < d->freepages_delay_total) ? 0 : tmp; 180 + d->thrashing_delay_max = tsk->delays->thrashing_delay_max; 181 + d->thrashing_delay_min = tsk->delays->thrashing_delay_min; 183 182 tmp = d->thrashing_delay_total + tsk->delays->thrashing_delay; 184 183 d->thrashing_delay_total = (tmp < d->thrashing_delay_total) ? 0 : tmp; 184 + d->compact_delay_max = tsk->delays->compact_delay_max; 185 + d->compact_delay_min = tsk->delays->compact_delay_min; 185 186 tmp = d->compact_delay_total + tsk->delays->compact_delay; 186 187 d->compact_delay_total = (tmp < d->compact_delay_total) ? 0 : tmp; 188 + d->wpcopy_delay_max = tsk->delays->wpcopy_delay_max; 189 + d->wpcopy_delay_min = tsk->delays->wpcopy_delay_min; 187 190 tmp = d->wpcopy_delay_total + tsk->delays->wpcopy_delay; 188 191 d->wpcopy_delay_total = (tmp < d->wpcopy_delay_total) ? 0 : tmp; 192 + d->irq_delay_max = tsk->delays->irq_delay_max; 193 + d->irq_delay_min = tsk->delays->irq_delay_min; 189 194 tmp = d->irq_delay_total + tsk->delays->irq_delay; 190 195 d->irq_delay_total = (tmp < d->irq_delay_total) ? 
0 : tmp; 191 196 d->blkio_count += tsk->delays->blkio_count; ··· 234 213 delayacct_end(&current->delays->lock, 235 214 &current->delays->freepages_start, 236 215 &current->delays->freepages_delay, 237 - &current->delays->freepages_count); 216 + &current->delays->freepages_count, 217 + &current->delays->freepages_delay_max, 218 + &current->delays->freepages_delay_min); 238 219 } 239 220 240 221 void __delayacct_thrashing_start(bool *in_thrashing) ··· 258 235 delayacct_end(&current->delays->lock, 259 236 &current->delays->thrashing_start, 260 237 &current->delays->thrashing_delay, 261 - &current->delays->thrashing_count); 238 + &current->delays->thrashing_count, 239 + &current->delays->thrashing_delay_max, 240 + &current->delays->thrashing_delay_min); 262 241 } 263 242 264 243 void __delayacct_swapin_start(void) ··· 273 248 delayacct_end(&current->delays->lock, 274 249 &current->delays->swapin_start, 275 250 &current->delays->swapin_delay, 276 - &current->delays->swapin_count); 251 + &current->delays->swapin_count, 252 + &current->delays->swapin_delay_max, 253 + &current->delays->swapin_delay_min); 277 254 } 278 255 279 256 void __delayacct_compact_start(void) ··· 288 261 delayacct_end(&current->delays->lock, 289 262 &current->delays->compact_start, 290 263 &current->delays->compact_delay, 291 - &current->delays->compact_count); 264 + &current->delays->compact_count, 265 + &current->delays->compact_delay_max, 266 + &current->delays->compact_delay_min); 292 267 } 293 268 294 269 void __delayacct_wpcopy_start(void) ··· 303 274 delayacct_end(&current->delays->lock, 304 275 &current->delays->wpcopy_start, 305 276 &current->delays->wpcopy_delay, 306 - &current->delays->wpcopy_count); 277 + &current->delays->wpcopy_count, 278 + &current->delays->wpcopy_delay_max, 279 + &current->delays->wpcopy_delay_min); 307 280 } 308 281 309 282 void __delayacct_irq(struct task_struct *task, u32 delta) ··· 315 284 raw_spin_lock_irqsave(&task->delays->lock, flags); 316 285 
task->delays->irq_delay += delta; 317 286 task->delays->irq_count++; 287 + if (delta > task->delays->irq_delay_max) 288 + task->delays->irq_delay_max = delta; 289 + if (delta && (!task->delays->irq_delay_min || delta < task->delays->irq_delay_min)) 290 + task->delays->irq_delay_min = delta; 318 291 raw_spin_unlock_irqrestore(&task->delays->lock, flags); 319 292 } 320 293
+5 -4
kernel/fork.c
··· 1511 1511 struct file *exe_file = NULL; 1512 1512 struct mm_struct *mm; 1513 1513 1514 + if (task->flags & PF_KTHREAD) 1515 + return NULL; 1516 + 1514 1517 task_lock(task); 1515 1518 mm = task->mm; 1516 - if (mm) { 1517 - if (!(task->flags & PF_KTHREAD)) 1518 - exe_file = get_mm_exe_file(mm); 1519 - } 1519 + if (mm) 1520 + exe_file = get_mm_exe_file(mm); 1520 1521 task_unlock(task); 1521 1522 return exe_file; 1522 1523 }
+1 -2
kernel/futex/waitwake.c
··· 210 210 211 211 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) { 212 212 if (oparg < 0 || oparg > 31) { 213 - char comm[sizeof(current->comm)]; 214 213 /* 215 214 * kill this print and return -EINVAL when userspace 216 215 * is sane again 217 216 */ 218 217 pr_info_ratelimited("futex_wake_op: %s tries to shift op by %d; fix this program\n", 219 - get_task_comm(comm, current), oparg); 218 + current->comm, oparg); 220 219 oparg &= 31; 221 220 } 222 221 oparg = 1 << oparg;
+3 -3
kernel/gcov/clang.c
··· 264 264 265 265 /** 266 266 * gcov_info_add - add up profiling data 267 - * @dest: profiling data set to which data is added 268 - * @source: profiling data set which is added 267 + * @dst: profiling data set to which data is added 268 + * @src: profiling data set which is added 269 269 * 270 - * Adds profiling counts of @source to @dest. 270 + * Adds profiling counts of @src to @dst. 271 271 */ 272 272 void gcov_info_add(struct gcov_info *dst, struct gcov_info *src) 273 273 {
+2
kernel/hung_task.c
··· 147 147 print_tainted(), init_utsname()->release, 148 148 (int)strcspn(init_utsname()->version, " "), 149 149 init_utsname()->version); 150 + if (t->flags & PF_POSTCOREDUMP) 151 + pr_err(" Blocked by coredump.\n"); 150 152 pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\"" 151 153 " disables this message.\n"); 152 154 sched_show_task(t);
+1 -1
kernel/kthread.c
··· 1175 1175 * @work: kthread_work to queue 1176 1176 * 1177 1177 * Queue @work to work processor @task for async execution. @task 1178 - * must have been created with kthread_worker_create(). Returns %true 1178 + * must have been created with kthread_create_worker(). Returns %true 1179 1179 * if @work was successfully queued, %false if it was already pending. 1180 1180 * 1181 1181 * Reinitialize the work if it needs to be used by another worker.
+3 -3
kernel/latencytop.c
··· 158 158 159 159 /** 160 160 * __account_scheduler_latency - record an occurred latency 161 - * @tsk - the task struct of the task hitting the latency 162 - * @usecs - the duration of the latency in microseconds 163 - * @inter - 1 if the sleep was interruptible, 0 if uninterruptible 161 + * @tsk: the task struct of the task hitting the latency 162 + * @usecs: the duration of the latency in microseconds 163 + * @inter: 1 if the sleep was interruptible, 0 if uninterruptible 164 164 * 165 165 * This function is the main entry point for recording latency entries 166 166 * as called by the scheduler.
+1 -2
kernel/resource.c
··· 1683 1683 { 1684 1684 struct region_devres match_data = { parent, start, n }; 1685 1685 1686 - __release_region(parent, start, n); 1687 - WARN_ON(devres_destroy(dev, devm_region_release, devm_region_match, 1686 + WARN_ON(devres_release(dev, devm_region_release, devm_region_match, 1688 1687 &match_data)); 1689 1688 } 1690 1689 EXPORT_SYMBOL(__devm_release_region);
+2 -2
kernel/sched/core.c
··· 7709 7709 if (pid_alive(p)) 7710 7710 ppid = task_pid_nr(rcu_dereference(p->real_parent)); 7711 7711 rcu_read_unlock(); 7712 - pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n", 7712 + pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d task_flags:0x%04x flags:0x%08lx\n", 7713 7713 free, task_pid_nr(p), task_tgid_nr(p), 7714 - ppid, read_task_thread_flags(p)); 7714 + ppid, p->flags, read_task_thread_flags(p)); 7715 7715 7716 7716 print_worker_info(KERN_INFO, p); 7717 7717 print_stop_info(KERN_INFO, p);
+8 -1
kernel/sched/stats.h
··· 248 248 delta = rq_clock(rq) - t->sched_info.last_queued; 249 249 t->sched_info.last_queued = 0; 250 250 t->sched_info.run_delay += delta; 251 - 251 + if (delta > t->sched_info.max_run_delay) 252 + t->sched_info.max_run_delay = delta; 253 + if (delta && (!t->sched_info.min_run_delay || delta < t->sched_info.min_run_delay)) 254 + t->sched_info.min_run_delay = delta; 252 255 rq_sched_info_dequeue(rq, delta); 253 256 } 254 257 ··· 273 270 t->sched_info.run_delay += delta; 274 271 t->sched_info.last_arrival = now; 275 272 t->sched_info.pcount++; 273 + if (delta > t->sched_info.max_run_delay) 274 + t->sched_info.max_run_delay = delta; 275 + if (delta && (!t->sched_info.min_run_delay || delta < t->sched_info.min_run_delay)) 276 + t->sched_info.min_run_delay = delta; 276 277 277 278 rq_sched_info_arrive(rq, delta); 278 279 }
+4 -4
kernel/ucount.c
··· 164 164 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid) 165 165 { 166 166 struct hlist_head *hashent = ucounts_hashentry(ns, uid); 167 - struct ucounts *ucounts, *new; 168 167 bool wrapped; 168 + struct ucounts *ucounts, *new = NULL; 169 169 170 170 spin_lock_irq(&ucounts_lock); 171 171 ucounts = find_ucounts(ns, uid, hashent); ··· 182 182 183 183 spin_lock_irq(&ucounts_lock); 184 184 ucounts = find_ucounts(ns, uid, hashent); 185 - if (ucounts) { 186 - kfree(new); 187 - } else { 185 + if (!ucounts) { 188 186 hlist_add_head(&new->node, hashent); 189 187 get_user_ns(new->ns); 190 188 spin_unlock_irq(&ucounts_lock); 191 189 return new; 192 190 } 193 191 } 192 + 194 193 wrapped = !get_ucounts_or_wrap(ucounts); 195 194 spin_unlock_irq(&ucounts_lock); 195 + kfree(new); 196 196 if (wrapped) { 197 197 put_ucounts(ucounts); 198 198 return NULL;
+1 -1
kernel/watchdog.c
··· 190 190 * with printk_cpu_sync_get_irqsave() that we can still at least 191 191 * get the message about the lockup out. 192 192 */ 193 - pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n", cpu); 193 + pr_emerg("CPU%u: Watchdog detected hard LOCKUP on cpu %u\n", this_cpu, cpu); 194 194 printk_cpu_sync_get_irqsave(flags); 195 195 196 196 print_modules();
+31 -3
lib/Kconfig.debug
··· 2269 2269 config TEST_MIN_HEAP 2270 2270 tristate "Min heap test" 2271 2271 depends on DEBUG_KERNEL || m 2272 - select MIN_HEAP 2273 2272 help 2274 2273 Enable this to turn on min heap function tests. This test is 2275 2274 executed only once during system boot (so affects only boot time), ··· 2456 2457 config TEST_UUID 2457 2458 tristate "Test functions located in the uuid module at runtime" 2458 2459 2459 - config TEST_XARRAY 2460 - tristate "Test the XArray code at runtime" 2460 + config XARRAY_KUNIT 2461 + tristate "KUnit test XArray code at runtime" if !KUNIT_ALL_TESTS 2462 + depends on KUNIT 2463 + default KUNIT_ALL_TESTS 2464 + help 2465 + Enable this option to test the Xarray code at boot. 2466 + 2467 + KUnit tests run during boot and output the results to the debug log 2468 + in TAP format (http://testanything.org/). Only useful for kernel devs 2469 + running the KUnit test harness, and not intended for inclusion into a 2470 + production build. 2471 + 2472 + For more information on KUnit and unit tests in general please refer 2473 + to the KUnit documentation in Documentation/dev-tools/kunit/. 2474 + 2475 + If unsure, say N. 2461 2476 2462 2477 config TEST_MAPLE_TREE 2463 2478 tristate "Test the Maple Tree code at runtime or module load" ··· 3181 3168 3182 3169 Enabling this option will include tests that check various scenarios 3183 3170 and edge cases to ensure the accuracy and reliability of the exponentiation 3171 + function. 3172 + 3173 + If unsure, say N 3174 + 3175 + config INT_SQRT_KUNIT_TEST 3176 + tristate "Integer square root test" if !KUNIT_ALL_TESTS 3177 + depends on KUNIT 3178 + default KUNIT_ALL_TESTS 3179 + help 3180 + This option enables the KUnit test suite for the int_sqrt() function, 3181 + which performs square root calculation. The test suite checks 3182 + various scenarios, including edge cases, to ensure correctness. 
3183 + 3184 + Enabling this option will include tests that check various scenarios 3185 + and edge cases to ensure the accuracy and reliability of the square root 3184 3186 function. 3185 3187 3186 3188 If unsure, say N
+1 -1
lib/Makefile
··· 94 94 endif 95 95 96 96 obj-$(CONFIG_TEST_UUID) += test_uuid.o 97 - obj-$(CONFIG_TEST_XARRAY) += test_xarray.o 98 97 obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o 99 98 obj-$(CONFIG_TEST_PARMAN) += test_parman.o 100 99 obj-$(CONFIG_TEST_KMOD) += test_kmod.o ··· 372 373 obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o 373 374 obj-$(CONFIG_CHECKSUM_KUNIT) += checksum_kunit.o 374 375 obj-$(CONFIG_UTIL_MACROS_KUNIT) += util_macros_kunit.o 376 + obj-$(CONFIG_XARRAY_KUNIT) += test_xarray.o 375 377 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o 376 378 obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o 377 379 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
+26 -2
lib/fault-inject.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 #include <linux/kernel.h> 3 3 #include <linux/init.h> 4 - #include <linux/random.h> 4 + #include <linux/prandom.h> 5 5 #include <linux/debugfs.h> 6 6 #include <linux/sched.h> 7 7 #include <linux/stat.h> ··· 11 11 #include <linux/interrupt.h> 12 12 #include <linux/stacktrace.h> 13 13 #include <linux/fault-inject.h> 14 + 15 + /* 16 + * The should_fail() functions use prandom instead of the normal Linux RNG 17 + * since they don't need cryptographically secure random numbers. 18 + */ 19 + static DEFINE_PER_CPU(struct rnd_state, fault_rnd_state); 20 + 21 + static u32 fault_prandom_u32_below_100(void) 22 + { 23 + struct rnd_state *state; 24 + u32 res; 25 + 26 + state = &get_cpu_var(fault_rnd_state); 27 + res = prandom_u32_state(state); 28 + put_cpu_var(fault_rnd_state); 29 + 30 + return res % 100; 31 + } 14 32 15 33 /* 16 34 * setup_fault_attr() is a helper function for various __setup handlers, so it ··· 48 30 "FAULT_INJECTION: failed to parse arguments\n"); 49 31 return 0; 50 32 } 33 + 34 + prandom_init_once(&fault_rnd_state); 51 35 52 36 attr->probability = probability; 53 37 attr->interval = interval; ··· 166 146 return false; 167 147 } 168 148 169 - if (attr->probability <= get_random_u32_below(100)) 149 + if (attr->probability <= fault_prandom_u32_below_100()) 170 150 return false; 171 151 172 152 fail: ··· 238 218 dir = debugfs_create_dir(name, parent); 239 219 if (IS_ERR(dir)) 240 220 return dir; 221 + 222 + prandom_init_once(&fault_rnd_state); 241 223 242 224 debugfs_create_ul("probability", mode, dir, &attr->probability); 243 225 debugfs_create_ul("interval", mode, dir, &attr->interval); ··· 453 431 454 432 void fault_config_init(struct fault_config *config, const char *name) 455 433 { 434 + prandom_init_once(&fault_rnd_state); 435 + 456 436 config_group_init_type_name(&config->group, name, &fault_config_type); 457 437 } 458 438 EXPORT_SYMBOL_GPL(fault_config_init);
-2
lib/inflate.c
··· 1257 1257 /* Decompress */ 1258 1258 if ((res = inflate())) { 1259 1259 switch (res) { 1260 - case 0: 1261 - break; 1262 1260 case 1: 1263 1261 error("invalid compressed format (err=1)"); 1264 1262 break;
-3
lib/kunit_iov_iter.c
··· 63 63 KUNIT_ASSERT_EQ(test, got, npages); 64 64 } 65 65 66 - for (int i = 0; i < npages; i++) 67 - pages[i]->index = i; 68 - 69 66 buffer = vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL); 70 67 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer); 71 68
+7
lib/list_sort.c
··· 108 108 * and list_sort is a stable sort, so it is not necessary to distinguish 109 109 * the @a < @b and @a == @b cases. 110 110 * 111 + * The comparison function must adhere to specific mathematical properties 112 + * to ensure correct and stable sorting: 113 + * - Antisymmetry: cmp(@a, @b) must return the opposite sign of 114 + * cmp(@b, @a). 115 + * - Transitivity: if cmp(@a, @b) <= 0 and cmp(@b, @c) <= 0, then 116 + * cmp(@a, @c) <= 0. 117 + * 111 118 * This is compatible with two styles of @cmp function: 112 119 * - The traditional style which returns <0 / =0 / >0, or 113 120 * - Returning a boolean 0/1.
-1
lib/lz4/lz4_compress.c
··· 33 33 /*-************************************ 34 34 * Dependencies 35 35 **************************************/ 36 - #include <linux/lz4.h> 37 36 #include "lz4defs.h" 38 37 #include <linux/module.h> 39 38 #include <linux/kernel.h>
-1
lib/lz4/lz4_decompress.c
··· 33 33 /*-************************************ 34 34 * Dependencies 35 35 **************************************/ 36 - #include <linux/lz4.h> 37 36 #include "lz4defs.h" 38 37 #include <linux/init.h> 39 38 #include <linux/module.h>
+2 -2
lib/lz4/lz4defs.h
··· 39 39 40 40 #include <linux/bitops.h> 41 41 #include <linux/string.h> /* memset, memcpy */ 42 + #include <linux/lz4.h> 42 43 43 44 #define FORCE_INLINE __always_inline 44 45 ··· 93 92 #define MB (1 << 20) 94 93 #define GB (1U << 30) 95 94 96 - #define MAXD_LOG 16 97 - #define MAX_DISTANCE ((1 << MAXD_LOG) - 1) 95 + #define MAX_DISTANCE LZ4_DISTANCE_MAX 98 96 #define STEPSIZE sizeof(size_t) 99 97 100 98 #define ML_BITS 4
-1
lib/lz4/lz4hc_compress.c
··· 34 34 /*-************************************ 35 35 * Dependencies 36 36 **************************************/ 37 - #include <linux/lz4.h> 38 37 #include "lz4defs.h" 39 38 #include <linux/module.h> 40 39 #include <linux/kernel.h>
+1
lib/math/Makefile
··· 9 9 obj-$(CONFIG_TEST_DIV64) += test_div64.o 10 10 obj-$(CONFIG_TEST_MULDIV64) += test_mul_u64_u64_div_u64.o 11 11 obj-$(CONFIG_RATIONAL_KUNIT_TEST) += rational-test.o 12 + obj-$(CONFIG_INT_SQRT_KUNIT_TEST) += tests/int_sqrt_kunit.o
+1
lib/math/tests/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 3 obj-$(CONFIG_INT_POW_TEST) += int_pow_kunit.o 4 + obj-$(CONFIG_INT_SQRT_KUNIT_TEST) += int_sqrt_kunit.o
+66
lib/math/tests/int_sqrt_kunit.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #include <kunit/test.h> 4 + #include <linux/limits.h> 5 + #include <linux/math.h> 6 + #include <linux/module.h> 7 + #include <linux/string.h> 8 + 9 + struct test_case_params { 10 + unsigned long x; 11 + unsigned long expected_result; 12 + const char *name; 13 + }; 14 + 15 + static const struct test_case_params params[] = { 16 + { 0, 0, "edge case: square root of 0" }, 17 + { 1, 1, "perfect square: square root of 1" }, 18 + { 2, 1, "non-perfect square: square root of 2" }, 19 + { 3, 1, "non-perfect square: square root of 3" }, 20 + { 4, 2, "perfect square: square root of 4" }, 21 + { 5, 2, "non-perfect square: square root of 5" }, 22 + { 6, 2, "non-perfect square: square root of 6" }, 23 + { 7, 2, "non-perfect square: square root of 7" }, 24 + { 8, 2, "non-perfect square: square root of 8" }, 25 + { 9, 3, "perfect square: square root of 9" }, 26 + { 15, 3, "non-perfect square: square root of 15 (N-1 from 16)" }, 27 + { 16, 4, "perfect square: square root of 16" }, 28 + { 17, 4, "non-perfect square: square root of 17 (N+1 from 16)" }, 29 + { 80, 8, "non-perfect square: square root of 80 (N-1 from 81)" }, 30 + { 81, 9, "perfect square: square root of 81" }, 31 + { 82, 9, "non-perfect square: square root of 82 (N+1 from 81)" }, 32 + { 255, 15, "non-perfect square: square root of 255 (N-1 from 256)" }, 33 + { 256, 16, "perfect square: square root of 256" }, 34 + { 257, 16, "non-perfect square: square root of 257 (N+1 from 256)" }, 35 + { 2147483648, 46340, "large input: square root of 2147483648" }, 36 + { 4294967295, 65535, "edge case: ULONG_MAX for 32-bit" }, 37 + }; 38 + 39 + static void get_desc(const struct test_case_params *tc, char *desc) 40 + { 41 + strscpy(desc, tc->name, KUNIT_PARAM_DESC_SIZE); 42 + } 43 + 44 + KUNIT_ARRAY_PARAM(int_sqrt, params, get_desc); 45 + 46 + static void int_sqrt_test(struct kunit *test) 47 + { 48 + const struct test_case_params *tc = (const struct test_case_params 
*)test->param_value; 49 + 50 + KUNIT_EXPECT_EQ(test, tc->expected_result, int_sqrt(tc->x)); 51 + } 52 + 53 + static struct kunit_case math_int_sqrt_test_cases[] = { 54 + KUNIT_CASE_PARAM(int_sqrt_test, int_sqrt_gen_params), 55 + {} 56 + }; 57 + 58 + static struct kunit_suite int_sqrt_test_suite = { 59 + .name = "math-int_sqrt", 60 + .test_cases = math_int_sqrt_test_cases, 61 + }; 62 + 63 + kunit_test_suites(&int_sqrt_test_suite); 64 + 65 + MODULE_DESCRIPTION("math.int_sqrt KUnit test suite"); 66 + MODULE_LICENSE("GPL");
+1 -1
lib/rhashtable.c
··· 669 669 * structure outside the hash table. 670 670 * 671 671 * This function may be called from any process context, including 672 - * non-preemptable context, but cannot be called from softirq or 672 + * non-preemptible context, but cannot be called from softirq or 673 673 * hardirq context. 674 674 * 675 675 * You must call rhashtable_walk_exit after this function returns.
+7
lib/sort.c
··· 200 200 * copy (e.g. fix up pointers or auxiliary data), but the built-in swap 201 201 * avoids a slow retpoline and so is significantly faster. 202 202 * 203 + * The comparison function must adhere to specific mathematical 204 + * properties to ensure correct and stable sorting: 205 + * - Antisymmetry: cmp_func(a, b) must return the opposite sign of 206 + * cmp_func(b, a). 207 + * - Transitivity: if cmp_func(a, b) <= 0 and cmp_func(b, c) <= 0, then 208 + * cmp_func(a, c) <= 0. 209 + * 203 210 * Sorting time is O(n log n) both on average and worst-case. While 204 211 * quicksort is slightly faster on average, it suffers from exploitable 205 212 * O(n*n) worst-case behavior and extra memory requirements that make
+15 -15
lib/test_min_heap.c
··· 32 32 int last; 33 33 34 34 last = values[0]; 35 - min_heap_pop(heap, funcs, NULL); 35 + min_heap_pop_inline(heap, funcs, NULL); 36 36 while (heap->nr > 0) { 37 37 if (min_heap) { 38 38 if (last > values[0]) { ··· 48 48 } 49 49 } 50 50 last = values[0]; 51 - min_heap_pop(heap, funcs, NULL); 51 + min_heap_pop_inline(heap, funcs, NULL); 52 52 } 53 53 return err; 54 54 } ··· 69 69 int i, err; 70 70 71 71 /* Test with known set of values. */ 72 - min_heapify_all(&heap, &funcs, NULL); 72 + min_heapify_all_inline(&heap, &funcs, NULL); 73 73 err = pop_verify_heap(min_heap, &heap, &funcs); 74 74 75 75 ··· 78 78 for (i = 0; i < heap.nr; i++) 79 79 values[i] = get_random_u32(); 80 80 81 - min_heapify_all(&heap, &funcs, NULL); 81 + min_heapify_all_inline(&heap, &funcs, NULL); 82 82 err += pop_verify_heap(min_heap, &heap, &funcs); 83 83 84 84 return err; ··· 102 102 103 103 /* Test with known set of values copied from data. */ 104 104 for (i = 0; i < ARRAY_SIZE(data); i++) 105 - min_heap_push(&heap, &data[i], &funcs, NULL); 105 + min_heap_push_inline(&heap, &data[i], &funcs, NULL); 106 106 107 107 err = pop_verify_heap(min_heap, &heap, &funcs); 108 108 109 109 /* Test with randomly generated values. */ 110 110 while (heap.nr < heap.size) { 111 111 temp = get_random_u32(); 112 - min_heap_push(&heap, &temp, &funcs, NULL); 112 + min_heap_push_inline(&heap, &temp, &funcs, NULL); 113 113 } 114 114 err += pop_verify_heap(min_heap, &heap, &funcs); 115 115 ··· 135 135 /* Fill values with data to pop and replace. */ 136 136 temp = min_heap ? 0x80000000 : 0x7FFFFFFF; 137 137 for (i = 0; i < ARRAY_SIZE(data); i++) 138 - min_heap_push(&heap, &temp, &funcs, NULL); 138 + min_heap_push_inline(&heap, &temp, &funcs, NULL); 139 139 140 140 /* Test with known set of values copied from data. 
*/ 141 141 for (i = 0; i < ARRAY_SIZE(data); i++) 142 - min_heap_pop_push(&heap, &data[i], &funcs, NULL); 142 + min_heap_pop_push_inline(&heap, &data[i], &funcs, NULL); 143 143 144 144 err = pop_verify_heap(min_heap, &heap, &funcs); 145 145 146 146 heap.nr = 0; 147 147 for (i = 0; i < ARRAY_SIZE(data); i++) 148 - min_heap_push(&heap, &temp, &funcs, NULL); 148 + min_heap_push_inline(&heap, &temp, &funcs, NULL); 149 149 150 150 /* Test with randomly generated values. */ 151 151 for (i = 0; i < ARRAY_SIZE(data); i++) { 152 152 temp = get_random_u32(); 153 - min_heap_pop_push(&heap, &temp, &funcs, NULL); 153 + min_heap_pop_push_inline(&heap, &temp, &funcs, NULL); 154 154 } 155 155 err += pop_verify_heap(min_heap, &heap, &funcs); 156 156 ··· 163 163 -3, -1, -2, -4, 0x8000000, 0x7FFFFFF }; 164 164 struct min_heap_test heap; 165 165 166 - min_heap_init(&heap, values, ARRAY_SIZE(values)); 166 + min_heap_init_inline(&heap, values, ARRAY_SIZE(values)); 167 167 heap.nr = ARRAY_SIZE(values); 168 168 struct min_heap_callbacks funcs = { 169 169 .less = min_heap ? less_than : greater_than, ··· 172 172 int i, err; 173 173 174 174 /* Test with known set of values. 
*/ 175 - min_heapify_all(&heap, &funcs, NULL); 175 + min_heapify_all_inline(&heap, &funcs, NULL); 176 176 for (i = 0; i < ARRAY_SIZE(values) / 2; i++) 177 - min_heap_del(&heap, get_random_u32() % heap.nr, &funcs, NULL); 177 + min_heap_del_inline(&heap, get_random_u32() % heap.nr, &funcs, NULL); 178 178 err = pop_verify_heap(min_heap, &heap, &funcs); 179 179 180 180 ··· 182 182 heap.nr = ARRAY_SIZE(values); 183 183 for (i = 0; i < heap.nr; i++) 184 184 values[i] = get_random_u32(); 185 - min_heapify_all(&heap, &funcs, NULL); 185 + min_heapify_all_inline(&heap, &funcs, NULL); 186 186 187 187 for (i = 0; i < ARRAY_SIZE(values) / 2; i++) 188 - min_heap_del(&heap, get_random_u32() % heap.nr, &funcs, NULL); 188 + min_heap_del_inline(&heap, get_random_u32() % heap.nr, &funcs, NULL); 189 189 err += pop_verify_heap(min_heap, &heap, &funcs); 190 190 191 191 return err;
+421 -271
lib/test_xarray.c
··· 6 6 * Author: Matthew Wilcox <willy@infradead.org> 7 7 */ 8 8 9 - #include <linux/xarray.h> 10 - #include <linux/module.h> 9 + #include <kunit/test.h> 11 10 12 - static unsigned int tests_run; 13 - static unsigned int tests_passed; 11 + #include <linux/module.h> 12 + #include <linux/xarray.h> 14 13 15 14 static const unsigned int order_limit = 16 15 IS_ENABLED(CONFIG_XARRAY_MULTI) ? BITS_PER_LONG : 1; ··· 19 20 void xa_dump(const struct xarray *xa) { } 20 21 # endif 21 22 #undef XA_BUG_ON 22 - #define XA_BUG_ON(xa, x) do { \ 23 - tests_run++; \ 24 - if (x) { \ 25 - printk("BUG at %s:%d\n", __func__, __LINE__); \ 26 - xa_dump(xa); \ 27 - dump_stack(); \ 28 - } else { \ 29 - tests_passed++; \ 30 - } \ 23 + #define XA_BUG_ON(xa, x) do { \ 24 + if (x) { \ 25 + KUNIT_FAIL(test, #x); \ 26 + xa_dump(xa); \ 27 + dump_stack(); \ 28 + } \ 31 29 } while (0) 32 30 #endif 33 31 ··· 38 42 return xa_store(xa, index, xa_mk_index(index), gfp); 39 43 } 40 44 41 - static void xa_insert_index(struct xarray *xa, unsigned long index) 45 + static void xa_insert_index(struct kunit *test, struct xarray *xa, unsigned long index) 42 46 { 43 47 XA_BUG_ON(xa, xa_insert(xa, index, xa_mk_index(index), 44 48 GFP_KERNEL) != 0); 45 49 } 46 50 47 - static void xa_alloc_index(struct xarray *xa, unsigned long index, gfp_t gfp) 51 + static void xa_alloc_index(struct kunit *test, struct xarray *xa, unsigned long index, gfp_t gfp) 48 52 { 49 53 u32 id; 50 54 ··· 53 57 XA_BUG_ON(xa, id != index); 54 58 } 55 59 56 - static void xa_erase_index(struct xarray *xa, unsigned long index) 60 + static void xa_erase_index(struct kunit *test, struct xarray *xa, unsigned long index) 57 61 { 58 62 XA_BUG_ON(xa, xa_erase(xa, index) != xa_mk_index(index)); 59 63 XA_BUG_ON(xa, xa_load(xa, index) != NULL); ··· 79 83 return curr; 80 84 } 81 85 82 - static noinline void check_xa_err(struct xarray *xa) 86 + static inline struct xarray *xa_param(struct kunit *test) 83 87 { 88 + return *(struct xarray **)test->param_value; 
89 + } 90 + 91 + static noinline void check_xa_err(struct kunit *test) 92 + { 93 + struct xarray *xa = xa_param(test); 94 + 84 95 XA_BUG_ON(xa, xa_err(xa_store_index(xa, 0, GFP_NOWAIT)) != 0); 85 96 XA_BUG_ON(xa, xa_err(xa_erase(xa, 0)) != 0); 86 97 #ifndef __KERNEL__ ··· 102 99 // XA_BUG_ON(xa, xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) != -EINVAL); 103 100 } 104 101 105 - static noinline void check_xas_retry(struct xarray *xa) 102 + static noinline void check_xas_retry(struct kunit *test) 106 103 { 104 + struct xarray *xa = xa_param(test); 105 + 107 106 XA_STATE(xas, xa, 0); 108 107 void *entry; 109 108 ··· 114 109 115 110 rcu_read_lock(); 116 111 XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != xa_mk_value(0)); 117 - xa_erase_index(xa, 1); 112 + xa_erase_index(test, xa, 1); 118 113 XA_BUG_ON(xa, !xa_is_retry(xas_reload(&xas))); 119 114 XA_BUG_ON(xa, xas_retry(&xas, NULL)); 120 115 XA_BUG_ON(xa, xas_retry(&xas, xa_mk_value(0))); ··· 145 140 } 146 141 xas_unlock(&xas); 147 142 148 - xa_erase_index(xa, 0); 149 - xa_erase_index(xa, 1); 143 + xa_erase_index(test, xa, 0); 144 + xa_erase_index(test, xa, 1); 150 145 } 151 146 152 - static noinline void check_xa_load(struct xarray *xa) 147 + static noinline void check_xa_load(struct kunit *test) 153 148 { 149 + struct xarray *xa = xa_param(test); 150 + 154 151 unsigned long i, j; 155 152 156 153 for (i = 0; i < 1024; i++) { ··· 174 167 else 175 168 XA_BUG_ON(xa, entry); 176 169 } 177 - xa_erase_index(xa, i); 170 + xa_erase_index(test, xa, i); 178 171 } 179 172 XA_BUG_ON(xa, !xa_empty(xa)); 180 173 } 181 174 182 - static noinline void check_xa_mark_1(struct xarray *xa, unsigned long index) 175 + static noinline void check_xa_mark_1(struct kunit *test, unsigned long index) 183 176 { 177 + struct xarray *xa = xa_param(test); 178 + 184 179 unsigned int order; 185 180 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
8 : 1; 186 181 ··· 202 193 XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_1)); 203 194 204 195 /* Storing NULL clears marks, and they can't be set again */ 205 - xa_erase_index(xa, index); 196 + xa_erase_index(test, xa, index); 206 197 XA_BUG_ON(xa, !xa_empty(xa)); 207 198 XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_0)); 208 199 xa_set_mark(xa, index, XA_MARK_0); ··· 253 244 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0)); 254 245 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_1)); 255 246 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_2)); 256 - xa_erase_index(xa, index); 257 - xa_erase_index(xa, next); 247 + xa_erase_index(test, xa, index); 248 + xa_erase_index(test, xa, next); 258 249 XA_BUG_ON(xa, !xa_empty(xa)); 259 250 } 260 251 XA_BUG_ON(xa, !xa_empty(xa)); 261 252 } 262 253 263 - static noinline void check_xa_mark_2(struct xarray *xa) 254 + static noinline void check_xa_mark_2(struct kunit *test) 264 255 { 256 + struct xarray *xa = xa_param(test); 257 + 265 258 XA_STATE(xas, xa, 0); 266 259 unsigned long index; 267 260 unsigned int count = 0; ··· 300 289 xa_destroy(xa); 301 290 } 302 291 303 - static noinline void check_xa_mark_3(struct xarray *xa) 292 + static noinline void check_xa_mark_3(struct kunit *test) 304 293 { 305 294 #ifdef CONFIG_XARRAY_MULTI 295 + struct xarray *xa = xa_param(test); 296 + 306 297 XA_STATE(xas, xa, 0x41); 307 298 void *entry; 308 299 int count = 0; ··· 323 310 #endif 324 311 } 325 312 326 - static noinline void check_xa_mark(struct xarray *xa) 313 + static noinline void check_xa_mark(struct kunit *test) 327 314 { 328 315 unsigned long index; 329 316 330 317 for (index = 0; index < 16384; index += 4) 331 - check_xa_mark_1(xa, index); 318 + check_xa_mark_1(test, index); 332 319 333 - check_xa_mark_2(xa); 334 - check_xa_mark_3(xa); 320 + check_xa_mark_2(test); 321 + check_xa_mark_3(test); 335 322 } 336 323 337 - static noinline void check_xa_shrink(struct xarray *xa) 324 + static noinline void check_xa_shrink(struct kunit *test) 338 325 { 
326 + struct xarray *xa = xa_param(test); 327 + 339 328 XA_STATE(xas, xa, 1); 340 329 struct xa_node *node; 341 330 unsigned int order; ··· 362 347 XA_BUG_ON(xa, xas_load(&xas) != NULL); 363 348 xas_unlock(&xas); 364 349 XA_BUG_ON(xa, xa_load(xa, 0) != xa_mk_value(0)); 365 - xa_erase_index(xa, 0); 350 + xa_erase_index(test, xa, 0); 366 351 XA_BUG_ON(xa, !xa_empty(xa)); 367 352 368 353 for (order = 0; order < max_order; order++) { ··· 379 364 XA_BUG_ON(xa, xa_head(xa) == node); 380 365 rcu_read_unlock(); 381 366 XA_BUG_ON(xa, xa_load(xa, max + 1) != NULL); 382 - xa_erase_index(xa, ULONG_MAX); 367 + xa_erase_index(test, xa, ULONG_MAX); 383 368 XA_BUG_ON(xa, xa->xa_head != node); 384 - xa_erase_index(xa, 0); 369 + xa_erase_index(test, xa, 0); 385 370 } 386 371 } 387 372 388 - static noinline void check_insert(struct xarray *xa) 373 + static noinline void check_insert(struct kunit *test) 389 374 { 375 + struct xarray *xa = xa_param(test); 376 + 390 377 unsigned long i; 391 378 392 379 for (i = 0; i < 1024; i++) { 393 - xa_insert_index(xa, i); 380 + xa_insert_index(test, xa, i); 394 381 XA_BUG_ON(xa, xa_load(xa, i - 1) != NULL); 395 382 XA_BUG_ON(xa, xa_load(xa, i + 1) != NULL); 396 - xa_erase_index(xa, i); 383 + xa_erase_index(test, xa, i); 397 384 } 398 385 399 386 for (i = 10; i < BITS_PER_LONG; i++) { 400 - xa_insert_index(xa, 1UL << i); 387 + xa_insert_index(test, xa, 1UL << i); 401 388 XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 1) != NULL); 402 389 XA_BUG_ON(xa, xa_load(xa, (1UL << i) + 1) != NULL); 403 - xa_erase_index(xa, 1UL << i); 390 + xa_erase_index(test, xa, 1UL << i); 404 391 405 - xa_insert_index(xa, (1UL << i) - 1); 392 + xa_insert_index(test, xa, (1UL << i) - 1); 406 393 XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 2) != NULL); 407 394 XA_BUG_ON(xa, xa_load(xa, 1UL << i) != NULL); 408 - xa_erase_index(xa, (1UL << i) - 1); 395 + xa_erase_index(test, xa, (1UL << i) - 1); 409 396 } 410 397 411 - xa_insert_index(xa, ~0UL); 398 + xa_insert_index(test, xa, ~0UL); 412 
399 XA_BUG_ON(xa, xa_load(xa, 0UL) != NULL); 413 400 XA_BUG_ON(xa, xa_load(xa, ~1UL) != NULL); 414 - xa_erase_index(xa, ~0UL); 401 + xa_erase_index(test, xa, ~0UL); 415 402 416 403 XA_BUG_ON(xa, !xa_empty(xa)); 417 404 } 418 405 419 - static noinline void check_cmpxchg(struct xarray *xa) 406 + static noinline void check_cmpxchg(struct kunit *test) 420 407 { 408 + struct xarray *xa = xa_param(test); 409 + 421 410 void *FIVE = xa_mk_value(5); 422 411 void *SIX = xa_mk_value(6); 423 412 void *LOTS = xa_mk_value(12345678); ··· 437 418 XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY); 438 419 XA_BUG_ON(xa, xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL) != FIVE); 439 420 XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) == -EBUSY); 440 - xa_erase_index(xa, 12345678); 441 - xa_erase_index(xa, 5); 421 + xa_erase_index(test, xa, 12345678); 422 + xa_erase_index(test, xa, 5); 442 423 XA_BUG_ON(xa, !xa_empty(xa)); 443 424 } 444 425 445 - static noinline void check_cmpxchg_order(struct xarray *xa) 426 + static noinline void check_cmpxchg_order(struct kunit *test) 446 427 { 447 428 #ifdef CONFIG_XARRAY_MULTI 429 + struct xarray *xa = xa_param(test); 430 + 448 431 void *FIVE = xa_mk_value(5); 449 432 unsigned int i, order = 3; 450 433 ··· 497 476 #endif 498 477 } 499 478 500 - static noinline void check_reserve(struct xarray *xa) 479 + static noinline void check_reserve(struct kunit *test) 501 480 { 481 + struct xarray *xa = xa_param(test); 482 + 502 483 void *entry; 503 484 unsigned long index; 504 485 int count; ··· 517 494 XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0); 518 495 XA_BUG_ON(xa, xa_store_index(xa, 12345678, GFP_NOWAIT) != NULL); 519 496 xa_release(xa, 12345678); 520 - xa_erase_index(xa, 12345678); 497 + xa_erase_index(test, xa, 12345678); 521 498 XA_BUG_ON(xa, !xa_empty(xa)); 522 499 523 500 /* cmpxchg sees a reserved entry as ZERO */ ··· 525 502 XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, XA_ZERO_ENTRY, 526 503 xa_mk_value(12345678), GFP_NOWAIT) != 
NULL); 527 504 xa_release(xa, 12345678); 528 - xa_erase_index(xa, 12345678); 505 + xa_erase_index(test, xa, 12345678); 529 506 XA_BUG_ON(xa, !xa_empty(xa)); 530 507 531 508 /* xa_insert treats it as busy */ ··· 565 542 xa_destroy(xa); 566 543 } 567 544 568 - static noinline void check_xas_erase(struct xarray *xa) 545 + static noinline void check_xas_erase(struct kunit *test) 569 546 { 547 + struct xarray *xa = xa_param(test); 548 + 570 549 XA_STATE(xas, xa, 0); 571 550 void *entry; 572 551 unsigned long i, j; ··· 606 581 } 607 582 608 583 #ifdef CONFIG_XARRAY_MULTI 609 - static noinline void check_multi_store_1(struct xarray *xa, unsigned long index, 584 + static noinline void check_multi_store_1(struct kunit *test, unsigned long index, 610 585 unsigned int order) 611 586 { 587 + struct xarray *xa = xa_param(test); 588 + 612 589 XA_STATE(xas, xa, index); 613 590 unsigned long min = index & ~((1UL << order) - 1); 614 591 unsigned long max = min + (1UL << order); ··· 629 602 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 630 603 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 631 604 632 - xa_erase_index(xa, min); 605 + xa_erase_index(test, xa, min); 633 606 XA_BUG_ON(xa, !xa_empty(xa)); 634 607 } 635 608 636 - static noinline void check_multi_store_2(struct xarray *xa, unsigned long index, 609 + static noinline void check_multi_store_2(struct kunit *test, unsigned long index, 637 610 unsigned int order) 638 611 { 612 + struct xarray *xa = xa_param(test); 613 + 639 614 XA_STATE(xas, xa, index); 640 615 xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL); 641 616 ··· 649 620 XA_BUG_ON(xa, !xa_empty(xa)); 650 621 } 651 622 652 - static noinline void check_multi_store_3(struct xarray *xa, unsigned long index, 623 + static noinline void check_multi_store_3(struct kunit *test, unsigned long index, 653 624 unsigned int order) 654 625 { 626 + struct xarray *xa = xa_param(test); 627 + 655 628 XA_STATE(xas, xa, 0); 656 629 void *entry; 657 630 int n = 0; ··· 678 647 } 679 648 
#endif 680 649 681 - static noinline void check_multi_store(struct xarray *xa) 650 + static noinline void check_multi_store(struct kunit *test) 682 651 { 683 652 #ifdef CONFIG_XARRAY_MULTI 653 + struct xarray *xa = xa_param(test); 654 + 684 655 unsigned long i, j, k; 685 656 unsigned int max_order = (sizeof(long) == 4) ? 30 : 60; 686 657 ··· 747 714 } 748 715 749 716 for (i = 0; i < 20; i++) { 750 - check_multi_store_1(xa, 200, i); 751 - check_multi_store_1(xa, 0, i); 752 - check_multi_store_1(xa, (1UL << i) + 1, i); 717 + check_multi_store_1(test, 200, i); 718 + check_multi_store_1(test, 0, i); 719 + check_multi_store_1(test, (1UL << i) + 1, i); 753 720 } 754 - check_multi_store_2(xa, 4095, 9); 721 + check_multi_store_2(test, 4095, 9); 755 722 756 723 for (i = 1; i < 20; i++) { 757 - check_multi_store_3(xa, 0, i); 758 - check_multi_store_3(xa, 1UL << i, i); 724 + check_multi_store_3(test, 0, i); 725 + check_multi_store_3(test, 1UL << i, i); 759 726 } 760 727 #endif 761 728 } 762 729 763 730 #ifdef CONFIG_XARRAY_MULTI 764 731 /* mimics page cache __filemap_add_folio() */ 765 - static noinline void check_xa_multi_store_adv_add(struct xarray *xa, 732 + static noinline void check_xa_multi_store_adv_add(struct kunit *test, 766 733 unsigned long index, 767 734 unsigned int order, 768 735 void *p) 769 736 { 737 + struct xarray *xa = xa_param(test); 738 + 770 739 XA_STATE(xas, xa, index); 771 740 unsigned int nrpages = 1UL << order; 772 741 ··· 796 761 } 797 762 798 763 /* mimics page_cache_delete() */ 799 - static noinline void check_xa_multi_store_adv_del_entry(struct xarray *xa, 764 + static noinline void check_xa_multi_store_adv_del_entry(struct kunit *test, 800 765 unsigned long index, 801 766 unsigned int order) 802 767 { 768 + struct xarray *xa = xa_param(test); 769 + 803 770 XA_STATE(xas, xa, index); 804 771 805 772 xas_set_order(&xas, index, order); ··· 809 772 xas_init_marks(&xas); 810 773 } 811 774 812 - static noinline void 
check_xa_multi_store_adv_delete(struct xarray *xa, 775 + static noinline void check_xa_multi_store_adv_delete(struct kunit *test, 813 776 unsigned long index, 814 777 unsigned int order) 815 778 { 779 + struct xarray *xa = xa_param(test); 780 + 816 781 xa_lock_irq(xa); 817 - check_xa_multi_store_adv_del_entry(xa, index, order); 782 + check_xa_multi_store_adv_del_entry(test, index, order); 818 783 xa_unlock_irq(xa); 819 784 } 820 785 ··· 853 814 static unsigned long some_val_2 = 0xdeaddead; 854 815 855 816 /* mimics the page cache usage */ 856 - static noinline void check_xa_multi_store_adv(struct xarray *xa, 817 + static noinline void check_xa_multi_store_adv(struct kunit *test, 857 818 unsigned long pos, 858 819 unsigned int order) 859 820 { 821 + struct xarray *xa = xa_param(test); 822 + 860 823 unsigned int nrpages = 1UL << order; 861 824 unsigned long index, base, next_index, next_next_index; 862 825 unsigned int i; ··· 868 827 next_index = round_down(base + nrpages, nrpages); 869 828 next_next_index = round_down(next_index + nrpages, nrpages); 870 829 871 - check_xa_multi_store_adv_add(xa, base, order, &some_val); 830 + check_xa_multi_store_adv_add(test, base, order, &some_val); 872 831 873 832 for (i = 0; i < nrpages; i++) 874 833 XA_BUG_ON(xa, test_get_entry(xa, base + i) != &some_val); ··· 876 835 XA_BUG_ON(xa, test_get_entry(xa, next_index) != NULL); 877 836 878 837 /* Use order 0 for the next item */ 879 - check_xa_multi_store_adv_add(xa, next_index, 0, &some_val_2); 838 + check_xa_multi_store_adv_add(test, next_index, 0, &some_val_2); 880 839 XA_BUG_ON(xa, test_get_entry(xa, next_index) != &some_val_2); 881 840 882 841 /* Remove the next item */ 883 - check_xa_multi_store_adv_delete(xa, next_index, 0); 842 + check_xa_multi_store_adv_delete(test, next_index, 0); 884 843 885 844 /* Now use order for a new pointer */ 886 - check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2); 845 + check_xa_multi_store_adv_add(test, next_index, order, 
&some_val_2); 887 846 888 847 for (i = 0; i < nrpages; i++) 889 848 XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != &some_val_2); 890 849 891 - check_xa_multi_store_adv_delete(xa, next_index, order); 892 - check_xa_multi_store_adv_delete(xa, base, order); 850 + check_xa_multi_store_adv_delete(test, next_index, order); 851 + check_xa_multi_store_adv_delete(test, base, order); 893 852 XA_BUG_ON(xa, !xa_empty(xa)); 894 853 895 854 /* starting fresh again */ ··· 897 856 /* let's test some holes now */ 898 857 899 858 /* hole at base and next_next */ 900 - check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2); 859 + check_xa_multi_store_adv_add(test, next_index, order, &some_val_2); 901 860 902 861 for (i = 0; i < nrpages; i++) 903 862 XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL); ··· 908 867 for (i = 0; i < nrpages; i++) 909 868 XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != NULL); 910 869 911 - check_xa_multi_store_adv_delete(xa, next_index, order); 870 + check_xa_multi_store_adv_delete(test, next_index, order); 912 871 XA_BUG_ON(xa, !xa_empty(xa)); 913 872 914 873 /* hole at base and next */ 915 874 916 - check_xa_multi_store_adv_add(xa, next_next_index, order, &some_val_2); 875 + check_xa_multi_store_adv_add(test, next_next_index, order, &some_val_2); 917 876 918 877 for (i = 0; i < nrpages; i++) 919 878 XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL); ··· 924 883 for (i = 0; i < nrpages; i++) 925 884 XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != &some_val_2); 926 885 927 - check_xa_multi_store_adv_delete(xa, next_next_index, order); 886 + check_xa_multi_store_adv_delete(test, next_next_index, order); 928 887 XA_BUG_ON(xa, !xa_empty(xa)); 929 888 } 930 889 #endif 931 890 932 - static noinline void check_multi_store_advanced(struct xarray *xa) 891 + static noinline void check_multi_store_advanced(struct kunit *test) 933 892 { 934 893 #ifdef CONFIG_XARRAY_MULTI 935 894 unsigned int max_order = 
IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; ··· 941 900 */ 942 901 for (pos = 7; pos < end; pos = (pos * pos) + 564) { 943 902 for (i = 0; i < max_order; i++) { 944 - check_xa_multi_store_adv(xa, pos, i); 945 - check_xa_multi_store_adv(xa, pos + 157, i); 903 + check_xa_multi_store_adv(test, pos, i); 904 + check_xa_multi_store_adv(test, pos + 157, i); 946 905 } 947 906 } 948 907 #endif 949 908 } 950 909 951 - static noinline void check_xa_alloc_1(struct xarray *xa, unsigned int base) 910 + static noinline void check_xa_alloc_1(struct kunit *test, struct xarray *xa, unsigned int base) 952 911 { 953 912 int i; 954 913 u32 id; 955 914 956 915 XA_BUG_ON(xa, !xa_empty(xa)); 957 916 /* An empty array should assign %base to the first alloc */ 958 - xa_alloc_index(xa, base, GFP_KERNEL); 917 + xa_alloc_index(test, xa, base, GFP_KERNEL); 959 918 960 919 /* Erasing it should make the array empty again */ 961 - xa_erase_index(xa, base); 920 + xa_erase_index(test, xa, base); 962 921 XA_BUG_ON(xa, !xa_empty(xa)); 963 922 964 923 /* And it should assign %base again */ 965 - xa_alloc_index(xa, base, GFP_KERNEL); 924 + xa_alloc_index(test, xa, base, GFP_KERNEL); 966 925 967 926 /* Allocating and then erasing a lot should not lose base */ 968 927 for (i = base + 1; i < 2 * XA_CHUNK_SIZE; i++) 969 - xa_alloc_index(xa, i, GFP_KERNEL); 928 + xa_alloc_index(test, xa, i, GFP_KERNEL); 970 929 for (i = base; i < 2 * XA_CHUNK_SIZE; i++) 971 - xa_erase_index(xa, i); 972 - xa_alloc_index(xa, base, GFP_KERNEL); 930 + xa_erase_index(test, xa, i); 931 + xa_alloc_index(test, xa, base, GFP_KERNEL); 973 932 974 933 /* Destroying the array should do the same as erasing */ 975 934 xa_destroy(xa); 976 935 977 936 /* And it should assign %base again */ 978 - xa_alloc_index(xa, base, GFP_KERNEL); 937 + xa_alloc_index(test, xa, base, GFP_KERNEL); 979 938 980 939 /* The next assigned ID should be base+1 */ 981 - xa_alloc_index(xa, base + 1, GFP_KERNEL); 982 - xa_erase_index(xa, base + 1); 940 + 
xa_alloc_index(test, xa, base + 1, GFP_KERNEL); 941 + xa_erase_index(test, xa, base + 1); 983 942 984 943 /* Storing a value should mark it used */ 985 944 xa_store_index(xa, base + 1, GFP_KERNEL); 986 - xa_alloc_index(xa, base + 2, GFP_KERNEL); 945 + xa_alloc_index(test, xa, base + 2, GFP_KERNEL); 987 946 988 947 /* If we then erase base, it should be free */ 989 - xa_erase_index(xa, base); 990 - xa_alloc_index(xa, base, GFP_KERNEL); 948 + xa_erase_index(test, xa, base); 949 + xa_alloc_index(test, xa, base, GFP_KERNEL); 991 950 992 - xa_erase_index(xa, base + 1); 993 - xa_erase_index(xa, base + 2); 951 + xa_erase_index(test, xa, base + 1); 952 + xa_erase_index(test, xa, base + 2); 994 953 995 954 for (i = 1; i < 5000; i++) { 996 - xa_alloc_index(xa, base + i, GFP_KERNEL); 955 + xa_alloc_index(test, xa, base + i, GFP_KERNEL); 997 956 } 998 957 999 958 xa_destroy(xa); ··· 1016 975 1017 976 XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5), 1018 977 GFP_KERNEL) != -EBUSY); 1019 - XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != 0); 978 + XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != NULL); 1020 979 XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5), 1021 980 GFP_KERNEL) != -EBUSY); 1022 - xa_erase_index(xa, 3); 981 + xa_erase_index(test, xa, 3); 1023 982 XA_BUG_ON(xa, !xa_empty(xa)); 1024 983 } 1025 984 1026 - static noinline void check_xa_alloc_2(struct xarray *xa, unsigned int base) 985 + static noinline void check_xa_alloc_2(struct kunit *test, struct xarray *xa, unsigned int base) 1027 986 { 1028 987 unsigned int i, id; 1029 988 unsigned long index; ··· 1059 1018 XA_BUG_ON(xa, id != 5); 1060 1019 1061 1020 xa_for_each(xa, index, entry) { 1062 - xa_erase_index(xa, index); 1021 + xa_erase_index(test, xa, index); 1063 1022 } 1064 1023 1065 1024 for (i = base; i < base + 9; i++) { ··· 1074 1033 xa_destroy(xa); 1075 1034 } 1076 1035 1077 - static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base) 1036 + static 
noinline void check_xa_alloc_3(struct kunit *test, struct xarray *xa, unsigned int base) 1078 1037 { 1079 1038 struct xa_limit limit = XA_LIMIT(1, 0x3fff); 1080 1039 u32 next = 0; ··· 1090 1049 XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(0x3ffd), limit, 1091 1050 &next, GFP_KERNEL) != 0); 1092 1051 XA_BUG_ON(xa, id != 0x3ffd); 1093 - xa_erase_index(xa, 0x3ffd); 1094 - xa_erase_index(xa, 1); 1052 + xa_erase_index(test, xa, 0x3ffd); 1053 + xa_erase_index(test, xa, 1); 1095 1054 XA_BUG_ON(xa, !xa_empty(xa)); 1096 1055 1097 1056 for (i = 0x3ffe; i < 0x4003; i++) { ··· 1106 1065 1107 1066 /* Check wrap-around is handled correctly */ 1108 1067 if (base != 0) 1109 - xa_erase_index(xa, base); 1110 - xa_erase_index(xa, base + 1); 1068 + xa_erase_index(test, xa, base); 1069 + xa_erase_index(test, xa, base + 1); 1111 1070 next = UINT_MAX; 1112 1071 XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(UINT_MAX), 1113 1072 xa_limit_32b, &next, GFP_KERNEL) != 0); ··· 1120 1079 XA_BUG_ON(xa, id != base + 1); 1121 1080 1122 1081 xa_for_each(xa, index, entry) 1123 - xa_erase_index(xa, index); 1082 + xa_erase_index(test, xa, index); 1124 1083 1125 1084 XA_BUG_ON(xa, !xa_empty(xa)); 1126 1085 } ··· 1128 1087 static DEFINE_XARRAY_ALLOC(xa0); 1129 1088 static DEFINE_XARRAY_ALLOC1(xa1); 1130 1089 1131 - static noinline void check_xa_alloc(void) 1090 + static noinline void check_xa_alloc(struct kunit *test) 1132 1091 { 1133 - check_xa_alloc_1(&xa0, 0); 1134 - check_xa_alloc_1(&xa1, 1); 1135 - check_xa_alloc_2(&xa0, 0); 1136 - check_xa_alloc_2(&xa1, 1); 1137 - check_xa_alloc_3(&xa0, 0); 1138 - check_xa_alloc_3(&xa1, 1); 1092 + check_xa_alloc_1(test, &xa0, 0); 1093 + check_xa_alloc_1(test, &xa1, 1); 1094 + check_xa_alloc_2(test, &xa0, 0); 1095 + check_xa_alloc_2(test, &xa1, 1); 1096 + check_xa_alloc_3(test, &xa0, 0); 1097 + check_xa_alloc_3(test, &xa1, 1); 1139 1098 } 1140 1099 1141 - static noinline void __check_store_iter(struct xarray *xa, unsigned long start, 1100 + static 
noinline void __check_store_iter(struct kunit *test, unsigned long start, 1142 1101 unsigned int order, unsigned int present) 1143 1102 { 1103 + struct xarray *xa = xa_param(test); 1104 + 1144 1105 XA_STATE_ORDER(xas, xa, start, order); 1145 1106 void *entry; 1146 1107 unsigned int count = 0; ··· 1166 1123 XA_BUG_ON(xa, xa_load(xa, start) != xa_mk_index(start)); 1167 1124 XA_BUG_ON(xa, xa_load(xa, start + (1UL << order) - 1) != 1168 1125 xa_mk_index(start)); 1169 - xa_erase_index(xa, start); 1126 + xa_erase_index(test, xa, start); 1170 1127 } 1171 1128 1172 - static noinline void check_store_iter(struct xarray *xa) 1129 + static noinline void check_store_iter(struct kunit *test) 1173 1130 { 1131 + struct xarray *xa = xa_param(test); 1132 + 1174 1133 unsigned int i, j; 1175 1134 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; 1176 1135 1177 1136 for (i = 0; i < max_order; i++) { 1178 1137 unsigned int min = 1 << i; 1179 1138 unsigned int max = (2 << i) - 1; 1180 - __check_store_iter(xa, 0, i, 0); 1139 + __check_store_iter(test, 0, i, 0); 1181 1140 XA_BUG_ON(xa, !xa_empty(xa)); 1182 - __check_store_iter(xa, min, i, 0); 1141 + __check_store_iter(test, min, i, 0); 1183 1142 XA_BUG_ON(xa, !xa_empty(xa)); 1184 1143 1185 1144 xa_store_index(xa, min, GFP_KERNEL); 1186 - __check_store_iter(xa, min, i, 1); 1145 + __check_store_iter(test, min, i, 1); 1187 1146 XA_BUG_ON(xa, !xa_empty(xa)); 1188 1147 xa_store_index(xa, max, GFP_KERNEL); 1189 - __check_store_iter(xa, min, i, 1); 1148 + __check_store_iter(test, min, i, 1); 1190 1149 XA_BUG_ON(xa, !xa_empty(xa)); 1191 1150 1192 1151 for (j = 0; j < min; j++) 1193 1152 xa_store_index(xa, j, GFP_KERNEL); 1194 - __check_store_iter(xa, 0, i, min); 1153 + __check_store_iter(test, 0, i, min); 1195 1154 XA_BUG_ON(xa, !xa_empty(xa)); 1196 1155 for (j = 0; j < min; j++) 1197 1156 xa_store_index(xa, min + j, GFP_KERNEL); 1198 - __check_store_iter(xa, min, i, min); 1157 + __check_store_iter(test, min, i, min); 1199 1158 
	XA_BUG_ON(xa, !xa_empty(xa));
}
#ifdef CONFIG_XARRAY_MULTI
	xa_store_index(xa, 63, GFP_KERNEL);
	xa_store_index(xa, 65, GFP_KERNEL);
-	__check_store_iter(xa, 64, 2, 1);
-	xa_erase_index(xa, 63);
+	__check_store_iter(test, 64, 2, 1);
+	xa_erase_index(test, xa, 63);
#endif
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_multi_find_1(struct xarray *xa, unsigned order)
+static noinline void check_multi_find_1(struct kunit *test, unsigned int order)
{
#ifdef CONFIG_XARRAY_MULTI
+	struct xarray *xa = xa_param(test);
+
	unsigned long multi = 3 << order;
	unsigned long next = 4 << order;
	unsigned long index;
···
	XA_BUG_ON(xa, xa_find_after(xa, &index, next, XA_PRESENT) != NULL);
	XA_BUG_ON(xa, index != next);

-	xa_erase_index(xa, multi);
-	xa_erase_index(xa, next);
-	xa_erase_index(xa, next + 1);
+	xa_erase_index(test, xa, multi);
+	xa_erase_index(test, xa, next);
+	xa_erase_index(test, xa, next + 1);
	XA_BUG_ON(xa, !xa_empty(xa));
#endif
}

-static noinline void check_multi_find_2(struct xarray *xa)
+static noinline void check_multi_find_2(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 10 : 1;
	unsigned int i, j;
	void *entry;
···
					GFP_KERNEL);
			rcu_read_lock();
			xas_for_each(&xas, entry, ULONG_MAX) {
-				xa_erase_index(xa, index);
+				xa_erase_index(test, xa, index);
			}
			rcu_read_unlock();
-			xa_erase_index(xa, index - 1);
+			xa_erase_index(test, xa, index - 1);
			XA_BUG_ON(xa, !xa_empty(xa));
		}
	}
}

-static noinline void check_multi_find_3(struct xarray *xa)
+static noinline void check_multi_find_3(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned int order;

	for (order = 5; order < order_limit; order++) {
···
		XA_BUG_ON(xa, !xa_empty(xa));
		xa_store_order(xa, 0, order - 4, xa_mk_index(0), GFP_KERNEL);
		XA_BUG_ON(xa, xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT));
-		xa_erase_index(xa, 0);
+		xa_erase_index(test, xa, 0);
	}
}

-static noinline void check_find_1(struct xarray *xa)
+static noinline void check_find_1(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned long i, j, k;

	XA_BUG_ON(xa, !xa_empty(xa));
···
			else
				XA_BUG_ON(xa, entry != NULL);
		}
-		xa_erase_index(xa, j);
+		xa_erase_index(test, xa, j);
		XA_BUG_ON(xa, xa_get_mark(xa, j, XA_MARK_0));
		XA_BUG_ON(xa, !xa_get_mark(xa, i, XA_MARK_0));
	}
-	xa_erase_index(xa, i);
+	xa_erase_index(test, xa, i);
	XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
	}
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_find_2(struct xarray *xa)
+static noinline void check_find_2(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	void *entry;
	unsigned long i, j, index;

···
	xa_destroy(xa);
}

-static noinline void check_find_3(struct xarray *xa)
+static noinline void check_find_3(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);
	unsigned long i, j, k;
	void *entry;
···
	xa_destroy(xa);
}

-static noinline void check_find_4(struct xarray *xa)
+static noinline void check_find_4(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned long index = 0;
	void *entry;

···
	entry = xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT);
	XA_BUG_ON(xa, entry);

-	xa_erase_index(xa, ULONG_MAX);
+	xa_erase_index(test, xa, ULONG_MAX);
}

-static noinline void check_find(struct xarray *xa)
+static noinline void check_find(struct kunit *test)
{
	unsigned i;

-	check_find_1(xa);
-	check_find_2(xa);
-	check_find_3(xa);
-	check_find_4(xa);
+	check_find_1(test);
+	check_find_2(test);
+	check_find_3(test);
+	check_find_4(test);

	for (i = 2; i < 10; i++)
-		check_multi_find_1(xa, i);
-	check_multi_find_2(xa);
-	check_multi_find_3(xa);
+		check_multi_find_1(test, i);
+	check_multi_find_2(test);
+	check_multi_find_3(test);
}

/* See find_swap_entry() in mm/shmem.c */
···
	return entry ? xas.xa_index : -1;
}

-static noinline void check_find_entry(struct xarray *xa)
+static noinline void check_find_entry(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
#ifdef CONFIG_XARRAY_MULTI
	unsigned int order;
	unsigned long offset, index;
···
	xa_store_index(xa, ULONG_MAX, GFP_KERNEL);
	XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1);
	XA_BUG_ON(xa, xa_find_entry(xa, xa_mk_index(ULONG_MAX)) != -1);
-	xa_erase_index(xa, ULONG_MAX);
+	xa_erase_index(test, xa, ULONG_MAX);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_pause(struct xarray *xa)
+static noinline void check_pause(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);
	void *entry;
	unsigned int order;
···
	XA_BUG_ON(xa, count != order_limit);

	xa_destroy(xa);
+
+	index = 0;
+	for (order = XA_CHUNK_SHIFT; order > 0; order--) {
+		XA_BUG_ON(xa, xa_store_order(xa, index, order,
+					xa_mk_index(index), GFP_KERNEL));
+		index += 1UL << order;
+	}
+
+	index = 0;
+	count = 0;
+	xas_set(&xas, 0);
+	rcu_read_lock();
+	xas_for_each(&xas, entry, ULONG_MAX) {
+		XA_BUG_ON(xa, entry != xa_mk_index(index));
+		index += 1UL << (XA_CHUNK_SHIFT - count);
+		count++;
+	}
+	rcu_read_unlock();
+	XA_BUG_ON(xa, count != XA_CHUNK_SHIFT);
+
+	index = 0;
+	count = 0;
+	xas_set(&xas, XA_CHUNK_SIZE / 2 + 1);
+	rcu_read_lock();
+	xas_for_each(&xas, entry, ULONG_MAX) {
+		XA_BUG_ON(xa, entry != xa_mk_index(index));
+		index += 1UL << (XA_CHUNK_SHIFT - count);
+		count++;
+		xas_pause(&xas);
+	}
+	rcu_read_unlock();
+	XA_BUG_ON(xa, count != XA_CHUNK_SHIFT);
+
+	xa_destroy(xa);
+
}

-static noinline void check_move_tiny(struct xarray *xa)
+static noinline void check_move_tiny(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);

	XA_BUG_ON(xa, !xa_empty(xa));
···
	XA_BUG_ON(xa, xas_prev(&xas) != xa_mk_index(0));
	XA_BUG_ON(xa, xas_prev(&xas) != NULL);
	rcu_read_unlock();
-	xa_erase_index(xa, 0);
+	xa_erase_index(test, xa, 0);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_move_max(struct xarray *xa)
+static noinline void check_move_max(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);

	xa_store_index(xa, ULONG_MAX, GFP_KERNEL);
···
	XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != NULL);
	rcu_read_unlock();

-	xa_erase_index(xa, ULONG_MAX);
+	xa_erase_index(test, xa, ULONG_MAX);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_move_small(struct xarray *xa, unsigned long idx)
+static noinline void check_move_small(struct kunit *test, unsigned long idx)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);
	unsigned long i;

···
	XA_BUG_ON(xa, xas.xa_index != ULONG_MAX);
	rcu_read_unlock();

-	xa_erase_index(xa, 0);
-	xa_erase_index(xa, idx);
+	xa_erase_index(test, xa, 0);
+	xa_erase_index(test, xa, idx);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_move(struct xarray *xa)
+static noinline void check_move(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, (1 << 16) - 1);
	unsigned long i;

···
	rcu_read_unlock();

	for (i = (1 << 8); i < (1 << 15); i++)
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);

	i = xas.xa_index;
···

	xa_destroy(xa);

-	check_move_tiny(xa);
-	check_move_max(xa);
+	check_move_tiny(test);
+	check_move_max(test);

	for (i = 0; i < 16; i++)
-		check_move_small(xa, 1UL << i);
+		check_move_small(test, 1UL << i);

	for (i = 2; i < 16; i++)
-		check_move_small(xa, (1UL << i) - 1);
+		check_move_small(test, (1UL << i) - 1);
}

-static noinline void xa_store_many_order(struct xarray *xa,
+static noinline void xa_store_many_order(struct kunit *test, struct xarray *xa,
		unsigned long index, unsigned order)
{
	XA_STATE_ORDER(xas, xa, index, order);
···
	XA_BUG_ON(xa, xas_error(&xas));
}

-static noinline void check_create_range_1(struct xarray *xa,
+static noinline void check_create_range_1(struct kunit *test,
		unsigned long index, unsigned order)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned long i;

-	xa_store_many_order(xa, index, order);
+	xa_store_many_order(test, xa, index, order);
	for (i = index; i < index + (1UL << order); i++)
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_create_range_2(struct xarray *xa, unsigned order)
+static noinline void check_create_range_2(struct kunit *test, unsigned int order)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned long i;
	unsigned long nr = 1UL << order;

	for (i = 0; i < nr * nr; i += nr)
-		xa_store_many_order(xa, i, order);
+		xa_store_many_order(test, xa, i, order);
	for (i = 0; i < nr * nr; i++)
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_create_range_3(void)
+static noinline void check_create_range_3(struct kunit *test)
{
	XA_STATE(xas, NULL, 0);
	xas_set_err(&xas, -EEXIST);
···
	XA_BUG_ON(NULL, xas_error(&xas) != -EEXIST);
}

-static noinline void check_create_range_4(struct xarray *xa,
+static noinline void check_create_range_4(struct kunit *test,
		unsigned long index, unsigned order)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE_ORDER(xas, xa, index, order);
	unsigned long base = xas.xa_index;
	unsigned long i = 0;
···
	XA_BUG_ON(xa, xas_error(&xas));

	for (i = base; i < base + (1UL << order); i++)
-		xa_erase_index(xa, i);
+		xa_erase_index(test, xa, i);
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_create_range_5(struct xarray *xa,
+static noinline void check_create_range_5(struct kunit *test,
		unsigned long index, unsigned int order)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE_ORDER(xas, xa, index, order);
	unsigned int i;
···
	xa_destroy(xa);
}

-static noinline void check_create_range(struct xarray *xa)
+static noinline void check_create_range(struct kunit *test)
{
	unsigned int order;
	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 12 : 1;

	for (order = 0; order < max_order; order++) {
-		check_create_range_1(xa, 0, order);
-		check_create_range_1(xa, 1U << order, order);
-		check_create_range_1(xa, 2U << order, order);
-		check_create_range_1(xa, 3U << order, order);
-		check_create_range_1(xa, 1U << 24, order);
+		check_create_range_1(test, 0, order);
+		check_create_range_1(test, 1U << order, order);
+		check_create_range_1(test, 2U << order, order);
+		check_create_range_1(test, 3U << order, order);
+		check_create_range_1(test, 1U << 24, order);
		if (order < 10)
-			check_create_range_2(xa, order);
+			check_create_range_2(test, order);

-		check_create_range_4(xa, 0, order);
-		check_create_range_4(xa, 1U << order, order);
-		check_create_range_4(xa, 2U << order, order);
-		check_create_range_4(xa, 3U << order, order);
-		check_create_range_4(xa, 1U << 24, order);
+		check_create_range_4(test, 0, order);
+		check_create_range_4(test, 1U << order, order);
+		check_create_range_4(test, 2U << order, order);
+		check_create_range_4(test, 3U << order, order);
+		check_create_range_4(test, 1U << 24, order);

-		check_create_range_4(xa, 1, order);
-		check_create_range_4(xa, (1U << order) + 1, order);
-		check_create_range_4(xa, (2U << order) + 1, order);
-		check_create_range_4(xa, (2U << order) - 1, order);
-		check_create_range_4(xa, (3U << order) + 1, order);
-		check_create_range_4(xa, (3U << order) - 1, order);
-		check_create_range_4(xa, (1U << 24) + 1, order);
+		check_create_range_4(test, 1, order);
+		check_create_range_4(test, (1U << order) + 1, order);
+		check_create_range_4(test, (2U << order) + 1, order);
+		check_create_range_4(test, (2U << order) - 1, order);
+		check_create_range_4(test, (3U << order) + 1, order);
+		check_create_range_4(test, (3U << order) - 1, order);
+		check_create_range_4(test, (1U << 24) + 1, order);

-		check_create_range_5(xa, 0, order);
-		check_create_range_5(xa, (1U << order), order);
+		check_create_range_5(test, 0, order);
+		check_create_range_5(test, (1U << order), order);
	}

-	check_create_range_3();
+	check_create_range_3(test);
}

-static noinline void __check_store_range(struct xarray *xa, unsigned long first,
+static noinline void __check_store_range(struct kunit *test, unsigned long first,
		unsigned long last)
{
+	struct xarray *xa = xa_param(test);
+
#ifdef CONFIG_XARRAY_MULTI
	xa_store_range(xa, first, last, xa_mk_index(first), GFP_KERNEL);

···
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_store_range(struct xarray *xa)
+static noinline void check_store_range(struct kunit *test)
{
	unsigned long i, j;

	for (i = 0; i < 128; i++) {
		for (j = i; j < 128; j++) {
-			__check_store_range(xa, i, j);
-			__check_store_range(xa, 128 + i, 128 + j);
-			__check_store_range(xa, 4095 + i, 4095 + j);
-			__check_store_range(xa, 4096 + i, 4096 + j);
-			__check_store_range(xa, 123456 + i, 123456 + j);
-			__check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
+			__check_store_range(test, i, j);
+			__check_store_range(test, 128 + i, 128 + j);
+			__check_store_range(test, 4095 + i, 4095 + j);
+			__check_store_range(test, 4096 + i, 4096 + j);
+			__check_store_range(test, 123456 + i, 123456 + j);
+			__check_store_range(test, (1 << 24) + i, (1 << 24) + j);
		}
	}
}

#ifdef CONFIG_XARRAY_MULTI
-static void check_split_1(struct xarray *xa, unsigned long index,
+static void check_split_1(struct kunit *test, unsigned long index,
				unsigned int order, unsigned int new_order)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE_ORDER(xas, xa, index, new_order);
	unsigned int i, found;
	void *entry;
···
	xa_destroy(xa);
}

-static noinline void check_split(struct xarray *xa)
+static noinline void check_split(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned int order, new_order;

	XA_BUG_ON(xa, !xa_empty(xa));

	for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) {
		for (new_order = 0; new_order < order; new_order++) {
-			check_split_1(xa, 0, order, new_order);
-			check_split_1(xa, 1UL << order, order, new_order);
-			check_split_1(xa, 3UL << order, order, new_order);
+			check_split_1(test, 0, order, new_order);
+			check_split_1(test, 1UL << order, order, new_order);
+			check_split_1(test, 3UL << order, order, new_order);
		}
	}
}
#else
-static void check_split(struct xarray *xa) { }
+static void check_split(struct kunit *test) { }
#endif

-static void check_align_1(struct xarray *xa, char *name)
+static void check_align_1(struct kunit *test, char *name)
{
+	struct xarray *xa = xa_param(test);
+
	int i;
	unsigned int id;
	unsigned long index;
···
 * We should always be able to store without allocating memory after
 * reserving a slot.
 */
-static void check_align_2(struct xarray *xa, char *name)
+static void check_align_2(struct kunit *test, char *name)
{
+	struct xarray *xa = xa_param(test);
+
	int i;

	XA_BUG_ON(xa, !xa_empty(xa));
···
	XA_BUG_ON(xa, !xa_empty(xa));
}

-static noinline void check_align(struct xarray *xa)
+static noinline void check_align(struct kunit *test)
{
	char name[] = "Motorola 68000";

-	check_align_1(xa, name);
-	check_align_1(xa, name + 1);
-	check_align_1(xa, name + 2);
-	check_align_1(xa, name + 3);
-	check_align_2(xa, name);
+	check_align_1(test, name);
+	check_align_1(test, name + 1);
+	check_align_1(test, name + 2);
+	check_align_1(test, name + 3);
+	check_align_2(test, name);
}

static LIST_HEAD(shadow_nodes);
···
	}
}

-static noinline void shadow_remove(struct xarray *xa)
+static noinline void shadow_remove(struct kunit *test, struct xarray *xa)
{
	struct xa_node *node;
···
	xa_unlock(xa);
}

-static noinline void check_workingset(struct xarray *xa, unsigned long index)
+struct workingset_testcase {
+	struct xarray *xa;
+	unsigned long index;
+};
+
+static noinline void check_workingset(struct kunit *test)
{
+	struct workingset_testcase tc = *(struct workingset_testcase *)test->param_value;
+	struct xarray *xa = tc.xa;
+	unsigned long index = tc.index;
+
	XA_STATE(xas, xa, index);
	xas_set_update(&xas, test_update_node);
···
	xas_unlock(&xas);
	XA_BUG_ON(xa, list_empty(&shadow_nodes));

-	shadow_remove(xa);
+	shadow_remove(test, xa);
	XA_BUG_ON(xa, !list_empty(&shadow_nodes));
	XA_BUG_ON(xa, !xa_empty(xa));
}
···
 * Check that the pointer / value / sibling entries are accounted the
 * way we expect them to be.
 */
-static noinline void check_account(struct xarray *xa)
+static noinline void check_account(struct kunit *test)
{
#ifdef CONFIG_XARRAY_MULTI
+	struct xarray *xa = xa_param(test);
+
	unsigned int order;

	for (order = 1; order < 12; order++) {
···
#endif
}

-static noinline void check_get_order(struct xarray *xa)
+static noinline void check_get_order(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1;
	unsigned int order;
	unsigned long i, j;
···
	}
}

-static noinline void check_xas_get_order(struct xarray *xa)
+static noinline void check_xas_get_order(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);

	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1;
···
	}
}

-static noinline void check_xas_conflict_get_order(struct xarray *xa)
+static noinline void check_xas_conflict_get_order(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	XA_STATE(xas, xa, 0);

	void *entry;
···
}


-static noinline void check_destroy(struct xarray *xa)
+static noinline void check_destroy(struct kunit *test)
{
+	struct xarray *xa = xa_param(test);
+
	unsigned long index;

	XA_BUG_ON(xa, !xa_empty(xa));
···
}

static DEFINE_XARRAY(array);
+static struct xarray *arrays[] = { &array };
+KUNIT_ARRAY_PARAM(array, arrays, NULL);

-static int xarray_checks(void)
-{
-	check_xa_err(&array);
-	check_xas_retry(&array);
-	check_xa_load(&array);
-	check_xa_mark(&array);
-	check_xa_shrink(&array);
-	check_xas_erase(&array);
-	check_insert(&array);
-	check_cmpxchg(&array);
-	check_cmpxchg_order(&array);
-	check_reserve(&array);
-	check_reserve(&xa0);
-	check_multi_store(&array);
-	check_multi_store_advanced(&array);
-	check_get_order(&array);
-	check_xas_get_order(&array);
-	check_xas_conflict_get_order(&array);
-	check_xa_alloc();
-	check_find(&array);
-	check_find_entry(&array);
-	check_pause(&array);
-	check_account(&array);
-	check_destroy(&array);
-	check_move(&array);
-	check_create_range(&array);
-	check_store_range(&array);
-	check_store_iter(&array);
-	check_align(&xa0);
-	check_split(&array);
+static struct xarray *xa0s[] = { &xa0 };
+KUNIT_ARRAY_PARAM(xa0, xa0s, NULL);

-	check_workingset(&array, 0);
-	check_workingset(&array, 64);
-	check_workingset(&array, 4096);
+static struct workingset_testcase workingset_testcases[] = {
+	{ &array, 0 },
+	{ &array, 64 },
+	{ &array, 4096 },
+};
+KUNIT_ARRAY_PARAM(workingset, workingset_testcases, NULL);

-	printk("XArray: %u of %u tests passed\n", tests_passed, tests_run);
-	return (tests_run == tests_passed) ? 0 : -EINVAL;
-}
+static struct kunit_case xarray_cases[] = {
+	KUNIT_CASE_PARAM(check_xa_err, array_gen_params),
+	KUNIT_CASE_PARAM(check_xas_retry, array_gen_params),
+	KUNIT_CASE_PARAM(check_xa_load, array_gen_params),
+	KUNIT_CASE_PARAM(check_xa_mark, array_gen_params),
+	KUNIT_CASE_PARAM(check_xa_shrink, array_gen_params),
+	KUNIT_CASE_PARAM(check_xas_erase, array_gen_params),
+	KUNIT_CASE_PARAM(check_insert, array_gen_params),
+	KUNIT_CASE_PARAM(check_cmpxchg, array_gen_params),
+	KUNIT_CASE_PARAM(check_cmpxchg_order, array_gen_params),
+	KUNIT_CASE_PARAM(check_reserve, array_gen_params),
+	KUNIT_CASE_PARAM(check_reserve, xa0_gen_params),
+	KUNIT_CASE_PARAM(check_multi_store, array_gen_params),
+	KUNIT_CASE_PARAM(check_multi_store_advanced, array_gen_params),
+	KUNIT_CASE_PARAM(check_get_order, array_gen_params),
+	KUNIT_CASE_PARAM(check_xas_get_order, array_gen_params),
+	KUNIT_CASE_PARAM(check_xas_conflict_get_order, array_gen_params),
+	KUNIT_CASE(check_xa_alloc),
+	KUNIT_CASE_PARAM(check_find, array_gen_params),
+	KUNIT_CASE_PARAM(check_find_entry, array_gen_params),
+	KUNIT_CASE_PARAM(check_pause, array_gen_params),
+	KUNIT_CASE_PARAM(check_account, array_gen_params),
+	KUNIT_CASE_PARAM(check_destroy, array_gen_params),
+	KUNIT_CASE_PARAM(check_move, array_gen_params),
+	KUNIT_CASE_PARAM(check_create_range, array_gen_params),
+	KUNIT_CASE_PARAM(check_store_range, array_gen_params),
+	KUNIT_CASE_PARAM(check_store_iter, array_gen_params),
+	KUNIT_CASE_PARAM(check_align, xa0_gen_params),
+	KUNIT_CASE_PARAM(check_split, array_gen_params),
+	KUNIT_CASE_PARAM(check_workingset, workingset_gen_params),
+	{},
+};

-static void xarray_exit(void)
-{
-}
+static struct kunit_suite xarray_suite = {
+	.name = "xarray",
+	.test_cases = xarray_cases,
+};

-module_init(xarray_checks);
-module_exit(xarray_exit);
+kunit_test_suite(xarray_suite);
+
MODULE_AUTHOR("Matthew Wilcox <willy@infradead.org>");
MODULE_DESCRIPTION("XArray API test module");
MODULE_LICENSE("GPL");
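The conversion above replaces the hand-rolled `xarray_checks()` runner with KUnit parameterized cases: each `check_*()` now receives a `struct kunit`, recovers its xarray (or `workingset_testcase`) from `test->param_value`, and `KUNIT_ARRAY_PARAM` generates one invocation per element of a parameter array. A minimal userspace sketch of that generator pattern, with illustrative names only (`fake_test`, `array_gen_params`, `run_param_case` are not the KUnit API):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct kunit: carries the current parameter. */
struct fake_test {
	const void *param_value;
};

/* Like xa_param(test): recover the typed parameter. */
static const int *int_param(struct fake_test *test)
{
	return (const int *)test->param_value;
}

/* Like a KUNIT_ARRAY_PARAM generator: yield the next array element,
 * or NULL when the array is exhausted. */
static const void *array_gen_params(const void *prev, const int *arr, size_t n)
{
	if (!prev)
		return &arr[0];
	if ((const int *)prev - arr + 1 < (ptrdiff_t)n)
		return (const int *)prev + 1;
	return NULL;
}

static int seen_sum;

/* A trivial "test case" run once per parameter. */
static void check_positive(struct fake_test *test)
{
	assert(*int_param(test) > 0);
	seen_sum += *int_param(test);
}

/* Like the KUnit core: run one case for every yielded parameter. */
static int run_param_case(void (*check)(struct fake_test *),
			  const int *arr, size_t n)
{
	struct fake_test test;
	const void *p = NULL;
	int runs = 0;

	while ((p = array_gen_params(p, arr, n)) != NULL) {
		test.param_value = p;
		check(&test);
		runs++;
	}
	return runs;
}
```

The design point the diff exploits is that the parameter generator, not the test body, owns the list of inputs; adding a fourth `workingset_testcase` means editing one table, not the runner.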
+41 -37
lib/xarray.c
···
 */
static void xas_squash_marks(const struct xa_state *xas)
{
-	unsigned int mark = 0;
+	xa_mark_t mark = 0;
	unsigned int limit = xas->xa_offset + xas->xa_sibs + 1;

-	if (!xas->xa_sibs)
-		return;
+	for (;;) {
+		unsigned long *marks = node_marks(xas->xa_node, mark);

-	do {
-		unsigned long *marks = xas->xa_node->marks[mark];
-		if (find_next_bit(marks, limit, xas->xa_offset + 1) == limit)
-			continue;
-		__set_bit(xas->xa_offset, marks);
-		bitmap_clear(marks, xas->xa_offset + 1, xas->xa_sibs);
-	} while (mark++ != (__force unsigned)XA_MARK_MAX);
+		if (find_next_bit(marks, limit, xas->xa_offset + 1) != limit) {
+			__set_bit(xas->xa_offset, marks);
+			bitmap_clear(marks, xas->xa_offset + 1, xas->xa_sibs);
+		}
+		if (mark == XA_MARK_MAX)
+			break;
+		mark_inc(mark);
+	}
}

/* extracts the offset within this node from the index */
···
	return (XA_CHUNK_SIZE << xa_to_node(entry)->shift) - 1;
}

+static inline void *xa_zero_to_null(void *entry)
+{
+	return xa_is_zero(entry) ? NULL : entry;
+}
+
static void xas_shrink(struct xa_state *xas)
{
	struct xarray *xa = xas->xa;
···
			break;
		if (!xa_is_node(entry) && node->shift)
			break;
-		if (xa_is_zero(entry) && xa_zero_busy(xa))
-			entry = NULL;
+		if (xa_zero_busy(xa))
+			entry = xa_zero_to_null(entry);
		xas->xa_node = XAS_BOUNDS;

		RCU_INIT_POINTER(xa->xa_head, entry);
···
	unsigned int mask = xas->xa_sibs;

	/* XXX: no support for splitting really large entries yet */
-	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT < order))
+	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT <= order))
		goto nomem;
	if (xas->xa_shift + XA_CHUNK_SHIFT > order)
		return;
···
		if (!xa_is_sibling(xa_entry(xas->xa, node, offset)))
			break;
	}
+	xas->xa_index &= ~0UL << node->shift;
	xas->xa_index += (offset - xas->xa_offset) << node->shift;
	if (xas->xa_index == 0)
		xas->xa_node = XAS_BOUNDS;
···
		entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
		if (!entry && !(xa_track_free(xas->xa) && mark == XA_FREE_MARK))
			continue;
+		if (xa_is_sibling(entry))
+			continue;
		if (!xa_is_node(entry))
			return entry;
		xas->xa_node = xa_to_node(entry);
···

	rcu_read_lock();
	do {
-		entry = xas_load(&xas);
-		if (xa_is_zero(entry))
-			entry = NULL;
+		entry = xa_zero_to_null(xas_load(&xas));
	} while (xas_retry(&xas, entry));
	rcu_read_unlock();
···

static void *xas_result(struct xa_state *xas, void *curr)
{
-	if (xa_is_zero(curr))
-		return NULL;
	if (xas_error(xas))
		curr = xas->xa_node;
	return curr;
···
void *__xa_erase(struct xarray *xa, unsigned long index)
{
	XA_STATE(xas, xa, index);
-	return xas_result(&xas, xas_store(&xas, NULL));
+	return xas_result(&xas, xa_zero_to_null(xas_store(&xas, NULL)));
}
EXPORT_SYMBOL(__xa_erase);
···
		xas_clear_mark(&xas, XA_FREE_MARK);
	} while (__xas_nomem(&xas, gfp));

-	return xas_result(&xas, curr);
+	return xas_result(&xas, xa_zero_to_null(curr));
}
EXPORT_SYMBOL(__xa_store);
···
}
EXPORT_SYMBOL(xa_store);

+static inline void *__xa_cmpxchg_raw(struct xarray *xa, unsigned long index,
+		void *old, void *entry, gfp_t gfp);
+
/**
 * __xa_cmpxchg() - Store this entry in the XArray.
 * @xa: XArray.
···
void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
			void *old, void *entry, gfp_t gfp)
{
+	return xa_zero_to_null(__xa_cmpxchg_raw(xa, index, old, entry, gfp));
+}
+EXPORT_SYMBOL(__xa_cmpxchg);
+
+static inline void *__xa_cmpxchg_raw(struct xarray *xa, unsigned long index,
+		void *old, void *entry, gfp_t gfp)
+{
	XA_STATE(xas, xa, index);
	void *curr;
···

	return xas_result(&xas, curr);
}
-EXPORT_SYMBOL(__xa_cmpxchg);

/**
 * __xa_insert() - Store this entry in the XArray if no entry is present.
···
 */
int __xa_insert(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
{
-	XA_STATE(xas, xa, index);
	void *curr;
+	int errno;

-	if (WARN_ON_ONCE(xa_is_advanced(entry)))
-		return -EINVAL;
	if (!entry)
		entry = XA_ZERO_ENTRY;
-
-	do {
-		curr = xas_load(&xas);
-		if (!curr) {
-			xas_store(&xas, entry);
-			if (xa_track_free(xa))
-				xas_clear_mark(&xas, XA_FREE_MARK);
-		} else {
-			xas_set_err(&xas, -EBUSY);
-		}
-	} while (__xas_nomem(&xas, gfp));
-
-	return xas_error(&xas);
+	curr = __xa_cmpxchg_raw(xa, index, NULL, entry, gfp);
+	errno = xa_err(curr);
+	if (errno)
+		return errno;
+	return (curr != NULL) ? -EBUSY : 0;
}
EXPORT_SYMBOL(__xa_insert);
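The `__xa_insert()` rewrite above reuses `__xa_cmpxchg_raw()`: an insert is simply a compare-exchange against an empty (NULL) slot, and all that remains is translating the returned entry into an errno. A userspace sketch of that return-value mapping, with `xa_err()` stubbed out since this sketch has no internal error entries (`map_insert_result` is an illustrative name, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

#ifndef EBUSY
#define EBUSY 16		/* matches <errno.h> on Linux */
#endif

/* Stand-in for xa_err(): the real one decodes internal error entries;
 * this sketch never produces them, so it always reports success. */
static int xa_err(void *entry)
{
	(void)entry;
	return 0;
}

/* curr is what the cmpxchg returned: the entry previously in the slot.
 * NULL means the cmpxchg succeeded and the insert won the race;
 * non-NULL means the slot was already occupied. */
static int map_insert_result(void *curr)
{
	int err = xa_err(curr);

	if (err)
		return err;		/* e.g. -ENOMEM from allocation */
	return (curr != NULL) ? -EBUSY : 0;
}
```

The design win in the diff is that the load/store/retry loop now lives in exactly one place, so `__xa_insert()` can no longer drift out of sync with `__xa_cmpxchg()` on details like `XA_FREE_MARK` handling.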
+2 -2
mm/kmemleak.c
···
	 * Wait before the first scan to allow the system to fully initialize.
	 */
	if (first_run) {
-		signed long timeout = msecs_to_jiffies(SECS_FIRST_SCAN * 1000);
+		signed long timeout = secs_to_jiffies(SECS_FIRST_SCAN);
		first_run = 0;
		while (timeout && !kthread_should_stop())
			timeout = schedule_timeout_interruptible(timeout);
···
		return;

	jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE);
-	jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000);
+	jiffies_scan_wait = secs_to_jiffies(SECS_SCAN_WAIT);

	object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
	scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
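Every hunk in this part of the series makes the same substitution: `msecs_to_jiffies(s * 1000)` becomes `secs_to_jiffies(s)`. A userspace sketch of why the two agree for whole seconds; `HZ` is pinned to 1000 here, whereas in the kernel it is a build-time option, and the real helpers also handle overflow and sub-tick rounding that this sketch omits:

```c
#include <assert.h>

#define HZ 1000UL	/* fixed for illustration; a Kconfig choice in the kernel */

/* Simplified analog of the kernel helper: milliseconds to timer ticks. */
static unsigned long msecs_to_jiffies(unsigned long msecs)
{
	return msecs * HZ / 1000;
}

/* Whole seconds need no division at all, so no precision is lost
 * and the intent ("2 seconds") is visible at the call site. */
static unsigned long secs_to_jiffies(unsigned long secs)
{
	return secs * HZ;
}
```

Besides readability, skipping the `* 1000` round trip avoids the multiply-then-divide pattern that can overflow for large timeouts.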
+1 -1
net/bluetooth/mgmt.c
···
	MGMT_EV_EXP_FEATURE_CHANGED,
};

-#define CACHE_TIMEOUT	msecs_to_jiffies(2 * 1000)
+#define CACHE_TIMEOUT	secs_to_jiffies(2)

#define ZERO_KEY "\x00\x00\x00\x00\x00\x00\x00\x00" \
		 "\x00\x00\x00\x00\x00\x00\x00\x00"
+8 -13
net/netfilter/nf_conntrack_proto_sctp.c
···
	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= "HEARTBEAT_SENT",
};

-#define SECS  * HZ
-#define MINS  * 60 SECS
-#define HOURS * 60 MINS
-#define DAYS  * 24 HOURS
-
static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
-	[SCTP_CONNTRACK_CLOSED]			= 10 SECS,
-	[SCTP_CONNTRACK_COOKIE_WAIT]		= 3 SECS,
-	[SCTP_CONNTRACK_COOKIE_ECHOED]		= 3 SECS,
-	[SCTP_CONNTRACK_ESTABLISHED]		= 210 SECS,
-	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= 3 SECS,
-	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= 3 SECS,
-	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= 3 SECS,
-	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= 30 SECS,
+	[SCTP_CONNTRACK_CLOSED]			= secs_to_jiffies(10),
+	[SCTP_CONNTRACK_COOKIE_WAIT]		= secs_to_jiffies(3),
+	[SCTP_CONNTRACK_COOKIE_ECHOED]		= secs_to_jiffies(3),
+	[SCTP_CONNTRACK_ESTABLISHED]		= secs_to_jiffies(210),
+	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= secs_to_jiffies(3),
+	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= secs_to_jiffies(3),
+	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= secs_to_jiffies(3),
+	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= secs_to_jiffies(30),
};

#define SCTP_FLAG_HEARTBEAT_VTAG_FAILED	1
+1 -3
net/wireless/wext-core.c
···
#ifdef CONFIG_CFG80211_WEXT
static void wireless_warn_cfg80211_wext(void)
{
-	char name[sizeof(current->comm)];
-
	pr_warn_once("warning: `%s' uses wireless extensions which will stop working for Wi-Fi 7 hardware; use nl80211\n",
-		     get_task_comm(name, current));
+		     current->comm);
}
#endif
+1 -2
samples/livepatch/livepatch-callbacks-busymod.c
···
static int livepatch_callbacks_mod_init(void)
{
	pr_info("%s\n", __func__);
-	schedule_delayed_work(&work,
-			      msecs_to_jiffies(1000 * 0));
+	schedule_delayed_work(&work, 0);
	return 0;
}
+1 -2
samples/livepatch/livepatch-shadow-fix1.c
···
	if (!d)
		return NULL;

-	d->jiffies_expire = jiffies +
-		msecs_to_jiffies(1000 * EXPIRE_PERIOD);
+	d->jiffies_expire = jiffies + secs_to_jiffies(EXPIRE_PERIOD);

	/*
	 * Patch: save the extra memory location into a SV_LEAK shadow
+5 -10
samples/livepatch/livepatch-shadow-mod.c
···
	if (!d)
		return NULL;

-	d->jiffies_expire = jiffies +
-		msecs_to_jiffies(1000 * EXPIRE_PERIOD);
+	d->jiffies_expire = jiffies + secs_to_jiffies(EXPIRE_PERIOD);

	/* Oops, forgot to save leak! */
	leak = kzalloc(sizeof(*leak), GFP_KERNEL);
···
	list_add(&d->list, &dummy_list);
	mutex_unlock(&dummy_list_mutex);

-	schedule_delayed_work(&alloc_dwork,
-			      msecs_to_jiffies(1000 * ALLOC_PERIOD));
+	schedule_delayed_work(&alloc_dwork, secs_to_jiffies(ALLOC_PERIOD));
}

/*
···
	}
	mutex_unlock(&dummy_list_mutex);

-	schedule_delayed_work(&cleanup_dwork,
-			      msecs_to_jiffies(1000 * CLEANUP_PERIOD));
+	schedule_delayed_work(&cleanup_dwork, secs_to_jiffies(CLEANUP_PERIOD));
}

static int livepatch_shadow_mod_init(void)
{
-	schedule_delayed_work(&alloc_dwork,
-			      msecs_to_jiffies(1000 * ALLOC_PERIOD));
-	schedule_delayed_work(&cleanup_dwork,
-			      msecs_to_jiffies(1000 * CLEANUP_PERIOD));
+	schedule_delayed_work(&alloc_dwork, secs_to_jiffies(ALLOC_PERIOD));
+	schedule_delayed_work(&cleanup_dwork, secs_to_jiffies(CLEANUP_PERIOD));

	return 0;
}
+8 -18
scripts/checkpatch.pl
···
834 834  $mode_perms_search = "(?:${mode_perms_search})";
835 835
836 836  our %deprecated_apis = (
837     -	"synchronize_rcu_bh"			=> "synchronize_rcu",
838     -	"synchronize_rcu_bh_expedited"		=> "synchronize_rcu_expedited",
839     -	"call_rcu_bh"				=> "call_rcu",
840     -	"rcu_barrier_bh"			=> "rcu_barrier",
841     -	"synchronize_sched"			=> "synchronize_rcu",
842     -	"synchronize_sched_expedited"		=> "synchronize_rcu_expedited",
843     -	"call_rcu_sched"			=> "call_rcu",
844     -	"rcu_barrier_sched"			=> "rcu_barrier",
845     -	"get_state_synchronize_sched"		=> "get_state_synchronize_rcu",
846     -	"cond_synchronize_sched"		=> "cond_synchronize_rcu",
847 837  	"kmap"					=> "kmap_local_page",
848 838  	"kunmap"				=> "kunmap_local",
849 839  	"kmap_atomic"				=> "kmap_local_page",
···
2865 2875
2866 2876  		if ($realfile =~ m@^include/asm/@) {
2867 2877  			ERROR("MODIFIED_INCLUDE_ASM",
2868      -			      "do not modify files in include/asm, change architecture specific files in include/asm-<architecture>\n" . "$here$rawline\n");
     2878 +			      "do not modify files in include/asm, change architecture specific files in arch/<architecture>/include/asm\n" . "$here$rawline\n");
2869 2879  		}
2870 2880  		$found_file = 1;
2871 2881  	}
···
3227 3237  			my ($cid, $ctitle) = git_commit_info($orig_commit, $id,
3228 3238  							     $title);
3229 3239
3230      -			if ($ctitle ne $title || $tag_case || $tag_space ||
3231      -			    $id_length || $id_case || !$title_has_quotes) {
     3240 +			if (defined($cid) && ($ctitle ne $title || $tag_case || $tag_space || $id_length || $id_case || !$title_has_quotes)) {
     3241 +				my $fixed = "Fixes: $cid (\"$ctitle\")";
3232 3242  				if (WARN("BAD_FIXES_TAG",
3233      -					 "Please use correct Fixes: style 'Fixes: <12+ chars of sha1> (\"<title line>\")' - ie: 'Fixes: $cid (\"$ctitle\")'\n" . $herecurr) &&
     3243 +					 "Please use correct Fixes: style 'Fixes: <12+ chars of sha1> (\"<title line>\")' - ie: '$fixed'\n" . $herecurr) &&
3234 3244  				    $fix) {
3235      -					$fixed[$fixlinenr] = "Fixes: $cid (\"$ctitle\")";
     3245 +					$fixed[$fixlinenr] = $fixed;
3236 3246  				}
3237 3247  			}
3238 3248  		}
···
5503 5513  		}
5504 5514  	}
5505 5515
5506      -# check for unnecessary parentheses around comparisons in if uses
5507      -# when !drivers/staging or command-line uses --strict
5508      -	if (($realfile !~ m@^(?:drivers/staging/)@ || $check_orig) &&
     5516 +# check for unnecessary parentheses around comparisons
     5517 +# except in drivers/staging
     5518 +	if (($realfile !~ m@^(?:drivers/staging/)@) &&
5509 5519  	    $perl_version_ok && defined($stat) &&
5510 5520  	    $stat =~ /(^.\s*if\s*($balanced_parens))/) {
5511 5521  		my $if_stat = $1;
+22
scripts/coccinelle/misc/secs_to_jiffies.cocci
···
    1 +// SPDX-License-Identifier: GPL-2.0-only
    2 +///
    3 +/// Find usages of:
    4 +/// - msecs_to_jiffies(value*1000)
    5 +/// - msecs_to_jiffies(value*MSEC_PER_SEC)
    6 +///
    7 +// Confidence: High
    8 +// Copyright: (C) 2024 Easwar Hariharan, Microsoft
    9 +// Keywords: secs, seconds, jiffies
   10 +//
   11 +
   12 +virtual patch
   13 +
   14 +@depends on patch@ constant C; @@
   15 +
   16 +- msecs_to_jiffies(C * 1000)
   17 ++ secs_to_jiffies(C)
   18 +
   19 +@depends on patch@ constant C; @@
   20 +
   21 +- msecs_to_jiffies(C * MSEC_PER_SEC)
   22 ++ secs_to_jiffies(C)
+37
scripts/spelling.txt
···
222 222  auxillary||auxiliary
223 223  auxilliary||auxiliary
224 224  avaiable||available
    225 +avaialable||available
225 226  avaible||available
226 227  availabe||available
227 228  availabled||available
···
268 267  broadcat||broadcast
269 268  bufer||buffer
270 269  bufferred||buffered
    270 +bufferur||buffer
271 271  bufufer||buffer
272 272  cacluated||calculated
273 273  caculate||calculate
···
407 405  congiuration||configuration
408 406  conider||consider
409 407  conjuction||conjunction
    408 +connction||connection
410 409  connecetd||connected
411 410  connectinos||connections
412 411  connetor||connector
···
416 413  consistancy||consistency
417 414  consistant||consistent
418 415  consits||consists
    416 +constructred||constructed
419 417  containes||contains
420 418  containts||contains
421 419  contaisn||contains
···
454 450  cryptocraphic||cryptographic
455 451  cummulative||cumulative
456 452  cunter||counter
    453 +curent||current
457 454  curently||currently
458 455  cylic||cyclic
459 456  dafault||default
···
466 461  decendants||descendants
467 462  decompres||decompress
468 463  decsribed||described
    464 +decrese||decrease
469 465  decription||description
470 466  detault||default
471 467  dectected||detected
···
491 485  delares||declares
492 486  delaring||declaring
493 487  delemiter||delimiter
    488 +deley||delay
494 489  delibrately||deliberately
495 490  delievered||delivered
496 491  demodualtor||demodulator
···
558 551  disired||desired
559 552  dispalying||displaying
560 553  dissable||disable
    554 +dissapeared||disappeared
561 555  diplay||display
562 556  directon||direction
563 557  direcly||directly
···
614 606  elementry||elementary
615 607  eletronic||electronic
616 608  embeded||embedded
    609 +emtpy||empty
617 610  enabledi||enabled
618 611  enbale||enable
619 612  enble||enable
···
678 669  expecially||especially
679 670  experies||expires
680 671  explicite||explicit
    672 +explicity||explicitly
681 673  explicitely||explicitly
682 674  explict||explicit
683 675  explictely||explicitly
···
733 723  followings||following
734 724  follwing||following
735 725  fonud||found
    726 +forcebly||forcibly
736 727  forseeable||foreseeable
737 728  forse||force
738 729  fortan||fortran
739 730  forwardig||forwarding
    731 +forwared||forwarded
740 732  frambuffer||framebuffer
741 733  framming||framing
742 734  framwork||framework
···
779 767  granularty||granularity
780 768  grapic||graphic
781 769  grranted||granted
    770 +grups||groups
782 771  guage||gauge
783 772  guarenteed||guaranteed
784 773  guarentee||guarantee
···
793 780  harware||hardware
794 781  hardward||hardware
795 782  havind||having
    783 +heigth||height
796 784  heirarchically||hierarchically
797 785  heirarchy||hierarchy
798 786  heirachy||hierarchy
···
802 788  heterogenous||heterogeneous
803 789  hexdecimal||hexadecimal
804 790  hybernate||hibernate
    791 +hiearchy||hierarchy
805 792  hierachy||hierarchy
806 793  hierarchie||hierarchy
807 794  homogenous||homogeneous
    795 +horizental||horizontal
808 796  howver||however
809 797  hsould||should
810 798  hypervior||hypervisor
···
858 842  indiate||indicate
859 843  indicat||indicate
860 844  inexpect||inexpected
    845 +infalte||inflate
861 846  inferface||interface
862 847  infinit||infinite
863 848  infomation||information
···
878 861  initialiazation||initialization
879 862  initializationg||initialization
880 863  initializiation||initialization
    864 +initializtion||initialization
881 865  initialze||initialize
882 866  initialzed||initialized
883 867  initialzing||initializing
···
895 877  instanciated||instantiated
896 878  instuments||instruments
897 879  insufficent||insufficient
    880 +intead||instead
898 881  inteface||interface
899 882  integreated||integrated
900 883  integrety||integrity
···
1100 1081  notifcations||notifications
1101 1082  notifed||notified
1102 1083  notity||notify
     1084 +notfify||notify
1103 1085  nubmer||number
1104 1086  numebr||number
1105 1087  numer||number
···
1142 1122  orientied||oriented
1143 1123  orignal||original
1144 1124  originial||original
     1125 +orphanded||orphaned
1145 1126  otherise||otherwise
1146 1127  ouput||output
1147 1128  oustanding||outstanding
···
1205 1184  persistance||persistence
1206 1185  persistant||persistent
1207 1186  phoneticly||phonetically
     1187 +pipline||pipeline
1208 1188  plaform||platform
1209 1189  plalform||platform
1210 1190  platfoem||platform
     1191 +platfomr||platform
1211 1192  platfrom||platform
1212 1193  plattform||platform
1213 1194  pleaes||please
···
1234 1211  preceed||precede
1235 1212  precendence||precedence
1236 1213  precission||precision
     1214 +predicition||prediction
1237 1215  preemptable||preemptible
1238 1216  prefered||preferred
1239 1217  prefferably||preferably
···
1313 1289  queus||queues
1314 1290  randomally||randomly
1315 1291  raoming||roaming
     1292 +readyness||readiness
1316 1293  reasearcher||researcher
1317 1294  reasearchers||researchers
1318 1295  reasearch||research
···
1330 1305  recieving||receiving
1331 1306  recogniced||recognised
1332 1307  recognizeable||recognizable
     1308 +recompte||recompute
1333 1309  recommanded||recommended
1334 1310  recyle||recycle
     1311 +redect||reject
1335 1312  redircet||redirect
1336 1313  redirectrion||redirection
1337 1314  redundacy||redundancy
···
1341 1314  refcounf||refcount
1342 1315  refence||reference
1343 1316  refered||referred
     1317 +referencce||reference
1344 1318  referenace||reference
1345 1319  refererence||reference
1346 1320  refering||referring
···
1376 1348  reponse||response
1377 1349  representaion||representation
1378 1350  repsonse||response
     1351 +reqested||requested
1379 1352  reqeust||request
1380 1353  reqister||register
1381 1354  requed||requeued
1382 1355  requestied||requested
1383 1356  requiere||require
     1357 +requieres||requires
1384 1358  requirment||requirement
1385 1359  requred||required
1386 1360  requried||required
···
1470 1440  serivce||service
1471 1441  serveral||several
1472 1442  servive||service
     1443 +sesion||session
1473 1444  setts||sets
1474 1445  settting||setting
1475 1446  shapshot||snapshot
···
1633 1602  thses||these
1634 1603  tiggers||triggers
1635 1604  tiggered||triggered
     1605 +tiggerring||triggering
1636 1606  tipically||typically
1637 1607  timeing||timing
1638 1608  timming||timing
1639 1609  timout||timeout
1640 1610  tmis||this
     1611 +tolarance||tolerance
1641 1612  toogle||toggle
1642 1613  torerable||tolerable
1643 1614  torlence||tolerance
···
1666 1633  trasmission||transmission
1667 1634  trasmitter||transmitter
1668 1635  treshold||threshold
     1636 +trigged||triggered
1669 1637  triggerd||triggered
1670 1638  trigerred||triggered
1671 1639  trigerring||triggering
···
1682 1648  usccess||success
1683 1649  uncommited||uncommitted
1684 1650  uncompatible||incompatible
     1651 +uncomressed||uncompressed
1685 1652  unconditionaly||unconditionally
1686 1653  undeflow||underflow
1687 1654  undelying||underlying
···
1750 1715  utitlty||utility
1751 1716  vaid||valid
1752 1717  vaild||valid
     1718 +validationg||validating
1753 1719  valide||valid
1754 1720  variantions||variations
1755 1721  varible||variable
···
1760 1724  veify||verify
1761 1725  verfication||verification
1762 1726  veriosn||version
     1727 +versoin||version
1763 1728  verisons||versions
1764 1729  verison||version
1765 1730  veritical||vertical
+1 -3
security/yama/yama_lsm.c
···
76 76  				   struct task_struct *agent)
77 77  {
78 78  	struct access_report_info *info;
79    -	char agent_comm[sizeof(agent->comm)];
80 79
81 80  	assert_spin_locked(&target->alloc_lock); /* for target->comm */
82 81
···
85 86  	 */
86 87  	pr_notice_ratelimited(
87 88  		"ptrace %s of \"%s\"[%d] was attempted by \"%s\"[%d]\n",
88    -		access, target->comm, target->pid,
89    -		get_task_comm(agent_comm, agent), agent->pid);
   89 +		access, target->comm, target->pid, agent->comm, agent->pid);
90 90  	return;
91 91  }
92 92
+1 -1
sound/usb/line6/toneport.c
···
386 386  	toneport_update_led(toneport);
387 387
388 388  	schedule_delayed_work(&toneport->line6.startup_work,
389     -			      msecs_to_jiffies(TONEPORT_PCM_DELAY * 1000));
    389 +			      secs_to_jiffies(TONEPORT_PCM_DELAY));
390 390  	return 0;
391 391  }
392 392
+42 -25
tools/accounting/getdelays.c
···
192 192  }
193 193
194 194  #define average_ms(t, c) (t / 1000000ULL / (c ? c : 1))
    195 +#define delay_ms(t) (t / 1000000ULL)
195 196
196 197  static void print_delayacct(struct taskstats *t)
197 198  {
198     -	printf("\n\nCPU   %15s%15s%15s%15s%15s\n"
199     -	       "      %15llu%15llu%15llu%15llu%15.3fms\n"
200     -	       "IO    %15s%15s%15s\n"
201     -	       "      %15llu%15llu%15.3fms\n"
202     -	       "SWAP  %15s%15s%15s\n"
203     -	       "      %15llu%15llu%15.3fms\n"
204     -	       "RECLAIM  %12s%15s%15s\n"
205     -	       "      %15llu%15llu%15.3fms\n"
206     -	       "THRASHING%12s%15s%15s\n"
207     -	       "      %15llu%15llu%15.3fms\n"
208     -	       "COMPACT  %12s%15s%15s\n"
209     -	       "      %15llu%15llu%15.3fms\n"
210     -	       "WPCOPY   %12s%15s%15s\n"
211     -	       "      %15llu%15llu%15.3fms\n"
212     -	       "IRQ   %15s%15s%15s\n"
213     -	       "      %15llu%15llu%15.3fms\n",
    199 +	printf("\n\nCPU   %15s%15s%15s%15s%15s%15s\n"
    200 +	       "      %15llu%15llu%15llu%15llu%15.3fms%13.6fms\n"
    201 +	       "IO    %15s%15s%15s%15s\n"
    202 +	       "      %15llu%15llu%15.3fms%13.6fms\n"
    203 +	       "SWAP  %15s%15s%15s%15s\n"
    204 +	       "      %15llu%15llu%15.3fms%13.6fms\n"
    205 +	       "RECLAIM  %12s%15s%15s%15s\n"
    206 +	       "      %15llu%15llu%15.3fms%13.6fms\n"
    207 +	       "THRASHING%12s%15s%15s%15s\n"
    208 +	       "      %15llu%15llu%15.3fms%13.6fms\n"
    209 +	       "COMPACT  %12s%15s%15s%15s\n"
    210 +	       "      %15llu%15llu%15.3fms%13.6fms\n"
    211 +	       "WPCOPY   %12s%15s%15s%15s\n"
    212 +	       "      %15llu%15llu%15.3fms%13.6fms\n"
    213 +	       "IRQ   %15s%15s%15s%15s\n"
    214 +	       "      %15llu%15llu%15.3fms%13.6fms\n",
214 215  	       "count", "real total", "virtual total",
215     -	       "delay total", "delay average",
    216 +	       "delay total", "delay average", "delay max", "delay min",
216 217  	       (unsigned long long)t->cpu_count,
217 218  	       (unsigned long long)t->cpu_run_real_total,
218 219  	       (unsigned long long)t->cpu_run_virtual_total,
219 220  	       (unsigned long long)t->cpu_delay_total,
220 221  	       average_ms((double)t->cpu_delay_total, t->cpu_count),
221     -	       "count", "delay total", "delay average",
    222 +	       delay_ms((double)t->cpu_delay_max),
    223 +	       delay_ms((double)t->cpu_delay_min),
    224 +	       "count", "delay total", "delay average", "delay max", "delay min",
222 225  	       (unsigned long long)t->blkio_count,
223 226  	       (unsigned long long)t->blkio_delay_total,
224 227  	       average_ms((double)t->blkio_delay_total, t->blkio_count),
225     -	       "count", "delay total", "delay average",
    228 +	       delay_ms((double)t->blkio_delay_max),
    229 +	       delay_ms((double)t->blkio_delay_min),
    230 +	       "count", "delay total", "delay average", "delay max", "delay min",
226 231  	       (unsigned long long)t->swapin_count,
227 232  	       (unsigned long long)t->swapin_delay_total,
228 233  	       average_ms((double)t->swapin_delay_total, t->swapin_count),
229     -	       "count", "delay total", "delay average",
    234 +	       delay_ms((double)t->swapin_delay_max),
    235 +	       delay_ms((double)t->swapin_delay_min),
    236 +	       "count", "delay total", "delay average", "delay max", "delay min",
230 237  	       (unsigned long long)t->freepages_count,
231 238  	       (unsigned long long)t->freepages_delay_total,
232 239  	       average_ms((double)t->freepages_delay_total, t->freepages_count),
233     -	       "count", "delay total", "delay average",
    240 +	       delay_ms((double)t->freepages_delay_max),
    241 +	       delay_ms((double)t->freepages_delay_min),
    242 +	       "count", "delay total", "delay average", "delay max", "delay min",
234 243  	       (unsigned long long)t->thrashing_count,
235 244  	       (unsigned long long)t->thrashing_delay_total,
236 245  	       average_ms((double)t->thrashing_delay_total, t->thrashing_count),
237     -	       "count", "delay total", "delay average",
    246 +	       delay_ms((double)t->thrashing_delay_max),
    247 +	       delay_ms((double)t->thrashing_delay_min),
    248 +	       "count", "delay total", "delay average", "delay max", "delay min",
238 249  	       (unsigned long long)t->compact_count,
239 250  	       (unsigned long long)t->compact_delay_total,
240 251  	       average_ms((double)t->compact_delay_total, t->compact_count),
241     -	       "count", "delay total", "delay average",
    252 +	       delay_ms((double)t->compact_delay_max),
    253 +	       delay_ms((double)t->compact_delay_min),
    254 +	       "count", "delay total", "delay average", "delay max", "delay min",
242 255  	       (unsigned long long)t->wpcopy_count,
243 256  	       (unsigned long long)t->wpcopy_delay_total,
244 257  	       average_ms((double)t->wpcopy_delay_total, t->wpcopy_count),
245     -	       "count", "delay total", "delay average",
    258 +	       delay_ms((double)t->wpcopy_delay_max),
    259 +	       delay_ms((double)t->wpcopy_delay_min),
    260 +	       "count", "delay total", "delay average", "delay max", "delay min",
246 261  	       (unsigned long long)t->irq_count,
247 262  	       (unsigned long long)t->irq_delay_total,
248     -	       average_ms((double)t->irq_delay_total, t->irq_count));
    263 +	       average_ms((double)t->irq_delay_total, t->irq_count),
    264 +	       delay_ms((double)t->irq_delay_max),
    265 +	       delay_ms((double)t->irq_delay_min));
249 266  }
250 267
251 268  static void task_context_switch_counts(struct taskstats *t)
+2 -3
tools/accounting/procacct.c
···
274 274  	int maskset = 0;
275 275  	char *logfile = NULL;
276 276  	int cfd = 0;
277     -	int forking = 0;
278 277
279 278  	struct msgtemplate msg;
280 279
281     -	while (!forking) {
282     -		c = getopt(argc, argv, "m:vr:");
    280 +	while (1) {
    281 +		c = getopt(argc, argv, "m:vr:w:");
283 282  		if (c < 0)
284 283  			break;
285 284
+4
tools/testing/radix-tree/multiorder.c
···
227 227  		unsigned long index = (3 << RADIX_TREE_MAP_SHIFT) -
228 228  					(1 << order);
229 229  		item_insert_order(tree, index, order);
    230 +		xa_set_mark(tree, index, XA_MARK_1);
230 231  		item_delete_rcu(tree, index);
231 232  	}
232 233  }
···
243 242
244 243  	rcu_register_thread();
245 244  	while (!stop_iteration) {
    245 +		unsigned long find_index = (2 << RADIX_TREE_MAP_SHIFT) + 1;
246 246  		struct item *item = xa_load(ptr, index);
    247 +		assert(!xa_is_internal(item));
    248 +		item = xa_find(ptr, &find_index, index, XA_MARK_1);
247 249  		assert(!xa_is_internal(item));
248 250  	}
249 251  	rcu_unregister_thread();
+1 -1
tools/testing/selftests/pidfd/pidfd_test.c
···
497 497  	pthread_create(&t2, NULL, test_pidfd_poll_leader_exit_thread, NULL);
498 498
499 499  	/*
500     -	 * glibc exit calls exit_group syscall, so explicity call exit only
    500 +	 * glibc exit calls exit_group syscall, so explicitly call exit only
501 501  	 * so that only the group leader exits, leaving the threads alone.
502 502  	 */
503 503  	*child_exit_secs = time(NULL);