Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2024-11-24-02-05' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

- The series "resource: A couple of cleanups" from Andy Shevchenko
performs some cleanups in the resource management code

- The series "Improve the copy of task comm" from Yafang Shao addresses
possible race-induced overflows in the management of
task_struct.comm[]

- The series "Remove unnecessary header includes from
{tools/}lib/list_sort.c" from Kuan-Wei Chiu adds some cleanups and a
small fix to the list_sort library code and to its selftest

- The series "Enhance min heap API with non-inline functions and
optimizations" also from Kuan-Wei Chiu optimizes and cleans up the
min_heap library code

- The series "nilfs2: Finish folio conversion" from Ryusuke Konishi
finishes off nilfs2's folioification

- The series "add detect count for hung tasks" from Lance Yang adds
more userspace visibility into the hung-task detector's activity

- Apart from that, singleton patches in many places - please see the
individual changelogs for details

* tag 'mm-nonmm-stable-2024-11-24-02-05' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (71 commits)
gdb: lx-symbols: do not error out on monolithic build
kernel/reboot: replace sprintf() with sysfs_emit()
lib: util_macros_kunit: add kunit test for util_macros.h
util_macros.h: fix/rework find_closest() macros
Improve consistency of '#error' directive messages
ocfs2: fix uninitialized value in ocfs2_file_read_iter()
hung_task: add docs for hung_task_detect_count
hung_task: add detect count for hung tasks
dma-buf: use atomic64_inc_return() in dma_buf_getfile()
fs/proc/kcore.c: fix coccinelle reported ERROR instances
resource: avoid unnecessary resource tree walking in __region_intersects()
ocfs2: remove unused errmsg function and table
ocfs2: cluster: fix a typo
lib/scatterlist: use sg_phys() helper
checkpatch: always parse orig_commit in fixes tag
nilfs2: convert metadata aops from writepage to writepages
nilfs2: convert nilfs_recovery_copy_block() to take a folio
nilfs2: convert nilfs_page_count_clean_buffers() to take a folio
nilfs2: remove nilfs_writepage
nilfs2: convert checkpoint file to be folio-based
...

+1952 -896
+9
Documentation/admin-guide/sysctl/kernel.rst
··· 401 401 This file shows up if ``CONFIG_DETECT_HUNG_TASK`` is enabled. 402 402 403 403 404 + hung_task_detect_count 405 + ====================== 406 + 407 + Indicates the total number of tasks that have been detected as hung since 408 + the system boot. 409 + 410 + This file shows up if ``CONFIG_DETECT_HUNG_TASK`` is enabled. 411 + 412 + 404 413 hung_task_timeout_secs 405 414 ====================== 406 415
+1
Documentation/core-api/index.rst
··· 52 52 wrappers/atomic_bitops 53 53 floating-point 54 54 union_find 55 + min_heap 55 56 56 57 Low level entry and exit 57 58 ========================
+300
Documentation/core-api/min_heap.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============ 4 + Min Heap API 5 + ============ 6 + 7 + Introduction 8 + ============ 9 + 10 + The Min Heap API provides a set of functions and macros for managing min-heaps 11 + in the Linux kernel. A min-heap is a binary tree structure where the value of 12 + each node is less than or equal to the values of its children, ensuring that 13 + the smallest element is always at the root. 14 + 15 + This document provides a guide to the Min Heap API, detailing how to define and 16 + use min-heaps. Users should not directly call functions with **__min_heap_*()** 17 + prefixes, but should instead use the provided macro wrappers. 18 + 19 + In addition to the standard version of the functions, the API also includes a 20 + set of inline versions for performance-critical scenarios. These inline 21 + functions have the same names as their non-inline counterparts but include an 22 + **_inline** suffix. For example, **__min_heap_init_inline** and its 23 + corresponding macro wrapper **min_heap_init_inline**. The inline versions allow 24 + custom comparison and swap functions to be called directly, rather than through 25 + indirect function calls. This can significantly reduce overhead, especially 26 + when CONFIG_MITIGATION_RETPOLINE is enabled, as indirect function calls become 27 + more expensive. As with the non-inline versions, it is important to use the 28 + macro wrappers for inline functions instead of directly calling the functions 29 + themselves. 30 + 31 + Data Structures 32 + =============== 33 + 34 + Min-Heap Definition 35 + ------------------- 36 + 37 + The core data structure for representing a min-heap is defined using the 38 + **MIN_HEAP_PREALLOCATED** and **DEFINE_MIN_HEAP** macros. These macros allow 39 + you to define a min-heap with a preallocated buffer or dynamically allocated 40 + memory. 41 + 42 + Example: 43 + 44 + .. code-block:: c
45 + 46 + #define MIN_HEAP_PREALLOCATED(_type, _name, _nr) 47 + struct _name { 48 + int nr; /* Number of elements in the heap */ 49 + int size; /* Maximum number of elements that can be held */ 50 + _type *data; /* Pointer to the heap data */ 51 + _type preallocated[_nr]; /* Static preallocated array */ 52 + } 53 + 54 + #define DEFINE_MIN_HEAP(_type, _name) MIN_HEAP_PREALLOCATED(_type, _name, 0) 55 + 56 + A typical heap structure will include a counter for the number of elements 57 + (`nr`), the maximum capacity of the heap (`size`), and a pointer to an array of 58 + elements (`data`). Optionally, you can specify a static array for preallocated 59 + heap storage using **MIN_HEAP_PREALLOCATED**. 60 + 61 + Min Heap Callbacks 62 + ------------------ 63 + 64 + The **struct min_heap_callbacks** provides customization options for ordering 65 + elements in the heap and swapping them. It contains two function pointers: 66 + 67 + .. code-block:: c 68 + 69 + struct min_heap_callbacks { 70 + bool (*less)(const void *lhs, const void *rhs, void *args); 71 + void (*swp)(void *lhs, void *rhs, void *args); 72 + }; 73 + 74 + - **less** is the comparison function used to establish the order of elements. 75 + - **swp** is a function for swapping elements in the heap. If swp is set to 76 + NULL, the default swap function will be used, which swaps the elements based on their size 77 + 78 + Macro Wrappers 79 + ============== 80 + 81 + The following macro wrappers are provided for interacting with the heap in a 82 + user-friendly manner. Each macro corresponds to a function that operates on the 83 + heap, and they abstract away direct calls to internal functions. 84 + 85 + Each macro accepts various parameters that are detailed below. 86 + 87 + Heap Initialization 88 + -------------------- 89 + 90 + .. code-block:: c 91 + 92 + min_heap_init(heap, data, size); 93 + 94 + - **heap**: A pointer to the min-heap structure to be initialized.
95 + - **data**: A pointer to the buffer where the heap elements will be stored. If 96 + `NULL`, the preallocated buffer within the heap structure will be used. 97 + - **size**: The maximum number of elements the heap can hold. 98 + 99 + This macro initializes the heap, setting its initial state. If `data` is 100 + `NULL`, the preallocated memory inside the heap structure will be used for 101 + storage. Otherwise, the user-provided buffer is used. The operation is **O(1)**. 102 + 103 + **Inline Version:** min_heap_init_inline(heap, data, size) 104 + 105 + Accessing the Top Element 106 + ------------------------- 107 + 108 + .. code-block:: c 109 + 110 + element = min_heap_peek(heap); 111 + 112 + - **heap**: A pointer to the min-heap from which to retrieve the smallest 113 + element. 114 + 115 + This macro returns a pointer to the smallest element (the root) of the heap, or 116 + `NULL` if the heap is empty. The operation is **O(1)**. 117 + 118 + **Inline Version:** min_heap_peek_inline(heap) 119 + 120 + Heap Insertion 121 + -------------- 122 + 123 + .. code-block:: c 124 + 125 + success = min_heap_push(heap, element, callbacks, args); 126 + 127 + - **heap**: A pointer to the min-heap into which the element should be inserted. 128 + - **element**: A pointer to the element to be inserted into the heap. 129 + - **callbacks**: A pointer to a `struct min_heap_callbacks` providing the 130 + `less` and `swp` functions. 131 + - **args**: Optional arguments passed to the `less` and `swp` functions. 132 + 133 + This macro inserts an element into the heap. It returns `true` if the insertion 134 + was successful and `false` if the heap is full. The operation is **O(log n)**. 135 + 136 + **Inline Version:** min_heap_push_inline(heap, element, callbacks, args) 137 + 138 + Heap Removal 139 + ------------ 140 + 141 + .. code-block:: c
142 + 143 + success = min_heap_pop(heap, callbacks, args); 144 + 145 + - **heap**: A pointer to the min-heap from which to remove the smallest element. 146 + - **callbacks**: A pointer to a `struct min_heap_callbacks` providing the 147 + `less` and `swp` functions. 148 + - **args**: Optional arguments passed to the `less` and `swp` functions. 149 + 150 + This macro removes the smallest element (the root) from the heap. It returns 151 + `true` if the element was successfully removed, or `false` if the heap is 152 + empty. The operation is **O(log n)**. 153 + 154 + **Inline Version:** min_heap_pop_inline(heap, callbacks, args) 155 + 156 + Heap Maintenance 157 + ---------------- 158 + 159 + You can use the following macros to maintain the heap's structure: 160 + 161 + .. code-block:: c 162 + 163 + min_heap_sift_down(heap, pos, callbacks, args); 164 + 165 + - **heap**: A pointer to the min-heap. 166 + - **pos**: The index from which to start sifting down. 167 + - **callbacks**: A pointer to a `struct min_heap_callbacks` providing the 168 + `less` and `swp` functions. 169 + - **args**: Optional arguments passed to the `less` and `swp` functions. 170 + 171 + This macro restores the heap property by moving the element at the specified 172 + index (`pos`) down the heap until it is in the correct position. The operation 173 + is **O(log n)**. 174 + 175 + **Inline Version:** min_heap_sift_down_inline(heap, pos, callbacks, args) 176 + 177 + .. code-block:: c 178 + 179 + min_heap_sift_up(heap, idx, callbacks, args); 180 + 181 + - **heap**: A pointer to the min-heap. 182 + - **idx**: The index of the element to sift up. 183 + - **callbacks**: A pointer to a `struct min_heap_callbacks` providing the 184 + `less` and `swp` functions. 185 + - **args**: Optional arguments passed to the `less` and `swp` functions. 186 + 187 + This macro restores the heap property by moving the element at the specified 188 + index (`idx`) up the heap. The operation is **O(log n)**.
189 + 190 + **Inline Version:** min_heap_sift_up_inline(heap, idx, callbacks, args) 191 + 192 + .. code-block:: c 193 + 194 + min_heapify_all(heap, callbacks, args); 195 + 196 + - **heap**: A pointer to the min-heap. 197 + - **callbacks**: A pointer to a `struct min_heap_callbacks` providing the 198 + `less` and `swp` functions. 199 + - **args**: Optional arguments passed to the `less` and `swp` functions. 200 + 201 + This macro ensures that the entire heap satisfies the heap property. It is 202 + called when the heap is built from scratch or after many modifications. The 203 + operation is **O(n)**. 204 + 205 + **Inline Version:** min_heapify_all_inline(heap, callbacks, args) 206 + 207 + Removing Specific Elements 208 + -------------------------- 209 + 210 + .. code-block:: c 211 + 212 + success = min_heap_del(heap, idx, callbacks, args); 213 + 214 + - **heap**: A pointer to the min-heap. 215 + - **idx**: The index of the element to delete. 216 + - **callbacks**: A pointer to a `struct min_heap_callbacks` providing the 217 + `less` and `swp` functions. 218 + - **args**: Optional arguments passed to the `less` and `swp` functions. 219 + 220 + This macro removes an element at the specified index (`idx`) from the heap and 221 + restores the heap property. The operation is **O(log n)**. 222 + 223 + **Inline Version:** min_heap_del_inline(heap, idx, callbacks, args) 224 + 225 + Other Utilities 226 + =============== 227 + 228 + - **min_heap_full(heap)**: Checks whether the heap is full. 229 + Complexity: **O(1)**. 230 + 231 + .. code-block:: c 232 + 233 + bool full = min_heap_full(heap); 234 + 235 + - `heap`: A pointer to the min-heap to check. 236 + 237 + This macro returns `true` if the heap is full, otherwise `false`. 238 + 239 + **Inline Version:** min_heap_full_inline(heap) 240 + 241 + - **min_heap_empty(heap)**: Checks whether the heap is empty. 242 + Complexity: **O(1)**. 243 + 244 + .. code-block:: c
245 + 246 + bool empty = min_heap_empty(heap); 247 + 248 + - `heap`: A pointer to the min-heap to check. 249 + 250 + This macro returns `true` if the heap is empty, otherwise `false`. 251 + 252 + **Inline Version:** min_heap_empty_inline(heap) 253 + 254 + Example Usage 255 + ============= 256 + 257 + An example usage of the min-heap API would involve defining a heap structure, 258 + initializing it, and inserting and removing elements as needed. 259 + 260 + .. code-block:: c 261 + 262 + #include <linux/min_heap.h> 263 + 264 + int my_less_function(const void *lhs, const void *rhs, void *args) { 265 + return (*(int *)lhs < *(int *)rhs); 266 + } 267 + 268 + struct min_heap_callbacks heap_cb = { 269 + .less = my_less_function, /* Comparison function for heap order */ 270 + .swp = NULL, /* Use default swap function */ 271 + }; 272 + 273 + void example_usage(void) { 274 + /* Pre-populate the buffer with elements */ 275 + int buffer[5] = {5, 2, 8, 1, 3}; 276 + /* Declare a min-heap */ 277 + DEFINE_MIN_HEAP(int, my_heap); 278 + 279 + /* Initialize the heap with preallocated buffer and size */ 280 + min_heap_init(&my_heap, buffer, 5); 281 + 282 + /* Build the heap using min_heapify_all */ 283 + my_heap.nr = 5; /* Set the number of elements in the heap */ 284 + min_heapify_all(&my_heap, &heap_cb, NULL); 285 + 286 + /* Peek at the top element (should be 1 in this case) */ 287 + int *top = min_heap_peek(&my_heap); 288 + pr_info("Top element: %d\n", *top); 289 + 290 + /* Pop the top element (1) and get the new top (2) */ 291 + min_heap_pop(&my_heap, &heap_cb, NULL); 292 + top = min_heap_peek(&my_heap); 293 + pr_info("New top element: %d\n", *top); 294 + 295 + /* Insert a new element (0) and recheck the top */ 296 + int new_element = 0; 297 + min_heap_push(&my_heap, &new_element, &heap_cb, NULL); 298 + top = min_heap_peek(&my_heap); 299 + pr_info("Top element after insertion: %d\n", *top); 300 + }
+9
MAINTAINERS
··· 15585 15585 F: arch/arm/boot/dts/marvell/armada-xp-crs328-4c-20s-4s-bit.dts 15586 15586 F: arch/arm/boot/dts/marvell/armada-xp-crs328-4c-20s-4s.dts 15587 15587 15588 + MIN HEAP 15589 + M: Kuan-Wei Chiu <visitorckw@gmail.com> 15590 + L: linux-kernel@vger.kernel.org 15591 + S: Maintained 15592 + F: Documentation/core-api/min_heap.rst 15593 + F: include/linux/min_heap.h 15594 + F: lib/min_heap.c 15595 + F: lib/test_min_heap.c 15596 + 15588 15597 MIPI CCS, SMIA AND SMIA++ IMAGE SENSOR DRIVER 15589 15598 M: Sakari Ailus <sakari.ailus@linux.intel.com> 15590 15599 L: linux-media@vger.kernel.org
+1 -1
arch/alpha/include/asm/spinlock_types.h
··· 3 3 #define _ALPHA_SPINLOCK_TYPES_H 4 4 5 5 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 typedef struct {
+1 -1
arch/arm/include/asm/spinlock_types.h
··· 3 3 #define __ASM_SPINLOCK_TYPES_H 4 4 5 5 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 #define TICKET_SHIFT 16
+1 -1
arch/arm64/include/asm/spinlock_types.h
··· 6 6 #define __ASM_SPINLOCK_TYPES_H 7 7 8 8 #if !defined(__LINUX_SPINLOCK_TYPES_RAW_H) && !defined(__ASM_SPINLOCK_H) 9 - # error "please don't include this file directly" 9 + # error "Please do not include this file directly." 10 10 #endif 11 11 12 12 #include <asm-generic/qspinlock_types.h>
+1 -1
arch/hexagon/include/asm/spinlock_types.h
··· 9 9 #define _ASM_SPINLOCK_TYPES_H 10 10 11 11 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 12 - # error "please don't include this file directly" 12 + # error "Please do not include this file directly." 13 13 #endif 14 14 15 15 typedef struct {
+1 -1
arch/powerpc/include/asm/simple_spinlock_types.h
··· 3 3 #define _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H 4 4 5 5 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 typedef struct {
+1 -1
arch/powerpc/include/asm/spinlock_types.h
··· 3 3 #define _ASM_POWERPC_SPINLOCK_TYPES_H 4 4 5 5 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 #ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+1 -1
arch/s390/include/asm/spinlock_types.h
··· 3 3 #define __ASM_SPINLOCK_TYPES_H 4 4 5 5 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 typedef struct {
+1 -1
arch/sh/include/asm/spinlock_types.h
··· 3 3 #define __ASM_SH_SPINLOCK_TYPES_H 4 4 5 5 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 typedef struct {
+1 -1
arch/xtensa/include/asm/spinlock_types.h
··· 3 3 #define __ASM_SPINLOCK_TYPES_H 4 4 5 5 #if !defined(__LINUX_SPINLOCK_TYPES_RAW_H) && !defined(__ASM_SPINLOCK_H) 6 - # error "please don't include this file directly" 6 + # error "Please do not include this file directly." 7 7 #endif 8 8 9 9 #include <asm-generic/qspinlock_types.h>
+1 -1
drivers/gpu/drm/drm_framebuffer.c
··· 870 870 INIT_LIST_HEAD(&fb->filp_head); 871 871 872 872 fb->funcs = funcs; 873 - strcpy(fb->comm, current->comm); 873 + strscpy(fb->comm, current->comm); 874 874 875 875 ret = __drm_mode_object_add(dev, &fb->base, DRM_MODE_OBJECT_FB, 876 876 false, drm_framebuffer_free);
+3 -3
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1104 1104 } 1105 1105 1106 1106 INIT_LIST_HEAD(&dst->page_list); 1107 - strcpy(dst->name, name); 1107 + strscpy(dst->name, name); 1108 1108 dst->next = NULL; 1109 1109 1110 1110 dst->gtt_offset = vma_res->start; ··· 1404 1404 rcu_read_lock(); 1405 1405 task = pid_task(ctx->pid, PIDTYPE_PID); 1406 1406 if (task) { 1407 - strcpy(e->comm, task->comm); 1407 + strscpy(e->comm, task->comm); 1408 1408 e->pid = task->pid; 1409 1409 } 1410 1410 rcu_read_unlock(); ··· 1450 1450 return next; 1451 1451 } 1452 1452 1453 - strcpy(c->name, name); 1453 + strscpy(c->name, name); 1454 1454 c->vma_res = i915_vma_resource_get(vma_res); 1455 1455 1456 1456 c->next = next;
+1
drivers/md/bcache/Kconfig
··· 5 5 select BLOCK_HOLDER_DEPRECATED if SYSFS 6 6 select CRC64 7 7 select CLOSURES 8 + select MIN_HEAP 8 9 help 9 10 Allows a block device to be used as cache for other devices; uses 10 11 a btree for indexing and the layout is optimized for SSDs.
+2 -9
drivers/md/bcache/alloc.c
··· 189 189 return new_bucket_prio(ca, *lhs) < new_bucket_prio(ca, *rhs); 190 190 } 191 191 192 - static inline void new_bucket_swap(void *l, void *r, void __always_unused *args) 193 - { 194 - struct bucket **lhs = l, **rhs = r; 195 - 196 - swap(*lhs, *rhs); 197 - } 198 - 199 192 static void invalidate_buckets_lru(struct cache *ca) 200 193 { 201 194 struct bucket *b; 202 195 const struct min_heap_callbacks bucket_max_cmp_callback = { 203 196 .less = new_bucket_max_cmp, 204 - .swp = new_bucket_swap, 197 + .swp = NULL, 205 198 }; 206 199 const struct min_heap_callbacks bucket_min_cmp_callback = { 207 200 .less = new_bucket_min_cmp, 208 - .swp = new_bucket_swap, 201 + .swp = NULL, 209 202 }; 210 203 211 204 ca->heap.nr = 0;
+3 -11
drivers/md/bcache/bset.c
··· 1093 1093 return bkey_cmp(_l->k, _r->k) <= 0; 1094 1094 } 1095 1095 1096 - static inline void new_btree_iter_swap(void *iter1, void *iter2, void __always_unused *args) 1097 - { 1098 - struct btree_iter_set *_iter1 = iter1; 1099 - struct btree_iter_set *_iter2 = iter2; 1100 - 1101 - swap(*_iter1, *_iter2); 1102 - } 1103 - 1104 1096 static inline bool btree_iter_end(struct btree_iter *iter) 1105 1097 { 1106 1098 return !iter->heap.nr; ··· 1103 1111 { 1104 1112 const struct min_heap_callbacks callbacks = { 1105 1113 .less = new_btree_iter_cmp, 1106 - .swp = new_btree_iter_swap, 1114 + .swp = NULL, 1107 1115 }; 1108 1116 1109 1117 if (k != end) ··· 1149 1157 struct bkey *ret = NULL; 1150 1158 const struct min_heap_callbacks callbacks = { 1151 1159 .less = cmp, 1152 - .swp = new_btree_iter_swap, 1160 + .swp = NULL, 1153 1161 }; 1154 1162 1155 1163 if (!btree_iter_end(iter)) { ··· 1223 1231 : bch_ptr_invalid; 1224 1232 const struct min_heap_callbacks callbacks = { 1225 1233 .less = b->ops->sort_cmp, 1226 - .swp = new_btree_iter_swap, 1234 + .swp = NULL, 1227 1235 }; 1228 1236 1229 1237 /* Heapify the iterator, using our comparison function */
+1 -9
drivers/md/bcache/extents.c
··· 266 266 return !(c ? c > 0 : _l->k < _r->k); 267 267 } 268 268 269 - static inline void new_btree_iter_swap(void *iter1, void *iter2, void __always_unused *args) 270 - { 271 - struct btree_iter_set *_iter1 = iter1; 272 - struct btree_iter_set *_iter2 = iter2; 273 - 274 - swap(*_iter1, *_iter2); 275 - } 276 - 277 269 static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter, 278 270 struct bkey *tmp) 279 271 { 280 272 const struct min_heap_callbacks callbacks = { 281 273 .less = new_bch_extent_sort_cmp, 282 - .swp = new_btree_iter_swap, 274 + .swp = NULL, 283 275 }; 284 276 while (iter->heap.nr > 1) { 285 277 struct btree_iter_set *top = iter->heap.data, *i = top + 1;
+1 -9
drivers/md/bcache/movinggc.c
··· 190 190 return GC_SECTORS_USED(*_l) >= GC_SECTORS_USED(*_r); 191 191 } 192 192 193 - static void new_bucket_swap(void *l, void *r, void __always_unused *args) 194 - { 195 - struct bucket **_l = l; 196 - struct bucket **_r = r; 197 - 198 - swap(*_l, *_r); 199 - } 200 - 201 193 static unsigned int bucket_heap_top(struct cache *ca) 202 194 { 203 195 struct bucket *b; ··· 204 212 unsigned long sectors_to_move, reserve_sectors; 205 213 const struct min_heap_callbacks callbacks = { 206 214 .less = new_bucket_cmp, 207 - .swp = new_bucket_swap, 215 + .swp = NULL, 208 216 }; 209 217 210 218 if (!c->copy_gc_enabled)
+1
drivers/md/dm-vdo/Kconfig
··· 7 7 select DM_BUFIO 8 8 select LZ4_COMPRESS 9 9 select LZ4_DECOMPRESS 10 + select MIN_HEAP 10 11 help 11 12 This device mapper target presents a block device with 12 13 deduplication, compression and thin-provisioning.
+1 -1
drivers/md/dm-vdo/repair.c
··· 166 166 167 167 static const struct min_heap_callbacks repair_min_heap = { 168 168 .less = mapping_is_less_than, 169 - .swp = swap_mappings, 169 + .swp = NULL, 170 170 }; 171 171 172 172 static struct numbered_block_mapping *sort_next_heap_element(struct repair_completion *repair)
+1 -9
drivers/md/dm-vdo/slab-depot.c
··· 3301 3301 return info1->slab_number < info2->slab_number; 3302 3302 } 3303 3303 3304 - static void swap_slab_statuses(void *item1, void *item2, void __always_unused *args) 3305 - { 3306 - struct slab_status *info1 = item1; 3307 - struct slab_status *info2 = item2; 3308 - 3309 - swap(*info1, *info2); 3310 - } 3311 - 3312 3304 static const struct min_heap_callbacks slab_status_min_heap = { 3313 3305 .less = slab_status_is_less_than, 3314 - .swp = swap_slab_statuses, 3306 + .swp = NULL, 3315 3307 }; 3316 3308 3317 3309 /* Inform the slab actor that a action has finished on some slab; used by apply_to_slabs(). */
+1
fs/bcachefs/Kconfig
··· 24 24 select XXHASH 25 25 select SRCU 26 26 select SYMBOLIC_ERRNAME 27 + select MIN_HEAP 27 28 help 28 29 The bcachefs filesystem - a modern, copy on write filesystem, with 29 30 support for multiple devices, compression, checksumming, etc.
+4 -21
fs/bcachefs/clock.c
··· 14 14 return (*_l)->expire < (*_r)->expire; 15 15 } 16 16 17 - static inline void io_timer_swp(void *l, void *r, void __always_unused *args) 18 - { 19 - struct io_timer **_l = (struct io_timer **)l; 20 - struct io_timer **_r = (struct io_timer **)r; 21 - 22 - swap(*_l, *_r); 23 - } 17 + static const struct min_heap_callbacks callbacks = { 18 + .less = io_timer_cmp, 19 + .swp = NULL, 20 + }; 24 21 25 22 void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer) 26 23 { 27 - const struct min_heap_callbacks callbacks = { 28 - .less = io_timer_cmp, 29 - .swp = io_timer_swp, 30 - }; 31 - 32 24 spin_lock(&clock->timer_lock); 33 25 34 26 if (time_after_eq64((u64) atomic64_read(&clock->now), timer->expire)) { ··· 40 48 41 49 void bch2_io_timer_del(struct io_clock *clock, struct io_timer *timer) 42 50 { 43 - const struct min_heap_callbacks callbacks = { 44 - .less = io_timer_cmp, 45 - .swp = io_timer_swp, 46 - }; 47 - 48 51 spin_lock(&clock->timer_lock); 49 52 50 53 for (size_t i = 0; i < clock->timers.nr; i++) ··· 129 142 static struct io_timer *get_expired_timer(struct io_clock *clock, u64 now) 130 143 { 131 144 struct io_timer *ret = NULL; 132 - const struct min_heap_callbacks callbacks = { 133 - .less = io_timer_cmp, 134 - .swp = io_timer_swp, 135 - }; 136 145 137 146 if (clock->timers.nr && 138 147 time_after_eq64(now, clock->timers.data[0]->expire)) {
+5 -14
fs/bcachefs/ec.c
··· 1057 1057 ec_stripes_heap_set_backpointer(_h, j); 1058 1058 } 1059 1059 1060 + static const struct min_heap_callbacks callbacks = { 1061 + .less = ec_stripes_heap_cmp, 1062 + .swp = ec_stripes_heap_swap, 1063 + }; 1064 + 1060 1065 static void heap_verify_backpointer(struct bch_fs *c, size_t idx) 1061 1066 { 1062 1067 ec_stripes_heap *h = &c->ec_stripes_heap; ··· 1074 1069 void bch2_stripes_heap_del(struct bch_fs *c, 1075 1070 struct stripe *m, size_t idx) 1076 1071 { 1077 - const struct min_heap_callbacks callbacks = { 1078 - .less = ec_stripes_heap_cmp, 1079 - .swp = ec_stripes_heap_swap, 1080 - }; 1081 - 1082 1072 mutex_lock(&c->ec_stripes_heap_lock); 1083 1073 heap_verify_backpointer(c, idx); 1084 1074 ··· 1084 1084 void bch2_stripes_heap_insert(struct bch_fs *c, 1085 1085 struct stripe *m, size_t idx) 1086 1086 { 1087 - const struct min_heap_callbacks callbacks = { 1088 - .less = ec_stripes_heap_cmp, 1089 - .swp = ec_stripes_heap_swap, 1090 - }; 1091 - 1092 1087 mutex_lock(&c->ec_stripes_heap_lock); 1093 1088 BUG_ON(min_heap_full(&c->ec_stripes_heap)); 1094 1089 ··· 1102 1107 void bch2_stripes_heap_update(struct bch_fs *c, 1103 1108 struct stripe *m, size_t idx) 1104 1109 { 1105 - const struct min_heap_callbacks callbacks = { 1106 - .less = ec_stripes_heap_cmp, 1107 - .swp = ec_stripes_heap_swap, 1108 - }; 1109 1110 ec_stripes_heap *h = &c->ec_stripes_heap; 1110 1111 bool do_deletes; 1111 1112 size_t i;
-10
fs/exec.c
··· 1189 1189 return 0; 1190 1190 } 1191 1191 1192 - char *__get_task_comm(char *buf, size_t buf_size, struct task_struct *tsk) 1193 - { 1194 - task_lock(tsk); 1195 - /* Always NUL terminated and zero-padded */ 1196 - strscpy_pad(buf, tsk->comm, buf_size); 1197 - task_unlock(tsk); 1198 - return buf; 1199 - } 1200 - EXPORT_SYMBOL_GPL(__get_task_comm); 1201 - 1202 1192 /* 1203 1193 * These functions flushes out all traces of the currently running executable 1204 1194 * so that a new one can be started
+85 -65
fs/nilfs2/alloc.c
··· 177 177 * nilfs_palloc_desc_block_init - initialize buffer of a group descriptor block 178 178 * @inode: inode of metadata file 179 179 * @bh: buffer head of the buffer to be initialized 180 - * @kaddr: kernel address mapped for the page including the buffer 180 + * @from: kernel address mapped for a chunk of the block 181 + * 182 + * This function does not yet support the case where block size > PAGE_SIZE. 181 183 */ 182 184 static void nilfs_palloc_desc_block_init(struct inode *inode, 183 - struct buffer_head *bh, void *kaddr) 185 + struct buffer_head *bh, void *from) 184 186 { 185 - struct nilfs_palloc_group_desc *desc = kaddr + bh_offset(bh); 187 + struct nilfs_palloc_group_desc *desc = from; 186 188 unsigned long n = nilfs_palloc_groups_per_desc_block(inode); 187 189 __le32 nfrees; 188 190 ··· 339 337 } 340 338 341 339 /** 342 - * nilfs_palloc_block_get_group_desc - get kernel address of a group descriptor 340 + * nilfs_palloc_group_desc_offset - calculate the byte offset of a group 341 + * descriptor in the folio containing it 343 342 * @inode: inode of metadata file using this allocator 344 343 * @group: group number 345 - * @bh: buffer head of the buffer storing the group descriptor block 346 - * @kaddr: kernel address mapped for the page including the buffer 344 + * @bh: buffer head of the group descriptor block 345 + * 346 + * Return: Byte offset in the folio of the group descriptor for @group. 
  */
-static struct nilfs_palloc_group_desc *
-nilfs_palloc_block_get_group_desc(const struct inode *inode,
-				  unsigned long group,
-				  const struct buffer_head *bh, void *kaddr)
+static size_t nilfs_palloc_group_desc_offset(const struct inode *inode,
+					     unsigned long group,
+					     const struct buffer_head *bh)
 {
-	return (struct nilfs_palloc_group_desc *)(kaddr + bh_offset(bh)) +
-		group % nilfs_palloc_groups_per_desc_block(inode);
+	return offset_in_folio(bh->b_folio, bh->b_data) +
+		sizeof(struct nilfs_palloc_group_desc) *
+		(group % nilfs_palloc_groups_per_desc_block(inode));
 }
 
 /**
- * nilfs_palloc_block_get_entry - get kernel address of an entry
- * @inode: inode of metadata file using this allocator
- * @nr: serial number of the entry (e.g. inode number)
- * @bh: buffer head of the buffer storing the entry block
- * @kaddr: kernel address mapped for the page including the buffer
+ * nilfs_palloc_bitmap_offset - calculate the byte offset of a bitmap block
+ *                              in the folio containing it
+ * @bh: buffer head of the bitmap block
+ *
+ * Return: Byte offset in the folio of the bitmap block for @bh.
  */
-void *nilfs_palloc_block_get_entry(const struct inode *inode, __u64 nr,
-				   const struct buffer_head *bh, void *kaddr)
+static size_t nilfs_palloc_bitmap_offset(const struct buffer_head *bh)
 {
-	unsigned long entry_offset, group_offset;
+	return offset_in_folio(bh->b_folio, bh->b_data);
+}
 
-	nilfs_palloc_group(inode, nr, &group_offset);
-	entry_offset = group_offset % NILFS_MDT(inode)->mi_entries_per_block;
+/**
+ * nilfs_palloc_entry_offset - calculate the byte offset of an entry in the
+ *                             folio containing it
+ * @inode: inode of metadata file using this allocator
+ * @nr: serial number of the entry (e.g. inode number)
+ * @bh: buffer head of the entry block
+ *
+ * Return: Byte offset in the folio of the entry @nr.
+ */
+size_t nilfs_palloc_entry_offset(const struct inode *inode, __u64 nr,
+				 const struct buffer_head *bh)
+{
+	unsigned long entry_index_in_group, entry_index_in_block;
 
-	return kaddr + bh_offset(bh) +
-		entry_offset * NILFS_MDT(inode)->mi_entry_size;
+	nilfs_palloc_group(inode, nr, &entry_index_in_group);
+	entry_index_in_block = entry_index_in_group %
+		NILFS_MDT(inode)->mi_entries_per_block;
+
+	return offset_in_folio(bh->b_folio, bh->b_data) +
+		entry_index_in_block * NILFS_MDT(inode)->mi_entry_size;
 }
 
 /**
···
 	struct buffer_head *desc_bh, *bitmap_bh;
 	struct nilfs_palloc_group_desc *desc;
 	unsigned char *bitmap;
-	void *desc_kaddr, *bitmap_kaddr;
+	size_t doff, boff;
 	unsigned long group, maxgroup, ngroups;
 	unsigned long group_offset, maxgroup_offset;
 	unsigned long n, entries_per_group;
···
 		ret = nilfs_palloc_get_desc_block(inode, group, 1, &desc_bh);
 		if (ret < 0)
 			return ret;
-		desc_kaddr = kmap_local_page(desc_bh->b_page);
-		desc = nilfs_palloc_block_get_group_desc(
-			inode, group, desc_bh, desc_kaddr);
+
+		doff = nilfs_palloc_group_desc_offset(inode, group, desc_bh);
+		desc = kmap_local_folio(desc_bh->b_folio, doff);
 		n = nilfs_palloc_rest_groups_in_desc_block(inode, group,
 							   maxgroup);
-		for (j = 0; j < n; j++, desc++, group++, group_offset = 0) {
+		for (j = 0; j < n; j++, group++, group_offset = 0) {
 			lock = nilfs_mdt_bgl_lock(inode, group);
-			if (nilfs_palloc_group_desc_nfrees(desc, lock) == 0)
+			if (nilfs_palloc_group_desc_nfrees(&desc[j], lock) == 0)
 				continue;
 
-			kunmap_local(desc_kaddr);
+			kunmap_local(desc);
 			ret = nilfs_palloc_get_bitmap_block(inode, group, 1,
 							    &bitmap_bh);
 			if (unlikely(ret < 0)) {
···
 				return ret;
 			}
 
-			desc_kaddr = kmap_local_page(desc_bh->b_page);
-			desc = nilfs_palloc_block_get_group_desc(
-				inode, group, desc_bh, desc_kaddr);
+			/*
+			 * Re-kmap the folio containing the first (and
+			 * subsequent) group descriptors.
+			 */
+			desc = kmap_local_folio(desc_bh->b_folio, doff);
 
-			bitmap_kaddr = kmap_local_page(bitmap_bh->b_page);
-			bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
+			boff = nilfs_palloc_bitmap_offset(bitmap_bh);
+			bitmap = kmap_local_folio(bitmap_bh->b_folio, boff);
 			pos = nilfs_palloc_find_available_slot(
 				bitmap, group_offset, entries_per_group, lock,
 				wrap);
···
 			 * beginning, the wrap flag only has an effect on the
 			 * first search.
 			 */
-			kunmap_local(bitmap_kaddr);
+			kunmap_local(bitmap);
 			if (pos >= 0)
 				goto found;
 
 			brelse(bitmap_bh);
 		}
 
-		kunmap_local(desc_kaddr);
+		kunmap_local(desc);
 		brelse(desc_bh);
 	}
···
 
 found:
 	/* found a free entry */
-	nilfs_palloc_group_desc_add_entries(desc, lock, -1);
+	nilfs_palloc_group_desc_add_entries(&desc[j], lock, -1);
 	req->pr_entry_nr = entries_per_group * group + pos;
-	kunmap_local(desc_kaddr);
+	kunmap_local(desc);
 
 	req->pr_desc_bh = desc_bh;
 	req->pr_bitmap_bh = bitmap_bh;
···
 void nilfs_palloc_commit_free_entry(struct inode *inode,
 				    struct nilfs_palloc_req *req)
 {
-	struct nilfs_palloc_group_desc *desc;
 	unsigned long group, group_offset;
+	size_t doff, boff;
+	struct nilfs_palloc_group_desc *desc;
 	unsigned char *bitmap;
-	void *desc_kaddr, *bitmap_kaddr;
 	spinlock_t *lock;
 
 	group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
-	desc_kaddr = kmap_local_page(req->pr_desc_bh->b_page);
-	desc = nilfs_palloc_block_get_group_desc(inode, group,
-						 req->pr_desc_bh, desc_kaddr);
-	bitmap_kaddr = kmap_local_page(req->pr_bitmap_bh->b_page);
-	bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
+	doff = nilfs_palloc_group_desc_offset(inode, group, req->pr_desc_bh);
+	desc = kmap_local_folio(req->pr_desc_bh->b_folio, doff);
+
+	boff = nilfs_palloc_bitmap_offset(req->pr_bitmap_bh);
+	bitmap = kmap_local_folio(req->pr_bitmap_bh->b_folio, boff);
 	lock = nilfs_mdt_bgl_lock(inode, group);
 
 	if (!nilfs_clear_bit_atomic(lock, group_offset, bitmap))
···
 	else
 		nilfs_palloc_group_desc_add_entries(desc, lock, 1);
 
-	kunmap_local(bitmap_kaddr);
-	kunmap_local(desc_kaddr);
+	kunmap_local(bitmap);
+	kunmap_local(desc);
 
 	mark_buffer_dirty(req->pr_desc_bh);
 	mark_buffer_dirty(req->pr_bitmap_bh);
···
 				    struct nilfs_palloc_req *req)
 {
 	struct nilfs_palloc_group_desc *desc;
-	void *desc_kaddr, *bitmap_kaddr;
+	size_t doff, boff;
 	unsigned char *bitmap;
 	unsigned long group, group_offset;
 	spinlock_t *lock;
 
 	group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
-	desc_kaddr = kmap_local_page(req->pr_desc_bh->b_page);
-	desc = nilfs_palloc_block_get_group_desc(inode, group,
-						 req->pr_desc_bh, desc_kaddr);
-	bitmap_kaddr = kmap_local_page(req->pr_bitmap_bh->b_page);
-	bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
+	doff = nilfs_palloc_group_desc_offset(inode, group, req->pr_desc_bh);
+	desc = kmap_local_folio(req->pr_desc_bh->b_folio, doff);
+
+	boff = nilfs_palloc_bitmap_offset(req->pr_bitmap_bh);
+	bitmap = kmap_local_folio(req->pr_bitmap_bh->b_folio, boff);
 	lock = nilfs_mdt_bgl_lock(inode, group);
 
 	if (!nilfs_clear_bit_atomic(lock, group_offset, bitmap))
···
 	else
 		nilfs_palloc_group_desc_add_entries(desc, lock, 1);
 
-	kunmap_local(bitmap_kaddr);
-	kunmap_local(desc_kaddr);
+	kunmap_local(bitmap);
+	kunmap_local(desc);
 
 	brelse(req->pr_bitmap_bh);
 	brelse(req->pr_desc_bh);
···
 	struct buffer_head *desc_bh, *bitmap_bh;
 	struct nilfs_palloc_group_desc *desc;
 	unsigned char *bitmap;
-	void *desc_kaddr, *bitmap_kaddr;
+	size_t doff, boff;
 	unsigned long group, group_offset;
 	__u64 group_min_nr, last_nrs[8];
 	const unsigned long epg = nilfs_palloc_entries_per_group(inode);
···
 		/* Get the first entry number of the group */
 		group_min_nr = (__u64)group * epg;
 
-		bitmap_kaddr = kmap_local_page(bitmap_bh->b_page);
-		bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
+		boff = nilfs_palloc_bitmap_offset(bitmap_bh);
+		bitmap = kmap_local_folio(bitmap_bh->b_folio, boff);
 		lock = nilfs_mdt_bgl_lock(inode, group);
 
 		j = i;
···
 			entry_start = rounddown(group_offset, epb);
 		} while (true);
 
-		kunmap_local(bitmap_kaddr);
+		kunmap_local(bitmap);
 		mark_buffer_dirty(bitmap_bh);
 		brelse(bitmap_bh);
 
···
 				  inode->i_ino);
 		}
 
-		desc_kaddr = kmap_local_page(desc_bh->b_page);
-		desc = nilfs_palloc_block_get_group_desc(
-			inode, group, desc_bh, desc_kaddr);
+		doff = nilfs_palloc_group_desc_offset(inode, group, desc_bh);
+		desc = kmap_local_folio(desc_bh->b_folio, doff);
 		nfree = nilfs_palloc_group_desc_add_entries(desc, lock, n);
-		kunmap_local(desc_kaddr);
+		kunmap_local(desc);
 		mark_buffer_dirty(desc_bh);
 		nilfs_mdt_mark_dirty(inode);
 		brelse(desc_bh);
+2 -2
fs/nilfs2/alloc.h
···
 int nilfs_palloc_init_blockgroup(struct inode *, unsigned int);
 int nilfs_palloc_get_entry_block(struct inode *, __u64, int,
 				 struct buffer_head **);
-void *nilfs_palloc_block_get_entry(const struct inode *, __u64,
-				   const struct buffer_head *, void *);
+size_t nilfs_palloc_entry_offset(const struct inode *inode, __u64 nr,
+				 const struct buffer_head *bh);
 
 int nilfs_palloc_count_max_entries(struct inode *, u64, u64 *);
 
+206 -177
fs/nilfs2/cpfile.c
···
 static unsigned int
 nilfs_cpfile_block_add_valid_checkpoints(const struct inode *cpfile,
 					 struct buffer_head *bh,
-					 void *kaddr,
 					 unsigned int n)
 {
-	struct nilfs_checkpoint *cp = kaddr + bh_offset(bh);
+	struct nilfs_checkpoint *cp;
 	unsigned int count;
 
+	cp = kmap_local_folio(bh->b_folio,
+			      offset_in_folio(bh->b_folio, bh->b_data));
 	count = le32_to_cpu(cp->cp_checkpoints_count) + n;
 	cp->cp_checkpoints_count = cpu_to_le32(count);
+	kunmap_local(cp);
 	return count;
 }
 
 static unsigned int
 nilfs_cpfile_block_sub_valid_checkpoints(const struct inode *cpfile,
 					 struct buffer_head *bh,
-					 void *kaddr,
 					 unsigned int n)
 {
-	struct nilfs_checkpoint *cp = kaddr + bh_offset(bh);
+	struct nilfs_checkpoint *cp;
 	unsigned int count;
 
+	cp = kmap_local_folio(bh->b_folio,
+			      offset_in_folio(bh->b_folio, bh->b_data));
 	WARN_ON(le32_to_cpu(cp->cp_checkpoints_count) < n);
 	count = le32_to_cpu(cp->cp_checkpoints_count) - n;
 	cp->cp_checkpoints_count = cpu_to_le32(count);
+	kunmap_local(cp);
 	return count;
-}
-
-static inline struct nilfs_cpfile_header *
-nilfs_cpfile_block_get_header(const struct inode *cpfile,
-			      struct buffer_head *bh,
-			      void *kaddr)
-{
-	return kaddr + bh_offset(bh);
-}
-
-static struct nilfs_checkpoint *
-nilfs_cpfile_block_get_checkpoint(const struct inode *cpfile, __u64 cno,
-				  struct buffer_head *bh,
-				  void *kaddr)
-{
-	return kaddr + bh_offset(bh) + nilfs_cpfile_get_offset(cpfile, cno) *
-		NILFS_MDT(cpfile)->mi_entry_size;
 }
 
 static void nilfs_cpfile_block_init(struct inode *cpfile,
 				    struct buffer_head *bh,
-				    void *kaddr)
+				    void *from)
 {
-	struct nilfs_checkpoint *cp = kaddr + bh_offset(bh);
+	struct nilfs_checkpoint *cp = from;
 	size_t cpsz = NILFS_MDT(cpfile)->mi_entry_size;
 	int n = nilfs_cpfile_checkpoints_per_block(cpfile);
···
 		nilfs_checkpoint_set_invalid(cp);
 		cp = (void *)cp + cpsz;
 	}
+}
+
+/**
+ * nilfs_cpfile_checkpoint_offset - calculate the byte offset of a checkpoint
+ *                                  entry in the folio containing it
+ * @cpfile: checkpoint file inode
+ * @cno:    checkpoint number
+ * @bh:     buffer head of block containing checkpoint indexed by @cno
+ *
+ * Return: Byte offset in the folio of the checkpoint specified by @cno.
+ */
+static size_t nilfs_cpfile_checkpoint_offset(const struct inode *cpfile,
+					     __u64 cno,
+					     struct buffer_head *bh)
+{
+	return offset_in_folio(bh->b_folio, bh->b_data) +
+		nilfs_cpfile_get_offset(cpfile, cno) *
+		NILFS_MDT(cpfile)->mi_entry_size;
+}
+
+/**
+ * nilfs_cpfile_cp_snapshot_list_offset - calculate the byte offset of a
+ *                                        checkpoint snapshot list in the folio
+ *                                        containing it
+ * @cpfile: checkpoint file inode
+ * @cno:    checkpoint number
+ * @bh:     buffer head of block containing checkpoint indexed by @cno
+ *
+ * Return: Byte offset in the folio of the checkpoint snapshot list specified
+ *         by @cno.
+ */
+static size_t nilfs_cpfile_cp_snapshot_list_offset(const struct inode *cpfile,
+						   __u64 cno,
+						   struct buffer_head *bh)
+{
+	return nilfs_cpfile_checkpoint_offset(cpfile, cno, bh) +
+		offsetof(struct nilfs_checkpoint, cp_snapshot_list);
+}
+
+/**
+ * nilfs_cpfile_ch_snapshot_list_offset - calculate the byte offset of the
+ *                                        snapshot list in the header
+ *
+ * Return: Byte offset in the folio of the checkpoint snapshot list
+ */
+static size_t nilfs_cpfile_ch_snapshot_list_offset(void)
+{
+	return offsetof(struct nilfs_cpfile_header, ch_snapshot_list);
 }
 
 static int nilfs_cpfile_get_header_block(struct inode *cpfile,
···
 {
 	struct buffer_head *cp_bh;
 	struct nilfs_checkpoint *cp;
-	void *kaddr;
+	size_t offset;
 	int ret;
 
 	if (cno < 1 || cno > nilfs_mdt_cno(cpfile))
···
 		goto out_sem;
 	}
 
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	if (nilfs_checkpoint_invalid(cp)) {
 		ret = -EINVAL;
 		goto put_cp;
···
 	root->ifile = ifile;
 
 put_cp:
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 	brelse(cp_bh);
 out_sem:
 	up_read(&NILFS_MDT(cpfile)->mi_sem);
···
 	struct buffer_head *header_bh, *cp_bh;
 	struct nilfs_cpfile_header *header;
 	struct nilfs_checkpoint *cp;
-	void *kaddr;
+	size_t offset;
 	int ret;
 
 	if (WARN_ON_ONCE(cno < 1))
···
 	if (unlikely(ret < 0))
 		goto out_header;
 
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	if (nilfs_checkpoint_invalid(cp)) {
 		/* a newly-created checkpoint */
 		nilfs_checkpoint_clear_invalid(cp);
+		kunmap_local(cp);
 		if (!nilfs_cpfile_is_in_first(cpfile, cno))
 			nilfs_cpfile_block_add_valid_checkpoints(cpfile, cp_bh,
-								 kaddr, 1);
-		kunmap_local(kaddr);
+								 1);
 
-		kaddr = kmap_local_page(header_bh->b_page);
-		header = nilfs_cpfile_block_get_header(cpfile, header_bh,
-						       kaddr);
+		header = kmap_local_folio(header_bh->b_folio, 0);
 		le64_add_cpu(&header->ch_ncheckpoints, 1);
-		kunmap_local(kaddr);
+		kunmap_local(header);
 		mark_buffer_dirty(header_bh);
 	} else {
-		kunmap_local(kaddr);
+		kunmap_local(cp);
 	}
 
 	/* Force the buffer and the inode to become dirty */
···
 {
 	struct buffer_head *cp_bh;
 	struct nilfs_checkpoint *cp;
-	void *kaddr;
+	size_t offset;
 	int ret;
 
 	if (WARN_ON_ONCE(cno < 1))
···
 		goto out_sem;
 	}
 
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	if (unlikely(nilfs_checkpoint_invalid(cp))) {
-		kunmap_local(kaddr);
+		kunmap_local(cp);
 		brelse(cp_bh);
 		goto error;
 	}
···
 	nilfs_write_inode_common(root->ifile, &cp->cp_ifile_inode);
 	nilfs_bmap_write(NILFS_I(root->ifile)->i_bmap, &cp->cp_ifile_inode);
 
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 	brelse(cp_bh);
 out_sem:
 	up_write(&NILFS_MDT(cpfile)->mi_sem);
···
 	struct nilfs_checkpoint *cp;
 	size_t cpsz = NILFS_MDT(cpfile)->mi_entry_size;
 	__u64 cno;
+	size_t offset;
 	void *kaddr;
 	unsigned long tnicps;
 	int ret, ncps, nicps, nss, count, i;
···
 			continue;
 		}
 
-		kaddr = kmap_local_page(cp_bh->b_page);
-		cp = nilfs_cpfile_block_get_checkpoint(
-			cpfile, cno, cp_bh, kaddr);
+		offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+		cp = kaddr = kmap_local_folio(cp_bh->b_folio, offset);
 		nicps = 0;
 		for (i = 0; i < ncps; i++, cp = (void *)cp + cpsz) {
 			if (nilfs_checkpoint_snapshot(cp)) {
···
 				nicps++;
 			}
 		}
-		if (nicps > 0) {
-			tnicps += nicps;
-			mark_buffer_dirty(cp_bh);
-			nilfs_mdt_mark_dirty(cpfile);
-			if (!nilfs_cpfile_is_in_first(cpfile, cno)) {
-				count =
-					nilfs_cpfile_block_sub_valid_checkpoints(
-						cpfile, cp_bh, kaddr, nicps);
-				if (count == 0) {
-					/* make hole */
-					kunmap_local(kaddr);
-					brelse(cp_bh);
-					ret =
-						nilfs_cpfile_delete_checkpoint_block(
-							cpfile, cno);
-					if (ret == 0)
-						continue;
-					nilfs_err(cpfile->i_sb,
-						  "error %d deleting checkpoint block",
-						  ret);
-					break;
-				}
-			}
+		kunmap_local(kaddr);
+
+		if (nicps <= 0) {
+			brelse(cp_bh);
+			continue;
 		}
-
-		kunmap_local(kaddr);
+		tnicps += nicps;
+		mark_buffer_dirty(cp_bh);
+		nilfs_mdt_mark_dirty(cpfile);
+		if (nilfs_cpfile_is_in_first(cpfile, cno)) {
+			brelse(cp_bh);
+			continue;
+		}
+
+		count = nilfs_cpfile_block_sub_valid_checkpoints(cpfile, cp_bh,
+								 nicps);
 		brelse(cp_bh);
+		if (count)
+			continue;
+
+		/* Delete the block if there are no more valid checkpoints */
+		ret = nilfs_cpfile_delete_checkpoint_block(cpfile, cno);
+		if (unlikely(ret)) {
+			nilfs_err(cpfile->i_sb,
+				  "error %d deleting checkpoint block", ret);
+			break;
+		}
 	}
 
 	if (tnicps > 0) {
-		kaddr = kmap_local_page(header_bh->b_page);
-		header = nilfs_cpfile_block_get_header(cpfile, header_bh,
-						       kaddr);
+		header = kmap_local_folio(header_bh->b_folio, 0);
 		le64_add_cpu(&header->ch_ncheckpoints, -(u64)tnicps);
 		mark_buffer_dirty(header_bh);
 		nilfs_mdt_mark_dirty(cpfile);
-		kunmap_local(kaddr);
+		kunmap_local(header);
 	}
 
 	brelse(header_bh);
···
 	struct buffer_head *bh;
 	size_t cpsz = NILFS_MDT(cpfile)->mi_entry_size;
 	__u64 cur_cno = nilfs_mdt_cno(cpfile), cno = *cnop;
+	size_t offset;
 	void *kaddr;
 	int n, ret;
 	int ncps, i;
···
 		}
 		ncps = nilfs_cpfile_checkpoints_in_block(cpfile, cno, cur_cno);
 
-		kaddr = kmap_local_page(bh->b_page);
-		cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, bh, kaddr);
+		offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, bh);
+		cp = kaddr = kmap_local_folio(bh->b_folio, offset);
 		for (i = 0; i < ncps && n < nci; i++, cp = (void *)cp + cpsz) {
 			if (!nilfs_checkpoint_invalid(cp)) {
 				nilfs_cpfile_checkpoint_to_cpinfo(cpfile, cp,
···
 	struct nilfs_cpinfo *ci = buf;
 	__u64 curr = *cnop, next;
 	unsigned long curr_blkoff, next_blkoff;
-	void *kaddr;
+	size_t offset;
 	int n = 0, ret;
 
 	down_read(&NILFS_MDT(cpfile)->mi_sem);
···
 	ret = nilfs_cpfile_get_header_block(cpfile, &bh);
 	if (ret < 0)
 		goto out;
-	kaddr = kmap_local_page(bh->b_page);
-	header = nilfs_cpfile_block_get_header(cpfile, bh, kaddr);
+	header = kmap_local_folio(bh->b_folio, 0);
 	curr = le64_to_cpu(header->ch_snapshot_list.ssl_next);
-	kunmap_local(kaddr);
+	kunmap_local(header);
 	brelse(bh);
 	if (curr == 0) {
 		ret = 0;
···
 		ret = 0; /* No snapshots (started from a hole block) */
 		goto out;
 	}
-	kaddr = kmap_local_page(bh->b_page);
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, curr, bh);
+	cp = kmap_local_folio(bh->b_folio, offset);
 	while (n < nci) {
-		cp = nilfs_cpfile_block_get_checkpoint(cpfile, curr, bh, kaddr);
 		curr = ~(__u64)0; /* Terminator */
 		if (unlikely(nilfs_checkpoint_invalid(cp) ||
 			     !nilfs_checkpoint_snapshot(cp)))
···
 		if (next == 0)
 			break; /* reach end of the snapshot list */
 
+		kunmap_local(cp);
 		next_blkoff = nilfs_cpfile_get_blkoff(cpfile, next);
 		if (curr_blkoff != next_blkoff) {
-			kunmap_local(kaddr);
 			brelse(bh);
 			ret = nilfs_cpfile_get_checkpoint_block(cpfile, next,
 								0, &bh);
···
 				WARN_ON(ret == -ENOENT);
 				goto out;
 			}
-			kaddr = kmap_local_page(bh->b_page);
 		}
+		offset = nilfs_cpfile_checkpoint_offset(cpfile, next, bh);
+		cp = kmap_local_folio(bh->b_folio, offset);
 		curr = next;
 		curr_blkoff = next_blkoff;
 	}
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 	brelse(bh);
 	*cnop = curr;
 	ret = n;
···
 	return nilfs_cpfile_delete_checkpoints(cpfile, cno, cno + 1);
 }
 
-static struct nilfs_snapshot_list *
-nilfs_cpfile_block_get_snapshot_list(const struct inode *cpfile,
-				     __u64 cno,
-				     struct buffer_head *bh,
-				     void *kaddr)
-{
-	struct nilfs_cpfile_header *header;
-	struct nilfs_checkpoint *cp;
-	struct nilfs_snapshot_list *list;
-
-	if (cno != 0) {
-		cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, bh, kaddr);
-		list = &cp->cp_snapshot_list;
-	} else {
-		header = nilfs_cpfile_block_get_header(cpfile, bh, kaddr);
-		list = &header->ch_snapshot_list;
-	}
-	return list;
-}
-
 static int nilfs_cpfile_set_snapshot(struct inode *cpfile, __u64 cno)
 {
 	struct buffer_head *header_bh, *curr_bh, *prev_bh, *cp_bh;
···
 	struct nilfs_snapshot_list *list;
 	__u64 curr, prev;
 	unsigned long curr_blkoff, prev_blkoff;
-	void *kaddr;
+	size_t offset, curr_list_offset, prev_list_offset;
 	int ret;
 
 	if (cno == 0)
 		return -ENOENT; /* checkpoint number 0 is invalid */
 	down_write(&NILFS_MDT(cpfile)->mi_sem);
 
+	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
+	if (unlikely(ret < 0))
+		goto out_sem;
+
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh);
 	if (ret < 0)
-		goto out_sem;
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+		goto out_header;
+
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	if (nilfs_checkpoint_invalid(cp)) {
 		ret = -ENOENT;
-		kunmap_local(kaddr);
+		kunmap_local(cp);
 		goto out_cp;
 	}
 	if (nilfs_checkpoint_snapshot(cp)) {
 		ret = 0;
-		kunmap_local(kaddr);
+		kunmap_local(cp);
 		goto out_cp;
 	}
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 
-	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
-	if (ret < 0)
-		goto out_cp;
-	kaddr = kmap_local_page(header_bh->b_page);
-	header = nilfs_cpfile_block_get_header(cpfile, header_bh, kaddr);
+	/*
+	 * Find the last snapshot before the checkpoint being changed to
+	 * snapshot mode by going backwards through the snapshot list.
+	 * Set "prev" to its checkpoint number, or 0 if not found.
+	 */
+	header = kmap_local_folio(header_bh->b_folio, 0);
 	list = &header->ch_snapshot_list;
 	curr_bh = header_bh;
 	get_bh(curr_bh);
 	curr = 0;
 	curr_blkoff = 0;
+	curr_list_offset = nilfs_cpfile_ch_snapshot_list_offset();
 	prev = le64_to_cpu(list->ssl_prev);
 	while (prev > cno) {
 		prev_blkoff = nilfs_cpfile_get_blkoff(cpfile, prev);
 		curr = prev;
+		kunmap_local(list);
 		if (curr_blkoff != prev_blkoff) {
-			kunmap_local(kaddr);
 			brelse(curr_bh);
 			ret = nilfs_cpfile_get_checkpoint_block(cpfile, curr,
 								0, &curr_bh);
-			if (ret < 0)
-				goto out_header;
-			kaddr = kmap_local_page(curr_bh->b_page);
+			if (unlikely(ret < 0))
+				goto out_cp;
 		}
+		curr_list_offset = nilfs_cpfile_cp_snapshot_list_offset(
+			cpfile, curr, curr_bh);
+		list = kmap_local_folio(curr_bh->b_folio, curr_list_offset);
 		curr_blkoff = prev_blkoff;
-		cp = nilfs_cpfile_block_get_checkpoint(
-			cpfile, curr, curr_bh, kaddr);
-		list = &cp->cp_snapshot_list;
 		prev = le64_to_cpu(list->ssl_prev);
 	}
-	kunmap_local(kaddr);
+	kunmap_local(list);
 
 	if (prev != 0) {
 		ret = nilfs_cpfile_get_checkpoint_block(cpfile, prev, 0,
 							&prev_bh);
 		if (ret < 0)
			goto out_curr;
+
+		prev_list_offset = nilfs_cpfile_cp_snapshot_list_offset(
+			cpfile, prev, prev_bh);
 	} else {
 		prev_bh = header_bh;
 		get_bh(prev_bh);
+		prev_list_offset = nilfs_cpfile_ch_snapshot_list_offset();
 	}
 
-	kaddr = kmap_local_page(curr_bh->b_page);
-	list = nilfs_cpfile_block_get_snapshot_list(
-		cpfile, curr, curr_bh, kaddr);
+	/* Update the list entry for the next snapshot */
+	list = kmap_local_folio(curr_bh->b_folio, curr_list_offset);
 	list->ssl_prev = cpu_to_le64(cno);
-	kunmap_local(kaddr);
+	kunmap_local(list);
 
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	/* Update the checkpoint being changed to a snapshot */
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	cp->cp_snapshot_list.ssl_next = cpu_to_le64(curr);
 	cp->cp_snapshot_list.ssl_prev = cpu_to_le64(prev);
 	nilfs_checkpoint_set_snapshot(cp);
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 
-	kaddr = kmap_local_page(prev_bh->b_page);
-	list = nilfs_cpfile_block_get_snapshot_list(
-		cpfile, prev, prev_bh, kaddr);
+	/* Update the list entry for the previous snapshot */
+	list = kmap_local_folio(prev_bh->b_folio, prev_list_offset);
 	list->ssl_next = cpu_to_le64(cno);
-	kunmap_local(kaddr);
+	kunmap_local(list);
 
-	kaddr = kmap_local_page(header_bh->b_page);
-	header = nilfs_cpfile_block_get_header(cpfile, header_bh, kaddr);
+	/* Update the statistics in the header */
+	header = kmap_local_folio(header_bh->b_folio, 0);
 	le64_add_cpu(&header->ch_nsnapshots, 1);
-	kunmap_local(kaddr);
+	kunmap_local(header);
 
 	mark_buffer_dirty(prev_bh);
 	mark_buffer_dirty(curr_bh);
···
 out_curr:
 	brelse(curr_bh);
 
-out_header:
-	brelse(header_bh);
-
 out_cp:
 	brelse(cp_bh);
+
+out_header:
+	brelse(header_bh);
 
 out_sem:
 	up_write(&NILFS_MDT(cpfile)->mi_sem);
···
 	struct nilfs_checkpoint *cp;
 	struct nilfs_snapshot_list *list;
 	__u64 next, prev;
-	void *kaddr;
+	size_t offset, next_list_offset, prev_list_offset;
 	int ret;
 
 	if (cno == 0)
 		return -ENOENT; /* checkpoint number 0 is invalid */
 	down_write(&NILFS_MDT(cpfile)->mi_sem);
 
+	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
+	if (unlikely(ret < 0))
+		goto out_sem;
+
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh);
 	if (ret < 0)
-		goto out_sem;
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+		goto out_header;
+
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, cp_bh);
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	if (nilfs_checkpoint_invalid(cp)) {
 		ret = -ENOENT;
-		kunmap_local(kaddr);
+		kunmap_local(cp);
 		goto out_cp;
 	}
 	if (!nilfs_checkpoint_snapshot(cp)) {
 		ret = 0;
-		kunmap_local(kaddr);
+		kunmap_local(cp);
 		goto out_cp;
 	}
 
 	list = &cp->cp_snapshot_list;
 	next = le64_to_cpu(list->ssl_next);
 	prev = le64_to_cpu(list->ssl_prev);
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 
-	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
-	if (ret < 0)
-		goto out_cp;
 	if (next != 0) {
 		ret = nilfs_cpfile_get_checkpoint_block(cpfile, next, 0,
 							&next_bh);
 		if (ret < 0)
-			goto out_header;
+			goto out_cp;
+
+		next_list_offset = nilfs_cpfile_cp_snapshot_list_offset(
+			cpfile, next, next_bh);
 	} else {
 		next_bh = header_bh;
 		get_bh(next_bh);
+		next_list_offset = nilfs_cpfile_ch_snapshot_list_offset();
 	}
 	if (prev != 0) {
 		ret = nilfs_cpfile_get_checkpoint_block(cpfile, prev, 0,
 							&prev_bh);
 		if (ret < 0)
 			goto out_next;
+
+		prev_list_offset = nilfs_cpfile_cp_snapshot_list_offset(
+			cpfile, prev, prev_bh);
 	} else {
 		prev_bh = header_bh;
 		get_bh(prev_bh);
+		prev_list_offset = nilfs_cpfile_ch_snapshot_list_offset();
 	}
 
-	kaddr = kmap_local_page(next_bh->b_page);
-	list = nilfs_cpfile_block_get_snapshot_list(
-		cpfile, next, next_bh, kaddr);
+	/* Update the list entry for the next snapshot */
+	list = kmap_local_folio(next_bh->b_folio, next_list_offset);
 	list->ssl_prev = cpu_to_le64(prev);
-	kunmap_local(kaddr);
+	kunmap_local(list);
 
-	kaddr = kmap_local_page(prev_bh->b_page);
-	list = nilfs_cpfile_block_get_snapshot_list(
-		cpfile, prev, prev_bh, kaddr);
+	/* Update the list entry for the previous snapshot */
+	list = kmap_local_folio(prev_bh->b_folio, prev_list_offset);
 	list->ssl_next = cpu_to_le64(next);
-	kunmap_local(kaddr);
+	kunmap_local(list);
 
-	kaddr = kmap_local_page(cp_bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
+	/* Update the snapshot being changed back to a plain checkpoint */
+	cp = kmap_local_folio(cp_bh->b_folio, offset);
 	cp->cp_snapshot_list.ssl_next = cpu_to_le64(0);
 	cp->cp_snapshot_list.ssl_prev = cpu_to_le64(0);
 	nilfs_checkpoint_clear_snapshot(cp);
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 
-	kaddr = kmap_local_page(header_bh->b_page);
-	header = nilfs_cpfile_block_get_header(cpfile, header_bh, kaddr);
+	/* Update the statistics in the header */
+	header = kmap_local_folio(header_bh->b_folio, 0);
 	le64_add_cpu(&header->ch_nsnapshots, -1);
-	kunmap_local(kaddr);
+	kunmap_local(header);
 
 	mark_buffer_dirty(next_bh);
 	mark_buffer_dirty(prev_bh);
···
 out_next:
 	brelse(next_bh);
 
-out_header:
-	brelse(header_bh);
-
 out_cp:
 	brelse(cp_bh);
+
+out_header:
+	brelse(header_bh);
 
 out_sem:
 	up_write(&NILFS_MDT(cpfile)->mi_sem);
···
 {
 	struct buffer_head *bh;
 	struct nilfs_checkpoint *cp;
-	void *kaddr;
+	size_t offset;
 	int ret;
 
 	/*
···
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &bh);
 	if (ret < 0)
 		goto out;
-	kaddr = kmap_local_page(bh->b_page);
-	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, bh, kaddr);
+
+	offset = nilfs_cpfile_checkpoint_offset(cpfile, cno, bh);
+	cp = kmap_local_folio(bh->b_folio, offset);
 	if (nilfs_checkpoint_invalid(cp))
 		ret = -ENOENT;
 	else
 		ret = nilfs_checkpoint_snapshot(cp);
-	kunmap_local(kaddr);
+	kunmap_local(cp);
 	brelse(bh);
 
 out:
···
 {
 	struct buffer_head *bh;
 	struct nilfs_cpfile_header *header;
-	void *kaddr;
 	int ret;
 
 	down_read(&NILFS_MDT(cpfile)->mi_sem);
···
 	ret = nilfs_cpfile_get_header_block(cpfile, &bh);
 	if (ret < 0)
 		goto out_sem;
-	kaddr = kmap_local_page(bh->b_page);
-	header = nilfs_cpfile_block_get_header(cpfile, bh, kaddr);
+	header = kmap_local_folio(bh->b_folio, 0);
 	cpstat->cs_cno = nilfs_mdt_cno(cpfile);
 	cpstat->cs_ncps = le64_to_cpu(header->ch_ncheckpoints);
 	cpstat->cs_nsss = le64_to_cpu(header->ch_nsnapshots);
-	kunmap_local(kaddr);
+	kunmap_local(header);
 	brelse(bh);
 
 out_sem:
+52 -46
fs/nilfs2/dat.c
···
89 89	void nilfs_dat_commit_alloc(struct inode *dat, struct nilfs_palloc_req *req)
90 90	{
91 91		struct nilfs_dat_entry *entry;
92 -		void *kaddr;
92 +		size_t offset;
93 93	
94 -		kaddr = kmap_local_page(req->pr_entry_bh->b_page);
95 -		entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
96 -						req->pr_entry_bh, kaddr);
94 +		offset = nilfs_palloc_entry_offset(dat, req->pr_entry_nr,
95 +						req->pr_entry_bh);
96 +		entry = kmap_local_folio(req->pr_entry_bh->b_folio, offset);
97 97		entry->de_start = cpu_to_le64(NILFS_CNO_MIN);
98 98		entry->de_end = cpu_to_le64(NILFS_CNO_MAX);
99 99		entry->de_blocknr = cpu_to_le64(0);
100 -		kunmap_local(kaddr);
100 +		kunmap_local(entry);
101 101	
102 102		nilfs_palloc_commit_alloc_entry(dat, req);
103 103		nilfs_dat_commit_entry(dat, req);
···
113 113				struct nilfs_palloc_req *req)
114 114	{
115 115		struct nilfs_dat_entry *entry;
116 -		void *kaddr;
116 +		size_t offset;
117 117	
118 -		kaddr = kmap_local_page(req->pr_entry_bh->b_page);
119 -		entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
120 -						req->pr_entry_bh, kaddr);
118 +		offset = nilfs_palloc_entry_offset(dat, req->pr_entry_nr,
119 +						req->pr_entry_bh);
120 +		entry = kmap_local_folio(req->pr_entry_bh->b_folio, offset);
121 121		entry->de_start = cpu_to_le64(NILFS_CNO_MIN);
122 122		entry->de_end = cpu_to_le64(NILFS_CNO_MIN);
123 123		entry->de_blocknr = cpu_to_le64(0);
124 -		kunmap_local(kaddr);
124 +		kunmap_local(entry);
125 125	
126 126		nilfs_dat_commit_entry(dat, req);
127 127	
···
143 143				sector_t blocknr)
144 144	{
145 145		struct nilfs_dat_entry *entry;
146 -		void *kaddr;
146 +		size_t offset;
147 147	
148 -		kaddr = kmap_local_page(req->pr_entry_bh->b_page);
149 -		entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
150 -						req->pr_entry_bh, kaddr);
148 +		offset = nilfs_palloc_entry_offset(dat, req->pr_entry_nr,
149 +						req->pr_entry_bh);
150 +		entry = kmap_local_folio(req->pr_entry_bh->b_folio, offset);
151 151		entry->de_start = cpu_to_le64(nilfs_mdt_cno(dat));
152 152		entry->de_blocknr = cpu_to_le64(blocknr);
153 -		kunmap_local(kaddr);
153 +		kunmap_local(entry);
154 154	
155 155		nilfs_dat_commit_entry(dat, req);
156 156	}
···
160 160		struct nilfs_dat_entry *entry;
161 161		__u64 start;
162 162		sector_t blocknr;
163 -		void *kaddr;
163 +		size_t offset;
164 164		int ret;
165 165	
166 166		ret = nilfs_dat_prepare_entry(dat, req, 0);
167 167		if (ret < 0)
168 168			return ret;
169 169	
170 -		kaddr = kmap_local_page(req->pr_entry_bh->b_page);
171 -		entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
172 -						req->pr_entry_bh, kaddr);
170 +		offset = nilfs_palloc_entry_offset(dat, req->pr_entry_nr,
171 +						req->pr_entry_bh);
172 +		entry = kmap_local_folio(req->pr_entry_bh->b_folio, offset);
173 173		start = le64_to_cpu(entry->de_start);
174 174		blocknr = le64_to_cpu(entry->de_blocknr);
175 -		kunmap_local(kaddr);
175 +		kunmap_local(entry);
176 176	
177 177		if (blocknr == 0) {
178 178			ret = nilfs_palloc_prepare_free_entry(dat, req);
···
200 200		struct nilfs_dat_entry *entry;
201 201		__u64 start, end;
202 202		sector_t blocknr;
203 -		void *kaddr;
203 +		size_t offset;
204 204	
205 -		kaddr = kmap_local_page(req->pr_entry_bh->b_page);
206 -		entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
207 -						req->pr_entry_bh, kaddr);
205 +		offset = nilfs_palloc_entry_offset(dat, req->pr_entry_nr,
206 +						req->pr_entry_bh);
207 +		entry = kmap_local_folio(req->pr_entry_bh->b_folio, offset);
208 208		end = start = le64_to_cpu(entry->de_start);
209 209		if (!dead) {
210 210			end = nilfs_mdt_cno(dat);
···
212 212		}
213 213		entry->de_end = cpu_to_le64(end);
214 214		blocknr = le64_to_cpu(entry->de_blocknr);
215 -		kunmap_local(kaddr);
215 +		kunmap_local(entry);
216 216	
217 217		if (blocknr == 0)
218 218			nilfs_dat_commit_free(dat, req);
···
225 225		struct nilfs_dat_entry *entry;
226 226		__u64 start;
227 227		sector_t blocknr;
228 -		void *kaddr;
228 +		size_t offset;
229 229	
230 -		kaddr = kmap_local_page(req->pr_entry_bh->b_page);
231 -		entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
232 -						req->pr_entry_bh, kaddr);
230 +		offset = nilfs_palloc_entry_offset(dat, req->pr_entry_nr,
231 +						req->pr_entry_bh);
232 +		entry = kmap_local_folio(req->pr_entry_bh->b_folio, offset);
233 233		start = le64_to_cpu(entry->de_start);
234 234		blocknr = le64_to_cpu(entry->de_blocknr);
235 -		kunmap_local(kaddr);
235 +		kunmap_local(entry);
236 236	
237 237		if (start == nilfs_mdt_cno(dat) && blocknr == 0)
238 238			nilfs_palloc_abort_free_entry(dat, req);
···
336 336	{
337 337		struct buffer_head *entry_bh;
338 338		struct nilfs_dat_entry *entry;
339 -		void *kaddr;
339 +		size_t offset;
340 340		int ret;
341 341	
342 342		ret = nilfs_palloc_get_entry_block(dat, vblocknr, 0, &entry_bh);
···
359 359			}
360 360		}
361 361	
362 -		kaddr = kmap_local_page(entry_bh->b_page);
363 -		entry = nilfs_palloc_block_get_entry(dat, vblocknr, entry_bh, kaddr);
362 +		offset = nilfs_palloc_entry_offset(dat, vblocknr, entry_bh);
363 +		entry = kmap_local_folio(entry_bh->b_folio, offset);
364 364		if (unlikely(entry->de_blocknr == cpu_to_le64(0))) {
365 365			nilfs_crit(dat->i_sb,
366 366				"%s: invalid vblocknr = %llu, [%llu, %llu)",
367 367				__func__, (unsigned long long)vblocknr,
368 368				(unsigned long long)le64_to_cpu(entry->de_start),
369 369				(unsigned long long)le64_to_cpu(entry->de_end));
370 -			kunmap_local(kaddr);
370 +			kunmap_local(entry);
371 371			brelse(entry_bh);
372 372			return -EINVAL;
373 373		}
374 374		WARN_ON(blocknr == 0);
375 375		entry->de_blocknr = cpu_to_le64(blocknr);
376 -		kunmap_local(kaddr);
376 +		kunmap_local(entry);
377 377	
378 378		mark_buffer_dirty(entry_bh);
379 379		nilfs_mdt_mark_dirty(dat);
···
407 407		struct buffer_head *entry_bh, *bh;
408 408		struct nilfs_dat_entry *entry;
409 409		sector_t blocknr;
410 -		void *kaddr;
410 +		size_t offset;
411 411		int ret;
412 412	
413 413		ret = nilfs_palloc_get_entry_block(dat, vblocknr, 0, &entry_bh);
···
423 423			}
424 424		}
425 425	
426 -		kaddr = kmap_local_page(entry_bh->b_page);
427 -		entry = nilfs_palloc_block_get_entry(dat, vblocknr, entry_bh, kaddr);
426 +		offset = nilfs_palloc_entry_offset(dat, vblocknr, entry_bh);
427 +		entry = kmap_local_folio(entry_bh->b_folio, offset);
428 428		blocknr = le64_to_cpu(entry->de_blocknr);
429 429		if (blocknr == 0) {
430 430			ret = -ENOENT;
···
433 433		*blocknrp = blocknr;
434 434	
435 435	out:
436 -		kunmap_local(kaddr);
436 +		kunmap_local(entry);
437 437		brelse(entry_bh);
438 438		return ret;
439 439	}
···
442 442			size_t nvi)
443 443	{
444 444		struct buffer_head *entry_bh;
445 -		struct nilfs_dat_entry *entry;
445 +		struct nilfs_dat_entry *entry, *first_entry;
446 446		struct nilfs_vinfo *vinfo = buf;
447 447		__u64 first, last;
448 -		void *kaddr;
448 +		size_t offset;
449 449		unsigned long entries_per_block = NILFS_MDT(dat)->mi_entries_per_block;
450 +		unsigned int entry_size = NILFS_MDT(dat)->mi_entry_size;
450 451		int i, j, n, ret;
451 452	
452 453		for (i = 0; i < nvi; i += n) {
···
455 454					0, &entry_bh);
456 455			if (ret < 0)
457 456				return ret;
458 -			kaddr = kmap_local_page(entry_bh->b_page);
459 -			/* last virtual block number in this block */
457 +	
460 458			first = vinfo->vi_vblocknr;
461 459			first = div64_ul(first, entries_per_block);
462 460			first *= entries_per_block;
461 +			/* first virtual block number in this block */
462 +	
463 463			last = first + entries_per_block - 1;
464 +			/* last virtual block number in this block */
465 +	
466 +			offset = nilfs_palloc_entry_offset(dat, first, entry_bh);
467 +			first_entry = kmap_local_folio(entry_bh->b_folio, offset);
464 468			for (j = i, n = 0;
465 469				j < nvi && vinfo->vi_vblocknr >= first &&
466 470					vinfo->vi_vblocknr <= last;
467 471				j++, n++, vinfo = (void *)vinfo + visz) {
468 -				entry = nilfs_palloc_block_get_entry(
469 -					dat, vinfo->vi_vblocknr, entry_bh, kaddr);
472 +				entry = (void *)first_entry +
473 +					(vinfo->vi_vblocknr - first) * entry_size;
470 474				vinfo->vi_start = le64_to_cpu(entry->de_start);
471 475				vinfo->vi_end = le64_to_cpu(entry->de_end);
472 476				vinfo->vi_blocknr = le64_to_cpu(entry->de_blocknr);
473 477			}
474 -			kunmap_local(kaddr);
478 +			kunmap_local(first_entry);
475 479			brelse(entry_bh);
476 480		}
477 481
+1 -1
fs/nilfs2/dir.c
···
95 95		unsigned int nr_dirty;
96 96		int err;
97 97	
98 -		nr_dirty = nilfs_page_count_clean_buffers(&folio->page, from, to);
98 +		nr_dirty = nilfs_page_count_clean_buffers(folio, from, to);
99 99		copied = block_write_end(NULL, mapping, pos, len, len, folio, NULL);
100 100		if (pos + copied > dir->i_size)
101 101			i_size_write(dir, pos + copied);
+5 -5
fs/nilfs2/ifile.c
···
98 98			.pr_entry_nr = ino, .pr_entry_bh = NULL
99 99		};
100 100		struct nilfs_inode *raw_inode;
101 -		void *kaddr;
101 +		size_t offset;
102 102		int ret;
103 103	
104 104		ret = nilfs_palloc_prepare_free_entry(ifile, &req);
···
113 113			return ret;
114 114		}
115 115	
116 -		kaddr = kmap_local_page(req.pr_entry_bh->b_page);
117 -		raw_inode = nilfs_palloc_block_get_entry(ifile, req.pr_entry_nr,
118 -						req.pr_entry_bh, kaddr);
116 +		offset = nilfs_palloc_entry_offset(ifile, req.pr_entry_nr,
117 +						req.pr_entry_bh);
118 +		raw_inode = kmap_local_folio(req.pr_entry_bh->b_folio, offset);
119 119		raw_inode->i_flags = 0;
120 -		kunmap_local(kaddr);
120 +		kunmap_local(raw_inode);
121 121	
122 122		mark_buffer_dirty(req.pr_entry_bh);
123 123		brelse(req.pr_entry_bh);
+2 -2
fs/nilfs2/ifile.h
···
21 21	static inline struct nilfs_inode *
22 22	nilfs_ifile_map_inode(struct inode *ifile, ino_t ino, struct buffer_head *ibh)
23 23	{
24 -		void *kaddr = kmap_local_page(ibh->b_page);
24 +		size_t __offset_in_folio = nilfs_palloc_entry_offset(ifile, ino, ibh);
25 25	
26 -		return nilfs_palloc_block_get_entry(ifile, ino, ibh, kaddr);
26 +		return kmap_local_folio(ibh->b_folio, __offset_in_folio);
27 27	}
28 28	
29 29	static inline void nilfs_ifile_unmap_inode(struct nilfs_inode *raw_inode)
+2 -33
fs/nilfs2/inode.c
···
170 170		return err;
171 171	}
172 172	
173 -	static int nilfs_writepage(struct page *page, struct writeback_control *wbc)
174 -	{
175 -		struct folio *folio = page_folio(page);
176 -		struct inode *inode = folio->mapping->host;
177 -		int err;
178 -	
179 -		if (sb_rdonly(inode->i_sb)) {
180 -			/*
181 -			 * It means that filesystem was remounted in read-only
182 -			 * mode because of error or metadata corruption. But we
183 -			 * have dirty pages that try to be flushed in background.
184 -			 * So, here we simply discard this dirty page.
185 -			 */
186 -			nilfs_clear_folio_dirty(folio);
187 -			folio_unlock(folio);
188 -			return -EROFS;
189 -		}
190 -	
191 -		folio_redirty_for_writepage(wbc, folio);
192 -		folio_unlock(folio);
193 -	
194 -		if (wbc->sync_mode == WB_SYNC_ALL) {
195 -			err = nilfs_construct_segment(inode->i_sb);
196 -			if (unlikely(err))
197 -				return err;
198 -		} else if (wbc->for_reclaim)
199 -			nilfs_flush_segment(inode->i_sb, inode->i_ino);
200 -	
201 -		return 0;
202 -	}
203 -	
204 173	static bool nilfs_dirty_folio(struct address_space *mapping,
205 174				struct folio *folio)
206 175	{
···
242 273		unsigned int nr_dirty;
243 274		int err;
244 275	
245 -		nr_dirty = nilfs_page_count_clean_buffers(&folio->page, start,
276 +		nr_dirty = nilfs_page_count_clean_buffers(folio, start,
246 277						start + copied);
247 278		copied = generic_write_end(file, mapping, pos, len, copied, folio,
248 279						fsdata);
···
264 295	}
265 296	
266 297	const struct address_space_operations nilfs_aops = {
267 -		.writepage		= nilfs_writepage,
268 298		.read_folio		= nilfs_read_folio,
269 299		.writepages		= nilfs_writepages,
270 300		.dirty_folio		= nilfs_dirty_folio,
···
272 304		.write_end		= nilfs_write_end,
273 305		.invalidate_folio	= block_invalidate_folio,
274 306		.direct_IO		= nilfs_direct_IO,
307 +		.migrate_folio		= buffer_migrate_folio_norefs,
275 308		.is_partially_uptodate  = block_is_partially_uptodate,
276 309	};
277 310	
+28 -12
fs/nilfs2/mdt.c
···
33 33				struct buffer_head *, void *))
34 34	{
35 35		struct nilfs_inode_info *ii = NILFS_I(inode);
36 -		void *kaddr;
36 +		struct folio *folio = bh->b_folio;
37 +		void *from;
37 38		int ret;
38 39	
39 40		/* Caller exclude read accesses using page lock */
···
48 47	
49 48		set_buffer_mapped(bh);
50 49	
51 -		kaddr = kmap_local_page(bh->b_page);
52 -		memset(kaddr + bh_offset(bh), 0, i_blocksize(inode));
50 +		/* Initialize block (block size > PAGE_SIZE not yet supported) */
51 +		from = kmap_local_folio(folio, offset_in_folio(folio, bh->b_data));
52 +		memset(from, 0, bh->b_size);
53 53		if (init_block)
54 -			init_block(inode, bh, kaddr);
55 -		flush_dcache_page(bh->b_page);
56 -		kunmap_local(kaddr);
54 +			init_block(inode, bh, from);
55 +		kunmap_local(from);
56 +	
57 +		flush_dcache_folio(folio);
57 58	
58 59		set_buffer_uptodate(bh);
59 60		mark_buffer_dirty(bh);
···
398 395		return test_bit(NILFS_I_DIRTY, &ii->i_state);
399 396	}
400 397	
401 -	static int
402 -	nilfs_mdt_write_page(struct page *page, struct writeback_control *wbc)
398 +	static int nilfs_mdt_write_folio(struct folio *folio,
399 +				struct writeback_control *wbc)
403 400	{
404 -		struct folio *folio = page_folio(page);
405 401		struct inode *inode = folio->mapping->host;
406 402		struct super_block *sb;
407 403		int err = 0;
···
433 431		return err;
434 432	}
435 433	
434 +	static int nilfs_mdt_writeback(struct address_space *mapping,
435 +				struct writeback_control *wbc)
436 +	{
437 +		struct folio *folio = NULL;
438 +		int error;
439 +	
440 +		while ((folio = writeback_iter(mapping, wbc, folio, &error)))
441 +			error = nilfs_mdt_write_folio(folio, wbc);
442 +	
443 +		return error;
444 +	}
436 445	
437 446	static const struct address_space_operations def_mdt_aops = {
438 447		.dirty_folio		= block_dirty_folio,
439 448		.invalidate_folio	= block_invalidate_folio,
440 -		.writepage		= nilfs_mdt_write_page,
449 +		.writepages		= nilfs_mdt_writeback,
450 +		.migrate_folio		= buffer_migrate_folio_norefs,
441 451	};
442 452	
443 453	static const struct inode_operations def_mdt_iops;
···
584 570		if (!bh_frozen)
585 571			bh_frozen = create_empty_buffers(folio, 1 << blkbits, 0);
586 572	
587 -		bh_frozen = get_nth_bh(bh_frozen, bh_offset(bh) >> blkbits);
573 +		bh_frozen = get_nth_bh(bh_frozen,
574 +				offset_in_folio(folio, bh->b_data) >> blkbits);
588 575	
589 576		if (!buffer_uptodate(bh_frozen))
590 577			nilfs_copy_buffer(bh_frozen, bh);
···
615 600		if (!IS_ERR(folio)) {
616 601			bh_frozen = folio_buffers(folio);
617 602			if (bh_frozen) {
618 -				n = bh_offset(bh) >> inode->i_blkbits;
603 +				n = offset_in_folio(folio, bh->b_data) >>
604 +					inode->i_blkbits;
619 605				bh_frozen = get_nth_bh(bh_frozen, n);
620 606			}
621 607			folio_unlock(folio);
+2 -2
fs/nilfs2/page.c
···
422 422		__nilfs_clear_folio_dirty(folio);
423 423	}
424 424	
425 -	unsigned int nilfs_page_count_clean_buffers(struct page *page,
425 +	unsigned int nilfs_page_count_clean_buffers(struct folio *folio,
426 426				unsigned int from, unsigned int to)
427 427	{
428 428		unsigned int block_start, block_end;
429 429		struct buffer_head *bh, *head;
430 430		unsigned int nc = 0;
431 431	
432 -		for (bh = head = page_buffers(page), block_start = 0;
432 +		for (bh = head = folio_buffers(folio), block_start = 0;
433 433			bh != head || !block_start;
434 434			block_start = block_end, bh = bh->b_this_page) {
435 435			block_end = block_start + bh->b_size;
+2 -2
fs/nilfs2/page.h
···
43 43	void nilfs_copy_back_pages(struct address_space *, struct address_space *);
44 44	void nilfs_clear_folio_dirty(struct folio *folio);
45 45	void nilfs_clear_dirty_pages(struct address_space *mapping);
46 -	unsigned int nilfs_page_count_clean_buffers(struct page *, unsigned int,
47 -				unsigned int);
46 +	unsigned int nilfs_page_count_clean_buffers(struct folio *folio,
47 +				unsigned int from, unsigned int to);
48 48	unsigned long nilfs_find_uncommitted_extent(struct inode *inode,
49 49				sector_t start_blk,
50 50				sector_t *blkoff);
+7 -10
fs/nilfs2/recovery.c
···
481 481	
482 482	static int nilfs_recovery_copy_block(struct the_nilfs *nilfs,
483 483				struct nilfs_recovery_block *rb,
484 -				loff_t pos, struct page *page)
484 +				loff_t pos, struct folio *folio)
485 485	{
486 486		struct buffer_head *bh_org;
487 -		size_t from = pos & ~PAGE_MASK;
488 -		void *kaddr;
487 +		size_t from = offset_in_folio(folio, pos);
489 488	
490 489		bh_org = __bread(nilfs->ns_bdev, rb->blocknr, nilfs->ns_blocksize);
491 490		if (unlikely(!bh_org))
492 491			return -EIO;
493 492	
494 -		kaddr = kmap_local_page(page);
495 -		memcpy(kaddr + from, bh_org->b_data, bh_org->b_size);
496 -		kunmap_local(kaddr);
493 +		memcpy_to_folio(folio, from, bh_org->b_data, bh_org->b_size);
497 494		brelse(bh_org);
498 495		return 0;
499 496	}
···
528 531			goto failed_inode;
529 532		}
530 533	
531 -		err = nilfs_recovery_copy_block(nilfs, rb, pos, &folio->page);
534 +		err = nilfs_recovery_copy_block(nilfs, rb, pos, folio);
532 535		if (unlikely(err))
533 -			goto failed_page;
536 +			goto failed_folio;
534 537	
535 538		err = nilfs_set_file_dirty(inode, 1);
536 539		if (unlikely(err))
537 -			goto failed_page;
540 +			goto failed_folio;
538 541	
539 542		block_write_end(NULL, inode->i_mapping, pos, blocksize,
540 543				blocksize, folio, NULL);
···
545 548		(*nr_salvaged_blocks)++;
546 549		goto next;
547 550	
548 -	failed_page:
551 +	failed_folio:
549 552		folio_unlock(folio);
550 553		folio_put(folio);
551 554	
+10 -7
fs/nilfs2/segbuf.c
···
205 205	{
206 206		struct buffer_head *bh;
207 207		struct nilfs_segment_summary *raw_sum;
208 -		void *kaddr;
209 208		u32 crc;
210 209	
211 210		bh = list_entry(segbuf->sb_segsum_buffers.next, struct buffer_head,
···
219 220			crc = crc32_le(crc, bh->b_data, bh->b_size);
220 221		}
221 222		list_for_each_entry(bh, &segbuf->sb_payload_buffers, b_assoc_buffers) {
222 -			kaddr = kmap_local_page(bh->b_page);
223 -			crc = crc32_le(crc, kaddr + bh_offset(bh), bh->b_size);
224 -			kunmap_local(kaddr);
223 +			size_t offset = offset_in_folio(bh->b_folio, bh->b_data);
224 +			unsigned char *from;
225 +	
226 +			/* Do not support block sizes larger than PAGE_SIZE */
227 +			from = kmap_local_folio(bh->b_folio, offset);
228 +			crc = crc32_le(crc, from, bh->b_size);
229 +			kunmap_local(from);
225 230		}
226 231		raw_sum->ss_datasum = cpu_to_le32(crc);
227 232	}
···
377 374				struct nilfs_write_info *wi,
378 375				struct buffer_head *bh)
379 376	{
380 -		int len, err;
377 +		int err;
381 378	
382 379		BUG_ON(wi->nr_vecs <= 0);
383 380	repeat:
···
388 385				(wi->nilfs->ns_blocksize_bits - 9);
389 386		}
390 387	
391 -		len = bio_add_page(wi->bio, bh->b_page, bh->b_size, bh_offset(bh));
392 -		if (len == bh->b_size) {
388 +		if (bio_add_folio(wi->bio, bh->b_folio, bh->b_size,
389 +				offset_in_folio(bh->b_folio, bh->b_data))) {
393 390			wi->end++;
394 391			return 0;
395 392		}
+82 -78
fs/nilfs2/sufile.c
···
70 70				max - curr + 1);
71 71	}
72 72	
73 -	static struct nilfs_segment_usage *
74 -	nilfs_sufile_block_get_segment_usage(const struct inode *sufile, __u64 segnum,
75 -				struct buffer_head *bh, void *kaddr)
73 +	/**
74 +	 * nilfs_sufile_segment_usage_offset - calculate the byte offset of a segment
75 +	 *                                     usage entry in the folio containing it
76 +	 * @sufile: segment usage file inode
77 +	 * @segnum: number of segment usage
78 +	 * @bh:     buffer head of block containing segment usage indexed by @segnum
79 +	 *
80 +	 * Return: Byte offset in the folio of the segment usage entry.
81 +	 */
82 +	static size_t nilfs_sufile_segment_usage_offset(const struct inode *sufile,
83 +				__u64 segnum,
84 +				struct buffer_head *bh)
76 85	{
77 -		return kaddr + bh_offset(bh) +
86 +		return offset_in_folio(bh->b_folio, bh->b_data) +
78 87			nilfs_sufile_get_offset(sufile, segnum) *
79 88			NILFS_MDT(sufile)->mi_entry_size;
80 89	}
···
121 112				u64 ncleanadd, u64 ndirtyadd)
122 113	{
123 114		struct nilfs_sufile_header *header;
124 -		void *kaddr;
125 115	
126 -		kaddr = kmap_local_page(header_bh->b_page);
127 -		header = kaddr + bh_offset(header_bh);
116 +		header = kmap_local_folio(header_bh->b_folio, 0);
128 117		le64_add_cpu(&header->sh_ncleansegs, ncleanadd);
129 118		le64_add_cpu(&header->sh_ndirtysegs, ndirtyadd);
130 -		kunmap_local(kaddr);
119 +		kunmap_local(header);
131 120	
132 121		mark_buffer_dirty(header_bh);
133 122	}
···
320 313		struct nilfs_sufile_info *sui = NILFS_SUI(sufile);
321 314		size_t susz = NILFS_MDT(sufile)->mi_entry_size;
322 315		__u64 segnum, maxsegnum, last_alloc;
316 +		size_t offset;
323 317		void *kaddr;
324 318		unsigned long nsegments, nsus, cnt;
325 319		int ret, j;
···
330 322		ret = nilfs_sufile_get_header_block(sufile, &header_bh);
331 323		if (ret < 0)
332 324			goto out_sem;
333 -		kaddr = kmap_local_page(header_bh->b_page);
334 -		header = kaddr + bh_offset(header_bh);
325 +		header = kmap_local_folio(header_bh->b_folio, 0);
335 326		last_alloc = le64_to_cpu(header->sh_last_alloc);
336 -		kunmap_local(kaddr);
327 +		kunmap_local(header);
337 328	
338 329		nsegments = nilfs_sufile_get_nsegments(sufile);
339 330		maxsegnum = sui->allocmax;
···
366 359					&su_bh);
367 360			if (ret < 0)
368 361				goto out_header;
369 -			kaddr = kmap_local_page(su_bh->b_page);
370 -			su = nilfs_sufile_block_get_segment_usage(
371 -				sufile, segnum, su_bh, kaddr);
362 +	
363 +			offset = nilfs_sufile_segment_usage_offset(sufile, segnum,
364 +					su_bh);
365 +			su = kaddr = kmap_local_folio(su_bh->b_folio, offset);
372 366	
373 367			nsus = nilfs_sufile_segment_usages_in_block(
374 368				sufile, segnum, maxsegnum);
···
380 372				nilfs_segment_usage_set_dirty(su);
381 373				kunmap_local(kaddr);
382 374	
383 -				kaddr = kmap_local_page(header_bh->b_page);
384 -				header = kaddr + bh_offset(header_bh);
375 +				header = kmap_local_folio(header_bh->b_folio, 0);
385 376				le64_add_cpu(&header->sh_ncleansegs, -1);
386 377				le64_add_cpu(&header->sh_ndirtysegs, 1);
387 378				header->sh_last_alloc = cpu_to_le64(segnum);
388 -				kunmap_local(kaddr);
379 +				kunmap_local(header);
389 380	
390 381				sui->ncleansegs--;
391 382				mark_buffer_dirty(header_bh);
···
418 411				struct buffer_head *su_bh)
419 412	{
420 413		struct nilfs_segment_usage *su;
421 -		void *kaddr;
414 +		size_t offset;
422 415	
423 -		kaddr = kmap_local_page(su_bh->b_page);
424 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
416 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum, su_bh);
417 +		su = kmap_local_folio(su_bh->b_folio, offset);
425 418		if (unlikely(!nilfs_segment_usage_clean(su))) {
426 419			nilfs_warn(sufile->i_sb, "%s: segment %llu must be clean",
427 420				__func__, (unsigned long long)segnum);
428 -			kunmap_local(kaddr);
421 +			kunmap_local(su);
429 422			return;
430 423		}
431 424		nilfs_segment_usage_set_dirty(su);
432 -		kunmap_local(kaddr);
425 +		kunmap_local(su);
433 426	
434 427		nilfs_sufile_mod_counter(header_bh, -1, 1);
435 428		NILFS_SUI(sufile)->ncleansegs--;
···
443 436
	struct nilfs_segment_usage *su;
446 -		void *kaddr;
439 +		size_t offset;
447 440		int clean, dirty;
448 441	
449 -		kaddr = kmap_local_page(su_bh->b_page);
450 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
442 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum, su_bh);
443 +		su = kmap_local_folio(su_bh->b_folio, offset);
451 444		if (su->su_flags == cpu_to_le32(BIT(NILFS_SEGMENT_USAGE_DIRTY)) &&
452 445			su->su_nblocks == cpu_to_le32(0)) {
453 -			kunmap_local(kaddr);
446 +			kunmap_local(su);
454 447			return;
455 448		}
456 449		clean = nilfs_segment_usage_clean(su);
···
460 453		su->su_lastmod = cpu_to_le64(0);
461 454		su->su_nblocks = cpu_to_le32(0);
462 455		su->su_flags = cpu_to_le32(BIT(NILFS_SEGMENT_USAGE_DIRTY));
463 -		kunmap_local(kaddr);
456 +		kunmap_local(su);
464 457	
465 458		nilfs_sufile_mod_counter(header_bh, clean ? (u64)-1 : 0, dirty ? 0 : 1);
466 459		NILFS_SUI(sufile)->ncleansegs -= clean;
···
474 467				struct buffer_head *su_bh)
475 468	{
476 469		struct nilfs_segment_usage *su;
477 -		void *kaddr;
470 +		size_t offset;
478 471		int sudirty;
479 472	
480 -		kaddr = kmap_local_page(su_bh->b_page);
481 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
473 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum, su_bh);
474 +		su = kmap_local_folio(su_bh->b_folio, offset);
482 475		if (nilfs_segment_usage_clean(su)) {
483 476			nilfs_warn(sufile->i_sb, "%s: segment %llu is already clean",
484 477				__func__, (unsigned long long)segnum);
485 -			kunmap_local(kaddr);
478 +			kunmap_local(su);
486 479			return;
487 480		}
488 481		if (unlikely(nilfs_segment_usage_error(su)))
···
495 488				(unsigned long long)segnum);
496 489	
497 490		nilfs_segment_usage_set_clean(su);
498 -		kunmap_local(kaddr);
491 +		kunmap_local(su);
499 492		mark_buffer_dirty(su_bh);
500 493	
501 494		nilfs_sufile_mod_counter(header_bh, 1, sudirty ? (u64)-1 : 0);
···
514 507	int nilfs_sufile_mark_dirty(struct inode *sufile, __u64 segnum)
515 508	{
516 509		struct buffer_head *bh;
517 -		void *kaddr;
510 +		size_t offset;
518 511		struct nilfs_segment_usage *su;
519 512		int ret;
520 513	
···
530 523			goto out_sem;
531 524		}
532 525	
533 -		kaddr = kmap_local_page(bh->b_page);
534 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
526 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum, bh);
527 +		su = kmap_local_folio(bh->b_folio, offset);
535 528		if (unlikely(nilfs_segment_usage_error(su))) {
536 529			struct the_nilfs *nilfs = sufile->i_sb->s_fs_info;
537 530	
538 -			kunmap_local(kaddr);
531 +			kunmap_local(su);
539 532			brelse(bh);
540 533			if (nilfs_segment_is_active(nilfs, segnum)) {
541 534				nilfs_error(sufile->i_sb,
···
553 546			ret = -EIO;
554 547		} else {
555 548			nilfs_segment_usage_set_dirty(su);
556 -			kunmap_local(kaddr);
549 +			kunmap_local(su);
557 550			mark_buffer_dirty(bh);
558 551			nilfs_mdt_mark_dirty(sufile);
559 552			brelse(bh);
···
575 568	{
576 569		struct buffer_head *bh;
577 570		struct nilfs_segment_usage *su;
578 -		void *kaddr;
571 +		size_t offset;
579 572		int ret;
580 573	
581 574		down_write(&NILFS_MDT(sufile)->mi_sem);
···
583 576		if (ret < 0)
584 577			goto out_sem;
585 578	
586 -		kaddr = kmap_local_page(bh->b_page);
587 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
579 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum, bh);
580 +		su = kmap_local_folio(bh->b_folio, offset);
588 581		if (modtime) {
589 582			/*
590 583			 * Check segusage error and set su_lastmod only when updating
···
594 587				su->su_lastmod = cpu_to_le64(modtime);
595 588		}
596 589		su->su_nblocks = cpu_to_le32(nblocks);
597 -		kunmap_local(kaddr);
590 +		kunmap_local(su);
598 591	
599 592		mark_buffer_dirty(bh);
600 593		nilfs_mdt_mark_dirty(sufile);
···
626 619		struct buffer_head *header_bh;
627 620		struct nilfs_sufile_header *header;
628 621		struct the_nilfs *nilfs = sufile->i_sb->s_fs_info;
629 -		void *kaddr;
630 622		int ret;
631 623	
632 624		down_read(&NILFS_MDT(sufile)->mi_sem);
···
634 628		if (ret < 0)
635 629			goto out_sem;
636 630	
637 -		kaddr = kmap_local_page(header_bh->b_page);
638 -		header = kaddr + bh_offset(header_bh);
631 +		header = kmap_local_folio(header_bh->b_folio, 0);
639 632		sustat->ss_nsegs = nilfs_sufile_get_nsegments(sufile);
640 633		sustat->ss_ncleansegs = le64_to_cpu(header->sh_ncleansegs);
641 634		sustat->ss_ndirtysegs = le64_to_cpu(header->sh_ndirtysegs);
···
643 638		spin_lock(&nilfs->ns_last_segment_lock);
644 639		sustat->ss_prot_seq = nilfs->ns_prot_seq;
645 640		spin_unlock(&nilfs->ns_last_segment_lock);
646 -		kunmap_local(kaddr);
641 +		kunmap_local(header);
647 642		brelse(header_bh);
648 643	
649 644	out_sem:
···
656 651				struct buffer_head *su_bh)
657 652	{
658 653		struct nilfs_segment_usage *su;
659 -		void *kaddr;
654 +		size_t offset;
660 655		int suclean;
661 656	
662 -		kaddr = kmap_local_page(su_bh->b_page);
663 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
657 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum, su_bh);
658 +		su = kmap_local_folio(su_bh->b_folio, offset);
664 659		if (nilfs_segment_usage_error(su)) {
665 -			kunmap_local(kaddr);
660 +			kunmap_local(su);
666 661			return;
667 662		}
668 663		suclean = nilfs_segment_usage_clean(su);
669 664		nilfs_segment_usage_set_error(su);
670 -		kunmap_local(kaddr);
665 +		kunmap_local(su);
671 666	
672 667		if (suclean) {
673 668			nilfs_sufile_mod_counter(header_bh, -1, 0);
···
705 700		unsigned long segusages_per_block;
706 701		unsigned long nsegs, ncleaned;
707 702		__u64 segnum;
708 -		void *kaddr;
703 +		size_t offset;
709 704		ssize_t n, nc;
710 705		int ret;
711 706		int j;
···
736 731			/* hole */
737 732			continue;
738 733		}
739 -		kaddr = kmap_local_page(su_bh->b_page);
740 -		su = nilfs_sufile_block_get_segment_usage(
741 -			sufile, segnum, su_bh, kaddr);
734 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum,
735 +				su_bh);
736 +		su = kmap_local_folio(su_bh->b_folio, offset);
742 737		su2 = su;
743 738		for (j = 0; j < n; j++, su = (void *)su + susz) {
744 739			if ((le32_to_cpu(su->su_flags) &
745 740				~BIT(NILFS_SEGMENT_USAGE_ERROR)) ||
746 741				nilfs_segment_is_active(nilfs, segnum + j)) {
747 742				ret = -EBUSY;
748 -				kunmap_local(kaddr);
743 +				kunmap_local(su2);
749 744				brelse(su_bh);
750 745				goto out_header;
751 746			}
···
757 752				nc++;
758 753			}
759 754		}
760 -		kunmap_local(kaddr);
755 +		kunmap_local(su2);
761 756		if (nc > 0) {
762 757			mark_buffer_dirty(su_bh);
763 758			ncleaned += nc;
···
804 799		struct buffer_head *header_bh;
805 800		struct nilfs_sufile_header *header;
806 801		struct nilfs_sufile_info *sui = NILFS_SUI(sufile);
807 -		void *kaddr;
808 802		unsigned long nsegs, nrsvsegs;
809 803		int ret = 0;
···
841 837			sui->allocmin = 0;
842 838		}
843 839	
844 -		kaddr = kmap_local_page(header_bh->b_page);
845 -		header = kaddr + bh_offset(header_bh);
840 +		header = kmap_local_folio(header_bh->b_folio, 0);
846 841		header->sh_ncleansegs = cpu_to_le64(sui->ncleansegs);
847 -		kunmap_local(kaddr);
842 +		kunmap_local(header);
848 843	
849 844		mark_buffer_dirty(header_bh);
850 845		nilfs_mdt_mark_dirty(sufile);
···
877 874		struct nilfs_suinfo *si = buf;
878 875		size_t susz = NILFS_MDT(sufile)->mi_entry_size;
879 876		struct the_nilfs *nilfs = sufile->i_sb->s_fs_info;
877 +		size_t offset;
880 878		void *kaddr;
881 879		unsigned long nsegs, segusages_per_block;
882 880		ssize_t n;
···
905 901				continue;
906 902			}
907 903	
908 -			kaddr = kmap_local_page(su_bh->b_page);
909 -			su = nilfs_sufile_block_get_segment_usage(
910 -				sufile, segnum, su_bh, kaddr);
904 +			offset = nilfs_sufile_segment_usage_offset(sufile, segnum,
905 +					su_bh);
906 +			su = kaddr = kmap_local_folio(su_bh->b_folio, offset);
911 907			for (j = 0; j < n;
912 908				j++, su = (void *)su + susz, si = (void *)si + sisz) {
913 909				si->sui_lastmod = le64_to_cpu(su->su_lastmod);
···
955 951		struct buffer_head *header_bh, *bh;
	struct nilfs_suinfo_update *sup, *supend = buf + supsz * nsup;
957 953		struct nilfs_segment_usage *su;
958 -		void *kaddr;
954 +		size_t offset;
959 955		unsigned long blkoff, prev_blkoff;
960 956		int cleansi, cleansu, dirtysi, dirtysu;
961 957		long ncleaned = 0, ndirtied = 0;
···
987 983			goto out_header;
988 984	
989 985		for (;;) {
990 -			kaddr = kmap_local_page(bh->b_page);
991 -			su = nilfs_sufile_block_get_segment_usage(
992 -				sufile, sup->sup_segnum, bh, kaddr);
986 +			offset = nilfs_sufile_segment_usage_offset(
987 +				sufile, sup->sup_segnum, bh);
988 +			su = kmap_local_folio(bh->b_folio, offset);
993 989	
994 990			if (nilfs_suinfo_update_lastmod(sup))
995 991				su->su_lastmod = cpu_to_le64(sup->sup_sui.sui_lastmod);
···
1024 1020			su->su_flags = cpu_to_le32(sup->sup_sui.sui_flags);
1025 1021		}
1026 1022	
1027 -		kunmap_local(kaddr);
1023 +		kunmap_local(su);
1028 1024	
1029 1025		sup = (void *)sup + supsz;
1030 1026		if (sup >= supend)
···
1080 1076		struct the_nilfs *nilfs = sufile->i_sb->s_fs_info;
1081 1077		struct buffer_head *su_bh;
1082 1078		struct nilfs_segment_usage *su;
1079 +		size_t offset;
1083 1080		void *kaddr;
1084 1081		size_t n, i, susz = NILFS_MDT(sufile)->mi_entry_size;
1085 1082		sector_t seg_start, seg_end, start_block, end_block;
···
1130 1125			continue;
1131 1126		}
1132 1127	
1133 -		kaddr = kmap_local_page(su_bh->b_page);
1134 -		su = nilfs_sufile_block_get_segment_usage(sufile, segnum,
1135 -				su_bh, kaddr);
1128 +		offset = nilfs_sufile_segment_usage_offset(sufile, segnum,
1129 +				su_bh);
1130 +		su = kaddr = kmap_local_folio(su_bh->b_folio, offset);
1136 1131		for (i = 0; i < n; ++i, ++segnum, su = (void *)su + susz) {
1137 1132			if (!nilfs_segment_usage_clean(su))
1138 1133				continue;
···
1172 1167			}
1173 1168	
1174 1169			ndiscarded += nblocks;
1175 -			kaddr = kmap_local_page(su_bh->b_page);
1176 -			su = nilfs_sufile_block_get_segment_usage(
1177 -				sufile, segnum, su_bh, kaddr);
1170 +			offset = nilfs_sufile_segment_usage_offset(
1171 +				sufile, segnum, su_bh);
1172 +			su = kaddr = kmap_local_folio(su_bh->b_folio,
1173 +					offset);
1178 1174		}
1179 1175	
1180 1176		/* start new extent */
···
1227 1221		struct nilfs_sufile_info *sui;
1228 1222		struct buffer_head *header_bh;
1229 1223		struct nilfs_sufile_header *header;
1230 -		void *kaddr;
1231 1224		int err;
1232 1225	
1233 1226		if (susize > sb->s_blocksize) {
···
1267 1262		}
1268 1263	
1269 1264		sui = NILFS_SUI(sufile);
1270 -		kaddr = kmap_local_page(header_bh->b_page);
1271 -		header = kaddr + bh_offset(header_bh);
1265 +		header = kmap_local_folio(header_bh->b_folio, 0);
1272 1266		sui->ncleansegs = le64_to_cpu(header->sh_ncleansegs);
1273 -		kunmap_local(kaddr);
1267 +		kunmap_local(header);
1274 1268		brelse(header_bh);
1275 1269	
1276 1270		sui->allocmax = nilfs_sufile_get_nsegments(sufile) - 1;
+1 -1
fs/ocfs2/alloc.c
···
4767 4767	}
4768 4768	
4769 4769	/*
4770 -	 * Allcate and add clusters into the extent b-tree.
4770 +	 * Allocate and add clusters into the extent b-tree.
4771 4771	 * The new clusters(clusters_to_add) will be inserted at logical_offset.
4772 4772	 * The extent b-tree's root is specified by et, and
4773 4773	 * it is not limited to the file storage. Any extent tree can use this
+2
fs/ocfs2/aops.h
···
70 70		OCFS2_IOCB_NUM_LOCKS
71 71	};
72 72	
73 +	#define ocfs2_iocb_init_rw_locked(iocb) \
74 +		(iocb->private = NULL)
73 75	#define ocfs2_iocb_clear_rw_locked(iocb) \
74 76		clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private)
75 77	#define ocfs2_iocb_rw_locked_level(iocb) \
+1 -1
fs/ocfs2/cluster/quorum.c
···
60 60		switch (o2nm_single_cluster->cl_fence_method) {
61 61		case O2NM_FENCE_PANIC:
62 62			panic("*** ocfs2 is very sorry to be fencing this system by "
63 -			      "panicing ***\n");
63 +			      "panicking ***\n");
64 64			break;
65 65		default:
66 66			WARN_ON(o2nm_single_cluster->cl_fence_method >=
-2
fs/ocfs2/dlm/dlmapi.h
···
 	DLM_MAXSTATS,	 /* 41: upper limit for return code validation */
 };
 
-/* for pretty-printing dlm_status error messages */
-const char *dlm_errmsg(enum dlm_status err);
 /* for pretty-printing dlm_status error names */
 const char *dlm_errname(enum dlm_status err);
 
-53
fs/ocfs2/dlm/dlmdebug.c
···
 	[DLM_MAXSTATS] =		"DLM_MAXSTATS",
 };
 
-static const char *dlm_errmsgs[] = {
-	[DLM_NORMAL] =			"request in progress",
-	[DLM_GRANTED] =			"request granted",
-	[DLM_DENIED] =			"request denied",
-	[DLM_DENIED_NOLOCKS] =		"request denied, out of system resources",
-	[DLM_WORKING] =			"async request in progress",
-	[DLM_BLOCKED] =			"lock request blocked",
-	[DLM_BLOCKED_ORPHAN] =		"lock request blocked by a orphan lock",
-	[DLM_DENIED_GRACE_PERIOD] =	"topological change in progress",
-	[DLM_SYSERR] =			"system error",
-	[DLM_NOSUPPORT] =		"unsupported",
-	[DLM_CANCELGRANT] =		"can't cancel convert: already granted",
-	[DLM_IVLOCKID] =		"bad lockid",
-	[DLM_SYNC] =			"synchronous request granted",
-	[DLM_BADTYPE] =			"bad resource type",
-	[DLM_BADRESOURCE] =		"bad resource handle",
-	[DLM_MAXHANDLES] =		"no more resource handles",
-	[DLM_NOCLINFO] =		"can't contact cluster manager",
-	[DLM_NOLOCKMGR] =		"can't contact lock manager",
-	[DLM_NOPURGED] =		"can't contact purge daemon",
-	[DLM_BADARGS] =			"bad api args",
-	[DLM_VOID] =			"no status",
-	[DLM_NOTQUEUED] =		"NOQUEUE was specified and request failed",
-	[DLM_IVBUFLEN] =		"invalid resource name length",
-	[DLM_CVTUNGRANT] =		"attempted to convert ungranted lock",
-	[DLM_BADPARAM] =		"invalid lock mode specified",
-	[DLM_VALNOTVALID] =		"value block has been invalidated",
-	[DLM_REJECTED] =		"request rejected, unrecognized client",
-	[DLM_ABORT] =			"blocked lock request cancelled",
-	[DLM_CANCEL] =			"conversion request cancelled",
-	[DLM_IVRESHANDLE] =		"invalid resource handle",
-	[DLM_DEADLOCK] =		"deadlock recovery refused this request",
-	[DLM_DENIED_NOASTS] =		"failed to allocate AST",
-	[DLM_FORWARD] =			"request must wait for primary's response",
-	[DLM_TIMEOUT] =			"timeout value for lock has expired",
-	[DLM_IVGROUPID] =		"invalid group specification",
-	[DLM_VERS_CONFLICT] =		"version conflicts prevent request handling",
-	[DLM_BAD_DEVICE_PATH] =		"Locks device does not exist or path wrong",
-	[DLM_NO_DEVICE_PERMISSION] =	"Client has insufficient perms for device",
-	[DLM_NO_CONTROL_DEVICE] =	"Cannot set options on opened device ",
-	[DLM_RECOVERING] =		"lock resource being recovered",
-	[DLM_MIGRATING] =		"lock resource being migrated",
-	[DLM_MAXSTATS] =		"invalid error number",
-};
-
-const char *dlm_errmsg(enum dlm_status err)
-{
-	if (err >= DLM_MAXSTATS || err < 0)
-		return dlm_errmsgs[DLM_MAXSTATS];
-	return dlm_errmsgs[err];
-}
-EXPORT_SYMBOL_GPL(dlm_errmsg);
-
 const char *dlm_errname(enum dlm_status err)
 {
 	if (err >= DLM_MAXSTATS || err < 0)
+4
fs/ocfs2/file.c
···
 	} else
 		inode_lock(inode);
 
+	ocfs2_iocb_init_rw_locked(iocb);
+
 	/*
 	 * Concurrent O_DIRECT writes are allowed with
 	 * mount_option "coherency=buffered".
···
 
 	if (!direct_io && nowait)
 		return -EOPNOTSUPP;
+
+	ocfs2_iocb_init_rw_locked(iocb);
 
 	/*
 	 * buffered reads protect themselves in ->read_folio().  O_DIRECT reads
-1
fs/ocfs2/quota.h
···
 		const char *data, size_t len, loff_t off);
 int ocfs2_global_read_info(struct super_block *sb, int type);
 int ocfs2_global_write_info(struct super_block *sb, int type);
-int ocfs2_global_read_dquot(struct dquot *dquot);
 int __ocfs2_sync_dquot(struct dquot *dquot, int freeing);
 static inline int ocfs2_sync_dquot(struct dquot *dquot)
 {
+1 -1
fs/proc/array.c
···
 	else if (p->flags & PF_KTHREAD)
 		get_kthread_comm(tcomm, sizeof(tcomm), p);
 	else
-		__get_task_comm(tcomm, sizeof(tcomm), p);
+		get_task_comm(tcomm, p);
 
 	if (escape)
 		seq_escape_str(m, tcomm, ESCAPE_SPACE | ESCAPE_SPECIAL, "\n\\");
+5 -5
fs/proc/kcore.c
···
 	 * the previous entry, search for a matching entry.
 	 */
 	if (!m || start < m->addr || start >= m->addr + m->size) {
-		struct kcore_list *iter;
+		struct kcore_list *pos;
 
 		m = NULL;
-		list_for_each_entry(iter, &kclist_head, list) {
-			if (start >= iter->addr &&
-			    start < iter->addr + iter->size) {
-				m = iter;
+		list_for_each_entry(pos, &kclist_head, list) {
+			if (start >= pos->addr &&
+			    start < pos->addr + pos->size) {
+				m = pos;
 				break;
 			}
 		}
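The kcore.c hunk above is only a variable rename, but the `list_for_each_entry()` idiom it touches is worth illustrating: the list node is embedded inside the payload struct, and `container_of()` recovers the enclosing object from a pointer to its member. A minimal userspace model of that lookup (struct and function names here are illustrative, not the kernel's implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Intrusive list: the link lives inside the payload struct. */
struct list_head {
	struct list_head *next, *prev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct kcore_entry {
	unsigned long addr;
	unsigned long size;
	struct list_head list;
};

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* Model of the kcore.c search: find the entry covering 'start'. */
static struct kcore_entry *find_entry(struct list_head *head, unsigned long start)
{
	struct list_head *p;

	for (p = head->next; p != head; p = p->next) {
		struct kcore_entry *pos = container_of(p, struct kcore_entry, list);

		if (start >= pos->addr && start < pos->addr + pos->size)
			return pos;
	}
	return NULL;
}

static struct kcore_entry *demo_lookup(unsigned long start)
{
	static struct list_head head = { &head, &head };
	static struct kcore_entry a = { 0x1000, 0x100 };
	static struct kcore_entry b = { 0x2000, 0x200 };
	static int initialized;

	if (!initialized) {
		list_add_tail(&a.list, &head);
		list_add_tail(&b.list, &head);
		initialized = 1;
	}
	return find_entry(&head, start);
}
```

The kernel's `list_for_each_entry()` macro simply folds the `container_of()` step into the loop header; the traversal logic is the same.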
+1 -1
include/acpi/platform/aclinux.h
···
 /* ACPICA external files should not include ACPICA headers directly. */
 
 #if !defined(BUILDING_ACPICA) && !defined(_LINUX_ACPI_H)
-#error "Please don't include <acpi/acpi.h> directly, include <linux/acpi.h> instead."
+#error "Please do not include <acpi/acpi.h> directly, include <linux/acpi.h> instead."
 #endif
 
 #endif
+1 -1
include/linux/compiler-clang.h
···
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef __LINUX_COMPILER_TYPES_H
-#error "Please don't include <linux/compiler-clang.h> directly, include <linux/compiler.h> instead."
+#error "Please do not include <linux/compiler-clang.h> directly, include <linux/compiler.h> instead."
 #endif
 
 /* Compiler specific definitions for Clang compiler */
+1 -1
include/linux/compiler-gcc.h
···
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef __LINUX_COMPILER_TYPES_H
-#error "Please don't include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead."
+#error "Please do not include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead."
 #endif
 
 /*
+289 -68
include/linux/min_heap.h
···
 	void (*swp)(void *lhs, void *rhs, void *args);
 };
 
+/**
+ * is_aligned - is this pointer & size okay for word-wide copying?
+ * @base: pointer to data
+ * @size: size of each element
+ * @align: required alignment (typically 4 or 8)
+ *
+ * Returns true if elements can be copied using word loads and stores.
+ * The size must be a multiple of the alignment, and the base address must
+ * be if we do not have CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS.
+ *
+ * For some reason, gcc doesn't know to optimize "if (a & mask || b & mask)"
+ * to "if ((a | b) & mask)", so we do that by hand.
+ */
+__attribute_const__ __always_inline
+static bool is_aligned(const void *base, size_t size, unsigned char align)
+{
+	unsigned char lsbits = (unsigned char)size;
+
+	(void)base;
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	lsbits |= (unsigned char)(uintptr_t)base;
+#endif
+	return (lsbits & (align - 1)) == 0;
+}
+
+/**
+ * swap_words_32 - swap two elements in 32-bit chunks
+ * @a: pointer to the first element to swap
+ * @b: pointer to the second element to swap
+ * @n: element size (must be a multiple of 4)
+ *
+ * Exchange the two objects in memory. This exploits base+index addressing,
+ * which basically all CPUs have, to minimize loop overhead computations.
+ *
+ * For some reason, on x86 gcc 7.3.0 adds a redundant test of n at the
+ * bottom of the loop, even though the zero flag is still valid from the
+ * subtract (since the intervening mov instructions don't alter the flags).
+ * Gcc 8.1.0 doesn't have that problem.
+ */
+static __always_inline
+void swap_words_32(void *a, void *b, size_t n)
+{
+	do {
+		u32 t = *(u32 *)(a + (n -= 4));
+		*(u32 *)(a + n) = *(u32 *)(b + n);
+		*(u32 *)(b + n) = t;
+	} while (n);
+}
+
+/**
+ * swap_words_64 - swap two elements in 64-bit chunks
+ * @a: pointer to the first element to swap
+ * @b: pointer to the second element to swap
+ * @n: element size (must be a multiple of 8)
+ *
+ * Exchange the two objects in memory. This exploits base+index
+ * addressing, which basically all CPUs have, to minimize loop overhead
+ * computations.
+ *
+ * We'd like to use 64-bit loads if possible. If they're not, emulating
+ * one requires base+index+4 addressing which x86 has but most other
+ * processors do not. If CONFIG_64BIT, we definitely have 64-bit loads,
+ * but it's possible to have 64-bit loads without 64-bit pointers (e.g.
+ * x32 ABI). Are there any cases the kernel needs to worry about?
+ */
+static __always_inline
+void swap_words_64(void *a, void *b, size_t n)
+{
+	do {
+#ifdef CONFIG_64BIT
+		u64 t = *(u64 *)(a + (n -= 8));
+		*(u64 *)(a + n) = *(u64 *)(b + n);
+		*(u64 *)(b + n) = t;
+#else
+		/* Use two 32-bit transfers to avoid base+index+4 addressing */
+		u32 t = *(u32 *)(a + (n -= 4));
+		*(u32 *)(a + n) = *(u32 *)(b + n);
+		*(u32 *)(b + n) = t;
+
+		t = *(u32 *)(a + (n -= 4));
+		*(u32 *)(a + n) = *(u32 *)(b + n);
+		*(u32 *)(b + n) = t;
+#endif
+	} while (n);
+}
+
+/**
+ * swap_bytes - swap two elements a byte at a time
+ * @a: pointer to the first element to swap
+ * @b: pointer to the second element to swap
+ * @n: element size
+ *
+ * This is the fallback if alignment doesn't allow using larger chunks.
+ */
+static __always_inline
+void swap_bytes(void *a, void *b, size_t n)
+{
+	do {
+		char t = ((char *)a)[--n];
+		((char *)a)[n] = ((char *)b)[n];
+		((char *)b)[n] = t;
+	} while (n);
+}
+
+/*
+ * The values are arbitrary as long as they can't be confused with
+ * a pointer, but small integers make for the smallest compare
+ * instructions.
+ */
+#define SWAP_WORDS_64 ((void (*)(void *, void *, void *))0)
+#define SWAP_WORDS_32 ((void (*)(void *, void *, void *))1)
+#define SWAP_BYTES    ((void (*)(void *, void *, void *))2)
+
+/*
+ * Selects the appropriate swap function based on the element size.
+ */
+static __always_inline
+void *select_swap_func(const void *base, size_t size)
+{
+	if (is_aligned(base, size, 8))
+		return SWAP_WORDS_64;
+	else if (is_aligned(base, size, 4))
+		return SWAP_WORDS_32;
+	else
+		return SWAP_BYTES;
+}
+
+static __always_inline
+void do_swap(void *a, void *b, size_t size, void (*swap_func)(void *lhs, void *rhs, void *args),
+	     void *priv)
+{
+	if (swap_func == SWAP_WORDS_64)
+		swap_words_64(a, b, size);
+	else if (swap_func == SWAP_WORDS_32)
+		swap_words_32(a, b, size);
+	else if (swap_func == SWAP_BYTES)
+		swap_bytes(a, b, size);
+	else
+		swap_func(a, b, priv);
+}
+
+/**
+ * parent - given the offset of the child, find the offset of the parent.
+ * @i: the offset of the heap element whose parent is sought. Non-zero.
+ * @lsbit: a precomputed 1-bit mask, equal to "size & -size"
+ * @size: size of each element
+ *
+ * In terms of array indexes, the parent of element j = @i/@size is simply
+ * (j-1)/2. But when working in byte offsets, we can't use implicit
+ * truncation of integer divides.
+ *
+ * Fortunately, we only need one bit of the quotient, not the full divide.
+ * @size has a least significant bit. That bit will be clear if @i is
+ * an even multiple of @size, and set if it's an odd multiple.
+ *
+ * Logically, we're doing "if (i & lsbit) i -= size;", but since the
+ * branch is unpredictable, it's done with a bit of clever branch-free
+ * code instead.
+ */
+__attribute_const__ __always_inline
+static size_t parent(size_t i, unsigned int lsbit, size_t size)
+{
+	i -= size;
+	i -= size & -(i & lsbit);
+	return i / 2;
+}
+
 /* Initialize a min-heap. */
 static __always_inline
-void __min_heap_init(min_heap_char *heap, void *data, int size)
+void __min_heap_init_inline(min_heap_char *heap, void *data, int size)
 {
 	heap->nr = 0;
 	heap->size = size;
···
 	heap->data = heap->preallocated;
 }
 
-#define min_heap_init(_heap, _data, _size)	\
-	__min_heap_init((min_heap_char *)_heap, _data, _size)
+#define min_heap_init_inline(_heap, _data, _size)	\
+	__min_heap_init_inline((min_heap_char *)_heap, _data, _size)
 
 /* Get the minimum element from the heap. */
 static __always_inline
-void *__min_heap_peek(struct min_heap_char *heap)
+void *__min_heap_peek_inline(struct min_heap_char *heap)
 {
 	return heap->nr ? heap->data : NULL;
 }
 
-#define min_heap_peek(_heap)	\
-	(__minheap_cast(_heap) __min_heap_peek((min_heap_char *)_heap))
+#define min_heap_peek_inline(_heap)	\
+	(__minheap_cast(_heap) __min_heap_peek_inline((min_heap_char *)_heap))
 
 /* Check if the heap is full. */
 static __always_inline
-bool __min_heap_full(min_heap_char *heap)
+bool __min_heap_full_inline(min_heap_char *heap)
 {
 	return heap->nr == heap->size;
 }
 
-#define min_heap_full(_heap)	\
-	__min_heap_full((min_heap_char *)_heap)
+#define min_heap_full_inline(_heap)	\
+	__min_heap_full_inline((min_heap_char *)_heap)
 
 /* Sift the element at pos down the heap. */
 static __always_inline
-void __min_heap_sift_down(min_heap_char *heap, int pos, size_t elem_size,
-		const struct min_heap_callbacks *func, void *args)
+void __min_heap_sift_down_inline(min_heap_char *heap, int pos, size_t elem_size,
+				 const struct min_heap_callbacks *func, void *args)
 {
-	void *left, *right;
+	const unsigned long lsbit = elem_size & -elem_size;
 	void *data = heap->data;
-	void *root = data + pos * elem_size;
-	int i = pos, j;
+	void (*swp)(void *lhs, void *rhs, void *args) = func->swp;
+	/* pre-scale counters for performance */
+	size_t a = pos * elem_size;
+	size_t b, c, d;
+	size_t n = heap->nr * elem_size;
+
+	if (!swp)
+		swp = select_swap_func(data, elem_size);
 
 	/* Find the sift-down path all the way to the leaves. */
-	for (;;) {
-		if (i * 2 + 2 >= heap->nr)
-			break;
-		left = data + (i * 2 + 1) * elem_size;
-		right = data + (i * 2 + 2) * elem_size;
-		i = func->less(left, right, args) ? i * 2 + 1 : i * 2 + 2;
-	}
+	for (b = a; c = 2 * b + elem_size, (d = c + elem_size) < n;)
+		b = func->less(data + c, data + d, args) ? c : d;
 
 	/* Special case for the last leaf with no sibling. */
-	if (i * 2 + 2 == heap->nr)
-		i = i * 2 + 1;
+	if (d == n)
+		b = c;
 
 	/* Backtrack to the correct location. */
-	while (i != pos && func->less(root, data + i * elem_size, args))
-		i = (i - 1) / 2;
+	while (b != a && func->less(data + a, data + b, args))
+		b = parent(b, lsbit, elem_size);
 
 	/* Shift the element into its correct place. */
-	j = i;
-	while (i != pos) {
-		i = (i - 1) / 2;
-		func->swp(data + i * elem_size, data + j * elem_size, args);
+	c = b;
+	while (b != a) {
+		b = parent(b, lsbit, elem_size);
+		do_swap(data + b, data + c, elem_size, swp, args);
 	}
 }
 
-#define min_heap_sift_down(_heap, _pos, _func, _args)	\
-	__min_heap_sift_down((min_heap_char *)_heap, _pos, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_sift_down_inline(_heap, _pos, _func, _args)	\
+	__min_heap_sift_down_inline((min_heap_char *)_heap, _pos, __minheap_obj_size(_heap), \
+				    _func, _args)
 
 /* Sift up ith element from the heap, O(log2(nr)). */
 static __always_inline
-void __min_heap_sift_up(min_heap_char *heap, size_t elem_size, size_t idx,
-		const struct min_heap_callbacks *func, void *args)
+void __min_heap_sift_up_inline(min_heap_char *heap, size_t elem_size, size_t idx,
+			       const struct min_heap_callbacks *func, void *args)
 {
+	const unsigned long lsbit = elem_size & -elem_size;
 	void *data = heap->data;
-	size_t parent;
+	void (*swp)(void *lhs, void *rhs, void *args) = func->swp;
+	/* pre-scale counters for performance */
+	size_t a = idx * elem_size, b;
 
-	while (idx) {
-		parent = (idx - 1) / 2;
-		if (func->less(data + parent * elem_size, data + idx * elem_size, args))
+	if (!swp)
+		swp = select_swap_func(data, elem_size);
+
+	while (a) {
+		b = parent(a, lsbit, elem_size);
+		if (func->less(data + b, data + a, args))
 			break;
-		func->swp(data + parent * elem_size, data + idx * elem_size, args);
-		idx = parent;
+		do_swap(data + a, data + b, elem_size, swp, args);
+		a = b;
 	}
 }
 
-#define min_heap_sift_up(_heap, _idx, _func, _args)	\
-	__min_heap_sift_up((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, _func, _args)
+#define min_heap_sift_up_inline(_heap, _idx, _func, _args)	\
+	__min_heap_sift_up_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, \
+				  _func, _args)
 
 /* Floyd's approach to heapification that is O(nr). */
 static __always_inline
-void __min_heapify_all(min_heap_char *heap, size_t elem_size,
-		const struct min_heap_callbacks *func, void *args)
+void __min_heapify_all_inline(min_heap_char *heap, size_t elem_size,
+			      const struct min_heap_callbacks *func, void *args)
 {
 	int i;
 
 	for (i = heap->nr / 2 - 1; i >= 0; i--)
-		__min_heap_sift_down(heap, i, elem_size, func, args);
+		__min_heap_sift_down_inline(heap, i, elem_size, func, args);
 }
 
-#define min_heapify_all(_heap, _func, _args)	\
-	__min_heapify_all((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+#define min_heapify_all_inline(_heap, _func, _args)	\
+	__min_heapify_all_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
 
 /* Remove minimum element from the heap, O(log2(nr)). */
 static __always_inline
-bool __min_heap_pop(min_heap_char *heap, size_t elem_size,
-		const struct min_heap_callbacks *func, void *args)
+bool __min_heap_pop_inline(min_heap_char *heap, size_t elem_size,
+			   const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
 
···
 	/* Place last element at the root (position 0) and then sift down. */
 	heap->nr--;
 	memcpy(data, data + (heap->nr * elem_size), elem_size);
-	__min_heap_sift_down(heap, 0, elem_size, func, args);
+	__min_heap_sift_down_inline(heap, 0, elem_size, func, args);
 
 	return true;
 }
 
-#define min_heap_pop(_heap, _func, _args)	\
-	__min_heap_pop((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_pop_inline(_heap, _func, _args)	\
+	__min_heap_pop_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
 
 /*
  * Remove the minimum element and then push the given element. The
···
  * efficient than a pop followed by a push that does 2.
  */
 static __always_inline
-void __min_heap_pop_push(min_heap_char *heap,
-		const void *element, size_t elem_size,
-		const struct min_heap_callbacks *func,
-		void *args)
+void __min_heap_pop_push_inline(min_heap_char *heap, const void *element, size_t elem_size,
+				const struct min_heap_callbacks *func, void *args)
 {
 	memcpy(heap->data, element, elem_size);
-	__min_heap_sift_down(heap, 0, elem_size, func, args);
+	__min_heap_sift_down_inline(heap, 0, elem_size, func, args);
 }
 
-#define min_heap_pop_push(_heap, _element, _func, _args)	\
-	__min_heap_pop_push((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_pop_push_inline(_heap, _element, _func, _args)	\
+	__min_heap_pop_push_inline((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), \
+				   _func, _args)
 
 /* Push an element on to the heap, O(log2(nr)). */
 static __always_inline
-bool __min_heap_push(min_heap_char *heap, const void *element, size_t elem_size,
-		const struct min_heap_callbacks *func, void *args)
+bool __min_heap_push_inline(min_heap_char *heap, const void *element, size_t elem_size,
+			    const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
 	int pos;
···
 	heap->nr++;
 
 	/* Sift child at pos up. */
-	__min_heap_sift_up(heap, elem_size, pos, func, args);
+	__min_heap_sift_up_inline(heap, elem_size, pos, func, args);
 
 	return true;
 }
 
-#define min_heap_push(_heap, _element, _func, _args)	\
-	__min_heap_push((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_push_inline(_heap, _element, _func, _args)	\
+	__min_heap_push_inline((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), \
+			       _func, _args)
 
 /* Remove ith element from the heap, O(log2(nr)). */
 static __always_inline
-bool __min_heap_del(min_heap_char *heap, size_t elem_size, size_t idx,
-		const struct min_heap_callbacks *func, void *args)
+bool __min_heap_del_inline(min_heap_char *heap, size_t elem_size, size_t idx,
+			   const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
+	void (*swp)(void *lhs, void *rhs, void *args) = func->swp;
 
 	if (WARN_ONCE(heap->nr <= 0, "Popping an empty heap"))
 		return false;
+
+	if (!swp)
+		swp = select_swap_func(data, elem_size);
 
 	/* Place last element at the root (position 0) and then sift down. */
 	heap->nr--;
 	if (idx == heap->nr)
 		return true;
-	func->swp(data + (idx * elem_size), data + (heap->nr * elem_size), args);
-	__min_heap_sift_up(heap, elem_size, idx, func, args);
-	__min_heap_sift_down(heap, idx, elem_size, func, args);
+	do_swap(data + (idx * elem_size), data + (heap->nr * elem_size), elem_size, swp, args);
+	__min_heap_sift_up_inline(heap, elem_size, idx, func, args);
+	__min_heap_sift_down_inline(heap, idx, elem_size, func, args);
 
 	return true;
 }
 
+#define min_heap_del_inline(_heap, _idx, _func, _args)	\
+	__min_heap_del_inline((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, \
+			      _func, _args)
+
+void __min_heap_init(min_heap_char *heap, void *data, int size);
+void *__min_heap_peek(struct min_heap_char *heap);
+bool __min_heap_full(min_heap_char *heap);
+void __min_heap_sift_down(min_heap_char *heap, int pos, size_t elem_size,
+			  const struct min_heap_callbacks *func, void *args);
+void __min_heap_sift_up(min_heap_char *heap, size_t elem_size, size_t idx,
+			const struct min_heap_callbacks *func, void *args);
+void __min_heapify_all(min_heap_char *heap, size_t elem_size,
+		       const struct min_heap_callbacks *func, void *args);
+bool __min_heap_pop(min_heap_char *heap, size_t elem_size,
+		    const struct min_heap_callbacks *func, void *args);
+void __min_heap_pop_push(min_heap_char *heap, const void *element, size_t elem_size,
+			 const struct min_heap_callbacks *func, void *args);
+bool __min_heap_push(min_heap_char *heap, const void *element, size_t elem_size,
+		     const struct min_heap_callbacks *func, void *args);
+bool __min_heap_del(min_heap_char *heap, size_t elem_size, size_t idx,
+		    const struct min_heap_callbacks *func, void *args);
+
+#define min_heap_init(_heap, _data, _size)	\
+	__min_heap_init((min_heap_char *)_heap, _data, _size)
+#define min_heap_peek(_heap)	\
+	(__minheap_cast(_heap) __min_heap_peek((min_heap_char *)_heap))
+#define min_heap_full(_heap)	\
+	__min_heap_full((min_heap_char *)_heap)
+#define min_heap_sift_down(_heap, _pos, _func, _args)	\
+	__min_heap_sift_down((min_heap_char *)_heap, _pos, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_sift_up(_heap, _idx, _func, _args)	\
+	__min_heap_sift_up((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, _func, _args)
+#define min_heapify_all(_heap, _func, _args)	\
+	__min_heapify_all((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_pop(_heap, _func, _args)	\
+	__min_heap_pop((min_heap_char *)_heap, __minheap_obj_size(_heap), _func, _args)
+#define min_heap_pop_push(_heap, _element, _func, _args)	\
+	__min_heap_pop_push((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), \
+			    _func, _args)
+#define min_heap_push(_heap, _element, _func, _args)	\
+	__min_heap_push((min_heap_char *)_heap, _element, __minheap_obj_size(_heap), _func, _args)
 #define min_heap_del(_heap, _idx, _func, _args)	\
 	__min_heap_del((min_heap_char *)_heap, __minheap_obj_size(_heap), _idx, _func, _args)
 
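The reworked sift-down in min_heap.h keeps every counter as a pre-scaled byte offset and replaces `(i - 1) / 2` with the branch-free `parent()` trick documented above. A userspace sketch of the same bottom-up sift-down on a plain `int` array (function names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Branch-free parent: subtract one extra 'size' iff the child offset
 * is an odd multiple of the element size, then halve. */
static size_t parent_off(size_t i, size_t lsbit, size_t size)
{
	i -= size;
	i -= size & -(i & lsbit);
	return i / 2;
}

/* Bottom-up sift-down with byte offsets, mirroring the structure of
 * __min_heap_sift_down_inline(): descend to a leaf first, backtrack,
 * then swap along the path. */
static void sift_down(int *data, size_t pos, size_t nr)
{
	const size_t size = sizeof(int);
	const size_t lsbit = size & -size;
	size_t a = pos * size, b, c, d;
	size_t n = nr * size;

	/* Find the sift-down path all the way to the leaves. */
	for (b = a; c = 2 * b + size, (d = c + size) < n;)
		b = *(int *)((char *)data + c) < *(int *)((char *)data + d) ? c : d;

	/* Special case for the last leaf with no sibling. */
	if (d == n)
		b = c;

	/* Backtrack to the correct location. */
	while (b != a && *(int *)((char *)data + a) < *(int *)((char *)data + b))
		b = parent_off(b, lsbit, size);

	/* Shift the element into its correct place. */
	c = b;
	while (b != a) {
		int t;

		b = parent_off(b, lsbit, size);
		t = *(int *)((char *)data + b);
		*(int *)((char *)data + b) = *(int *)((char *)data + c);
		*(int *)((char *)data + c) = t;
	}
}

/* Pop the minimum from a small valid min-heap and return the new root. */
static int heap_min_after_pop(void)
{
	int h[5] = { 1, 3, 2, 7, 4 };
	size_t nr = 5;

	h[0] = h[--nr];		/* move last element to the root */
	sift_down(h, 0, nr);	/* restore the heap property */
	return h[0];
}
```

The byte-offset bookkeeping avoids a multiply per level, and `parent_off()` avoids an unpredictable branch, which is the whole point of the kernel's rework.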
-2
include/linux/notifier.h
···
 #define KBD_KEYSYM		0x0004 /* Keyboard keysym */
 #define KBD_POST_KEYSYM		0x0005 /* Called after keyboard keysym interpretation */
 
-extern struct blocking_notifier_head reboot_notifier_list;
-
 #endif /* __KERNEL__ */
 #endif /* _LINUX_NOTIFIER_H */
+13 -8
include/linux/percpu-defs.h
···
 	(void)__vpp_verify;						\
 } while (0)
 
+#define PERCPU_PTR(__p)							\
+({									\
+	unsigned long __pcpu_ptr = (__force unsigned long)(__p);	\
+	(typeof(*(__p)) __force __kernel *)(__pcpu_ptr);		\
+})
+
 #ifdef CONFIG_SMP
 
 /*
- * Add an offset to a pointer but keep the pointer as-is.  Use RELOC_HIDE()
- * to prevent the compiler from making incorrect assumptions about the
- * pointer value.  The weird cast keeps both GCC and sparse happy.
+ * Add an offset to a pointer.  Use RELOC_HIDE() to prevent the compiler
+ * from making incorrect assumptions about the pointer value.
  */
 #define SHIFT_PERCPU_PTR(__p, __offset)					\
-	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset))
+	RELOC_HIDE(PERCPU_PTR(__p), (__offset))
 
 #define per_cpu_ptr(ptr, cpu)						\
 ({									\
···
 
 #else	/* CONFIG_SMP */
 
-#define VERIFY_PERCPU_PTR(__p)						\
+#define per_cpu_ptr(ptr, cpu)						\
 ({									\
-	__verify_pcpu_ptr(__p);						\
-	(typeof(*(__p)) __kernel __force *)(__p);			\
+	(void)(cpu);							\
+	__verify_pcpu_ptr(ptr);						\
+	PERCPU_PTR(ptr);						\
 })
 
-#define per_cpu_ptr(ptr, cpu)	({ (void)(cpu); VERIFY_PERCPU_PTR(ptr); })
 #define raw_cpu_ptr(ptr)	per_cpu_ptr(ptr, 0)
 #define this_cpu_ptr(ptr)	raw_cpu_ptr(ptr)
 
+1 -1
include/linux/pm_wakeup.h
···
 #define _LINUX_PM_WAKEUP_H
 
 #ifndef _DEVICE_H_
-# error "please don't include this file directly"
+# error "Please do not include this file directly."
 #endif
 
 #include <linux/types.h>
+1 -1
include/linux/rwlock.h
···
 #define __LINUX_RWLOCK_H
 
 #ifndef __LINUX_INSIDE_SPINLOCK_H
-# error "please don't include this file directly"
+# error "Please do not include this file directly."
 #endif
 
 /*
+1 -1
include/linux/rwlock_api_smp.h
···
 #define __LINUX_RWLOCK_API_SMP_H
 
 #ifndef __LINUX_SPINLOCK_API_SMP_H
-# error "please don't include this file directly"
+# error "Please do not include this file directly."
 #endif
 
 /*
+1 -1
include/linux/scatterlist.h
···
 }
 
 /*
- * One 64-bit architectures there is a 4-byte padding in struct scatterlist
+ * On 64-bit architectures there is a 4-byte padding in struct scatterlist
  * (assuming also CONFIG_NEED_SG_DMA_LENGTH is set). Use this padding for DMA
  * flags bits to indicate when a specific dma address is a bus address or the
  * buffer may have been bounced via SWIOTLB.
+22 -6
include/linux/sched.h
···
 	/*
 	 * executable name, excluding path.
 	 *
-	 * - normally initialized setup_new_exec()
-	 * - access it with [gs]et_task_comm()
-	 * - lock it with task_lock()
+	 * - normally initialized begin_new_exec()
+	 * - set it with set_task_comm()
+	 *   - strscpy_pad() to ensure it is always NUL-terminated and
+	 *     zero-padded
+	 *   - task_lock() to ensure the operation is atomic and the name is
+	 *     fully updated.
 	 */
 	char comm[TASK_COMM_LEN];
 
···
 	__set_task_comm(tsk, from, false);
 }
 
-extern char *__get_task_comm(char *to, size_t len, struct task_struct *tsk);
+/*
+ * - Why not use task_lock()?
+ *   User space can randomly change their names anyway, so locking for readers
+ *   doesn't make sense.  For writers, locking is probably necessary, as a race
+ *   condition could lead to long-term mixed results.
+ *   The strscpy_pad() in __set_task_comm() can ensure that the task comm is
+ *   always NUL-terminated and zero-padded.  Therefore the race condition between
+ *   reader and writer is not an issue.
+ *
+ * - BUILD_BUG_ON() can help prevent the buf from being truncated.
+ *   Since the callers don't perform any return value checks, this safeguard is
+ *   necessary.
+ */
 #define get_task_comm(buf, tsk) ({			\
-	BUILD_BUG_ON(sizeof(buf) != TASK_COMM_LEN);	\
-	__get_task_comm(buf, sizeof(buf), tsk);		\
+	BUILD_BUG_ON(sizeof(buf) < TASK_COMM_LEN);	\
+	strscpy_pad(buf, (tsk)->comm);			\
+	buf;						\
 })
 
 #ifdef CONFIG_SMP
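The new `get_task_comm()` above leans on `strscpy_pad()` guaranteeing a NUL-terminated, zero-padded destination, which is why a racing writer can at worst produce a mixed name, never an unterminated one. A userspace stand-in for those semantics (simplified return value; `my_strscpy_pad`, `get_comm`, and `comm_is_padded` are illustrative names, not the kernel functions):

```c
#include <assert.h>
#include <string.h>

#define TASK_COMM_LEN 16

/* Stand-in for the kernel's strscpy_pad(): copy at most size-1 bytes,
 * always NUL-terminate, and zero-fill the remainder of dst. */
static long my_strscpy_pad(char *dst, const char *src, size_t size)
{
	size_t len = 0;

	while (len < size - 1 && src[len])
		len++;
	memcpy(dst, src, len);
	memset(dst + len, 0, size - len);	/* terminator + padding */
	return (long)len;
}

/* Model of the reworked get_task_comm(). */
static void get_comm(char buf[TASK_COMM_LEN], const char comm[TASK_COMM_LEN])
{
	my_strscpy_pad(buf, comm, TASK_COMM_LEN);
}

/* Returns 1 if the copied name matches and every trailing byte is zero. */
static int comm_is_padded(void)
{
	char comm[TASK_COMM_LEN] = "kworker/0:1";
	char buf[TASK_COMM_LEN];
	int i;

	get_comm(buf, comm);
	if (strcmp(buf, "kworker/0:1") != 0)
		return 0;
	for (i = 11; i < TASK_COMM_LEN; i++)	/* strlen("kworker/0:1") == 11 */
		if (buf[i] != '\0')
			return 0;
	return 1;
}
```

The zero-padding matters for readers like audit that copy `comm` into fixed-size records: no stale bytes can leak past the terminator.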
+1 -1
include/linux/spinlock_api_smp.h
···
 #define __LINUX_SPINLOCK_API_SMP_H
 
 #ifndef __LINUX_INSIDE_SPINLOCK_H
-# error "please don't include this file directly"
+# error "Please do not include this file directly."
 #endif
 
 /*
+1 -1
include/linux/spinlock_types_up.h
···
 #define __LINUX_SPINLOCK_TYPES_UP_H
 
 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H
-# error "please don't include this file directly"
+# error "Please do not include this file directly."
 #endif
 
 /*
+1 -1
include/linux/spinlock_up.h
···
 #define __LINUX_SPINLOCK_UP_H
 
 #ifndef __LINUX_INSIDE_SPINLOCK_H
-# error "please don't include this file directly"
+# error "Please do not include this file directly."
 #endif
 
 #include <asm/processor.h>	/* for cpu_relax() */
+40 -16
include/linux/util_macros.h
···
 
 #include <linux/math.h>
 
-#define __find_closest(x, a, as, op)					\
-({									\
-	typeof(as) __fc_i, __fc_as = (as) - 1;				\
-	typeof(x) __fc_x = (x);						\
-	typeof(*a) const *__fc_a = (a);					\
-	for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) {			\
-		if (__fc_x op DIV_ROUND_CLOSEST(__fc_a[__fc_i] +	\
-						__fc_a[__fc_i + 1], 2))	\
-			break;						\
-	}								\
-	(__fc_i);							\
-})
-
 /**
  * find_closest - locate the closest element in a sorted array
  * @x: The reference value.
···
  * @as: Size of 'a'.
  *
  * Returns the index of the element closest to 'x'.
+ * Note: If using an array of negative numbers (or mixed positive numbers),
+ * then be sure that 'x' is of a signed-type to get good results.
  */
-#define find_closest(x, a, as) __find_closest(x, a, as, <=)
+#define find_closest(x, a, as)						\
+({									\
+	typeof(as) __fc_i, __fc_as = (as) - 1;				\
+	long __fc_mid_x, __fc_x = (x);					\
+	long __fc_left, __fc_right;					\
+	typeof(*a) const *__fc_a = (a);					\
+	for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) {			\
+		__fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i + 1]) / 2;	\
+		if (__fc_x <= __fc_mid_x) {				\
+			__fc_left = __fc_x - __fc_a[__fc_i];		\
+			__fc_right = __fc_a[__fc_i + 1] - __fc_x;	\
+			if (__fc_right < __fc_left)			\
+				__fc_i++;				\
+			break;						\
+		}							\
+	}								\
+	(__fc_i);							\
+})
 
 /**
  * find_closest_descending - locate the closest element in a sorted array
···
  * @as: Size of 'a'.
  *
  * Similar to find_closest() but 'a' is expected to be sorted in descending
- * order.
+ * order. The iteration is done in reverse order, so that the comparison
+ * of '__fc_right' & '__fc_left' also works for unsigned numbers.
  */
-#define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
+#define find_closest_descending(x, a, as)				\
+({									\
+	typeof(as) __fc_i, __fc_as = (as) - 1;				\
+	long __fc_mid_x, __fc_x = (x);					\
+	long __fc_left, __fc_right;					\
+	typeof(*a) const *__fc_a = (a);					\
+	for (__fc_i = __fc_as; __fc_i >= 1; __fc_i--) {			\
+		__fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i - 1]) / 2;	\
+		if (__fc_x <= __fc_mid_x) {				\
+			__fc_left = __fc_x - __fc_a[__fc_i];		\
+			__fc_right = __fc_a[__fc_i - 1] - __fc_x;	\
+			if (__fc_right < __fc_left)			\
+				__fc_i--;				\
+			break;						\
+		}							\
+	}								\
+	(__fc_i);							\
+})
 
 /**
  * is_insidevar - check if the @ptr points inside the @var memory range.
+1
init/Kconfig
··· 1166 1166 config CPUSETS 1167 1167 bool "Cpuset controller" 1168 1168 depends on SMP 1169 + select UNION_FIND 1169 1170 help 1170 1171 This option will let you create and manage CPUSETs which 1171 1172 allow dynamically partitioning a system into sets of CPUs and
+1 -1
ipc/msg.c
··· 978 978 979 979 struct compat_msgbuf { 980 980 compat_long_t mtype; 981 - char mtext[1]; 981 + char mtext[]; 982 982 }; 983 983 984 984 long compat_ksys_msgsnd(int msqid, compat_uptr_t msgp,
+3 -1
ipc/namespace.c
··· 83 83 84 84 err = msg_init_ns(ns); 85 85 if (err) 86 - goto fail_put; 86 + goto fail_ipc; 87 87 88 88 sem_init_ns(ns); 89 89 shm_init_ns(ns); 90 90 91 91 return ns; 92 92 93 + fail_ipc: 94 + retire_ipc_sysctls(ns); 93 95 fail_mq: 94 96 retire_mq_sysctls(ns); 95 97
+3 -3
kernel/auditsc.c
··· 2729 2729 context->target_uid = task_uid(t); 2730 2730 context->target_sessionid = audit_get_sessionid(t); 2731 2731 security_task_getlsmprop_obj(t, &context->target_ref); 2732 - memcpy(context->target_comm, t->comm, TASK_COMM_LEN); 2732 + strscpy(context->target_comm, t->comm); 2733 2733 } 2734 2734 2735 2735 /** ··· 2756 2756 ctx->target_uid = t_uid; 2757 2757 ctx->target_sessionid = audit_get_sessionid(t); 2758 2758 security_task_getlsmprop_obj(t, &ctx->target_ref); 2759 - memcpy(ctx->target_comm, t->comm, TASK_COMM_LEN); 2759 + strscpy(ctx->target_comm, t->comm); 2760 2760 return 0; 2761 2761 } 2762 2762 ··· 2777 2777 axp->target_uid[axp->pid_count] = t_uid; 2778 2778 axp->target_sessionid[axp->pid_count] = audit_get_sessionid(t); 2779 2779 security_task_getlsmprop_obj(t, &axp->target_ref[axp->pid_count]); 2780 - memcpy(axp->target_comm[axp->pid_count], t->comm, TASK_COMM_LEN); 2780 + strscpy(axp->target_comm[axp->pid_count], t->comm); 2781 2781 axp->pid_count++; 2782 2782 2783 2783 return 0;
+4 -2
kernel/crash_core.c
··· 505 505 crash_hotplug_lock(); 506 506 /* Obtain lock while reading crash information */ 507 507 if (!kexec_trylock()) { 508 - pr_info("kexec_trylock() failed, kdump image may be inaccurate\n"); 508 + if (!kexec_in_progress) 509 + pr_info("kexec_trylock() failed, kdump image may be inaccurate\n"); 509 510 crash_hotplug_unlock(); 510 511 return 0; 511 512 } ··· 548 547 crash_hotplug_lock(); 549 548 /* Obtain lock while changing crash information */ 550 549 if (!kexec_trylock()) { 551 - pr_info("kexec_trylock() failed, kdump image may be inaccurate\n"); 550 + if (!kexec_in_progress) 551 + pr_info("kexec_trylock() failed, kdump image may be inaccurate\n"); 552 552 crash_hotplug_unlock(); 553 553 return; 554 554 }
+4 -11
kernel/events/core.c
··· 3778 3778 return le->group_index < re->group_index; 3779 3779 } 3780 3780 3781 - static void swap_ptr(void *l, void *r, void __always_unused *args) 3782 - { 3783 - void **lp = l, **rp = r; 3784 - 3785 - swap(*lp, *rp); 3786 - } 3787 - 3788 3781 DEFINE_MIN_HEAP(struct perf_event *, perf_event_min_heap); 3789 3782 3790 3783 static const struct min_heap_callbacks perf_min_heap = { 3791 3784 .less = perf_less_group_idx, 3792 - .swp = swap_ptr, 3785 + .swp = NULL, 3793 3786 }; 3794 3787 3795 3788 static void __heap_add(struct perf_event_min_heap *heap, struct perf_event *event) ··· 3863 3870 perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu); 3864 3871 } 3865 3872 3866 - min_heapify_all(&event_heap, &perf_min_heap, NULL); 3873 + min_heapify_all_inline(&event_heap, &perf_min_heap, NULL); 3867 3874 3868 3875 while (event_heap.nr) { 3869 3876 ret = func(*evt, data); ··· 3872 3879 3873 3880 *evt = perf_event_groups_next(*evt, pmu); 3874 3881 if (*evt) 3875 - min_heap_sift_down(&event_heap, 0, &perf_min_heap, NULL); 3882 + min_heap_sift_down_inline(&event_heap, 0, &perf_min_heap, NULL); 3876 3883 else 3877 - min_heap_pop(&event_heap, &perf_min_heap, NULL); 3884 + min_heap_pop_inline(&event_heap, &perf_min_heap, NULL); 3878 3885 } 3879 3886 3880 3887 return 0;
+2 -2
kernel/events/hw_breakpoint.c
··· 849 849 850 850 cpu_events = alloc_percpu(typeof(*cpu_events)); 851 851 if (!cpu_events) 852 - return (void __percpu __force *)ERR_PTR(-ENOMEM); 852 + return ERR_PTR_PCPU(-ENOMEM); 853 853 854 854 cpus_read_lock(); 855 855 for_each_online_cpu(cpu) { ··· 868 868 return cpu_events; 869 869 870 870 unregister_wide_hw_breakpoint(cpu_events); 871 - return (void __percpu __force *)ERR_PTR(err); 871 + return ERR_PTR_PCPU(err); 872 872 } 873 873 EXPORT_SYMBOL_GPL(register_wide_hw_breakpoint); 874 874
+18
kernel/hung_task.c
··· 31 31 static int __read_mostly sysctl_hung_task_check_count = PID_MAX_LIMIT; 32 32 33 33 /* 34 + * Total number of tasks detected as hung since boot: 35 + */ 36 + static unsigned long __read_mostly sysctl_hung_task_detect_count; 37 + 38 + /* 34 39 * Limit number of tasks checked in a batch. 35 40 * 36 41 * This value controls the preemptibility of khungtaskd since preemption ··· 119 114 } 120 115 if (time_is_after_jiffies(t->last_switch_time + timeout * HZ)) 121 116 return; 117 + 118 + /* 119 + * This counter tracks the total number of tasks detected as hung 120 + * since boot. 121 + */ 122 + sysctl_hung_task_detect_count++; 122 123 123 124 trace_sched_process_hang(t); 124 125 ··· 324 313 .mode = 0644, 325 314 .proc_handler = proc_dointvec_minmax, 326 315 .extra1 = SYSCTL_NEG_ONE, 316 + }, 317 + { 318 + .procname = "hung_task_detect_count", 319 + .data = &sysctl_hung_task_detect_count, 320 + .maxlen = sizeof(unsigned long), 321 + .mode = 0444, 322 + .proc_handler = proc_doulongvec_minmax, 327 323 }, 328 324 }; 329 325
+1 -1
kernel/kthread.c
··· 101 101 struct kthread *kthread = to_kthread(tsk); 102 102 103 103 if (!kthread || !kthread->full_name) { 104 - __get_task_comm(buf, buf_size, tsk); 104 + strscpy(buf, tsk->comm, buf_size); 105 105 return; 106 106 } 107 107
-8
kernel/notifier.c
··· 5 5 #include <linux/notifier.h> 6 6 #include <linux/rcupdate.h> 7 7 #include <linux/vmalloc.h> 8 - #include <linux/reboot.h> 9 8 10 9 #define CREATE_TRACE_POINTS 11 10 #include <trace/events/notifier.h> 12 - 13 - /* 14 - * Notifier list for kernel code which wants to be called 15 - * at shutdown. This is used to stop any idling DMA operations 16 - * and the like. 17 - */ 18 - BLOCKING_NOTIFIER_HEAD(reboot_notifier_list); 19 11 20 12 /* 21 13 * Notifier chain core routines. The exported routines below
+11 -4
kernel/reboot.c
··· 72 72 */ 73 73 void __weak (*pm_power_off)(void); 74 74 75 + /* 76 + * Notifier list for kernel code which wants to be called 77 + * at shutdown. This is used to stop any idling DMA operations 78 + * and the like. 79 + */ 80 + static BLOCKING_NOTIFIER_HEAD(reboot_notifier_list); 81 + 75 82 /** 76 83 * emergency_restart - reboot the system 77 84 * ··· 1137 1130 val = REBOOT_UNDEFINED_STR; 1138 1131 } 1139 1132 1140 - return sprintf(buf, "%s\n", val); 1133 + return sysfs_emit(buf, "%s\n", val); 1141 1134 } 1142 1135 static ssize_t mode_store(struct kobject *kobj, struct kobj_attribute *attr, 1143 1136 const char *buf, size_t count) ··· 1167 1160 #ifdef CONFIG_X86 1168 1161 static ssize_t force_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) 1169 1162 { 1170 - return sprintf(buf, "%d\n", reboot_force); 1163 + return sysfs_emit(buf, "%d\n", reboot_force); 1171 1164 } 1172 1165 static ssize_t force_store(struct kobject *kobj, struct kobj_attribute *attr, 1173 1166 const char *buf, size_t count) ··· 1214 1207 val = REBOOT_UNDEFINED_STR; 1215 1208 } 1216 1209 1217 - return sprintf(buf, "%s\n", val); 1210 + return sysfs_emit(buf, "%s\n", val); 1218 1211 } 1219 1212 static ssize_t type_store(struct kobject *kobj, struct kobj_attribute *attr, 1220 1213 const char *buf, size_t count) ··· 1247 1240 #ifdef CONFIG_SMP 1248 1241 static ssize_t cpu_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) 1249 1242 { 1250 - return sprintf(buf, "%d\n", reboot_cpu); 1243 + return sysfs_emit(buf, "%d\n", reboot_cpu); 1251 1244 } 1252 1245 static ssize_t cpu_store(struct kobject *kobj, struct kobj_attribute *attr, 1253 1246 const char *buf, size_t count)
+39 -27
kernel/resource.c
··· 50 50 51 51 static DEFINE_RWLOCK(resource_lock); 52 52 53 - static struct resource *next_resource(struct resource *p, bool skip_children) 53 + /* 54 + * Return the next node of @p in pre-order tree traversal. If 55 + * @skip_children is true, skip the descendant nodes of @p in 56 + * traversal. If @p is a descendant of @subtree_root, only traverse 57 + * the subtree under @subtree_root. 58 + */ 59 + static struct resource *next_resource(struct resource *p, bool skip_children, 60 + struct resource *subtree_root) 54 61 { 55 62 if (!skip_children && p->child) 56 63 return p->child; 57 - while (!p->sibling && p->parent) 64 + while (!p->sibling && p->parent) { 58 65 p = p->parent; 66 + if (p == subtree_root) 67 + return NULL; 68 + } 59 69 return p->sibling; 60 70 } 61 71 72 + /* 73 + * Traverse the resource subtree under @_root in pre-order, excluding 74 + * @_root itself. 75 + * 76 + * NOTE: '__p' is introduced to avoid shadowing '_p' outside of loop. 77 + * And it is referenced to avoid unused variable warning. 78 + */ 62 79 #define for_each_resource(_root, _p, _skip_children) \ 63 - for ((_p) = (_root)->child; (_p); (_p) = next_resource(_p, _skip_children)) 80 + for (typeof(_root) __root = (_root), __p = _p = __root->child; \ 81 + __p && _p; _p = next_resource(_p, _skip_children, __root)) 64 82 65 83 #ifdef CONFIG_PROC_FS 66 84 ··· 106 88 107 89 (*pos)++; 108 90 109 - return (void *)next_resource(p, false); 91 + return (void *)next_resource(p, false, NULL); 110 92 } 111 93 112 94 static void r_stop(struct seq_file *m, void *v) ··· 315 297 316 298 EXPORT_SYMBOL(release_resource); 317 299 300 + static bool is_type_match(struct resource *p, unsigned long flags, unsigned long desc) 301 + { 302 + return (p->flags & flags) == flags && (desc == IORES_DESC_NONE || desc == p->desc); 303 + } 304 + 318 305 /** 319 306 * find_next_iomem_res - Finds the lowest iomem resource that covers part of 320 307 * [@start..@end]. 
··· 362 339 if (p->end < start) 363 340 continue; 364 341 365 - if ((p->flags & flags) != flags) 366 - continue; 367 - if ((desc != IORES_DESC_NONE) && (desc != p->desc)) 368 - continue; 369 - 370 342 /* Found a match, break */ 371 - break; 343 + if (is_type_match(p, flags, desc)) 344 + break; 372 345 } 373 346 374 347 if (p) { ··· 556 537 size_t size, unsigned long flags, 557 538 unsigned long desc) 558 539 { 559 - resource_size_t ostart, oend; 560 540 int type = 0; int other = 0; 561 541 struct resource *p, *dp; 562 - bool is_type, covered; 563 - struct resource res; 542 + struct resource res, o; 543 + bool covered; 564 544 565 545 res.start = start; 566 546 res.end = start + size - 1; 567 547 568 548 for (p = parent->child; p ; p = p->sibling) { 569 - if (!resource_overlaps(p, &res)) 549 + if (!resource_intersection(p, &res, &o)) 570 550 continue; 571 - is_type = (p->flags & flags) == flags && 572 - (desc == IORES_DESC_NONE || desc == p->desc); 573 - if (is_type) { 551 + if (is_type_match(p, flags, desc)) { 574 552 type++; 575 553 continue; 576 554 } ··· 584 568 * |-- "System RAM" --||-- "CXL Window 0a" --| 585 569 */ 586 570 covered = false; 587 - ostart = max(res.start, p->start); 588 - oend = min(res.end, p->end); 589 571 for_each_resource(p, dp, false) { 590 572 if (!resource_overlaps(dp, &res)) 591 573 continue; 592 - is_type = (dp->flags & flags) == flags && 593 - (desc == IORES_DESC_NONE || desc == dp->desc); 594 - if (is_type) { 574 + if (is_type_match(dp, flags, desc)) { 595 575 type++; 596 576 /* 597 - * Range from 'ostart' to 'dp->start' 577 + * Range from 'o.start' to 'dp->start' 598 578 * isn't covered by matched resource. 
599 579 */ 600 - if (dp->start > ostart) 580 + if (dp->start > o.start) 601 581 break; 602 - if (dp->end >= oend) { 582 + if (dp->end >= o.end) { 603 583 covered = true; 604 584 break; 605 585 } 606 586 /* Remove covered range */ 607 - ostart = max(ostart, dp->end + 1); 587 + o.start = max(o.start, dp->end + 1); 608 588 } 609 589 } 610 590 if (!covered) ··· 756 744 * @root: root resource descriptor 757 745 * @old: resource descriptor desired by caller 758 746 * @newsize: new size of the resource descriptor 759 - * @constraint: the size and alignment constraints to be met. 747 + * @constraint: the memory range and alignment constraints to be met. 760 748 */ 761 749 static int reallocate_resource(struct resource *root, struct resource *old, 762 750 resource_size_t newsize,
+2 -1
kernel/watchdog.c
··· 998 998 999 999 mutex_lock(&watchdog_mutex); 1000 1000 1001 + old = *param; 1001 1002 if (!write) { 1002 1003 /* 1003 1004 * On read synchronize the userspace interface. This is a ··· 1006 1005 */ 1007 1006 *param = (watchdog_enabled & which) != 0; 1008 1007 err = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1008 + *param = old; 1009 1009 } else { 1010 - old = READ_ONCE(*param); 1011 1010 err = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1012 1011 if (!err && old != READ_ONCE(*param)) 1013 1012 proc_watchdog_update();
+6
lib/Kconfig
··· 789 789 790 790 config FIRMWARE_TABLE 791 791 bool 792 + 793 + config UNION_FIND 794 + bool 795 + 796 + config MIN_HEAP 797 + bool
+43 -16
lib/Kconfig.debug
··· 2269 2269 config TEST_MIN_HEAP 2270 2270 tristate "Min heap test" 2271 2271 depends on DEBUG_KERNEL || m 2272 + select MIN_HEAP 2272 2273 help 2273 2274 Enable this to turn on min heap function tests. This test is 2274 2275 executed only once during system boot (so affects only boot time), ··· 2620 2619 2621 2620 If unsure, say N. 2622 2621 2622 + config UTIL_MACROS_KUNIT 2623 + tristate "KUnit test util_macros.h functions at runtime" if !KUNIT_ALL_TESTS 2624 + depends on KUNIT 2625 + default KUNIT_ALL_TESTS 2626 + help 2627 + Enable this option to test the util_macros.h function at boot. 2628 + 2629 + KUnit tests run during boot and output the results to the debug log 2630 + in TAP format (http://testanything.org/). Only useful for kernel devs 2631 + running the KUnit test harness, and not intended for inclusion into a 2632 + production build. 2633 + 2634 + For more information on KUnit and unit tests in general please refer 2635 + to the KUnit documentation in Documentation/dev-tools/kunit/. 2636 + 2637 + If unsure, say N. 2638 + 2623 2639 config HASH_KUNIT_TEST 2624 2640 tristate "KUnit Test for integer hash functions" if !KUNIT_ALL_TESTS 2625 2641 depends on KUNIT ··· 2858 2840 on the copy_to/from_user infrastructure, making sure basic 2859 2841 user/kernel boundary testing is working. 2860 2842 2843 + config CRC16_KUNIT_TEST 2844 + tristate "KUnit tests for CRC16" 2845 + depends on KUNIT 2846 + default KUNIT_ALL_TESTS 2847 + select CRC16 2848 + help 2849 + Enable this option to run unit tests for the kernel's CRC16 2850 + implementation (<linux/crc16.h>). 2851 + 2861 2852 config TEST_UDELAY 2862 2853 tristate "udelay test driver" 2863 2854 help ··· 3010 2983 3011 2984 If unsure, say N. 
3012 2985 2986 + config INT_POW_TEST 2987 + tristate "Integer exponentiation (int_pow) test" if !KUNIT_ALL_TESTS 2988 + depends on KUNIT 2989 + default KUNIT_ALL_TESTS 2990 + help 2991 + This option enables the KUnit test suite for the int_pow function, 2992 + which performs integer exponentiation. The test suite is designed to 2993 + verify that the implementation of int_pow correctly computes the power 2994 + of a given base raised to a given exponent. 2995 + 2996 + Enabling this option will include tests that check various scenarios 2997 + and edge cases to ensure the accuracy and reliability of the exponentiation 2998 + function. 2999 + 3000 + If unsure, say N 3001 + 3013 3002 endif # RUNTIME_TESTING_MENU 3014 3003 3015 3004 config ARCH_USE_MEMTEST ··· 3121 3078 endmenu # "Rust" 3122 3079 3123 3080 endmenu # Kernel hacking 3124 - 3125 - config INT_POW_TEST 3126 - tristate "Integer exponentiation (int_pow) test" if !KUNIT_ALL_TESTS 3127 - depends on KUNIT 3128 - default KUNIT_ALL_TESTS 3129 - help 3130 - This option enables the KUnit test suite for the int_pow function, 3131 - which performs integer exponentiation. The test suite is designed to 3132 - verify that the implementation of int_pow correctly computes the power 3133 - of a given base raised to a given exponent. 3134 - 3135 - Enabling this option will include tests that check various scenarios 3136 - and edge cases to ensure the accuracy and reliability of the exponentiation 3137 - function. 3138 - 3139 - If unsure, say N
+5 -1
lib/Makefile
··· 35 35 is_single_threaded.o plist.o decompress.o kobject_uevent.o \ 36 36 earlycpio.o seq_buf.o siphash.o dec_and_lock.o \ 37 37 nmi_backtrace.o win_minmax.o memcat_p.o \ 38 - buildid.o objpool.o union_find.o iomem_copy.o 38 + buildid.o objpool.o iomem_copy.o 39 39 40 + lib-$(CONFIG_UNION_FIND) += union_find.o 40 41 lib-$(CONFIG_PRINTK) += dump_stack.o 41 42 lib-$(CONFIG_SMP) += cpumask.o 43 + lib-$(CONFIG_MIN_HEAP) += min_heap.o 42 44 43 45 lib-y += kobject.o klist.o 44 46 obj-y += lockref.o ··· 373 371 CFLAGS_bitfield_kunit.o := $(DISABLE_STRUCTLEAK_PLUGIN) 374 372 obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o 375 373 obj-$(CONFIG_CHECKSUM_KUNIT) += checksum_kunit.o 374 + obj-$(CONFIG_UTIL_MACROS_KUNIT) += util_macros_kunit.o 376 375 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o 377 376 obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o 378 377 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o ··· 393 390 obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o 394 391 obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o 395 392 obj-$(CONFIG_USERCOPY_KUNIT_TEST) += usercopy_kunit.o 393 + obj-$(CONFIG_CRC16_KUNIT_TEST) += crc16_kunit.o 396 394 397 395 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o 398 396
+155
lib/crc16_kunit.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * KUnits tests for CRC16. 4 + * 5 + * Copyright (C) 2024, LKCAMP 6 + * Author: Vinicius Peixoto <vpeixoto@lkcamp.dev> 7 + * Author: Fabricio Gasperin <fgasperin@lkcamp.dev> 8 + * Author: Enzo Bertoloti <ebertoloti@lkcamp.dev> 9 + */ 10 + #include <kunit/test.h> 11 + #include <linux/crc16.h> 12 + #include <linux/prandom.h> 13 + 14 + #define CRC16_KUNIT_DATA_SIZE 4096 15 + #define CRC16_KUNIT_TEST_SIZE 100 16 + #define CRC16_KUNIT_SEED 0x12345678 17 + 18 + /** 19 + * struct crc16_test - CRC16 test data 20 + * @crc: initial input value to CRC16 21 + * @start: Start index within the data buffer 22 + * @length: Length of the data 23 + */ 24 + static struct crc16_test { 25 + u16 crc; 26 + u16 start; 27 + u16 length; 28 + } tests[CRC16_KUNIT_TEST_SIZE]; 29 + 30 + u8 data[CRC16_KUNIT_DATA_SIZE]; 31 + 32 + 33 + /* Naive implementation of CRC16 for validation purposes */ 34 + static inline u16 _crc16_naive_byte(u16 crc, u8 data) 35 + { 36 + u8 i = 0; 37 + 38 + crc ^= (u16) data; 39 + for (i = 0; i < 8; i++) { 40 + if (crc & 0x01) 41 + crc = (crc >> 1) ^ 0xa001; 42 + else 43 + crc = crc >> 1; 44 + } 45 + 46 + return crc; 47 + } 48 + 49 + 50 + static inline u16 _crc16_naive(u16 crc, u8 *buffer, size_t len) 51 + { 52 + while (len--) 53 + crc = _crc16_naive_byte(crc, *buffer++); 54 + return crc; 55 + } 56 + 57 + 58 + /* Small helper for generating pseudorandom 16-bit data */ 59 + static inline u16 _rand16(void) 60 + { 61 + static u32 rand = CRC16_KUNIT_SEED; 62 + 63 + rand = next_pseudo_random32(rand); 64 + return rand & 0xFFFF; 65 + } 66 + 67 + 68 + static int crc16_init_test_data(struct kunit_suite *suite) 69 + { 70 + size_t i; 71 + 72 + /* Fill the data buffer with random bytes */ 73 + for (i = 0; i < CRC16_KUNIT_DATA_SIZE; i++) 74 + data[i] = _rand16() & 0xFF; 75 + 76 + /* Generate random test data while ensuring the random 77 + * start + length values won't overflow the 4096-byte 78 + * buffer (0x7FF * 2 = 0xFFE < 
0x1000) 79 + */ 80 + for (size_t i = 0; i < CRC16_KUNIT_TEST_SIZE; i++) { 81 + tests[i].crc = _rand16(); 82 + tests[i].start = _rand16() & 0x7FF; 83 + tests[i].length = _rand16() & 0x7FF; 84 + } 85 + 86 + return 0; 87 + } 88 + 89 + static void crc16_test_empty(struct kunit *test) 90 + { 91 + u16 crc; 92 + 93 + /* The result for empty data should be the same as the 94 + * initial crc 95 + */ 96 + crc = crc16(0x00, data, 0); 97 + KUNIT_EXPECT_EQ(test, crc, 0); 98 + crc = crc16(0xFF, data, 0); 99 + KUNIT_EXPECT_EQ(test, crc, 0xFF); 100 + } 101 + 102 + static void crc16_test_correctness(struct kunit *test) 103 + { 104 + size_t i; 105 + u16 crc, crc_naive; 106 + 107 + for (i = 0; i < CRC16_KUNIT_TEST_SIZE; i++) { 108 + /* Compare results with the naive crc16 implementation */ 109 + crc = crc16(tests[i].crc, data + tests[i].start, 110 + tests[i].length); 111 + crc_naive = _crc16_naive(tests[i].crc, data + tests[i].start, 112 + tests[i].length); 113 + KUNIT_EXPECT_EQ(test, crc, crc_naive); 114 + } 115 + } 116 + 117 + 118 + static void crc16_test_combine(struct kunit *test) 119 + { 120 + size_t i, j; 121 + u16 crc, crc_naive; 122 + 123 + /* Make sure that combining two consecutive crc16 calculations 124 + * yields the same result as calculating the crc16 for the whole thing 125 + */ 126 + for (i = 0; i < CRC16_KUNIT_TEST_SIZE; i++) { 127 + crc_naive = crc16(tests[i].crc, data + tests[i].start, tests[i].length); 128 + for (j = 0; j < tests[i].length; j++) { 129 + crc = crc16(tests[i].crc, data + tests[i].start, j); 130 + crc = crc16(crc, data + tests[i].start + j, tests[i].length - j); 131 + KUNIT_EXPECT_EQ(test, crc, crc_naive); 132 + } 133 + } 134 + } 135 + 136 + 137 + static struct kunit_case crc16_test_cases[] = { 138 + KUNIT_CASE(crc16_test_empty), 139 + KUNIT_CASE(crc16_test_combine), 140 + KUNIT_CASE(crc16_test_correctness), 141 + {}, 142 + }; 143 + 144 + static struct kunit_suite crc16_test_suite = { 145 + .name = "crc16", 146 + .test_cases = crc16_test_cases, 147 + 
.suite_init = crc16_init_test_data, 148 + }; 149 + kunit_test_suite(crc16_test_suite); 150 + 151 + MODULE_AUTHOR("Fabricio Gasperin <fgasperin@lkcamp.dev>"); 152 + MODULE_AUTHOR("Vinicius Peixoto <vpeixoto@lkcamp.dev>"); 153 + MODULE_AUTHOR("Enzo Bertoloti <ebertoloti@lkcamp.dev>"); 154 + MODULE_DESCRIPTION("Unit tests for crc16"); 155 + MODULE_LICENSE("GPL");
+4
lib/list-test.c
··· 412 412 KUNIT_EXPECT_PTR_EQ(test, cur, &entries[i]); 413 413 i++; 414 414 } 415 + 416 + KUNIT_EXPECT_EQ(test, i, 3); 415 417 } 416 418 417 419 static void list_test_list_cut_before(struct kunit *test) ··· 442 440 KUNIT_EXPECT_PTR_EQ(test, cur, &entries[i]); 443 441 i++; 444 442 } 443 + 444 + KUNIT_EXPECT_EQ(test, i, 3); 445 445 } 446 446 447 447 static void list_test_list_splice(struct kunit *test)
-3
lib/list_sort.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/kernel.h> 3 - #include <linux/bug.h> 4 2 #include <linux/compiler.h> 5 3 #include <linux/export.h> 6 - #include <linux/string.h> 7 4 #include <linux/list_sort.h> 8 5 #include <linux/list.h> 9 6
+70
lib/min_heap.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/export.h> 3 + #include <linux/min_heap.h> 4 + 5 + void __min_heap_init(min_heap_char *heap, void *data, int size) 6 + { 7 + __min_heap_init_inline(heap, data, size); 8 + } 9 + EXPORT_SYMBOL(__min_heap_init); 10 + 11 + void *__min_heap_peek(struct min_heap_char *heap) 12 + { 13 + return __min_heap_peek_inline(heap); 14 + } 15 + EXPORT_SYMBOL(__min_heap_peek); 16 + 17 + bool __min_heap_full(min_heap_char *heap) 18 + { 19 + return __min_heap_full_inline(heap); 20 + } 21 + EXPORT_SYMBOL(__min_heap_full); 22 + 23 + void __min_heap_sift_down(min_heap_char *heap, int pos, size_t elem_size, 24 + const struct min_heap_callbacks *func, void *args) 25 + { 26 + __min_heap_sift_down_inline(heap, pos, elem_size, func, args); 27 + } 28 + EXPORT_SYMBOL(__min_heap_sift_down); 29 + 30 + void __min_heap_sift_up(min_heap_char *heap, size_t elem_size, size_t idx, 31 + const struct min_heap_callbacks *func, void *args) 32 + { 33 + __min_heap_sift_up_inline(heap, elem_size, idx, func, args); 34 + } 35 + EXPORT_SYMBOL(__min_heap_sift_up); 36 + 37 + void __min_heapify_all(min_heap_char *heap, size_t elem_size, 38 + const struct min_heap_callbacks *func, void *args) 39 + { 40 + __min_heapify_all_inline(heap, elem_size, func, args); 41 + } 42 + EXPORT_SYMBOL(__min_heapify_all); 43 + 44 + bool __min_heap_pop(min_heap_char *heap, size_t elem_size, 45 + const struct min_heap_callbacks *func, void *args) 46 + { 47 + return __min_heap_pop_inline(heap, elem_size, func, args); 48 + } 49 + EXPORT_SYMBOL(__min_heap_pop); 50 + 51 + void __min_heap_pop_push(min_heap_char *heap, const void *element, size_t elem_size, 52 + const struct min_heap_callbacks *func, void *args) 53 + { 54 + __min_heap_pop_push_inline(heap, element, elem_size, func, args); 55 + } 56 + EXPORT_SYMBOL(__min_heap_pop_push); 57 + 58 + bool __min_heap_push(min_heap_char *heap, const void *element, size_t elem_size, 59 + const struct min_heap_callbacks *func, void *args) 
60 + { 61 + return __min_heap_push_inline(heap, element, elem_size, func, args); 62 + } 63 + EXPORT_SYMBOL(__min_heap_push); 64 + 65 + bool __min_heap_del(min_heap_char *heap, size_t elem_size, size_t idx, 66 + const struct min_heap_callbacks *func, void *args) 67 + { 68 + return __min_heap_del_inline(heap, elem_size, idx, func, args); 69 + } 70 + EXPORT_SYMBOL(__min_heap_del);
+2 -2
lib/scatterlist.c
··· 474 474 return -EOPNOTSUPP; 475 475 476 476 if (sgt_append->prv) { 477 - unsigned long next_pfn = (page_to_phys(sg_page(sgt_append->prv)) + 478 - sgt_append->prv->offset + sgt_append->prv->length) / PAGE_SIZE; 477 + unsigned long next_pfn; 479 478 480 479 if (WARN_ON(offset)) 481 480 return -EINVAL; 482 481 483 482 /* Merge contiguous pages into the last SG */ 484 483 prv_len = sgt_append->prv->length; 484 + next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE; 485 485 if (page_to_pfn(pages[0]) == next_pfn) { 486 486 last_pg = pfn_to_page(next_pfn - 1); 487 487 while (n_pages && pages_are_mergeable(pages[0], last_pg)) {
+4 -12
lib/test_min_heap.c
··· 23 23 return *(int *)lhs > *(int *)rhs; 24 24 } 25 25 26 - static __init void swap_ints(void *lhs, void *rhs, void __always_unused *args) 27 - { 28 - int temp = *(int *)lhs; 29 - 30 - *(int *)lhs = *(int *)rhs; 31 - *(int *)rhs = temp; 32 - } 33 - 34 26 static __init int pop_verify_heap(bool min_heap, 35 27 struct min_heap_test *heap, 36 28 const struct min_heap_callbacks *funcs) ··· 64 72 }; 65 73 struct min_heap_callbacks funcs = { 66 74 .less = min_heap ? less_than : greater_than, 67 - .swp = swap_ints, 75 + .swp = NULL, 68 76 }; 69 77 int i, err; 70 78 ··· 96 104 }; 97 105 struct min_heap_callbacks funcs = { 98 106 .less = min_heap ? less_than : greater_than, 99 - .swp = swap_ints, 107 + .swp = NULL, 100 108 }; 101 109 int i, temp, err; 102 110 ··· 128 136 }; 129 137 struct min_heap_callbacks funcs = { 130 138 .less = min_heap ? less_than : greater_than, 131 - .swp = swap_ints, 139 + .swp = NULL, 132 140 }; 133 141 int i, temp, err; 134 142 ··· 167 175 heap.nr = ARRAY_SIZE(values); 168 176 struct min_heap_callbacks funcs = { 169 177 .less = min_heap ? less_than : greater_than, 170 - .swp = swap_ints, 178 + .swp = NULL, 171 179 }; 172 180 int i, err; 173 181
+240
lib/util_macros_kunit.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Test cases for bitfield helpers. 4 + */ 5 + 6 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 + 8 + #include <kunit/test.h> 9 + #include <linux/util_macros.h> 10 + 11 + #define FIND_CLOSEST_RANGE_CHECK(from, to, array, exp_idx) \ 12 + { \ 13 + int i; \ 14 + for (i = from; i <= to; i++) { \ 15 + int found = find_closest(i, array, ARRAY_SIZE(array)); \ 16 + KUNIT_ASSERT_EQ(ctx, exp_idx, found); \ 17 + } \ 18 + } 19 + 20 + static void test_find_closest(struct kunit *ctx) 21 + { 22 + /* This will test a few arrays that are found in drivers */ 23 + static const int ina226_avg_tab[] = { 1, 4, 16, 64, 128, 256, 512, 1024 }; 24 + static const unsigned int ad7616_oversampling_avail[] = { 25 + 1, 2, 4, 8, 16, 32, 64, 128, 26 + }; 27 + static u32 wd_timeout_table[] = { 2, 4, 6, 8, 16, 32, 48, 64 }; 28 + static int array_prog1a[] = { 1, 2, 3, 4, 5 }; 29 + static u32 array_prog1b[] = { 2, 3, 4, 5, 6 }; 30 + static int array_prog1mix[] = { -2, -1, 0, 1, 2 }; 31 + static int array_prog2a[] = { 1, 3, 5, 7 }; 32 + static u32 array_prog2b[] = { 2, 4, 6, 8 }; 33 + static int array_prog3a[] = { 1, 4, 7, 10 }; 34 + static u32 array_prog3b[] = { 2, 5, 8, 11 }; 35 + static int array_prog4a[] = { 1, 5, 9, 13 }; 36 + static u32 array_prog4b[] = { 2, 6, 10, 14 }; 37 + 38 + FIND_CLOSEST_RANGE_CHECK(-3, 2, ina226_avg_tab, 0); 39 + FIND_CLOSEST_RANGE_CHECK(3, 10, ina226_avg_tab, 1); 40 + FIND_CLOSEST_RANGE_CHECK(11, 40, ina226_avg_tab, 2); 41 + FIND_CLOSEST_RANGE_CHECK(41, 96, ina226_avg_tab, 3); 42 + FIND_CLOSEST_RANGE_CHECK(97, 192, ina226_avg_tab, 4); 43 + FIND_CLOSEST_RANGE_CHECK(193, 384, ina226_avg_tab, 5); 44 + FIND_CLOSEST_RANGE_CHECK(385, 768, ina226_avg_tab, 6); 45 + FIND_CLOSEST_RANGE_CHECK(769, 2048, ina226_avg_tab, 7); 46 + 47 + /* The array that found the bug that caused this kunit to exist */ 48 + FIND_CLOSEST_RANGE_CHECK(-3, 1, ad7616_oversampling_avail, 0); 49 + FIND_CLOSEST_RANGE_CHECK(2, 3, ad7616_oversampling_avail, 
1); 50 + FIND_CLOSEST_RANGE_CHECK(4, 6, ad7616_oversampling_avail, 2); 51 + FIND_CLOSEST_RANGE_CHECK(7, 12, ad7616_oversampling_avail, 3); 52 + FIND_CLOSEST_RANGE_CHECK(13, 24, ad7616_oversampling_avail, 4); 53 + FIND_CLOSEST_RANGE_CHECK(25, 48, ad7616_oversampling_avail, 5); 54 + FIND_CLOSEST_RANGE_CHECK(49, 96, ad7616_oversampling_avail, 6); 55 + FIND_CLOSEST_RANGE_CHECK(97, 256, ad7616_oversampling_avail, 7); 56 + 57 + FIND_CLOSEST_RANGE_CHECK(-3, 3, wd_timeout_table, 0); 58 + FIND_CLOSEST_RANGE_CHECK(4, 5, wd_timeout_table, 1); 59 + FIND_CLOSEST_RANGE_CHECK(6, 7, wd_timeout_table, 2); 60 + FIND_CLOSEST_RANGE_CHECK(8, 12, wd_timeout_table, 3); 61 + FIND_CLOSEST_RANGE_CHECK(13, 24, wd_timeout_table, 4); 62 + FIND_CLOSEST_RANGE_CHECK(25, 40, wd_timeout_table, 5); 63 + FIND_CLOSEST_RANGE_CHECK(41, 56, wd_timeout_table, 6); 64 + FIND_CLOSEST_RANGE_CHECK(57, 128, wd_timeout_table, 7); 65 + 66 + /* One could argue that find_closest() should not be used for monotonic 67 + * arrays (like 1,2,3,4,5), but even so, it should work as long as the 68 + * array is sorted ascending. 
*/ 69 + FIND_CLOSEST_RANGE_CHECK(-3, 1, array_prog1a, 0); 70 + FIND_CLOSEST_RANGE_CHECK(2, 2, array_prog1a, 1); 71 + FIND_CLOSEST_RANGE_CHECK(3, 3, array_prog1a, 2); 72 + FIND_CLOSEST_RANGE_CHECK(4, 4, array_prog1a, 3); 73 + FIND_CLOSEST_RANGE_CHECK(5, 8, array_prog1a, 4); 74 + 75 + FIND_CLOSEST_RANGE_CHECK(-3, 2, array_prog1b, 0); 76 + FIND_CLOSEST_RANGE_CHECK(3, 3, array_prog1b, 1); 77 + FIND_CLOSEST_RANGE_CHECK(4, 4, array_prog1b, 2); 78 + FIND_CLOSEST_RANGE_CHECK(5, 5, array_prog1b, 3); 79 + FIND_CLOSEST_RANGE_CHECK(6, 8, array_prog1b, 4); 80 + 81 + FIND_CLOSEST_RANGE_CHECK(-4, -2, array_prog1mix, 0); 82 + FIND_CLOSEST_RANGE_CHECK(-1, -1, array_prog1mix, 1); 83 + FIND_CLOSEST_RANGE_CHECK(0, 0, array_prog1mix, 2); 84 + FIND_CLOSEST_RANGE_CHECK(1, 1, array_prog1mix, 3); 85 + FIND_CLOSEST_RANGE_CHECK(2, 5, array_prog1mix, 4); 86 + 87 + FIND_CLOSEST_RANGE_CHECK(-3, 2, array_prog2a, 0); 88 + FIND_CLOSEST_RANGE_CHECK(3, 4, array_prog2a, 1); 89 + FIND_CLOSEST_RANGE_CHECK(5, 6, array_prog2a, 2); 90 + FIND_CLOSEST_RANGE_CHECK(7, 10, array_prog2a, 3); 91 + 92 + FIND_CLOSEST_RANGE_CHECK(-3, 3, array_prog2b, 0); 93 + FIND_CLOSEST_RANGE_CHECK(4, 5, array_prog2b, 1); 94 + FIND_CLOSEST_RANGE_CHECK(6, 7, array_prog2b, 2); 95 + FIND_CLOSEST_RANGE_CHECK(8, 10, array_prog2b, 3); 96 + 97 + FIND_CLOSEST_RANGE_CHECK(-3, 2, array_prog3a, 0); 98 + FIND_CLOSEST_RANGE_CHECK(3, 5, array_prog3a, 1); 99 + FIND_CLOSEST_RANGE_CHECK(6, 8, array_prog3a, 2); 100 + FIND_CLOSEST_RANGE_CHECK(9, 20, array_prog3a, 3); 101 + 102 + FIND_CLOSEST_RANGE_CHECK(-3, 3, array_prog3b, 0); 103 + FIND_CLOSEST_RANGE_CHECK(4, 6, array_prog3b, 1); 104 + FIND_CLOSEST_RANGE_CHECK(7, 9, array_prog3b, 2); 105 + FIND_CLOSEST_RANGE_CHECK(10, 20, array_prog3b, 3); 106 + 107 + FIND_CLOSEST_RANGE_CHECK(-3, 3, array_prog4a, 0); 108 + FIND_CLOSEST_RANGE_CHECK(4, 7, array_prog4a, 1); 109 + FIND_CLOSEST_RANGE_CHECK(8, 11, array_prog4a, 2); 110 + FIND_CLOSEST_RANGE_CHECK(12, 20, array_prog4a, 3); 111 + 112 + 
FIND_CLOSEST_RANGE_CHECK(-3, 4, array_prog4b, 0); 113 + FIND_CLOSEST_RANGE_CHECK(5, 8, array_prog4b, 1); 114 + FIND_CLOSEST_RANGE_CHECK(9, 12, array_prog4b, 2); 115 + FIND_CLOSEST_RANGE_CHECK(13, 20, array_prog4b, 3); 116 + } 117 + 118 + #define FIND_CLOSEST_DESC_RANGE_CHECK(from, to, array, exp_idx) \ 119 + { \ 120 + int i; \ 121 + for (i = from; i <= to; i++) { \ 122 + int found = find_closest_descending(i, array, \ 123 + ARRAY_SIZE(array)); \ 124 + KUNIT_ASSERT_EQ(ctx, exp_idx, found); \ 125 + } \ 126 + } 127 + 128 + static void test_find_closest_descending(struct kunit *ctx) 129 + { 130 + /* Same arrays as 'test_find_closest' but reversed */ 131 + static const int ina226_avg_tab[] = { 1024, 512, 256, 128, 64, 16, 4, 1 }; 132 + static const unsigned int ad7616_oversampling_avail[] = { 133 + 128, 64, 32, 16, 8, 4, 2, 1 134 + }; 135 + static u32 wd_timeout_table[] = { 64, 48, 32, 16, 8, 6, 4, 2 }; 136 + static int array_prog1a[] = { 5, 4, 3, 2, 1 }; 137 + static u32 array_prog1b[] = { 6, 5, 4, 3, 2 }; 138 + static int array_prog1mix[] = { 2, 1, 0, -1, -2 }; 139 + static int array_prog2a[] = { 7, 5, 3, 1 }; 140 + static u32 array_prog2b[] = { 8, 6, 4, 2 }; 141 + static int array_prog3a[] = { 10, 7, 4, 1 }; 142 + static u32 array_prog3b[] = { 11, 8, 5, 2 }; 143 + static int array_prog4a[] = { 13, 9, 5, 1 }; 144 + static u32 array_prog4b[] = { 14, 10, 6, 2 }; 145 + 146 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 2, ina226_avg_tab, 7); 147 + FIND_CLOSEST_DESC_RANGE_CHECK(3, 10, ina226_avg_tab, 6); 148 + FIND_CLOSEST_DESC_RANGE_CHECK(11, 40, ina226_avg_tab, 5); 149 + FIND_CLOSEST_DESC_RANGE_CHECK(41, 96, ina226_avg_tab, 4); 150 + FIND_CLOSEST_DESC_RANGE_CHECK(97, 192, ina226_avg_tab, 3); 151 + FIND_CLOSEST_DESC_RANGE_CHECK(193, 384, ina226_avg_tab, 2); 152 + FIND_CLOSEST_DESC_RANGE_CHECK(385, 768, ina226_avg_tab, 1); 153 + FIND_CLOSEST_DESC_RANGE_CHECK(769, 2048, ina226_avg_tab, 0); 154 + 155 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 1, ad7616_oversampling_avail, 7); 156 + 
FIND_CLOSEST_DESC_RANGE_CHECK(2, 3, ad7616_oversampling_avail, 6); 157 + FIND_CLOSEST_DESC_RANGE_CHECK(4, 6, ad7616_oversampling_avail, 5); 158 + FIND_CLOSEST_DESC_RANGE_CHECK(7, 12, ad7616_oversampling_avail, 4); 159 + FIND_CLOSEST_DESC_RANGE_CHECK(13, 24, ad7616_oversampling_avail, 3); 160 + FIND_CLOSEST_DESC_RANGE_CHECK(25, 48, ad7616_oversampling_avail, 2); 161 + FIND_CLOSEST_DESC_RANGE_CHECK(49, 96, ad7616_oversampling_avail, 1); 162 + FIND_CLOSEST_DESC_RANGE_CHECK(97, 256, ad7616_oversampling_avail, 0); 163 + 164 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 3, wd_timeout_table, 7); 165 + FIND_CLOSEST_DESC_RANGE_CHECK(4, 5, wd_timeout_table, 6); 166 + FIND_CLOSEST_DESC_RANGE_CHECK(6, 7, wd_timeout_table, 5); 167 + FIND_CLOSEST_DESC_RANGE_CHECK(8, 12, wd_timeout_table, 4); 168 + FIND_CLOSEST_DESC_RANGE_CHECK(13, 24, wd_timeout_table, 3); 169 + FIND_CLOSEST_DESC_RANGE_CHECK(25, 40, wd_timeout_table, 2); 170 + FIND_CLOSEST_DESC_RANGE_CHECK(41, 56, wd_timeout_table, 1); 171 + FIND_CLOSEST_DESC_RANGE_CHECK(57, 128, wd_timeout_table, 0); 172 + 173 + /* One could argue that find_closest_descending() should not be used 174 + for monotonic arrays (like 5,4,3,2,1), but even so, it should still 175 + work as long as the array is sorted descending.
*/ 176 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 1, array_prog1a, 4); 177 + FIND_CLOSEST_DESC_RANGE_CHECK(2, 2, array_prog1a, 3); 178 + FIND_CLOSEST_DESC_RANGE_CHECK(3, 3, array_prog1a, 2); 179 + FIND_CLOSEST_DESC_RANGE_CHECK(4, 4, array_prog1a, 1); 180 + FIND_CLOSEST_DESC_RANGE_CHECK(5, 8, array_prog1a, 0); 181 + 182 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 2, array_prog1b, 4); 183 + FIND_CLOSEST_DESC_RANGE_CHECK(3, 3, array_prog1b, 3); 184 + FIND_CLOSEST_DESC_RANGE_CHECK(4, 4, array_prog1b, 2); 185 + FIND_CLOSEST_DESC_RANGE_CHECK(5, 5, array_prog1b, 1); 186 + FIND_CLOSEST_DESC_RANGE_CHECK(6, 8, array_prog1b, 0); 187 + 188 + FIND_CLOSEST_DESC_RANGE_CHECK(-4, -2, array_prog1mix, 4); 189 + FIND_CLOSEST_DESC_RANGE_CHECK(-1, -1, array_prog1mix, 3); 190 + FIND_CLOSEST_DESC_RANGE_CHECK(0, 0, array_prog1mix, 2); 191 + FIND_CLOSEST_DESC_RANGE_CHECK(1, 1, array_prog1mix, 1); 192 + FIND_CLOSEST_DESC_RANGE_CHECK(2, 5, array_prog1mix, 0); 193 + 194 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 2, array_prog2a, 3); 195 + FIND_CLOSEST_DESC_RANGE_CHECK(3, 4, array_prog2a, 2); 196 + FIND_CLOSEST_DESC_RANGE_CHECK(5, 6, array_prog2a, 1); 197 + FIND_CLOSEST_DESC_RANGE_CHECK(7, 10, array_prog2a, 0); 198 + 199 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 3, array_prog2b, 3); 200 + FIND_CLOSEST_DESC_RANGE_CHECK(4, 5, array_prog2b, 2); 201 + FIND_CLOSEST_DESC_RANGE_CHECK(6, 7, array_prog2b, 1); 202 + FIND_CLOSEST_DESC_RANGE_CHECK(8, 10, array_prog2b, 0); 203 + 204 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 2, array_prog3a, 3); 205 + FIND_CLOSEST_DESC_RANGE_CHECK(3, 5, array_prog3a, 2); 206 + FIND_CLOSEST_DESC_RANGE_CHECK(6, 8, array_prog3a, 1); 207 + FIND_CLOSEST_DESC_RANGE_CHECK(9, 20, array_prog3a, 0); 208 + 209 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 3, array_prog3b, 3); 210 + FIND_CLOSEST_DESC_RANGE_CHECK(4, 6, array_prog3b, 2); 211 + FIND_CLOSEST_DESC_RANGE_CHECK(7, 9, array_prog3b, 1); 212 + FIND_CLOSEST_DESC_RANGE_CHECK(10, 20, array_prog3b, 0); 213 + 214 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 3, array_prog4a, 3); 215 + 
FIND_CLOSEST_DESC_RANGE_CHECK(4, 7, array_prog4a, 2); 216 + FIND_CLOSEST_DESC_RANGE_CHECK(8, 11, array_prog4a, 1); 217 + FIND_CLOSEST_DESC_RANGE_CHECK(12, 20, array_prog4a, 0); 218 + 219 + FIND_CLOSEST_DESC_RANGE_CHECK(-3, 4, array_prog4b, 3); 220 + FIND_CLOSEST_DESC_RANGE_CHECK(5, 8, array_prog4b, 2); 221 + FIND_CLOSEST_DESC_RANGE_CHECK(9, 12, array_prog4b, 1); 222 + FIND_CLOSEST_DESC_RANGE_CHECK(13, 20, array_prog4b, 0); 223 + } 224 + 225 + static struct kunit_case __refdata util_macros_test_cases[] = { 226 + KUNIT_CASE(test_find_closest), 227 + KUNIT_CASE(test_find_closest_descending), 228 + {} 229 + }; 230 + 231 + static struct kunit_suite util_macros_test_suite = { 232 + .name = "util_macros.h", 233 + .test_cases = util_macros_test_cases, 234 + }; 235 + 236 + kunit_test_suites(&util_macros_test_suite); 237 + 238 + MODULE_AUTHOR("Alexandru Ardelean <aardelean@baylibre.com>"); 239 + MODULE_DESCRIPTION("Test cases for util_macros.h helpers"); 240 + MODULE_LICENSE("GPL");
+27 -35
mm/util.c
··· 45 45 EXPORT_SYMBOL(kfree_const); 46 46 47 47 /** 48 + * __kmemdup_nul - Create a NUL-terminated string from @s, which might be unterminated. 49 + * @s: The data to copy 50 + * @len: The size of the data, not including the NUL terminator 51 + * @gfp: the GFP mask used in the kmalloc() call when allocating memory 52 + * 53 + * Return: newly allocated copy of @s with NUL-termination or %NULL in 54 + * case of error 55 + */ 56 + static __always_inline char *__kmemdup_nul(const char *s, size_t len, gfp_t gfp) 57 + { 58 + char *buf; 59 + 60 + /* '+1' for the NUL terminator */ 61 + buf = kmalloc_track_caller(len + 1, gfp); 62 + if (!buf) 63 + return NULL; 64 + 65 + memcpy(buf, s, len); 66 + /* Ensure the buf is always NUL-terminated, regardless of @s. */ 67 + buf[len] = '\0'; 68 + return buf; 69 + } 70 + 71 + /** 48 72 * kstrdup - allocate space for and copy an existing string 49 73 * @s: the string to duplicate 50 74 * @gfp: the GFP mask used in the kmalloc() call when allocating memory ··· 78 54 noinline 79 55 char *kstrdup(const char *s, gfp_t gfp) 80 56 { 81 - size_t len; 82 - char *buf; 83 - 84 - if (!s) 85 - return NULL; 86 - 87 - len = strlen(s) + 1; 88 - buf = kmalloc_track_caller(len, gfp); 89 - if (buf) 90 - memcpy(buf, s, len); 91 - return buf; 57 + return s ? __kmemdup_nul(s, strlen(s), gfp) : NULL; 92 58 } 93 59 EXPORT_SYMBOL(kstrdup); 94 60 ··· 114 100 */ 115 101 char *kstrndup(const char *s, size_t max, gfp_t gfp) 116 102 { 117 - size_t len; 118 - char *buf; 119 - 120 - if (!s) 121 - return NULL; 122 - 123 - len = strnlen(s, max); 124 - buf = kmalloc_track_caller(len+1, gfp); 125 - if (buf) { 126 - memcpy(buf, s, len); 127 - buf[len] = '\0'; 128 - } 129 - return buf; 103 + return s ? 
__kmemdup_nul(s, strnlen(s, max), gfp) : NULL; 130 104 } 131 105 EXPORT_SYMBOL(kstrndup); 132 106 ··· 188 186 */ 189 187 char *kmemdup_nul(const char *s, size_t len, gfp_t gfp) 190 188 { 191 - char *buf; 192 - 193 - if (!s) 194 - return NULL; 195 - 196 - buf = kmalloc_track_caller(len + 1, gfp); 197 - if (buf) { 198 - memcpy(buf, s, len); 199 - buf[len] = '\0'; 200 - } 201 - return buf; 189 + return s ? __kmemdup_nul(s, len, gfp) : NULL; 202 190 } 203 191 EXPORT_SYMBOL(kmemdup_nul); 204 192
+2 -2
samples/hw_breakpoint/data_breakpoint.c
··· 52 52 attr.bp_type = HW_BREAKPOINT_W; 53 53 54 54 sample_hbp = register_wide_hw_breakpoint(&attr, sample_hbp_handler, NULL); 55 - if (IS_ERR((void __force *)sample_hbp)) { 56 - ret = PTR_ERR((void __force *)sample_hbp); 55 + if (IS_ERR_PCPU(sample_hbp)) { 56 + ret = PTR_ERR_PCPU(sample_hbp); 57 57 goto fail; 58 58 } 59 59
+16 -21
scripts/checkpatch.pl
··· 3209 3209 3210 3210 # Check Fixes: styles is correct 3211 3211 if (!$in_header_lines && 3212 - $line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) { 3213 - my $orig_commit = ""; 3214 - my $id = "0123456789ab"; 3215 - my $title = "commit title"; 3216 - my $tag_case = 1; 3217 - my $tag_space = 1; 3218 - my $id_length = 1; 3219 - my $id_case = 1; 3212 + $line =~ /^\s*(fixes:?)\s*(?:commit\s*)?([0-9a-f]{5,40})(?:\s*($balanced_parens))?/i) { 3213 + my $tag = $1; 3214 + my $orig_commit = $2; 3215 + my $title; 3220 3216 my $title_has_quotes = 0; 3221 3217 $fixes_tag = 1; 3222 - 3223 - if ($line =~ /(\s*fixes:?)\s+([0-9a-f]{5,})\s+($balanced_parens)/i) { 3224 - my $tag = $1; 3225 - $orig_commit = $2; 3226 - $title = $3; 3227 - 3228 - $tag_case = 0 if $tag eq "Fixes:"; 3229 - $tag_space = 0 if ($line =~ /^fixes:? [0-9a-f]{5,} ($balanced_parens)/i); 3230 - 3231 - $id_length = 0 if ($orig_commit =~ /^[0-9a-f]{12}$/i); 3232 - $id_case = 0 if ($orig_commit !~ /[A-F]/); 3233 - 3218 + if (defined $3) { 3234 3219 # Always strip leading/trailing parens then double quotes if existing 3235 - $title = substr($title, 1, -1); 3220 + $title = substr($3, 1, -1); 3236 3221 if ($title =~ /^".*"$/) { 3237 3222 $title = substr($title, 1, -1); 3238 3223 $title_has_quotes = 1; 3239 3224 } 3225 + } else { 3226 + $title = "commit title" 3240 3227 } 3241 3228 3229 + 3230 + my $tag_case = not ($tag eq "Fixes:"); 3231 + my $tag_space = not ($line =~ /^fixes:? [0-9a-f]{5,40} ($balanced_parens)/i); 3232 + 3233 + my $id_length = not ($orig_commit =~ /^[0-9a-f]{12}$/i); 3234 + my $id_case = not ($orig_commit !~ /[A-F]/); 3235 + 3236 + my $id = "0123456789ab"; 3242 3237 my ($cid, $ctitle) = git_commit_info($orig_commit, $id, 3243 3238 $title); 3244 3239
+6 -1
scripts/decode_stacktrace.sh
··· 311 311 parse_symbol # modifies $symbol 312 312 313 313 # Add up the line number to the symbol 314 - echo "${words[@]}" "$symbol $module" 314 + if [[ -z ${module} ]] 315 + then 316 + echo "${words[@]}" "$symbol" 317 + else 318 + echo "${words[@]}" "$symbol $module" 319 + fi 315 320 } 316 321 317 322 while read line; do
+3
scripts/gdb/linux/modules.py
··· 19 19 module_type = utils.CachedType("struct module") 20 20 21 21 22 + def has_modules(): 23 + return utils.gdb_eval_or_none("modules") is not None 24 + 22 25 def module_list(): 23 26 global module_type 24 27 modules = utils.gdb_eval_or_none("modules")
+3
scripts/gdb/linux/symbols.py
··· 178 178 179 179 self.load_all_symbols() 180 180 181 + if not modules.has_modules(): 182 + return 183 + 181 184 if hasattr(gdb, 'Breakpoint'): 182 185 if self.breakpoint is not None: 183 186 self.breakpoint.delete()
+33
scripts/spelling.txt
··· 141 141 anonynous||anonymous 142 142 anway||anyway 143 143 aplication||application 144 + apeared||appeared 144 145 appearence||appearance 145 146 applicaion||application 146 147 appliction||application ··· 156 155 aquainted||acquainted 157 156 aquired||acquired 158 157 aquisition||acquisition 158 + aquires||acquires 159 159 arbitary||arbitrary 160 160 architechture||architecture 161 161 archtecture||architecture ··· 187 185 asssert||assert 188 186 assum||assume 189 187 assumtpion||assumption 188 + asume||assume 190 189 asuming||assuming 191 190 asycronous||asynchronous 192 191 asychronous||asynchronous 193 192 asynchnous||asynchronous 193 + asynchrnous||asynchronous 194 194 asynchronus||asynchronous 195 195 asynchromous||asynchronous 196 196 asymetric||asymmetric ··· 273 269 caculation||calculation 274 270 cadidate||candidate 275 271 cahces||caches 272 + calcluate||calculate 276 273 calender||calendar 277 274 calescing||coalescing 278 275 calibraiton||calibration ··· 336 331 circumvernt||circumvent 337 332 claread||cleared 338 333 clared||cleared 334 + clearify||clarify 339 335 closeing||closing 340 336 clustred||clustered 341 337 cnfiguration||configuration ··· 385 379 comunicate||communicate 386 380 comunication||communication 387 381 conbination||combination 382 + concurent||concurrent 388 383 conditionaly||conditionally 389 384 conditon||condition 390 385 condtion||condition 391 386 condtional||conditional 392 387 conected||connected 393 388 conector||connector 389 + configed||configured 394 390 configration||configuration 395 391 configred||configured 396 392 configuartion||configuration ··· 402 394 configuraton||configuration 403 395 configuretion||configuration 404 396 configutation||configuration 397 + congiuration||configuration 405 398 conider||consider 406 399 conjuction||conjunction 407 400 connecetd||connected ··· 412 403 connnections||connections 413 404 consistancy||consistency 414 405 consistant||consistent 406 + consits||consists 415 407 
containes||contains 416 408 containts||contains 417 409 contaisn||contains ··· 462 452 decompres||decompress 463 453 decsribed||described 464 454 decription||description 455 + detault||default 465 456 dectected||detected 466 457 defailt||default 467 458 deferal||deferral ··· 498 487 desactivate||deactivate 499 488 desciptor||descriptor 500 489 desciptors||descriptors 490 + descritpor||descriptor 501 491 descripto||descriptor 502 492 descripton||description 503 493 descrition||description ··· 613 601 encorporating||incorporating 614 602 encrupted||encrypted 615 603 encrypiton||encryption 604 + encryped||encrypted 616 605 encryptio||encryption 617 606 endianess||endianness 618 607 enpoint||endpoint ··· 643 630 evalute||evaluate 644 631 evalutes||evaluates 645 632 evalution||evaluation 633 + evaulated||evaluated 646 634 excecutable||executable 647 635 excceed||exceed 648 636 exceded||exceeded ··· 664 650 exlcuding||excluding 665 651 exlcusive||exclusive 666 652 exlusive||exclusive 653 + exlicitly||explicitly 667 654 exmaple||example 668 655 expecially||especially 669 656 experies||expires ··· 674 659 explictely||explicitly 675 660 explictly||explicitly 676 661 expresion||expression 662 + exprienced||experienced 677 663 exprimental||experimental 678 664 extened||extended 679 665 exteneded||extended ··· 850 834 informtion||information 851 835 infromation||information 852 836 ingore||ignore 837 + inheritence||inheritance 853 838 inital||initial 854 839 initalized||initialized 855 840 initalised||initialized ··· 895 878 interuupt||interrupt 896 879 interupt||interrupt 897 880 interupts||interrupts 881 + interurpt||interrupt 898 882 interrface||interface 899 883 interrrupt||interrupt 900 884 interrup||interrupt ··· 943 925 juse||just 944 926 jus||just 945 927 kown||known 928 + lable||label 946 929 langage||language 947 930 langauage||language 948 931 langauge||language ··· 1014 995 micropone||microphone 1015 996 microprocesspr||microprocessor 1016 997 
migrateable||migratable 998 + miliseconds||milliseconds 1017 999 millenium||millennium 1018 1000 milliseonds||milliseconds 1019 1001 minimim||minimum ··· 1152 1132 paramameters||parameters 1153 1133 paramaters||parameters 1154 1134 paramater||parameter 1135 + paramenters||parameters 1155 1136 parametes||parameters 1156 1137 parametised||parametrised 1157 1138 paramter||parameter ··· 1198 1177 posible||possible 1199 1178 positon||position 1200 1179 possibilites||possibilities 1180 + postion||position 1201 1181 potocol||protocol 1202 1182 powerfull||powerful 1203 1183 pramater||parameter 1184 + preambule||preamble 1204 1185 preamle||preamble 1205 1186 preample||preamble 1206 1187 preapre||prepare ··· 1292 1269 reasearcher||researcher 1293 1270 reasearchers||researchers 1294 1271 reasearch||research 1272 + recalcualte||recalculate 1295 1273 receieve||receive 1296 1274 recepient||recipient 1297 1275 recevied||received ··· 1315 1291 refence||reference 1316 1292 refered||referred 1317 1293 referenace||reference 1294 + refererence||reference 1318 1295 refering||referring 1319 1296 refernces||references 1320 1297 refernnce||reference ··· 1340 1315 remoote||remote 1341 1316 remore||remote 1342 1317 removeable||removable 1318 + repective||respective 1343 1319 repectively||respectively 1344 1320 replacable||replaceable 1345 1321 replacments||replacements 1346 1322 replys||replies 1347 1323 reponse||response 1348 1324 representaion||representation 1325 + repsonse||response 1349 1326 reqeust||request 1350 1327 reqister||register 1351 1328 requed||requeued ··· 1389 1362 reuqest||request 1390 1363 reutnred||returned 1391 1364 revsion||revision 1365 + rewritting||rewriting 1392 1366 rmeoved||removed 1393 1367 rmeove||remove 1394 1368 rmeoves||removes ··· 1472 1444 souce||source 1473 1445 speach||speech 1474 1446 specfic||specific 1447 + specfication||specification 1475 1448 specfield||specified 1476 1449 speciefied||specified 1477 1450 specifc||specific ··· 1573 1544 syste||system 
1574 1545 sytem||system 1575 1546 sythesis||synthesis 1547 + tagert||target 1576 1548 taht||that 1577 1549 tained||tainted 1578 1550 tarffic||traffic ··· 1604 1574 tiggered||triggered 1605 1575 tipically||typically 1606 1576 timeing||timing 1577 + timming||timing 1607 1578 timout||timeout 1608 1579 tmis||this 1609 1580 toogle||toggle ··· 1628 1597 transistioned||transitioned 1629 1598 transmittd||transmitted 1630 1599 transormed||transformed 1600 + trasaction||transaction 1631 1601 trasfer||transfer 1632 1602 trasmission||transmission 1603 + trasmitter||transmitter 1633 1604 treshold||threshold 1634 1605 triggerd||triggered 1635 1606 trigerred||triggered
+2 -2
security/lsm_audit.c
··· 207 207 BUILD_BUG_ON(sizeof(a->u) > sizeof(void *)*2); 208 208 209 209 audit_log_format(ab, " pid=%d comm=", task_tgid_nr(current)); 210 - audit_log_untrustedstring(ab, memcpy(comm, current->comm, sizeof(comm))); 210 + audit_log_untrustedstring(ab, get_task_comm(comm, current)); 211 211 212 212 switch (a->type) { 213 213 case LSM_AUDIT_DATA_NONE: ··· 302 302 char comm[sizeof(tsk->comm)]; 303 303 audit_log_format(ab, " opid=%d ocomm=", pid); 304 304 audit_log_untrustedstring(ab, 305 - memcpy(comm, tsk->comm, sizeof(comm))); 305 + get_task_comm(comm, tsk)); 306 306 } 307 307 } 308 308 break;
+1 -1
security/selinux/selinuxfs.c
··· 708 708 if (new_value) { 709 709 char comm[sizeof(current->comm)]; 710 710 711 - memcpy(comm, current->comm, sizeof(comm)); 711 + strscpy(comm, current->comm); 712 712 pr_err("SELinux: %s (%d) set checkreqprot to 1. This is no longer supported.\n", 713 713 comm, current->pid); 714 714 }
+2
tools/bpf/bpftool/pids.c
··· 54 54 ref = &refs->refs[refs->ref_cnt]; 55 55 ref->pid = e->pid; 56 56 memcpy(ref->comm, e->comm, sizeof(ref->comm)); 57 + ref->comm[sizeof(ref->comm) - 1] = '\0'; 57 58 refs->ref_cnt++; 58 59 59 60 return; ··· 78 77 ref = &refs->refs[0]; 79 78 ref->pid = e->pid; 80 79 memcpy(ref->comm, e->comm, sizeof(ref->comm)); 80 + ref->comm[sizeof(ref->comm) - 1] = '\0'; 81 81 refs->ref_cnt = 1; 82 82 refs->has_bpf_cookie = e->has_bpf_cookie; 83 83 refs->bpf_cookie = e->bpf_cookie;
+1 -1
tools/include/linux/compiler-gcc.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #ifndef _TOOLS_LINUX_COMPILER_H_ 3 - #error "Please don't include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead." 3 + #error "Please do not include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead." 4 4 #endif 5 5 6 6 /*
-2
tools/lib/list_sort.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/kernel.h> 3 2 #include <linux/compiler.h> 4 3 #include <linux/export.h> 5 - #include <linux/string.h> 6 4 #include <linux/list_sort.h> 7 5 #include <linux/list.h> 8 6
+2 -9
tools/perf/check-header_ignore_hunks/lib/list_sort.c
··· 1 - @@ -1,5 +1,6 @@ 2 - // SPDX-License-Identifier: GPL-2.0 3 - #include <linux/kernel.h> 4 - +#include <linux/bug.h> 5 - #include <linux/compiler.h> 6 - #include <linux/export.h> 7 - #include <linux/string.h> 8 - @@ -52,6 +53,7 @@ 1 + @@ -50,6 +50,7 @@ 9 2 struct list_head *a, struct list_head *b) 10 3 { 11 4 struct list_head *tail = head; ··· 6 13 7 14 for (;;) { 8 15 /* if equal, take 'a' -- important for sort stability */ 9 - @@ -77,6 +79,15 @@ 16 + @@ -75,6 +76,15 @@ 10 17 /* Finish linking remainder of list b on to tail */ 11 18 tail->next = b; 12 19 do {
+9 -5
tools/testing/shared/linux.c
··· 96 96 p = node; 97 97 } else { 98 98 pthread_mutex_unlock(&cachep->lock); 99 - if (cachep->align) 100 - posix_memalign(&p, cachep->align, cachep->size); 101 - else 99 + if (cachep->align) { 100 + if (posix_memalign(&p, cachep->align, cachep->size) < 0) 101 + return NULL; 102 + } else { 102 103 p = malloc(cachep->size); 104 + } 105 + 103 106 if (cachep->ctor) 104 107 cachep->ctor(p); 105 108 else if (gfp & __GFP_ZERO) ··· 198 195 } 199 196 200 197 if (cachep->align) { 201 - posix_memalign(&p[i], cachep->align, 202 - cachep->size); 198 + if (posix_memalign(&p[i], cachep->align, 199 + cachep->size) < 0) 200 + break; 203 201 } else { 204 202 p[i] = malloc(cachep->size); 205 203 if (!p[i])