Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge more updates from Andrew Morton:

- a few block updates that fell in my lap

- lib/ updates

- checkpatch

- autofs

- ipc

- a ton of misc other things

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (100 commits)
mm: split gfp_mask and mapping flags into separate fields
fs: use mapping_set_error instead of opencoded set_bit
treewide: remove redundant #include <linux/kconfig.h>
hung_task: allow hung_task_panic when hung_task_warnings is 0
kthread: add kerneldoc for kthread_create()
kthread: better support freezable kthread workers
kthread: allow to modify delayed kthread work
kthread: allow to cancel kthread work
kthread: initial support for delayed kthread work
kthread: detect when a kthread work is used by more workers
kthread: add kthread_destroy_worker()
kthread: add kthread_create_worker*()
kthread: allow to call __kthread_create_on_node() with va_list args
kthread/smpboot: do not park in kthread_create_on_cpu()
kthread: kthread worker API cleanup
kthread: rename probe_kthread_data() to kthread_probe_data()
scripts/tags.sh: enable code completion in VIM
mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping
kdump, vmcoreinfo: report memory sections virtual addresses
ipc/sem.c: add cond_resched in exit_sme
...

+2299 -1060
+17
Documentation/DMA-attributes.txt
··· 126 126 127 127 NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM, 128 128 though ARM64 patches will likely be posted soon. 129 + 130 + DMA_ATTR_NO_WARN 131 + ---------------- 132 + 133 + This tells the DMA-mapping subsystem to suppress allocation failure reports 134 + (similarly to __GFP_NOWARN). 135 + 136 + On some architectures allocation failures are reported with error messages 137 + to the system logs. Although this can help to identify and debug problems, 138 + drivers which handle failures (eg, retry later) have no problems with them, 139 + and can actually flood the system logs with error messages that aren't any 140 + problem at all, depending on the implementation of the retry mechanism. 141 + 142 + So, this provides a way for drivers to avoid those error messages on calls 143 + where allocation failures are not a problem, and shouldn't bother the logs. 144 + 145 + NOTE: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.
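The DMA_ATTR_NO_WARN behaviour described above amounts to a single guard on the allocation-failure reporting path. The following is a minimal userspace sketch of that guard, not the kernel implementation; the bit value chosen for the attribute is illustrative only, and the function name is invented for this example.

```c
#include <stdbool.h>

/* Illustrative model of the DMA_ATTR_NO_WARN check: the failure path
 * reports to the log only when the caller did not opt out. The bit
 * position here is an assumption for demonstration purposes. */
#define DMA_ATTR_NO_WARN (1UL << 8)

/* Returns whether an allocation failure with these attrs would be
 * reported (the kernel path would call dev_info() under
 * printk_ratelimit() instead of returning a flag). */
static bool alloc_failure_warns(unsigned long attrs)
{
    return !(attrs & DMA_ATTR_NO_WARN);
}
```

A driver that retries failed mappings on its own would pass the attribute and keep the logs quiet; all other callers see the old behaviour unchanged.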
+1 -1
Documentation/RCU/lockdep-splat.txt
··· 57 57 [<ffffffff817db154>] kernel_thread_helper+0x4/0x10 58 58 [<ffffffff81066430>] ? finish_task_switch+0x80/0x110 59 59 [<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe 60 - [<ffffffff81097510>] ? __init_kthread_worker+0x70/0x70 60 + [<ffffffff81097510>] ? __kthread_init_worker+0x70/0x70 61 61 [<ffffffff817db150>] ? gs_change+0xb/0xb 62 62 63 63 Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows:
+9
Documentation/dev-tools/kmemleak.rst
··· 162 162 - ``kmemleak_alloc_recursive`` - as kmemleak_alloc but checks the recursiveness 163 163 - ``kmemleak_free_recursive`` - as kmemleak_free but checks the recursiveness 164 164 165 + The following functions take a physical address as the object pointer 166 + and only perform the corresponding action if the address has a lowmem 167 + mapping: 168 + 169 + - ``kmemleak_alloc_phys`` 170 + - ``kmemleak_free_part_phys`` 171 + - ``kmemleak_not_leak_phys`` 172 + - ``kmemleak_ignore_phys`` 173 + 165 174 Dealing with false positives/negatives 166 175 -------------------------------------- 167 176
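The new `*_phys` helpers act only when the physical address has a lowmem mapping, since `__va()` is meaningless for highmem addresses. A toy userland model of that filter, with a made-up lowmem boundary (the kernel compares against the real lowmem limit), and returning a flag purely for illustration (the real helpers return void):

```c
#include <stdbool.h>

#define FAKE_LOWMEM_LIMIT 0x10000000UL   /* hypothetical end of lowmem */

/* kmemleak_alloc_phys() analogue: highmem addresses are silently
 * skipped; lowmem addresses are forwarded to the virtual-address API. */
static bool kmemleak_alloc_phys_model(unsigned long phys)
{
    if (phys >= FAKE_LOWMEM_LIMIT)
        return false;                    /* no lowmem mapping: no-op */
    /* the kernel would call kmemleak_alloc(__va(phys), ...) here */
    return true;
}
```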
+42 -29
Documentation/filesystems/autofs4-mount-control.txt
··· 179 179 * including this struct */ 180 180 __s32 ioctlfd; /* automount command fd */ 181 181 182 - __u32 arg1; /* Command parameters */ 183 - __u32 arg2; 182 + union { 183 + struct args_protover protover; 184 + struct args_protosubver protosubver; 185 + struct args_openmount openmount; 186 + struct args_ready ready; 187 + struct args_fail fail; 188 + struct args_setpipefd setpipefd; 189 + struct args_timeout timeout; 190 + struct args_requester requester; 191 + struct args_expire expire; 192 + struct args_askumount askumount; 193 + struct args_ismountpoint ismountpoint; 194 + }; 184 195 185 196 char path[0]; 186 197 }; ··· 203 192 mount point file descriptor, and when requesting the uid and gid of the 204 193 last successful mount on a directory within the autofs file system. 205 194 206 - The fields arg1 and arg2 are used to communicate parameters and results of 207 - calls made as described below. 195 + The union is used to communicate parameters and results of calls made 196 + as described below. 208 197 209 198 The path field is used to pass a path where it is needed and the size field 210 199 is used account for the increased structure length when translating the ··· 256 245 Get the major and minor version of the autofs4 protocol version understood 257 246 by loaded module. This call requires an initialized struct autofs_dev_ioctl 258 247 with the ioctlfd field set to a valid autofs mount point descriptor 259 - and sets the requested version number in structure field arg1. These 260 - commands return 0 on success or one of the negative error codes if 261 - validation fails. 248 + and sets the requested version number in version field of struct args_protover 249 + or sub_version field of struct args_protosubver. These commands return 250 + 0 on success or one of the negative error codes if validation fails. 
262 251 263 252 264 253 AUTOFS_DEV_IOCTL_OPENMOUNT and AUTOFS_DEV_IOCTL_CLOSEMOUNT ··· 267 256 Obtain and release a file descriptor for an autofs managed mount point 268 257 path. The open call requires an initialized struct autofs_dev_ioctl with 269 258 the path field set and the size field adjusted appropriately as well 270 - as the arg1 field set to the device number of the autofs mount. The 271 - device number can be obtained from the mount options shown in 272 - /proc/mounts. The close call requires an initialized struct 259 + as the devid field of struct args_openmount set to the device number of 260 + the autofs mount. The device number can be obtained from the mount options 261 + shown in /proc/mounts. The close call requires an initialized struct 273 262 autofs_dev_ioct with the ioctlfd field set to the descriptor obtained 274 263 from the open call. The release of the file descriptor can also be done 275 264 with close(2) so any open descriptors will also be closed at process exit. ··· 283 272 Return mount and expire result status from user space to the kernel. 284 273 Both of these calls require an initialized struct autofs_dev_ioctl 285 274 with the ioctlfd field set to the descriptor obtained from the open 286 - call and the arg1 field set to the wait queue token number, received 287 - by user space in the foregoing mount or expire request. The arg2 field 288 - is set to the status to be returned. For the ready call this is always 289 - 0 and for the fail call it is set to the errno of the operation. 275 + call and the token field of struct args_ready or struct args_fail set 276 + to the wait queue token number, received by user space in the foregoing 277 + mount or expire request. The status field of struct args_fail is set to 278 + the errno of the operation. It is set to 0 on success. 
290 279 291 280 292 281 AUTOFS_DEV_IOCTL_SETPIPEFD_CMD ··· 301 290 302 291 The call requires an initialized struct autofs_dev_ioctl with the 303 292 ioctlfd field set to the descriptor obtained from the open call and 304 - the arg1 field set to descriptor of the pipe. On success the call 305 - also sets the process group id used to identify the controlling process 306 - (eg. the owning automount(8) daemon) to the process group of the caller. 293 + the pipefd field of struct args_setpipefd set to descriptor of the pipe. 294 + On success the call also sets the process group id used to identify the 295 + controlling process (eg. the owning automount(8) daemon) to the process 296 + group of the caller. 307 297 308 298 309 299 AUTOFS_DEV_IOCTL_CATATONIC_CMD ··· 335 323 336 324 The call requires an initialized struct autofs_dev_ioctl with the path 337 325 field set to the mount point in question and the size field adjusted 338 - appropriately as well as the arg1 field set to the device number of the 339 - containing autofs mount. Upon return the struct field arg1 contains the 340 - uid and arg2 the gid. 326 + appropriately. Upon return the uid field of struct args_requester contains 327 + the uid and gid field the gid. 341 328 342 329 When reconstructing an autofs mount tree with active mounts we need to 343 330 re-connect to mounts that may have used the original process uid and ··· 354 343 The call requires an initialized struct autofs_dev_ioctl with the 355 344 ioctlfd field set to the descriptor obtained from the open call. In 356 345 addition an immediate expire, independent of the mount timeout, can be 357 - requested by setting the arg1 field to 1. If no expire candidates can 358 - be found the ioctl returns -1 with errno set to EAGAIN. 346 + requested by setting the how field of struct args_expire to 1. If no 347 + expire candidates can be found the ioctl returns -1 with errno set to 348 + EAGAIN. 
359 349 360 350 This call causes the kernel module to check the mount corresponding 361 351 to the given ioctlfd for mounts that can be expired, issues an expire ··· 369 357 370 358 The call requires an initialized struct autofs_dev_ioctl with the 371 359 ioctlfd field set to the descriptor obtained from the open call and 372 - it returns the result in the arg1 field, 1 for busy and 0 otherwise. 360 + it returns the result in the may_umount field of struct args_askumount, 361 + 1 for busy and 0 otherwise. 373 362 374 363 375 364 AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD ··· 382 369 possible variations. Both use the path field set to the path of the mount 383 370 point to check and the size field adjusted appropriately. One uses the 384 371 ioctlfd field to identify a specific mount point to check while the other 385 - variation uses the path and optionally arg1 set to an autofs mount type. 386 - The call returns 1 if this is a mount point and sets arg1 to the device 387 - number of the mount and field arg2 to the relevant super block magic 388 - number (described below) or 0 if it isn't a mountpoint. In both cases 389 - the the device number (as returned by new_encode_dev()) is returned 390 - in field arg1. 372 + variation uses the path and optionally in.type field of struct args_ismountpoint 373 + set to an autofs mount type. The call returns 1 if this is a mount point 374 + and sets out.devid field to the device number of the mount and out.magic 375 + field to the relevant super block magic number (described below) or 0 if 376 + it isn't a mountpoint. In both cases the the device number (as returned 377 + by new_encode_dev()) is returned in out.devid field. 391 378 392 379 If supplied with a file descriptor we're looking for a specific mount, 393 380 not necessarily at the top of the mounted stack. In this case the path
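The union that replaces the old arg1/arg2 pair can be sketched as follows. This is a condensed illustration: only three of the eleven argument structs are shown, the header fields are trimmed, and field sizes are assumptions; the authoritative layout is the one in the diff above (include/uapi/linux/auto_dev-ioctl.h).

```c
#include <stdint.h>

struct args_protover { uint32_t version; };
struct args_ready    { uint32_t token; };
struct args_fail     { uint32_t token; int32_t status; };

/* Condensed sketch of struct autofs_dev_ioctl after this change. */
struct autofs_dev_ioctl_sketch {
    int32_t ioctlfd;             /* automount command fd */
    union {                      /* replaces the old arg1/arg2 pair */
        struct args_protover protover;
        struct args_ready ready;
        struct args_fail fail;
    };
    char path[];                 /* trailing path, as before */
};
```

Each command interprets the same bytes through the member it owns, so the union costs nothing over the old two-u32 scheme beyond the size of its largest member, while giving every parameter a descriptive name.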
+4 -4
Documentation/filesystems/autofs4.txt
··· 203 203 Mountpoint expiry 204 204 ----------------- 205 205 206 - The VFS has a mechansim for automatically expiring unused mounts, 206 + The VFS has a mechanism for automatically expiring unused mounts, 207 207 much as it can expire any unused dentry information from the dcache. 208 - This is guided by the MNT_SHRINKABLE flag. This only applies to 208 + This is guided by the MNT_SHRINKABLE flag. This only applies to 209 209 mounts that were created by `d_automount()` returning a filesystem to be 210 210 mounted. As autofs doesn't return such a filesystem but leaves the 211 211 mounting to the automount daemon, it must involve the automount daemon ··· 298 298 autofs knows whether a process requesting some operation is the daemon 299 299 or not based on its process-group id number (see getpgid(1)). 300 300 301 - When an autofs filesystem it mounted the pgid of the mounting 301 + When an autofs filesystem is mounted the pgid of the mounting 302 302 processes is recorded unless the "pgrp=" option is given, in which 303 303 case that number is recorded instead. Any request arriving from a 304 304 process in that process group is considered to come from the daemon. ··· 450 450 numbers for existing filesystems can be found in 451 451 `/proc/self/mountinfo`. 452 452 - **AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD**: same as `close(ioctlfd)`. 453 - - **AUTOFS_DEV_IOCTL_SETPIPEFD_CMD**: if the filesystem is in 453 + - **AUTOFS_DEV_IOCTL_SETPIPEFD_CMD**: if the filesystem is in 454 454 catatonic mode, this can provide the write end of a new pipe 455 455 in `arg1` to re-establish communication with a daemon. The 456 456 process group of the calling process is used to identify the
+36 -14
Documentation/kernel-parameters.txt
··· 33 33 Double-quotes can be used to protect spaces in values, e.g.: 34 34 param="spaces in here" 35 35 36 + cpu lists: 37 + ---------- 38 + 39 + Some kernel parameters take a list of CPUs as a value, e.g. isolcpus, 40 + nohz_full, irqaffinity, rcu_nocbs. The format of this list is: 41 + 42 + <cpu number>,...,<cpu number> 43 + 44 + or 45 + 46 + <cpu number>-<cpu number> 47 + (must be a positive range in ascending order) 48 + 49 + or a mixture 50 + 51 + <cpu number>,...,<cpu number>-<cpu number> 52 + 53 + Note that for the special case of a range one can split the range into equal 54 + sized groups and for each group use some amount from the beginning of that 55 + group: 56 + 57 + <cpu number>-cpu number>:<used size>/<group size> 58 + 59 + For example one can add to the command line following parameter: 60 + 61 + isolcpus=1,2,10-20,100-2000:2/25 62 + 63 + where the final item represents CPUs 100,101,125,126,150,151,... 64 + 65 + 66 + 36 67 This document may not be entirely up to date and comprehensive. The command 37 68 "modinfo -p ${modulename}" shows a current list of all parameters of a loadable 38 69 module. Loadable modules, after being loaded into the running kernel, also ··· 1820 1789 See Documentation/filesystems/nfs/nfsroot.txt. 1821 1790 1822 1791 irqaffinity= [SMP] Set the default irq affinity mask 1823 - Format: 1824 - <cpu number>,...,<cpu number> 1825 - or 1826 - <cpu number>-<cpu number> 1827 - (must be a positive range in ascending order) 1828 - or a mixture 1829 - <cpu number>,...,<cpu number>-<cpu number> 1792 + The argument is a cpu list, as described above. 1830 1793 1831 1794 irqfixup [HW] 1832 1795 When an interrupt is not handled search all handlers ··· 1837 1812 Format: <RDP>,<reset>,<pci_scan>,<verbosity> 1838 1813 1839 1814 isolcpus= [KNL,SMP] Isolate CPUs from the general scheduler. 
1840 - Format: 1841 - <cpu number>,...,<cpu number> 1842 - or 1843 - <cpu number>-<cpu number> 1844 - (must be a positive range in ascending order) 1845 - or a mixture 1846 - <cpu number>,...,<cpu number>-<cpu number> 1815 + The argument is a cpu list, as described above. 1847 1816 1848 1817 This option can be used to specify one or more CPUs 1849 1818 to isolate from the general SMP balancing and scheduling ··· 2699 2680 Default: on 2700 2681 2701 2682 nohz_full= [KNL,BOOT] 2683 + The argument is a cpu list, as described above. 2702 2684 In kernels built with CONFIG_NO_HZ_FULL=y, set 2703 2685 the specified list of CPUs whose tick will be stopped 2704 2686 whenever possible. The boot CPU will be forced outside ··· 3305 3285 See Documentation/blockdev/ramdisk.txt. 3306 3286 3307 3287 rcu_nocbs= [KNL] 3288 + The argument is a cpu list, as described above. 3289 + 3308 3290 In kernels built with CONFIG_RCU_NOCB_CPU=y, set 3309 3291 the specified list of CPUs to be no-callback CPUs. 3310 3292 Invocation of these CPUs' RCU callbacks will
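The grouped-range syntax documented above (`<a>-<b>:<used size>/<group size>`) splits the range into consecutive blocks of `group size` CPUs and takes the first `used size` CPUs of each block. A standalone re-derivation of that membership rule (not the kernel's bitmap parsing code):

```c
#include <stdbool.h>

/* True if cpu is selected by the list item "a-b:used_size/group_size". */
static bool cpu_in_grouped_range(unsigned int cpu,
                                 unsigned int a, unsigned int b,
                                 unsigned int used_size,
                                 unsigned int group_size)
{
    if (cpu < a || cpu > b)
        return false;
    /* position of this cpu within its group, counting from the start */
    return (cpu - a) % group_size < used_size;
}
```

With the documentation's example, `100-2000:2/25`, this selects 100, 101, 125, 126, 150, 151, and so on.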
-1
arch/arm/include/asm/trusted_foundations.h
··· 26 26 #ifndef __ASM_ARM_TRUSTED_FOUNDATIONS_H 27 27 #define __ASM_ARM_TRUSTED_FOUNDATIONS_H 28 28 29 - #include <linux/kconfig.h> 30 29 #include <linux/printk.h> 31 30 #include <linux/bug.h> 32 31 #include <linux/of.h>
+1 -2
arch/arm/kernel/process.c
··· 318 318 319 319 unsigned long arch_randomize_brk(struct mm_struct *mm) 320 320 { 321 - unsigned long range_end = mm->brk + 0x02000000; 322 - return randomize_range(mm->brk, range_end, 0) ? : mm->brk; 321 + return randomize_page(mm->brk, 0x02000000); 323 322 } 324 323 325 324 #ifdef CONFIG_MMU
-1
arch/arm64/include/asm/alternative.h
··· 7 7 #ifndef __ASSEMBLY__ 8 8 9 9 #include <linux/init.h> 10 - #include <linux/kconfig.h> 11 10 #include <linux/types.h> 12 11 #include <linux/stddef.h> 13 12 #include <linux/stringify.h>
+2 -6
arch/arm64/kernel/process.c
··· 372 372 373 373 unsigned long arch_randomize_brk(struct mm_struct *mm) 374 374 { 375 - unsigned long range_end = mm->brk; 376 - 377 375 if (is_compat_task()) 378 - range_end += 0x02000000; 376 + return randomize_page(mm->brk, 0x02000000); 379 377 else 380 - range_end += 0x40000000; 381 - 382 - return randomize_range(mm->brk, range_end, 0) ? : mm->brk; 378 + return randomize_page(mm->brk, 0x40000000); 383 379 }
+14
arch/mips/cavium-octeon/setup.c
··· 267 267 default_machine_crash_shutdown(regs); 268 268 } 269 269 270 + #ifdef CONFIG_SMP 271 + void octeon_crash_smp_send_stop(void) 272 + { 273 + int cpu; 274 + 275 + /* disable watchdogs */ 276 + for_each_online_cpu(cpu) 277 + cvmx_write_csr(CVMX_CIU_WDOGX(cpu_logical_map(cpu)), 0); 278 + } 279 + #endif 280 + 270 281 #endif /* CONFIG_KEXEC */ 271 282 272 283 #ifdef CONFIG_CAVIUM_RESERVE32 ··· 922 911 _machine_kexec_shutdown = octeon_shutdown; 923 912 _machine_crash_shutdown = octeon_crash_shutdown; 924 913 _machine_kexec_prepare = octeon_kexec_prepare; 914 + #ifdef CONFIG_SMP 915 + _crash_smp_send_stop = octeon_crash_smp_send_stop; 916 + #endif 925 917 #endif 926 918 927 919 octeon_user_io_init();
+1
arch/mips/include/asm/kexec.h
··· 45 45 extern unsigned long secondary_kexec_args[4]; 46 46 extern void (*relocated_kexec_smp_wait) (void *); 47 47 extern atomic_t kexec_ready_to_reboot; 48 + extern void (*_crash_smp_send_stop)(void); 48 49 #endif 49 50 #endif 50 51
-1
arch/mips/include/asm/mach-loongson64/loongson.h
··· 14 14 #include <linux/io.h> 15 15 #include <linux/init.h> 16 16 #include <linux/irq.h> 17 - #include <linux/kconfig.h> 18 17 #include <boot_param.h> 19 18 20 19 /* loongson internal northbridge initialization */
+17 -1
arch/mips/kernel/crash.c
··· 47 47 48 48 static void crash_kexec_prepare_cpus(void) 49 49 { 50 + static int cpus_stopped; 50 51 unsigned int msecs; 52 + unsigned int ncpus; 51 53 52 - unsigned int ncpus = num_online_cpus() - 1;/* Excluding the panic cpu */ 54 + if (cpus_stopped) 55 + return; 56 + 57 + ncpus = num_online_cpus() - 1;/* Excluding the panic cpu */ 53 58 54 59 dump_send_ipi(crash_shutdown_secondary); 55 60 smp_wmb(); ··· 69 64 cpu_relax(); 70 65 mdelay(1); 71 66 } 67 + 68 + cpus_stopped = 1; 69 + } 70 + 71 + /* Override the weak function in kernel/panic.c */ 72 + void crash_smp_send_stop(void) 73 + { 74 + if (_crash_smp_send_stop) 75 + _crash_smp_send_stop(); 76 + 77 + crash_kexec_prepare_cpus(); 72 78 } 73 79 74 80 #else /* !defined(CONFIG_SMP) */
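The `cpus_stopped` latch added above makes the CPU-stopping work idempotent: both the new panic-path `crash_smp_send_stop()` and the kexec shutdown path may reach it, but secondary CPUs must only be stopped once. A minimal model of the latch, with a counter standing in for the IPI and wait machinery:

```c
static int stop_rounds;          /* times the stop IPI was actually sent */
static int cpus_stopped_latch;

/* crash_kexec_prepare_cpus() analogue: runs its body at most once. */
static void crash_prepare_cpus_model(void)
{
    if (cpus_stopped_latch)
        return;
    stop_rounds++;               /* the real code sends the stop IPI here */
    cpus_stopped_latch = 1;
}

/* Drive the model the way a panic followed by crash shutdown would. */
static int rounds_after_two_calls(void)
{
    stop_rounds = 0;
    cpus_stopped_latch = 0;
    crash_prepare_cpus_model();  /* from crash_smp_send_stop() */
    crash_prepare_cpus_model();  /* again from the kexec shutdown path */
    return stop_rounds;
}
```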
+1
arch/mips/kernel/machine_kexec.c
··· 25 25 #ifdef CONFIG_SMP 26 26 void (*relocated_kexec_smp_wait) (void *); 27 27 atomic_t kexec_ready_to_reboot = ATOMIC_INIT(0); 28 + void (*_crash_smp_send_stop)(void) = NULL; 28 29 #endif 29 30 30 31 int
-1
arch/mips/math-emu/cp1emu.c
··· 35 35 */ 36 36 #include <linux/sched.h> 37 37 #include <linux/debugfs.h> 38 - #include <linux/kconfig.h> 39 38 #include <linux/percpu-defs.h> 40 39 #include <linux/perf_event.h> 41 40
-1
arch/mips/net/bpf_jit.c
··· 14 14 #include <linux/errno.h> 15 15 #include <linux/filter.h> 16 16 #include <linux/if_vlan.h> 17 - #include <linux/kconfig.h> 18 17 #include <linux/moduleloader.h> 19 18 #include <linux/netdevice.h> 20 19 #include <linux/string.h>
+4 -2
arch/powerpc/kernel/iommu.c
··· 479 479 480 480 /* Handle failure */ 481 481 if (unlikely(entry == DMA_ERROR_CODE)) { 482 - if (printk_ratelimit()) 482 + if (!(attrs & DMA_ATTR_NO_WARN) && 483 + printk_ratelimit()) 483 484 dev_info(dev, "iommu_alloc failed, tbl %p " 484 485 "vaddr %lx npages %lu\n", tbl, vaddr, 485 486 npages); ··· 777 776 mask >> tbl->it_page_shift, align, 778 777 attrs); 779 778 if (dma_handle == DMA_ERROR_CODE) { 780 - if (printk_ratelimit()) { 779 + if (!(attrs & DMA_ATTR_NO_WARN) && 780 + printk_ratelimit()) { 781 781 dev_info(dev, "iommu_alloc failed, tbl %p " 782 782 "vaddr %p npages %d\n", tbl, vaddr, 783 783 npages);
+1 -2
arch/tile/mm/mmap.c
··· 88 88 89 89 unsigned long arch_randomize_brk(struct mm_struct *mm) 90 90 { 91 - unsigned long range_end = mm->brk + 0x02000000; 92 - return randomize_range(mm->brk, range_end, 0) ? : mm->brk; 91 + return randomize_page(mm->brk, 0x02000000); 93 92 }
+1 -2
arch/unicore32/kernel/process.c
··· 295 295 296 296 unsigned long arch_randomize_brk(struct mm_struct *mm) 297 297 { 298 - unsigned long range_end = mm->brk + 0x02000000; 299 - return randomize_range(mm->brk, range_end, 0) ? : mm->brk; 298 + return randomize_page(mm->brk, 0x02000000); 300 299 } 301 300 302 301 /*
+1
arch/x86/include/asm/kexec.h
··· 210 210 211 211 typedef void crash_vmclear_fn(void); 212 212 extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss; 213 + extern void kdump_nmi_shootdown_cpus(void); 213 214 214 215 #endif /* __ASSEMBLY__ */ 215 216
+1
arch/x86/include/asm/smp.h
··· 47 47 void (*smp_cpus_done)(unsigned max_cpus); 48 48 49 49 void (*stop_other_cpus)(int wait); 50 + void (*crash_stop_other_cpus)(void); 50 51 void (*smp_send_reschedule)(int cpu); 51 52 52 53 int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
+19 -3
arch/x86/kernel/crash.c
··· 133 133 disable_local_APIC(); 134 134 } 135 135 136 - static void kdump_nmi_shootdown_cpus(void) 136 + void kdump_nmi_shootdown_cpus(void) 137 137 { 138 138 nmi_shootdown_cpus(kdump_nmi_callback); 139 139 140 140 disable_local_APIC(); 141 141 } 142 142 143 + /* Override the weak function in kernel/panic.c */ 144 + void crash_smp_send_stop(void) 145 + { 146 + static int cpus_stopped; 147 + 148 + if (cpus_stopped) 149 + return; 150 + 151 + if (smp_ops.crash_stop_other_cpus) 152 + smp_ops.crash_stop_other_cpus(); 153 + else 154 + smp_send_stop(); 155 + 156 + cpus_stopped = 1; 157 + } 158 + 143 159 #else 144 - static void kdump_nmi_shootdown_cpus(void) 160 + void crash_smp_send_stop(void) 145 161 { 146 162 /* There are no cpus to shootdown */ 147 163 } ··· 176 160 /* The kernel is broken so disable interrupts */ 177 161 local_irq_disable(); 178 162 179 - kdump_nmi_shootdown_cpus(); 163 + crash_smp_send_stop(); 180 164 181 165 /* 182 166 * VMCLEAR VMCSs loaded on this cpu if needed.
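On x86 the same hook is dispatched through `smp_ops`: `crash_smp_send_stop()` prefers a registered `crash_stop_other_cpus()` callback (the NMI shootdown) and falls back to the ordinary `smp_send_stop()`. A sketch of that dispatch with stub handlers that only record which path ran:

```c
#include <stddef.h>

static int last_path;                     /* 1 = crash hook, 2 = fallback */

static void fake_nmi_shootdown(void) { last_path = 1; }
static void fake_smp_send_stop(void) { last_path = 2; }

/* Trimmed stand-in for struct smp_ops: just the field this patch adds. */
struct smp_ops_model {
    void (*crash_stop_other_cpus)(void);
};

/* crash_smp_send_stop() analogue: use the hook if present, else fall
 * back to the generic stop. Returns which path ran, for illustration. */
static int dispatch_stop(struct smp_ops_model *ops)
{
    if (ops->crash_stop_other_cpus)
        ops->crash_stop_other_cpus();
    else
        fake_smp_send_stop();
    return last_path;
}
```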
+3
arch/x86/kernel/machine_kexec_64.c
··· 337 337 #endif 338 338 vmcoreinfo_append_str("KERNELOFFSET=%lx\n", 339 339 kaslr_offset()); 340 + VMCOREINFO_PAGE_OFFSET(PAGE_OFFSET); 341 + VMCOREINFO_VMALLOC_START(VMALLOC_START); 342 + VMCOREINFO_VMEMMAP_START(VMEMMAP_START); 340 343 } 341 344 342 345 /* arch-dependent functionality related to kexec file-based syscall */
+1 -2
arch/x86/kernel/process.c
··· 509 509 510 510 unsigned long arch_randomize_brk(struct mm_struct *mm) 511 511 { 512 - unsigned long range_end = mm->brk + 0x02000000; 513 - return randomize_range(mm->brk, range_end, 0) ? : mm->brk; 512 + return randomize_page(mm->brk, 0x02000000); 514 513 } 515 514 516 515 /*
+5
arch/x86/kernel/smp.c
··· 32 32 #include <asm/nmi.h> 33 33 #include <asm/mce.h> 34 34 #include <asm/trace/irq_vectors.h> 35 + #include <asm/kexec.h> 36 + 35 37 /* 36 38 * Some notes on x86 processor bugs affecting SMP operation: 37 39 * ··· 344 342 .smp_cpus_done = native_smp_cpus_done, 345 343 346 344 .stop_other_cpus = native_stop_other_cpus, 345 + #if defined(CONFIG_KEXEC_CORE) 346 + .crash_stop_other_cpus = kdump_nmi_shootdown_cpus, 347 + #endif 347 348 .smp_send_reschedule = native_smp_send_reschedule, 348 349 349 350 .cpu_up = native_cpu_up,
+1 -4
arch/x86/kernel/sys_x86_64.c
··· 101 101 unsigned long *end) 102 102 { 103 103 if (!test_thread_flag(TIF_ADDR32) && (flags & MAP_32BIT)) { 104 - unsigned long new_begin; 105 104 /* This is usually used needed to map code in small 106 105 model, so it needs to be in the first 31bit. Limit 107 106 it to that. This means we need to move the ··· 111 112 *begin = 0x40000000; 112 113 *end = 0x80000000; 113 114 if (current->flags & PF_RANDOMIZE) { 114 - new_begin = randomize_range(*begin, *begin + 0x02000000, 0); 115 - if (new_begin) 116 - *begin = new_begin; 115 + *begin = randomize_page(*begin, 0x02000000); 117 116 } 118 117 } else { 119 118 *begin = current->mm->mmap_legacy_base;
+7 -7
arch/x86/kvm/i8254.c
··· 212 212 */ 213 213 smp_mb(); 214 214 if (atomic_dec_if_positive(&ps->pending) > 0) 215 - queue_kthread_work(&pit->worker, &pit->expired); 215 + kthread_queue_work(&pit->worker, &pit->expired); 216 216 } 217 217 218 218 void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu) ··· 233 233 static void destroy_pit_timer(struct kvm_pit *pit) 234 234 { 235 235 hrtimer_cancel(&pit->pit_state.timer); 236 - flush_kthread_work(&pit->expired); 236 + kthread_flush_work(&pit->expired); 237 237 } 238 238 239 239 static void pit_do_work(struct kthread_work *work) ··· 272 272 if (atomic_read(&ps->reinject)) 273 273 atomic_inc(&ps->pending); 274 274 275 - queue_kthread_work(&pt->worker, &pt->expired); 275 + kthread_queue_work(&pt->worker, &pt->expired); 276 276 277 277 if (ps->is_periodic) { 278 278 hrtimer_add_expires_ns(&ps->timer, ps->period); ··· 324 324 325 325 /* TODO The new value only affected after the retriggered */ 326 326 hrtimer_cancel(&ps->timer); 327 - flush_kthread_work(&pit->expired); 327 + kthread_flush_work(&pit->expired); 328 328 ps->period = interval; 329 329 ps->is_periodic = is_period; 330 330 ··· 667 667 pid_nr = pid_vnr(pid); 668 668 put_pid(pid); 669 669 670 - init_kthread_worker(&pit->worker); 670 + kthread_init_worker(&pit->worker); 671 671 pit->worker_task = kthread_run(kthread_worker_fn, &pit->worker, 672 672 "kvm-pit/%d", pid_nr); 673 673 if (IS_ERR(pit->worker_task)) 674 674 goto fail_kthread; 675 675 676 - init_kthread_work(&pit->expired, pit_do_work); 676 + kthread_init_work(&pit->expired, pit_do_work); 677 677 678 678 pit->kvm = kvm; 679 679 ··· 730 730 kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->speaker_dev); 731 731 kvm_pit_set_reinject(pit, false); 732 732 hrtimer_cancel(&pit->pit_state.timer); 733 - flush_kthread_work(&pit->expired); 733 + kthread_flush_work(&pit->expired); 734 734 kthread_stop(pit->worker_task); 735 735 kvm_free_irq_source_id(kvm, pit->irq_source_id); 736 736 kfree(pit);
+15
block/blk-lib.c
··· 31 31 unsigned int granularity; 32 32 enum req_op op; 33 33 int alignment; 34 + sector_t bs_mask; 34 35 35 36 if (!q) 36 37 return -ENXIO; ··· 50 49 return -EOPNOTSUPP; 51 50 op = REQ_OP_DISCARD; 52 51 } 52 + 53 + bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1; 54 + if ((sector | nr_sects) & bs_mask) 55 + return -EINVAL; 53 56 54 57 /* Zero-sector (unknown) and one-sector granularities are the same. */ 55 58 granularity = max(q->limits.discard_granularity >> 9, 1U); ··· 155 150 unsigned int max_write_same_sectors; 156 151 struct bio *bio = NULL; 157 152 int ret = 0; 153 + sector_t bs_mask; 158 154 159 155 if (!q) 160 156 return -ENXIO; 157 + 158 + bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1; 159 + if ((sector | nr_sects) & bs_mask) 160 + return -EINVAL; 161 161 162 162 /* Ensure that max_write_same_sectors doesn't overflow bi_size */ 163 163 max_write_same_sectors = UINT_MAX >> 9; ··· 212 202 int ret; 213 203 struct bio *bio = NULL; 214 204 unsigned int sz; 205 + sector_t bs_mask; 206 + 207 + bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1; 208 + if ((sector | nr_sects) & bs_mask) 209 + return -EINVAL; 215 210 216 211 while (nr_sects != 0) { 217 212 bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES),
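The `bs_mask` guard added to `blkdev_issue_discard()` and friends rejects ranges that are not aligned to the device's logical block size, where both the start sector and length are expressed in 512-byte sectors. The check can be exercised standalone:

```c
#include <stdbool.h>
#include <stdint.h>

/* True if [sector, sector + nr_sects) is aligned to the logical block
 * size; same expression as the patch's bs_mask test. */
static bool range_aligned(uint64_t sector, uint64_t nr_sects,
                          unsigned int logical_block_size)
{
    /* mask of the sub-block sector bits, e.g. 7 for 4096-byte blocks */
    uint64_t bs_mask = (logical_block_size >> 9) - 1;

    return ((sector | nr_sects) & bs_mask) == 0;
}
```

For 512-byte-block devices the mask is zero, so every sector range passes, which preserves the old behaviour there.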
+12 -6
block/ioctl.c
··· 225 225 unsigned long arg) 226 226 { 227 227 uint64_t range[2]; 228 - uint64_t start, len; 228 + struct address_space *mapping; 229 + uint64_t start, end, len; 229 230 230 231 if (!(mode & FMODE_WRITE)) 231 232 return -EBADF; ··· 236 235 237 236 start = range[0]; 238 237 len = range[1]; 238 + end = start + len - 1; 239 239 240 240 if (start & 511) 241 241 return -EINVAL; 242 242 if (len & 511) 243 243 return -EINVAL; 244 - start >>= 9; 245 - len >>= 9; 246 - 247 - if (start + len > (i_size_read(bdev->bd_inode) >> 9)) 244 + if (end >= (uint64_t)i_size_read(bdev->bd_inode)) 245 + return -EINVAL; 246 + if (end < start) 248 247 return -EINVAL; 249 248 250 - return blkdev_issue_zeroout(bdev, start, len, GFP_KERNEL, false); 249 + /* Invalidate the page cache, including dirty pages */ 250 + mapping = bdev->bd_inode->i_mapping; 251 + truncate_inode_pages_range(mapping, start, end); 252 + 253 + return blkdev_issue_zeroout(bdev, start >> 9, len >> 9, GFP_KERNEL, 254 + false); 251 255 } 252 256 253 257 static int put_ushort(unsigned long arg, unsigned short val)
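The rewritten BLKZEROOUT validation works in bytes, computes an inclusive end, and catches both out-of-device ranges and ranges whose `start + len` wrapped past the top of the address space (the wrapped sum produces `end < start`). A self-contained model of the checks:

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the patched validation: 512-byte alignment, in-device range,
 * and rejection of arithmetic wraparound. */
static bool zeroout_range_ok(uint64_t start, uint64_t len, uint64_t dev_bytes)
{
    uint64_t end = start + len - 1;      /* inclusive last byte */

    if (start & 511)
        return false;
    if (len & 511)
        return false;
    if (end >= dev_bytes)
        return false;
    if (end < start)                     /* start + len wrapped around */
        return false;
    return true;
}
```

The old code shifted start and len to sectors before comparing, which let a carefully chosen huge range wrap and pass; comparing the byte-granular inclusive end closes that hole.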
+10 -10
crypto/crypto_engine.c
··· 47 47 48 48 /* If another context is idling then defer */ 49 49 if (engine->idling) { 50 - queue_kthread_work(&engine->kworker, &engine->pump_requests); 50 + kthread_queue_work(&engine->kworker, &engine->pump_requests); 51 51 goto out; 52 52 } 53 53 ··· 58 58 59 59 /* Only do teardown in the thread */ 60 60 if (!in_kthread) { 61 - queue_kthread_work(&engine->kworker, 61 + kthread_queue_work(&engine->kworker, 62 62 &engine->pump_requests); 63 63 goto out; 64 64 } ··· 189 189 ret = ablkcipher_enqueue_request(&engine->queue, req); 190 190 191 191 if (!engine->busy && need_pump) 192 - queue_kthread_work(&engine->kworker, &engine->pump_requests); 192 + kthread_queue_work(&engine->kworker, &engine->pump_requests); 193 193 194 194 spin_unlock_irqrestore(&engine->queue_lock, flags); 195 195 return ret; ··· 231 231 ret = ahash_enqueue_request(&engine->queue, req); 232 232 233 233 if (!engine->busy && need_pump) 234 - queue_kthread_work(&engine->kworker, &engine->pump_requests); 234 + kthread_queue_work(&engine->kworker, &engine->pump_requests); 235 235 236 236 spin_unlock_irqrestore(&engine->queue_lock, flags); 237 237 return ret; ··· 284 284 285 285 req->base.complete(&req->base, err); 286 286 287 - queue_kthread_work(&engine->kworker, &engine->pump_requests); 287 + kthread_queue_work(&engine->kworker, &engine->pump_requests); 288 288 } 289 289 EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request); 290 290 ··· 321 321 322 322 req->base.complete(&req->base, err); 323 323 324 - queue_kthread_work(&engine->kworker, &engine->pump_requests); 324 + kthread_queue_work(&engine->kworker, &engine->pump_requests); 325 325 } 326 326 EXPORT_SYMBOL_GPL(crypto_finalize_hash_request); 327 327 ··· 345 345 engine->running = true; 346 346 spin_unlock_irqrestore(&engine->queue_lock, flags); 347 347 348 - queue_kthread_work(&engine->kworker, &engine->pump_requests); 348 + kthread_queue_work(&engine->kworker, &engine->pump_requests); 349 349 350 350 return 0; 351 351 } ··· 422 422 
crypto_init_queue(&engine->queue, CRYPTO_ENGINE_MAX_QLEN); 423 423 spin_lock_init(&engine->queue_lock); 424 424 425 - init_kthread_worker(&engine->kworker); 425 + kthread_init_worker(&engine->kworker); 426 426 engine->kworker_task = kthread_run(kthread_worker_fn, 427 427 &engine->kworker, "%s", 428 428 engine->name); ··· 430 430 dev_err(dev, "failed to create crypto request pump task\n"); 431 431 return NULL; 432 432 } 433 - init_kthread_work(&engine->pump_requests, crypto_pump_work); 433 + kthread_init_work(&engine->pump_requests, crypto_pump_work); 434 434 435 435 if (engine->rt) { 436 436 dev_info(dev, "will run requests pump with realtime priority\n"); ··· 455 455 if (ret) 456 456 return ret; 457 457 458 - flush_kthread_worker(&engine->kworker); 458 + kthread_flush_worker(&engine->kworker); 459 459 kthread_stop(engine->kworker_task); 460 460 461 461 return 0;
+4 -4
drivers/block/loop.c
··· 840 840 841 841 static void loop_unprepare_queue(struct loop_device *lo) 842 842 { 843 - flush_kthread_worker(&lo->worker); 843 + kthread_flush_worker(&lo->worker); 844 844 kthread_stop(lo->worker_task); 845 845 } 846 846 847 847 static int loop_prepare_queue(struct loop_device *lo) 848 848 { 849 - init_kthread_worker(&lo->worker); 849 + kthread_init_worker(&lo->worker); 850 850 lo->worker_task = kthread_run(kthread_worker_fn, 851 851 &lo->worker, "loop%d", lo->lo_number); 852 852 if (IS_ERR(lo->worker_task)) ··· 1658 1658 break; 1659 1659 } 1660 1660 1661 - queue_kthread_work(&lo->worker, &cmd->work); 1661 + kthread_queue_work(&lo->worker, &cmd->work); 1662 1662 1663 1663 return BLK_MQ_RQ_QUEUE_OK; 1664 1664 } ··· 1696 1696 struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq); 1697 1697 1698 1698 cmd->rq = rq; 1699 - init_kthread_work(&cmd->work, loop_queue_work); 1699 + kthread_init_work(&cmd->work, loop_queue_work); 1700 1700 1701 1701 return 0; 1702 1702 }
+25 -11
drivers/char/random.c
··· 2100 2100 } 2101 2101 EXPORT_SYMBOL(get_random_long); 2102 2102 2103 - /* 2104 - * randomize_range() returns a start address such that 2103 + /** 2104 + * randomize_page - Generate a random, page aligned address 2105 + * @start: The smallest acceptable address the caller will take. 2106 + * @range: The size of the area, starting at @start, within which the 2107 + * random address must fall. 2105 2108 * 2106 - * [...... <range> .....] 2107 - * start end 2109 + * If @start + @range would overflow, @range is capped. 2108 2110 * 2109 - * a <range> with size "len" starting at the return value is inside in the 2110 - * area defined by [start, end], but is otherwise randomized. 2111 + * NOTE: Historical use of randomize_range, which this replaces, presumed that 2112 + * @start was already page aligned. We now align it regardless. 2113 + * 2114 + * Return: A page aligned address within [start, start + range). On error, 2115 + * @start is returned. 2111 2116 */ 2112 2117 unsigned long 2113 - randomize_range(unsigned long start, unsigned long end, unsigned long len) 2118 + randomize_page(unsigned long start, unsigned long range) 2114 2119 { 2115 - unsigned long range = end - len - start; 2120 + if (!PAGE_ALIGNED(start)) { 2121 + range -= PAGE_ALIGN(start) - start; 2122 + start = PAGE_ALIGN(start); 2123 + } 2116 2124 2117 - if (end <= start + len) 2118 - return 0; 2119 - return PAGE_ALIGN(get_random_int() % range + start); 2125 + if (start > ULONG_MAX - range) 2126 + range = ULONG_MAX - start; 2127 + 2128 + range >>= PAGE_SHIFT; 2129 + 2130 + if (range == 0) 2131 + return start; 2132 + 2133 + return start + (get_random_long() % range << PAGE_SHIFT); 2120 2134 } 2121 2135 2122 2136 /* Interface for in-kernel drivers of true hardware RNGs.
-1
drivers/char/virtio_console.c
··· 38 38 #include <linux/workqueue.h> 39 39 #include <linux/module.h> 40 40 #include <linux/dma-mapping.h> 41 - #include <linux/kconfig.h> 42 41 #include "../tty/hvc/hvc_console.h" 43 42 44 43 #define is_rproc_enabled IS_ENABLED(CONFIG_REMOTEPROC)
+5 -5
drivers/infiniband/sw/rdmavt/cq.c
··· 129 129 if (likely(worker)) { 130 130 cq->notify = RVT_CQ_NONE; 131 131 cq->triggered++; 132 - queue_kthread_work(worker, &cq->comptask); 132 + kthread_queue_work(worker, &cq->comptask); 133 133 } 134 134 } 135 135 ··· 265 265 cq->ibcq.cqe = entries; 266 266 cq->notify = RVT_CQ_NONE; 267 267 spin_lock_init(&cq->lock); 268 - init_kthread_work(&cq->comptask, send_complete); 268 + kthread_init_work(&cq->comptask, send_complete); 269 269 cq->queue = wc; 270 270 271 271 ret = &cq->ibcq; ··· 295 295 struct rvt_cq *cq = ibcq_to_rvtcq(ibcq); 296 296 struct rvt_dev_info *rdi = cq->rdi; 297 297 298 - flush_kthread_work(&cq->comptask); 298 + kthread_flush_work(&cq->comptask); 299 299 spin_lock(&rdi->n_cqs_lock); 300 300 rdi->n_cqs_allocated--; 301 301 spin_unlock(&rdi->n_cqs_lock); ··· 514 514 rdi->worker = kzalloc(sizeof(*rdi->worker), GFP_KERNEL); 515 515 if (!rdi->worker) 516 516 return -ENOMEM; 517 - init_kthread_worker(rdi->worker); 517 + kthread_init_worker(rdi->worker); 518 518 task = kthread_create_on_node( 519 519 kthread_worker_fn, 520 520 rdi->worker, ··· 547 547 /* blocks future queuing from send_complete() */ 548 548 rdi->worker = NULL; 549 549 smp_wmb(); /* See rdi_cq_enter */ 550 - flush_kthread_worker(worker); 550 + kthread_flush_worker(worker); 551 551 kthread_stop(worker->task); 552 552 kfree(worker); 553 553 }
-1
drivers/input/rmi4/rmi_bus.c
··· 9 9 10 10 #include <linux/kernel.h> 11 11 #include <linux/device.h> 12 - #include <linux/kconfig.h> 13 12 #include <linux/list.h> 14 13 #include <linux/pm.h> 15 14 #include <linux/rmi.h>
-1
drivers/input/rmi4/rmi_driver.c
··· 17 17 #include <linux/bitmap.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/fs.h> 20 - #include <linux/kconfig.h> 21 20 #include <linux/pm.h> 22 21 #include <linux/slab.h> 23 22 #include <linux/of.h>
-1
drivers/input/rmi4/rmi_f01.c
··· 8 8 */ 9 9 10 10 #include <linux/kernel.h> 11 - #include <linux/kconfig.h> 12 11 #include <linux/rmi.h> 13 12 #include <linux/slab.h> 14 13 #include <linux/uaccess.h>
-1
drivers/input/rmi4/rmi_f11.c
··· 12 12 #include <linux/device.h> 13 13 #include <linux/input.h> 14 14 #include <linux/input/mt.h> 15 - #include <linux/kconfig.h> 16 15 #include <linux/rmi.h> 17 16 #include <linux/slab.h> 18 17 #include <linux/of.h>
-1
drivers/irqchip/irq-bcm6345-l1.c
··· 52 52 53 53 #include <linux/bitops.h> 54 54 #include <linux/cpumask.h> 55 - #include <linux/kconfig.h> 56 55 #include <linux/kernel.h> 57 56 #include <linux/init.h> 58 57 #include <linux/interrupt.h>
-1
drivers/irqchip/irq-bcm7038-l1.c
··· 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 14 #include <linux/bitops.h> 15 - #include <linux/kconfig.h> 16 15 #include <linux/kernel.h> 17 16 #include <linux/init.h> 18 17 #include <linux/interrupt.h>
-1
drivers/irqchip/irq-bcm7120-l2.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/slab.h> 15 15 #include <linux/module.h> 16 - #include <linux/kconfig.h> 17 16 #include <linux/kernel.h> 18 17 #include <linux/platform_device.h> 19 18 #include <linux/of.h>
-1
drivers/irqchip/irq-brcmstb-l2.c
··· 18 18 #include <linux/init.h> 19 19 #include <linux/slab.h> 20 20 #include <linux/module.h> 21 - #include <linux/kconfig.h> 22 21 #include <linux/platform_device.h> 23 22 #include <linux/spinlock.h> 24 23 #include <linux/of.h>
+3 -3
drivers/md/dm-rq.c
··· 581 581 if (!md->init_tio_pdu) 582 582 memset(&tio->info, 0, sizeof(tio->info)); 583 583 if (md->kworker_task) 584 - init_kthread_work(&tio->work, map_tio_request); 584 + kthread_init_work(&tio->work, map_tio_request); 585 585 } 586 586 587 587 static struct dm_rq_target_io *dm_old_prep_tio(struct request *rq, ··· 831 831 tio = tio_from_request(rq); 832 832 /* Establish tio->ti before queuing work (map_tio_request) */ 833 833 tio->ti = ti; 834 - queue_kthread_work(&md->kworker, &tio->work); 834 + kthread_queue_work(&md->kworker, &tio->work); 835 835 BUG_ON(!irqs_disabled()); 836 836 } 837 837 } ··· 853 853 blk_queue_prep_rq(md->queue, dm_old_prep_fn); 854 854 855 855 /* Initialize the request-based DM worker thread */ 856 - init_kthread_worker(&md->kworker); 856 + kthread_init_worker(&md->kworker); 857 857 md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker, 858 858 "kdmwork-%s", dm_device_name(md)); 859 859 if (IS_ERR(md->kworker_task))
+2 -2
drivers/md/dm.c
··· 1891 1891 spin_unlock_irq(q->queue_lock); 1892 1892 1893 1893 if (dm_request_based(md) && md->kworker_task) 1894 - flush_kthread_worker(&md->kworker); 1894 + kthread_flush_worker(&md->kworker); 1895 1895 1896 1896 /* 1897 1897 * Take suspend_lock so that presuspend and postsuspend methods ··· 2147 2147 if (dm_request_based(md)) { 2148 2148 dm_stop_queue(md->queue); 2149 2149 if (md->kworker_task) 2150 - flush_kthread_worker(&md->kworker); 2150 + kthread_flush_worker(&md->kworker); 2151 2151 } 2152 2152 2153 2153 flush_workqueue(md->wq);
-1
drivers/media/dvb-frontends/af9013.h
··· 25 25 #ifndef AF9013_H 26 26 #define AF9013_H 27 27 28 - #include <linux/kconfig.h> 29 28 #include <linux/dvb/frontend.h> 30 29 31 30 /* AF9013/5 GPIOs (mostly guessed)
-2
drivers/media/dvb-frontends/af9033.h
··· 22 22 #ifndef AF9033_H 23 23 #define AF9033_H 24 24 25 - #include <linux/kconfig.h> 26 - 27 25 /* 28 26 * I2C address (TODO: are these in 8-bit format?) 29 27 * 0x38, 0x3a, 0x3c, 0x3e
-1
drivers/media/dvb-frontends/ascot2e.h
··· 22 22 #ifndef __DVB_ASCOT2E_H__ 23 23 #define __DVB_ASCOT2E_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 #include <linux/i2c.h> 28 27
-1
drivers/media/dvb-frontends/atbm8830.h
··· 22 22 #ifndef __ATBM8830_H__ 23 23 #define __ATBM8830_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 #include <linux/i2c.h> 28 27
-1
drivers/media/dvb-frontends/au8522.h
··· 22 22 #ifndef __AU8522_H__ 23 23 #define __AU8522_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 enum au8522_if_freq {
-1
drivers/media/dvb-frontends/cx22702.h
··· 28 28 #ifndef CX22702_H 29 29 #define CX22702_H 30 30 31 - #include <linux/kconfig.h> 32 31 #include <linux/dvb/frontend.h> 33 32 34 33 struct cx22702_config {
-2
drivers/media/dvb-frontends/cx24113.h
··· 22 22 #ifndef CX24113_H 23 23 #define CX24113_H 24 24 25 - #include <linux/kconfig.h> 26 - 27 25 struct dvb_frontend; 28 26 29 27 struct cx24113_config {
-1
drivers/media/dvb-frontends/cx24116.h
··· 21 21 #ifndef CX24116_H 22 22 #define CX24116_H 23 23 24 - #include <linux/kconfig.h> 25 24 #include <linux/dvb/frontend.h> 26 25 27 26 struct cx24116_config {
-1
drivers/media/dvb-frontends/cx24117.h
··· 22 22 #ifndef CX24117_H 23 23 #define CX24117_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 struct cx24117_config {
-1
drivers/media/dvb-frontends/cx24120.h
··· 20 20 #ifndef CX24120_H 21 21 #define CX24120_H 22 22 23 - #include <linux/kconfig.h> 24 23 #include <linux/dvb/frontend.h> 25 24 #include <linux/firmware.h> 26 25
-1
drivers/media/dvb-frontends/cx24123.h
··· 21 21 #ifndef CX24123_H 22 22 #define CX24123_H 23 23 24 - #include <linux/kconfig.h> 25 24 #include <linux/dvb/frontend.h> 26 25 27 26 struct cx24123_config {
-1
drivers/media/dvb-frontends/cxd2820r.h
··· 22 22 #ifndef CXD2820R_H 23 23 #define CXD2820R_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 #define CXD2820R_GPIO_D (0 << 0) /* disable */
-1
drivers/media/dvb-frontends/cxd2841er.h
··· 22 22 #ifndef CXD2841ER_H 23 23 #define CXD2841ER_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 enum cxd2841er_xtal {
-2
drivers/media/dvb-frontends/dib3000mc.h
··· 13 13 #ifndef DIB3000MC_H 14 14 #define DIB3000MC_H 15 15 16 - #include <linux/kconfig.h> 17 - 18 16 #include "dibx000_common.h" 19 17 20 18 struct dib3000mc_config {
-2
drivers/media/dvb-frontends/dib7000m.h
··· 1 1 #ifndef DIB7000M_H 2 2 #define DIB7000M_H 3 3 4 - #include <linux/kconfig.h> 5 - 6 4 #include "dibx000_common.h" 7 5 8 6 struct dib7000m_config {
-2
drivers/media/dvb-frontends/dib7000p.h
··· 1 1 #ifndef DIB7000P_H 2 2 #define DIB7000P_H 3 3 4 - #include <linux/kconfig.h> 5 - 6 4 #include "dibx000_common.h" 7 5 8 6 struct dib7000p_config {
-1
drivers/media/dvb-frontends/drxd.h
··· 24 24 #ifndef _DRXD_H_ 25 25 #define _DRXD_H_ 26 26 27 - #include <linux/kconfig.h> 28 27 #include <linux/types.h> 29 28 #include <linux/i2c.h> 30 29
-1
drivers/media/dvb-frontends/drxk.h
··· 1 1 #ifndef _DRXK_H_ 2 2 #define _DRXK_H_ 3 3 4 - #include <linux/kconfig.h> 5 4 #include <linux/types.h> 6 5 #include <linux/i2c.h> 7 6
-1
drivers/media/dvb-frontends/ds3000.h
··· 22 22 #ifndef DS3000_H 23 23 #define DS3000_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 struct ds3000_config {
-1
drivers/media/dvb-frontends/dvb_dummy_fe.h
··· 22 22 #ifndef DVB_DUMMY_FE_H 23 23 #define DVB_DUMMY_FE_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 #include "dvb_frontend.h" 28 27
-1
drivers/media/dvb-frontends/ec100.h
··· 22 22 #ifndef EC100_H 23 23 #define EC100_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 struct ec100_config {
-1
drivers/media/dvb-frontends/hd29l2.h
··· 23 23 #ifndef HD29L2_H 24 24 #define HD29L2_H 25 25 26 - #include <linux/kconfig.h> 27 26 #include <linux/dvb/frontend.h> 28 27 29 28 struct hd29l2_config {
-1
drivers/media/dvb-frontends/helene.h
··· 21 21 #ifndef __DVB_HELENE_H__ 22 22 #define __DVB_HELENE_H__ 23 23 24 - #include <linux/kconfig.h> 25 24 #include <linux/dvb/frontend.h> 26 25 #include <linux/i2c.h> 27 26
-1
drivers/media/dvb-frontends/horus3a.h
··· 22 22 #ifndef __DVB_HORUS3A_H__ 23 23 #define __DVB_HORUS3A_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 #include <linux/i2c.h> 28 27
-1
drivers/media/dvb-frontends/ix2505v.h
··· 20 20 #ifndef DVB_IX2505V_H 21 21 #define DVB_IX2505V_H 22 22 23 - #include <linux/kconfig.h> 24 23 #include <linux/i2c.h> 25 24 #include "dvb_frontend.h" 26 25
-1
drivers/media/dvb-frontends/lg2160.h
··· 22 22 #ifndef _LG2160_H_ 23 23 #define _LG2160_H_ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/i2c.h> 27 26 #include "dvb_frontend.h" 28 27
-1
drivers/media/dvb-frontends/lgdt3305.h
··· 22 22 #ifndef _LGDT3305_H_ 23 23 #define _LGDT3305_H_ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/i2c.h> 27 26 #include "dvb_frontend.h" 28 27
-1
drivers/media/dvb-frontends/lgs8gl5.h
··· 23 23 #ifndef LGS8GL5_H 24 24 #define LGS8GL5_H 25 25 26 - #include <linux/kconfig.h> 27 26 #include <linux/dvb/frontend.h> 28 27 29 28 struct lgs8gl5_config {
-1
drivers/media/dvb-frontends/lgs8gxx.h
··· 26 26 #ifndef __LGS8GXX_H__ 27 27 #define __LGS8GXX_H__ 28 28 29 - #include <linux/kconfig.h> 30 29 #include <linux/dvb/frontend.h> 31 30 #include <linux/i2c.h> 32 31
-2
drivers/media/dvb-frontends/lnbh24.h
··· 23 23 #ifndef _LNBH24_H 24 24 #define _LNBH24_H 25 25 26 - #include <linux/kconfig.h> 27 - 28 26 /* system register bits */ 29 27 #define LNBH24_OLF 0x01 30 28 #define LNBH24_OTF 0x02
-1
drivers/media/dvb-frontends/lnbh25.h
··· 22 22 #define LNBH25_H 23 23 24 24 #include <linux/i2c.h> 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 /* 22 kHz tone enabled. Tone output controlled by DSQIN pin */
-2
drivers/media/dvb-frontends/lnbp21.h
··· 27 27 #ifndef _LNBP21_H 28 28 #define _LNBP21_H 29 29 30 - #include <linux/kconfig.h> 31 - 32 30 /* system register bits */ 33 31 /* [RO] 0=OK; 1=over current limit flag */ 34 32 #define LNBP21_OLF 0x01
-2
drivers/media/dvb-frontends/lnbp22.h
··· 28 28 #ifndef _LNBP22_H 29 29 #define _LNBP22_H 30 30 31 - #include <linux/kconfig.h> 32 - 33 31 /* Enable */ 34 32 #define LNBP22_EN 0x10 35 33 /* Voltage selection */
-1
drivers/media/dvb-frontends/m88rs2000.h
··· 20 20 #ifndef M88RS2000_H 21 21 #define M88RS2000_H 22 22 23 - #include <linux/kconfig.h> 24 23 #include <linux/dvb/frontend.h> 25 24 #include "dvb_frontend.h" 26 25
-1
drivers/media/dvb-frontends/mb86a20s.h
··· 16 16 #ifndef MB86A20S_H 17 17 #define MB86A20S_H 18 18 19 - #include <linux/kconfig.h> 20 19 #include <linux/dvb/frontend.h> 21 20 22 21 /**
-1
drivers/media/dvb-frontends/s5h1409.h
··· 22 22 #ifndef __S5H1409_H__ 23 23 #define __S5H1409_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 struct s5h1409_config {
-1
drivers/media/dvb-frontends/s5h1411.h
··· 22 22 #ifndef __S5H1411_H__ 23 23 #define __S5H1411_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 #define S5H1411_I2C_TOP_ADDR (0x32 >> 1)
-1
drivers/media/dvb-frontends/s5h1432.h
··· 22 22 #ifndef __S5H1432_H__ 23 23 #define __S5H1432_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 #define S5H1432_I2C_TOP_ADDR (0x02 >> 1)
-1
drivers/media/dvb-frontends/s921.h
··· 17 17 #ifndef S921_H 18 18 #define S921_H 19 19 20 - #include <linux/kconfig.h> 21 20 #include <linux/dvb/frontend.h> 22 21 23 22 struct s921_config {
-1
drivers/media/dvb-frontends/si21xx.h
··· 1 1 #ifndef SI21XX_H 2 2 #define SI21XX_H 3 3 4 - #include <linux/kconfig.h> 5 4 #include <linux/dvb/frontend.h> 6 5 #include "dvb_frontend.h" 7 6
-1
drivers/media/dvb-frontends/sp2.h
··· 17 17 #ifndef SP2_H 18 18 #define SP2_H 19 19 20 - #include <linux/kconfig.h> 21 20 #include "dvb_ca_en50221.h" 22 21 23 22 /*
-1
drivers/media/dvb-frontends/stb6000.h
··· 23 23 #ifndef __DVB_STB6000_H__ 24 24 #define __DVB_STB6000_H__ 25 25 26 - #include <linux/kconfig.h> 27 26 #include <linux/i2c.h> 28 27 #include "dvb_frontend.h" 29 28
-1
drivers/media/dvb-frontends/stv0288.h
··· 27 27 #ifndef STV0288_H 28 28 #define STV0288_H 29 29 30 - #include <linux/kconfig.h> 31 30 #include <linux/dvb/frontend.h> 32 31 #include "dvb_frontend.h" 33 32
-1
drivers/media/dvb-frontends/stv0367.h
··· 26 26 #ifndef STV0367_H 27 27 #define STV0367_H 28 28 29 - #include <linux/kconfig.h> 30 29 #include <linux/dvb/frontend.h> 31 30 #include "dvb_frontend.h" 32 31
-1
drivers/media/dvb-frontends/stv0900.h
··· 26 26 #ifndef STV0900_H 27 27 #define STV0900_H 28 28 29 - #include <linux/kconfig.h> 30 29 #include <linux/dvb/frontend.h> 31 30 #include "dvb_frontend.h" 32 31
-1
drivers/media/dvb-frontends/stv6110.h
··· 25 25 #ifndef __DVB_STV6110_H__ 26 26 #define __DVB_STV6110_H__ 27 27 28 - #include <linux/kconfig.h> 29 28 #include <linux/i2c.h> 30 29 #include "dvb_frontend.h" 31 30
-1
drivers/media/dvb-frontends/tda10048.h
··· 22 22 #ifndef TDA10048_H 23 23 #define TDA10048_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 #include <linux/firmware.h> 28 27
-2
drivers/media/dvb-frontends/tda18271c2dd.h
··· 1 1 #ifndef _TDA18271C2DD_H_ 2 2 #define _TDA18271C2DD_H_ 3 3 4 - #include <linux/kconfig.h> 5 - 6 4 #if IS_REACHABLE(CONFIG_DVB_TDA18271C2DD) 7 5 struct dvb_frontend *tda18271c2dd_attach(struct dvb_frontend *fe, 8 6 struct i2c_adapter *i2c, u8 adr);
-1
drivers/media/dvb-frontends/ts2020.h
··· 22 22 #ifndef TS2020_H 23 23 #define TS2020_H 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/dvb/frontend.h> 27 26 28 27 struct ts2020_config {
-1
drivers/media/dvb-frontends/zl10036.h
··· 21 21 #ifndef DVB_ZL10036_H 22 22 #define DVB_ZL10036_H 23 23 24 - #include <linux/kconfig.h> 25 24 #include <linux/i2c.h> 26 25 #include "dvb_frontend.h" 27 26
-2
drivers/media/dvb-frontends/zl10039.h
··· 22 22 #ifndef ZL10039_H 23 23 #define ZL10039_H 24 24 25 - #include <linux/kconfig.h> 26 - 27 25 #if IS_REACHABLE(CONFIG_DVB_ZL10039) 28 26 struct dvb_frontend *zl10039_attach(struct dvb_frontend *fe, 29 27 u8 i2c_addr,
-2
drivers/media/pci/cx23885/altera-ci.h
··· 20 20 #ifndef __ALTERA_CI_H 21 21 #define __ALTERA_CI_H 22 22 23 - #include <linux/kconfig.h> 24 - 25 23 #define ALT_DATA 0x000000ff 26 24 #define ALT_TDI 0x00008000 27 25 #define ALT_TDO 0x00004000
+3 -3
drivers/media/pci/ivtv/ivtv-driver.c
··· 750 750 spin_lock_init(&itv->lock); 751 751 spin_lock_init(&itv->dma_reg_lock); 752 752 753 - init_kthread_worker(&itv->irq_worker); 753 + kthread_init_worker(&itv->irq_worker); 754 754 itv->irq_worker_task = kthread_run(kthread_worker_fn, &itv->irq_worker, 755 755 "%s", itv->v4l2_dev.name); 756 756 if (IS_ERR(itv->irq_worker_task)) { ··· 760 760 /* must use the FIFO scheduler as it is realtime sensitive */ 761 761 sched_setscheduler(itv->irq_worker_task, SCHED_FIFO, &param); 762 762 763 - init_kthread_work(&itv->irq_work, ivtv_irq_work_handler); 763 + kthread_init_work(&itv->irq_work, ivtv_irq_work_handler); 764 764 765 765 /* Initial settings */ 766 766 itv->cxhdl.port = CX2341X_PORT_MEMORY; ··· 1441 1441 del_timer_sync(&itv->dma_timer); 1442 1442 1443 1443 /* Kill irq worker */ 1444 - flush_kthread_worker(&itv->irq_worker); 1444 + kthread_flush_worker(&itv->irq_worker); 1445 1445 kthread_stop(itv->irq_worker_task); 1446 1446 1447 1447 ivtv_streams_cleanup(itv);
+1 -1
drivers/media/pci/ivtv/ivtv-irq.c
··· 1062 1062 } 1063 1063 1064 1064 if (test_and_clear_bit(IVTV_F_I_HAVE_WORK, &itv->i_flags)) { 1065 - queue_kthread_work(&itv->irq_worker, &itv->irq_work); 1065 + kthread_queue_work(&itv->irq_worker, &itv->irq_work); 1066 1066 } 1067 1067 1068 1068 spin_unlock(&itv->dma_reg_lock);
-1
drivers/media/tuners/fc0011.h
··· 1 1 #ifndef LINUX_FC0011_H_ 2 2 #define LINUX_FC0011_H_ 3 3 4 - #include <linux/kconfig.h> 5 4 #include "dvb_frontend.h" 6 5 7 6
-1
drivers/media/tuners/fc0012.h
··· 21 21 #ifndef _FC0012_H_ 22 22 #define _FC0012_H_ 23 23 24 - #include <linux/kconfig.h> 25 24 #include "dvb_frontend.h" 26 25 #include "fc001x-common.h" 27 26
-1
drivers/media/tuners/fc0013.h
··· 22 22 #ifndef _FC0013_H_ 23 23 #define _FC0013_H_ 24 24 25 - #include <linux/kconfig.h> 26 25 #include "dvb_frontend.h" 27 26 #include "fc001x-common.h" 28 27
-2
drivers/media/tuners/max2165.h
··· 22 22 #ifndef __MAX2165_H__ 23 23 #define __MAX2165_H__ 24 24 25 - #include <linux/kconfig.h> 26 - 27 25 struct dvb_frontend; 28 26 struct i2c_adapter; 29 27
-2
drivers/media/tuners/mc44s803.h
··· 22 22 #ifndef MC44S803_H 23 23 #define MC44S803_H 24 24 25 - #include <linux/kconfig.h> 26 - 27 25 struct dvb_frontend; 28 26 struct i2c_adapter; 29 27
-2
drivers/media/tuners/mxl5005s.h
··· 23 23 #ifndef __MXL5005S_H 24 24 #define __MXL5005S_H 25 25 26 - #include <linux/kconfig.h> 27 - 28 26 #include <linux/i2c.h> 29 27 #include "dvb_frontend.h" 30 28
-1
drivers/media/tuners/r820t.h
··· 21 21 #ifndef R820T_H 22 22 #define R820T_H 23 23 24 - #include <linux/kconfig.h> 25 24 #include "dvb_frontend.h" 26 25 27 26 enum r820t_chip {
-1
drivers/media/tuners/si2157.h
··· 17 17 #ifndef SI2157_H 18 18 #define SI2157_H 19 19 20 - #include <linux/kconfig.h> 21 20 #include <media/media-device.h> 22 21 #include "dvb_frontend.h" 23 22
-1
drivers/media/tuners/tda18212.h
··· 21 21 #ifndef TDA18212_H 22 22 #define TDA18212_H 23 23 24 - #include <linux/kconfig.h> 25 24 #include "dvb_frontend.h" 26 25 27 26 struct tda18212_config {
-1
drivers/media/tuners/tda18218.h
··· 21 21 #ifndef TDA18218_H 22 22 #define TDA18218_H 23 23 24 - #include <linux/kconfig.h> 25 24 #include "dvb_frontend.h" 26 25 27 26 struct tda18218_config {
-1
drivers/media/tuners/xc5000.h
··· 22 22 #ifndef __XC5000_H__ 23 23 #define __XC5000_H__ 24 24 25 - #include <linux/kconfig.h> 26 25 #include <linux/firmware.h> 27 26 28 27 struct dvb_frontend;
-1
drivers/media/usb/dvb-usb-v2/mxl111sf-demod.h
··· 21 21 #ifndef __MXL111SF_DEMOD_H__ 22 22 #define __MXL111SF_DEMOD_H__ 23 23 24 - #include <linux/kconfig.h> 25 24 #include "dvb_frontend.h" 26 25 #include "mxl111sf.h" 27 26
-1
drivers/media/usb/dvb-usb-v2/mxl111sf-tuner.h
··· 21 21 #ifndef __MXL111SF_TUNER_H__ 22 22 #define __MXL111SF_TUNER_H__ 23 23 24 - #include <linux/kconfig.h> 25 24 #include "dvb_frontend.h" 26 25 #include "mxl111sf.h" 27 26
-1
drivers/media/usb/dvb-usb/dibusb-common.c
··· 9 9 * see Documentation/dvb/README.dvb-usb for more information 10 10 */ 11 11 12 - #include <linux/kconfig.h> 13 12 #include "dibusb.h" 14 13 15 14 /* Max transfer size done by I2C transfer functions */
-1
drivers/media/usb/hdpvr/hdpvr-video.c
··· 10 10 */ 11 11 12 12 #include <linux/kernel.h> 13 - #include <linux/kconfig.h> 14 13 #include <linux/errno.h> 15 14 #include <linux/init.h> 16 15 #include <linux/slab.h>
-1
drivers/mtd/mtdcore.c
··· 39 39 #include <linux/gfp.h> 40 40 #include <linux/slab.h> 41 41 #include <linux/reboot.h> 42 - #include <linux/kconfig.h> 43 42 #include <linux/leds.h> 44 43 45 44 #include <linux/mtd/mtd.h>
-1
drivers/mtd/mtdpart.c
··· 30 30 #include <linux/mtd/mtd.h> 31 31 #include <linux/mtd/partitions.h> 32 32 #include <linux/err.h> 33 - #include <linux/kconfig.h> 34 33 35 34 #include "mtdcore.h" 36 35
-1
drivers/net/dsa/b53/b53_mmap.c
··· 17 17 */ 18 18 19 19 #include <linux/kernel.h> 20 - #include <linux/kconfig.h> 21 20 #include <linux/module.h> 22 21 #include <linux/io.h> 23 22 #include <linux/platform_device.h>
+5 -5
drivers/net/ethernet/microchip/encx24j600.c
··· 821 821 } 822 822 823 823 if (oldfilter != priv->rxfilter) 824 - queue_kthread_work(&priv->kworker, &priv->setrx_work); 824 + kthread_queue_work(&priv->kworker, &priv->setrx_work); 825 825 } 826 826 827 827 static void encx24j600_hw_tx(struct encx24j600_priv *priv) ··· 879 879 /* Remember the skb for deferred processing */ 880 880 priv->tx_skb = skb; 881 881 882 - queue_kthread_work(&priv->kworker, &priv->tx_work); 882 + kthread_queue_work(&priv->kworker, &priv->tx_work); 883 883 884 884 return NETDEV_TX_OK; 885 885 } ··· 1037 1037 goto out_free; 1038 1038 } 1039 1039 1040 - init_kthread_worker(&priv->kworker); 1041 - init_kthread_work(&priv->tx_work, encx24j600_tx_proc); 1042 - init_kthread_work(&priv->setrx_work, encx24j600_setrx_proc); 1040 + kthread_init_worker(&priv->kworker); 1041 + kthread_init_work(&priv->tx_work, encx24j600_tx_proc); 1042 + kthread_init_work(&priv->setrx_work, encx24j600_setrx_proc); 1043 1043 1044 1044 priv->kworker_task = kthread_run(kthread_worker_fn, &priv->kworker, 1045 1045 "encx24j600");
-1
drivers/net/ethernet/sun/ldmvsw.c
··· 11 11 #include <linux/highmem.h> 12 12 #include <linux/if_vlan.h> 13 13 #include <linux/init.h> 14 - #include <linux/kconfig.h> 15 14 #include <linux/kernel.h> 16 15 #include <linux/module.h> 17 16 #include <linux/mutex.h>
-1
drivers/net/ethernet/wiznet/w5100.c
··· 9 9 10 10 #include <linux/kernel.h> 11 11 #include <linux/module.h> 12 - #include <linux/kconfig.h> 13 12 #include <linux/netdevice.h> 14 13 #include <linux/etherdevice.h> 15 14 #include <linux/platform_device.h>
-1
drivers/net/ethernet/wiznet/w5300.c
··· 10 10 11 11 #include <linux/kernel.h> 12 12 #include <linux/module.h> 13 - #include <linux/kconfig.h> 14 13 #include <linux/netdevice.h> 15 14 #include <linux/etherdevice.h> 16 15 #include <linux/platform_device.h>
+2 -1
drivers/nvme/host/pci.c
··· 515 515 goto out; 516 516 517 517 ret = BLK_MQ_RQ_QUEUE_BUSY; 518 - if (!dma_map_sg(dev->dev, iod->sg, iod->nents, dma_dir)) 518 + if (!dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, dma_dir, 519 + DMA_ATTR_NO_WARN)) 519 520 goto out; 520 521 521 522 if (!nvme_setup_prps(dev, req, size))
+1 -1
drivers/pps/Kconfig
··· 31 31 32 32 config NTP_PPS 33 33 bool "PPS kernel consumer support" 34 - depends on !NO_HZ 34 + depends on !NO_HZ_COMMON 35 35 help 36 36 This option adds support for direct in-kernel time 37 37 synchronization using an external PPS signal.
+5 -10
drivers/rapidio/rio_cm.c
··· 1841 1841 { 1842 1842 struct rio_cm_msg msg; 1843 1843 void *buf; 1844 - int ret = 0; 1844 + int ret; 1845 1845 1846 1846 if (copy_from_user(&msg, arg, sizeof(msg))) 1847 1847 return -EFAULT; 1848 1848 if (msg.size > RIO_MAX_MSG_SIZE) 1849 1849 return -EINVAL; 1850 1850 1851 - buf = kmalloc(msg.size, GFP_KERNEL); 1852 - if (!buf) 1853 - return -ENOMEM; 1854 - 1855 - if (copy_from_user(buf, (void __user *)(uintptr_t)msg.msg, msg.size)) { 1856 - ret = -EFAULT; 1857 - goto out; 1858 - } 1851 + buf = memdup_user((void __user *)(uintptr_t)msg.msg, msg.size); 1852 + if (IS_ERR(buf)) 1853 + return PTR_ERR(buf); 1859 1854 1860 1855 ret = riocm_ch_send(msg.ch_num, buf, msg.size); 1861 - out: 1856 + 1862 1857 kfree(buf); 1863 1858 return ret; 1864 1859 }
+9 -9
drivers/spi/spi.c
··· 1112 1112 1113 1113 /* If another context is idling the device then defer */ 1114 1114 if (master->idling) { 1115 - queue_kthread_work(&master->kworker, &master->pump_messages); 1115 + kthread_queue_work(&master->kworker, &master->pump_messages); 1116 1116 spin_unlock_irqrestore(&master->queue_lock, flags); 1117 1117 return; 1118 1118 } ··· 1126 1126 1127 1127 /* Only do teardown in the thread */ 1128 1128 if (!in_kthread) { 1129 - queue_kthread_work(&master->kworker, 1129 + kthread_queue_work(&master->kworker, 1130 1130 &master->pump_messages); 1131 1131 spin_unlock_irqrestore(&master->queue_lock, flags); 1132 1132 return; ··· 1250 1250 master->running = false; 1251 1251 master->busy = false; 1252 1252 1253 - init_kthread_worker(&master->kworker); 1253 + kthread_init_worker(&master->kworker); 1254 1254 master->kworker_task = kthread_run(kthread_worker_fn, 1255 1255 &master->kworker, "%s", 1256 1256 dev_name(&master->dev)); ··· 1258 1258 dev_err(&master->dev, "failed to create message pump task\n"); 1259 1259 return PTR_ERR(master->kworker_task); 1260 1260 } 1261 - init_kthread_work(&master->pump_messages, spi_pump_messages); 1261 + kthread_init_work(&master->pump_messages, spi_pump_messages); 1262 1262 1263 1263 /* 1264 1264 * Master config will indicate if this controller should run the ··· 1331 1331 spin_lock_irqsave(&master->queue_lock, flags); 1332 1332 master->cur_msg = NULL; 1333 1333 master->cur_msg_prepared = false; 1334 - queue_kthread_work(&master->kworker, &master->pump_messages); 1334 + kthread_queue_work(&master->kworker, &master->pump_messages); 1335 1335 spin_unlock_irqrestore(&master->queue_lock, flags); 1336 1336 1337 1337 trace_spi_message_done(mesg); ··· 1357 1357 master->cur_msg = NULL; 1358 1358 spin_unlock_irqrestore(&master->queue_lock, flags); 1359 1359 1360 - queue_kthread_work(&master->kworker, &master->pump_messages); 1360 + kthread_queue_work(&master->kworker, &master->pump_messages); 1361 1361 1362 1362 return 0; 1363 1363 } ··· 
1404 1404 ret = spi_stop_queue(master); 1405 1405 1406 1406 /* 1407 - * flush_kthread_worker will block until all work is done. 1407 + * kthread_flush_worker will block until all work is done. 1408 1408 * If the reason that stop_queue timed out is that the work will never 1409 1409 * finish, then it does no good to call flush/stop thread, so 1410 1410 * return anyway. ··· 1414 1414 return ret; 1415 1415 } 1416 1416 1417 - flush_kthread_worker(&master->kworker); 1417 + kthread_flush_worker(&master->kworker); 1418 1418 kthread_stop(master->kworker_task); 1419 1419 1420 1420 return 0; ··· 1438 1438 1439 1439 list_add_tail(&msg->queue, &master->queue); 1440 1440 if (!master->busy && need_pump) 1441 - queue_kthread_work(&master->kworker, &master->pump_messages); 1441 + kthread_queue_work(&master->kworker, &master->pump_messages); 1442 1442 1443 1443 spin_unlock_irqrestore(&master->queue_lock, flags); 1444 1444 return 0;
+1 -4
drivers/staging/lustre/lustre/llite/vvp_page.c
··· 241 241 obj->vob_discard_page_warned = 0; 242 242 } else { 243 243 SetPageError(vmpage); 244 - if (ioret == -ENOSPC) 245 - set_bit(AS_ENOSPC, &inode->i_mapping->flags); 246 - else 247 - set_bit(AS_EIO, &inode->i_mapping->flags); 244 + mapping_set_error(inode->i_mapping, ioret); 248 245 249 246 if ((ioret == -ESHUTDOWN || ioret == -EINTR) && 250 247 obj->vob_discard_page_warned == 0) {
+11 -11
drivers/tty/serial/sc16is7xx.c
··· 708 708 { 709 709 struct sc16is7xx_port *s = (struct sc16is7xx_port *)dev_id; 710 710 711 - queue_kthread_work(&s->kworker, &s->irq_work); 711 + kthread_queue_work(&s->kworker, &s->irq_work); 712 712 713 713 return IRQ_HANDLED; 714 714 } ··· 784 784 785 785 one->config.flags |= SC16IS7XX_RECONF_IER; 786 786 one->config.ier_clear |= bit; 787 - queue_kthread_work(&s->kworker, &one->reg_work); 787 + kthread_queue_work(&s->kworker, &one->reg_work); 788 788 } 789 789 790 790 static void sc16is7xx_stop_tx(struct uart_port *port) ··· 802 802 struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 803 803 struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 804 804 805 - queue_kthread_work(&s->kworker, &one->tx_work); 805 + kthread_queue_work(&s->kworker, &one->tx_work); 806 806 } 807 807 808 808 static unsigned int sc16is7xx_tx_empty(struct uart_port *port) ··· 828 828 struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 829 829 830 830 one->config.flags |= SC16IS7XX_RECONF_MD; 831 - queue_kthread_work(&s->kworker, &one->reg_work); 831 + kthread_queue_work(&s->kworker, &one->reg_work); 832 832 } 833 833 834 834 static void sc16is7xx_break_ctl(struct uart_port *port, int break_state) ··· 957 957 958 958 port->rs485 = *rs485; 959 959 one->config.flags |= SC16IS7XX_RECONF_RS485; 960 - queue_kthread_work(&s->kworker, &one->reg_work); 960 + kthread_queue_work(&s->kworker, &one->reg_work); 961 961 962 962 return 0; 963 963 } ··· 1030 1030 1031 1031 sc16is7xx_power(port, 0); 1032 1032 1033 - flush_kthread_worker(&s->kworker); 1033 + kthread_flush_worker(&s->kworker); 1034 1034 } 1035 1035 1036 1036 static const char *sc16is7xx_type(struct uart_port *port) ··· 1176 1176 s->devtype = devtype; 1177 1177 dev_set_drvdata(dev, s); 1178 1178 1179 - init_kthread_worker(&s->kworker); 1180 - init_kthread_work(&s->irq_work, sc16is7xx_ist); 1179 + kthread_init_worker(&s->kworker); 1180 + kthread_init_work(&s->irq_work, sc16is7xx_ist); 1181 1181 s->kworker_task = 
kthread_run(kthread_worker_fn, &s->kworker, 1182 1182 "sc16is7xx"); 1183 1183 if (IS_ERR(s->kworker_task)) { ··· 1234 1234 SC16IS7XX_EFCR_RXDISABLE_BIT | 1235 1235 SC16IS7XX_EFCR_TXDISABLE_BIT); 1236 1236 /* Initialize kthread work structs */ 1237 - init_kthread_work(&s->p[i].tx_work, sc16is7xx_tx_proc); 1238 - init_kthread_work(&s->p[i].reg_work, sc16is7xx_reg_proc); 1237 + kthread_init_work(&s->p[i].tx_work, sc16is7xx_tx_proc); 1238 + kthread_init_work(&s->p[i].reg_work, sc16is7xx_reg_proc); 1239 1239 /* Register port */ 1240 1240 uart_add_one_port(&sc16is7xx_uart, &s->p[i].port); 1241 1241 ··· 1301 1301 sc16is7xx_power(&s->p[i].port, 0); 1302 1302 } 1303 1303 1304 - flush_kthread_worker(&s->kworker); 1304 + kthread_flush_worker(&s->kworker); 1305 1305 kthread_stop(s->kworker_task); 1306 1306 1307 1307 if (!IS_ERR(s->clk))
-1
drivers/usb/early/ehci-dbgp.c
··· 20 20 #include <linux/usb/ehci_def.h> 21 21 #include <linux/delay.h> 22 22 #include <linux/serial_core.h> 23 - #include <linux/kconfig.h> 24 23 #include <linux/kgdb.h> 25 24 #include <linux/kthread.h> 26 25 #include <asm/io.h>
-1
drivers/usb/gadget/udc/bcm63xx_udc.c
··· 21 21 #include <linux/errno.h> 22 22 #include <linux/interrupt.h> 23 23 #include <linux/ioport.h> 24 - #include <linux/kconfig.h> 25 24 #include <linux/kernel.h> 26 25 #include <linux/list.h> 27 26 #include <linux/module.h>
-1
drivers/usb/host/pci-quirks.c
··· 9 9 */ 10 10 11 11 #include <linux/types.h> 12 - #include <linux/kconfig.h> 13 12 #include <linux/kernel.h> 14 13 #include <linux/pci.h> 15 14 #include <linux/delay.h>
+2 -3
fs/afs/write.c
··· 398 398 switch (ret) { 399 399 case -EDQUOT: 400 400 case -ENOSPC: 401 - set_bit(AS_ENOSPC, 402 - &wb->vnode->vfs_inode.i_mapping->flags); 401 + mapping_set_error(wb->vnode->vfs_inode.i_mapping, -ENOSPC); 403 402 break; 404 403 case -EROFS: 405 404 case -EIO: ··· 408 409 case -ENOMEDIUM: 409 410 case -ENXIO: 410 411 afs_kill_pages(wb->vnode, true, first, last); 411 - set_bit(AS_EIO, &wb->vnode->vfs_inode.i_mapping->flags); 412 + mapping_set_error(wb->vnode->vfs_inode.i_mapping, -EIO); 412 413 break; 413 414 case -EACCES: 414 415 case -EPERM:
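The afs hunk above (and the buffer.c, exofs, ext4, f2fs, and jbd2 hunks below) replaces open-coded `set_bit(AS_ENOSPC, ...)`/`set_bit(AS_EIO, ...)` with the `mapping_set_error()` helper, which picks the flag bit from the errno itself. A hedged userspace model of the helper's logic — the `AS_*` bit positions here are illustrative, and plain OR stands in for the kernel's atomic `set_bit()`:

```c
#include <errno.h>

/* Illustrative stand-ins for the kernel's AS_* flag bit positions. */
#define AS_EIO    0
#define AS_ENOSPC 1

/* Model of mapping_set_error(): -ENOSPC sets AS_ENOSPC, any other
 * nonzero error sets AS_EIO, and 0 is a no-op. */
static void model_mapping_set_error(unsigned long *flags, int error)
{
    if (error) {
        if (error == -ENOSPC)
            *flags |= 1UL << AS_ENOSPC;
        else
            *flags |= 1UL << AS_EIO;
    }
}
```

Centralizing this means callers no longer need to know which `AS_*` bit corresponds to which errno, which is exactly what the open-coded sites were getting at.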
+3 -6
fs/autofs4/autofs_i.h
··· 20 20 #define AUTOFS_IOC_COUNT 32 21 21 22 22 #define AUTOFS_DEV_IOCTL_IOC_FIRST (AUTOFS_DEV_IOCTL_VERSION) 23 - #define AUTOFS_DEV_IOCTL_IOC_COUNT (AUTOFS_IOC_COUNT - 11) 23 + #define AUTOFS_DEV_IOCTL_IOC_COUNT \ 24 + (AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD - AUTOFS_DEV_IOCTL_VERSION_CMD) 24 25 25 26 #include <linux/kernel.h> 26 27 #include <linux/slab.h> ··· 33 32 #include <linux/namei.h> 34 33 #include <asm/current.h> 35 34 #include <linux/uaccess.h> 36 - 37 - /* #define DEBUG */ 38 35 39 36 #ifdef pr_fmt 40 37 #undef pr_fmt ··· 110 111 int max_proto; 111 112 unsigned long exp_timeout; 112 113 unsigned int type; 113 - int reghost_enabled; 114 - int needs_reghost; 115 114 struct super_block *sb; 116 115 struct mutex wq_mutex; 117 116 struct mutex pipe_mutex; ··· 268 271 } 269 272 } 270 273 271 - extern void autofs4_kill_sb(struct super_block *); 274 + void autofs4_kill_sb(struct super_block *);
+35 -42
fs/autofs4/dev-ioctl.c
··· 75 75 if ((param->ver_major != AUTOFS_DEV_IOCTL_VERSION_MAJOR) || 76 76 (param->ver_minor > AUTOFS_DEV_IOCTL_VERSION_MINOR)) { 77 77 pr_warn("ioctl control interface version mismatch: " 78 - "kernel(%u.%u), user(%u.%u), cmd(%d)\n", 78 + "kernel(%u.%u), user(%u.%u), cmd(0x%08x)\n", 79 79 AUTOFS_DEV_IOCTL_VERSION_MAJOR, 80 80 AUTOFS_DEV_IOCTL_VERSION_MINOR, 81 81 param->ver_major, param->ver_minor, cmd); ··· 170 170 sbi = autofs4_sbi(inode->i_sb); 171 171 } 172 172 return sbi; 173 + } 174 + 175 + /* Return autofs dev ioctl version */ 176 + static int autofs_dev_ioctl_version(struct file *fp, 177 + struct autofs_sb_info *sbi, 178 + struct autofs_dev_ioctl *param) 179 + { 180 + /* This should have already been set. */ 181 + param->ver_major = AUTOFS_DEV_IOCTL_VERSION_MAJOR; 182 + param->ver_minor = AUTOFS_DEV_IOCTL_VERSION_MINOR; 183 + return 0; 173 184 } 174 185 175 186 /* Return autofs module protocol version */ ··· 597 586 598 587 static ioctl_fn lookup_dev_ioctl(unsigned int cmd) 599 588 { 600 - static struct { 601 - int cmd; 602 - ioctl_fn fn; 603 - } _ioctls[] = { 604 - {cmd_idx(AUTOFS_DEV_IOCTL_VERSION_CMD), NULL}, 605 - {cmd_idx(AUTOFS_DEV_IOCTL_PROTOVER_CMD), 606 - autofs_dev_ioctl_protover}, 607 - {cmd_idx(AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD), 608 - autofs_dev_ioctl_protosubver}, 609 - {cmd_idx(AUTOFS_DEV_IOCTL_OPENMOUNT_CMD), 610 - autofs_dev_ioctl_openmount}, 611 - {cmd_idx(AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD), 612 - autofs_dev_ioctl_closemount}, 613 - {cmd_idx(AUTOFS_DEV_IOCTL_READY_CMD), 614 - autofs_dev_ioctl_ready}, 615 - {cmd_idx(AUTOFS_DEV_IOCTL_FAIL_CMD), 616 - autofs_dev_ioctl_fail}, 617 - {cmd_idx(AUTOFS_DEV_IOCTL_SETPIPEFD_CMD), 618 - autofs_dev_ioctl_setpipefd}, 619 - {cmd_idx(AUTOFS_DEV_IOCTL_CATATONIC_CMD), 620 - autofs_dev_ioctl_catatonic}, 621 - {cmd_idx(AUTOFS_DEV_IOCTL_TIMEOUT_CMD), 622 - autofs_dev_ioctl_timeout}, 623 - {cmd_idx(AUTOFS_DEV_IOCTL_REQUESTER_CMD), 624 - autofs_dev_ioctl_requester}, 625 - {cmd_idx(AUTOFS_DEV_IOCTL_EXPIRE_CMD), 
626 - autofs_dev_ioctl_expire}, 627 - {cmd_idx(AUTOFS_DEV_IOCTL_ASKUMOUNT_CMD), 628 - autofs_dev_ioctl_askumount}, 629 - {cmd_idx(AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD), 630 - autofs_dev_ioctl_ismountpoint} 589 + static ioctl_fn _ioctls[] = { 590 + autofs_dev_ioctl_version, 591 + autofs_dev_ioctl_protover, 592 + autofs_dev_ioctl_protosubver, 593 + autofs_dev_ioctl_openmount, 594 + autofs_dev_ioctl_closemount, 595 + autofs_dev_ioctl_ready, 596 + autofs_dev_ioctl_fail, 597 + autofs_dev_ioctl_setpipefd, 598 + autofs_dev_ioctl_catatonic, 599 + autofs_dev_ioctl_timeout, 600 + autofs_dev_ioctl_requester, 601 + autofs_dev_ioctl_expire, 602 + autofs_dev_ioctl_askumount, 603 + autofs_dev_ioctl_ismountpoint, 631 604 }; 632 605 unsigned int idx = cmd_idx(cmd); 633 606 634 - return (idx >= ARRAY_SIZE(_ioctls)) ? NULL : _ioctls[idx].fn; 607 + return (idx >= ARRAY_SIZE(_ioctls)) ? NULL : _ioctls[idx]; 635 608 } 636 609 637 610 /* ioctl dispatcher */ ··· 637 642 cmd = _IOC_NR(command); 638 643 639 644 if (_IOC_TYPE(command) != _IOC_TYPE(AUTOFS_DEV_IOCTL_IOC_FIRST) || 640 - cmd - cmd_first >= AUTOFS_DEV_IOCTL_IOC_COUNT) { 645 + cmd - cmd_first > AUTOFS_DEV_IOCTL_IOC_COUNT) { 641 646 return -ENOTTY; 642 647 } 643 648 ··· 650 655 if (err) 651 656 goto out; 652 657 653 - /* The validate routine above always sets the version */ 654 - if (cmd == AUTOFS_DEV_IOCTL_VERSION_CMD) 655 - goto done; 656 - 657 658 fn = lookup_dev_ioctl(cmd); 658 659 if (!fn) { 659 660 pr_warn("unknown command 0x%08x\n", command); 660 - return -ENOTTY; 661 + err = -ENOTTY; 662 + goto out; 661 663 } 662 664 663 665 fp = NULL; ··· 663 671 /* 664 672 * For obvious reasons the openmount can't have a file 665 673 * descriptor yet. We don't take a reference to the 666 - * file during close to allow for immediate release. 674 + * file during close to allow for immediate release, 675 + * and the same for retrieving ioctl version. 
667 676 */ 668 - if (cmd != AUTOFS_DEV_IOCTL_OPENMOUNT_CMD && 677 + if (cmd != AUTOFS_DEV_IOCTL_VERSION_CMD && 678 + cmd != AUTOFS_DEV_IOCTL_OPENMOUNT_CMD && 669 679 cmd != AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD) { 670 680 fp = fget(param->ioctlfd); 671 681 if (!fp) { ··· 700 706 701 707 if (fp) 702 708 fput(fp); 703 - done: 704 709 if (err >= 0 && copy_to_user(user, param, AUTOFS_DEV_IOCTL_SIZE)) 705 710 err = -EFAULT; 706 711 out:
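The dev-ioctl hunks above flatten the old `{cmd, fn}` pair table into a plain function-pointer array indexed by `cmd_idx(cmd)` (the command number minus `AUTOFS_DEV_IOCTL_VERSION_CMD`), add a real `autofs_dev_ioctl_version()` handler at slot 0, and redefine `AUTOFS_DEV_IOCTL_IOC_COUNT` as last-command-minus-first. A hedged userspace model of the indexing and the dispatcher's bounds check — the `CMD_*` names and the `lookup()` helper are invented for illustration, though the enum values mirror the real 0x71-based sequence:

```c
#include <stddef.h>

/* Command numbers as in the autofs enum: consecutive from 0x71. */
enum {
    CMD_VERSION = 0x71,
    CMD_PROTOVER,
    CMD_PROTOSUBVER,
    CMD_OPENMOUNT,
    CMD_CLOSEMOUNT,
    CMD_READY,
    CMD_FAIL,
    CMD_SETPIPEFD,
    CMD_CATATONIC,
    CMD_TIMEOUT,
    CMD_REQUESTER,
    CMD_EXPIRE,
    CMD_ASKUMOUNT,
    CMD_ISMOUNTPOINT,
};

/* New-style count: last minus first, as in the autofs_i.h hunk above. */
#define IOC_COUNT (CMD_ISMOUNTPOINT - CMD_VERSION)

typedef int (*ioctl_fn)(void);

static int do_version(void)      { return CMD_VERSION; }
static int do_ismountpoint(void) { return CMD_ISMOUNTPOINT; }

/* Slot i handles command CMD_VERSION + i; intermediate slots elided. */
static ioctl_fn _ioctls[IOC_COUNT + 1] = {
    [0]         = do_version,
    [IOC_COUNT] = do_ismountpoint,
};

static ioctl_fn lookup(unsigned int cmd)
{
    unsigned int idx = cmd - CMD_VERSION;

    /* Mirrors the dispatcher's "cmd - cmd_first > IOC_COUNT" reject. */
    if (idx > IOC_COUNT)
        return NULL;
    return _ioctls[idx];
}
```

Note the dispatcher comparison changed from `>= AUTOFS_DEV_IOCTL_IOC_COUNT` to `> AUTOFS_DEV_IOCTL_IOC_COUNT` to match the new last-minus-first definition: the last command's index is exactly equal to the count.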
+23 -22
fs/autofs4/inode.c
··· 274 274 goto fail_dput; 275 275 } 276 276 277 + /* Test versions first */ 278 + if (sbi->max_proto < AUTOFS_MIN_PROTO_VERSION || 279 + sbi->min_proto > AUTOFS_MAX_PROTO_VERSION) { 280 + pr_err("kernel does not match daemon version " 281 + "daemon (%d, %d) kernel (%d, %d)\n", 282 + sbi->min_proto, sbi->max_proto, 283 + AUTOFS_MIN_PROTO_VERSION, AUTOFS_MAX_PROTO_VERSION); 284 + goto fail_dput; 285 + } 286 + 287 + /* Establish highest kernel protocol version */ 288 + if (sbi->max_proto > AUTOFS_MAX_PROTO_VERSION) 289 + sbi->version = AUTOFS_MAX_PROTO_VERSION; 290 + else 291 + sbi->version = sbi->max_proto; 292 + sbi->sub_version = AUTOFS_PROTO_SUBVERSION; 293 + 277 294 if (pgrp_set) { 278 295 sbi->oz_pgrp = find_get_pid(pgrp); 279 296 if (!sbi->oz_pgrp) { ··· 308 291 root_inode->i_fop = &autofs4_root_operations; 309 292 root_inode->i_op = &autofs4_dir_inode_operations; 310 293 311 - /* Couldn't this be tested earlier? */ 312 - if (sbi->max_proto < AUTOFS_MIN_PROTO_VERSION || 313 - sbi->min_proto > AUTOFS_MAX_PROTO_VERSION) { 314 - pr_err("kernel does not match daemon version " 315 - "daemon (%d, %d) kernel (%d, %d)\n", 316 - sbi->min_proto, sbi->max_proto, 317 - AUTOFS_MIN_PROTO_VERSION, AUTOFS_MAX_PROTO_VERSION); 318 - goto fail_dput; 319 - } 320 - 321 - /* Establish highest kernel protocol version */ 322 - if (sbi->max_proto > AUTOFS_MAX_PROTO_VERSION) 323 - sbi->version = AUTOFS_MAX_PROTO_VERSION; 324 - else 325 - sbi->version = sbi->max_proto; 326 - sbi->sub_version = AUTOFS_PROTO_SUBVERSION; 327 - 328 294 pr_debug("pipe fd = %d, pgrp = %u\n", pipefd, pid_nr(sbi->oz_pgrp)); 329 295 pipe = fget(pipefd); 330 296 331 297 if (!pipe) { 332 298 pr_err("could not open pipe file descriptor\n"); 333 - goto fail_dput; 299 + goto fail_put_pid; 334 300 } 335 301 ret = autofs_prepare_pipe(pipe); 336 302 if (ret < 0) ··· 334 334 fail_fput: 335 335 pr_err("pipe file descriptor does not contain proper ops\n"); 336 336 fput(pipe); 337 - /* fall through */ 337 + fail_put_pid: 
338 + put_pid(sbi->oz_pgrp); 338 339 fail_dput: 339 340 dput(root); 340 341 goto fail_free; 341 342 fail_ino: 342 - kfree(ino); 343 + autofs4_free_ino(ino); 343 344 fail_free: 344 - put_pid(sbi->oz_pgrp); 345 345 kfree(sbi); 346 346 s->s_fs_info = NULL; 347 347 return ret; ··· 368 368 inode->i_fop = &autofs4_dir_operations; 369 369 } else if (S_ISLNK(mode)) { 370 370 inode->i_op = &autofs4_symlink_inode_operations; 371 - } 371 + } else 372 + WARN_ON(1); 372 373 373 374 return inode; 374 375 }
+1 -3
fs/autofs4/root.c
··· 577 577 inode = autofs4_get_inode(dir->i_sb, S_IFLNK | 0555); 578 578 if (!inode) { 579 579 kfree(cp); 580 - if (!dentry->d_fsdata) 581 - kfree(ino); 582 580 return -ENOMEM; 583 581 } 584 582 inode->i_private = cp; ··· 840 842 if (may_umount(mnt)) 841 843 status = 1; 842 844 843 - pr_debug("returning %d\n", status); 845 + pr_debug("may umount %d\n", status); 844 846 845 847 status = put_user(status, p); 846 848
+77
fs/block_dev.c
··· 30 30 #include <linux/cleancache.h> 31 31 #include <linux/dax.h> 32 32 #include <linux/badblocks.h> 33 + #include <linux/falloc.h> 33 34 #include <asm/uaccess.h> 34 35 #include "internal.h" 35 36 ··· 1776 1775 .is_dirty_writeback = buffer_check_dirty_writeback, 1777 1776 }; 1778 1777 1778 + #define BLKDEV_FALLOC_FL_SUPPORTED \ 1779 + (FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | \ 1780 + FALLOC_FL_ZERO_RANGE | FALLOC_FL_NO_HIDE_STALE) 1781 + 1782 + static long blkdev_fallocate(struct file *file, int mode, loff_t start, 1783 + loff_t len) 1784 + { 1785 + struct block_device *bdev = I_BDEV(bdev_file_inode(file)); 1786 + struct request_queue *q = bdev_get_queue(bdev); 1787 + struct address_space *mapping; 1788 + loff_t end = start + len - 1; 1789 + loff_t isize; 1790 + int error; 1791 + 1792 + /* Fail if we don't recognize the flags. */ 1793 + if (mode & ~BLKDEV_FALLOC_FL_SUPPORTED) 1794 + return -EOPNOTSUPP; 1795 + 1796 + /* Don't go off the end of the device. */ 1797 + isize = i_size_read(bdev->bd_inode); 1798 + if (start >= isize) 1799 + return -EINVAL; 1800 + if (end >= isize) { 1801 + if (mode & FALLOC_FL_KEEP_SIZE) { 1802 + len = isize - start; 1803 + end = start + len - 1; 1804 + } else 1805 + return -EINVAL; 1806 + } 1807 + 1808 + /* 1809 + * Don't allow IO that isn't aligned to logical block size. 1810 + */ 1811 + if ((start | len) & (bdev_logical_block_size(bdev) - 1)) 1812 + return -EINVAL; 1813 + 1814 + /* Invalidate the page cache, including dirty pages. */ 1815 + mapping = bdev->bd_inode->i_mapping; 1816 + truncate_inode_pages_range(mapping, start, end); 1817 + 1818 + switch (mode) { 1819 + case FALLOC_FL_ZERO_RANGE: 1820 + case FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE: 1821 + error = blkdev_issue_zeroout(bdev, start >> 9, len >> 9, 1822 + GFP_KERNEL, false); 1823 + break; 1824 + case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE: 1825 + /* Only punch if the device can do zeroing discard. 
*/ 1826 + if (!blk_queue_discard(q) || !q->limits.discard_zeroes_data) 1827 + return -EOPNOTSUPP; 1828 + error = blkdev_issue_discard(bdev, start >> 9, len >> 9, 1829 + GFP_KERNEL, 0); 1830 + break; 1831 + case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE | FALLOC_FL_NO_HIDE_STALE: 1832 + if (!blk_queue_discard(q)) 1833 + return -EOPNOTSUPP; 1834 + error = blkdev_issue_discard(bdev, start >> 9, len >> 9, 1835 + GFP_KERNEL, 0); 1836 + break; 1837 + default: 1838 + return -EOPNOTSUPP; 1839 + } 1840 + if (error) 1841 + return error; 1842 + 1843 + /* 1844 + * Invalidate again; if someone wandered in and dirtied a page, 1845 + * the caller will be given -EBUSY. The third argument is 1846 + * inclusive, so the rounding here is safe. 1847 + */ 1848 + return invalidate_inode_pages2_range(mapping, 1849 + start >> PAGE_SHIFT, 1850 + end >> PAGE_SHIFT); 1851 + } 1852 + 1779 1853 const struct file_operations def_blk_fops = { 1780 1854 .open = blkdev_open, 1781 1855 .release = blkdev_close, ··· 1865 1789 #endif 1866 1790 .splice_read = generic_file_splice_read, 1867 1791 .splice_write = iter_file_splice_write, 1792 + .fallocate = blkdev_fallocate, 1868 1793 }; 1869 1794 1870 1795 int ioctl_by_bdev(struct block_device *bdev, unsigned cmd, unsigned long arg)
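The block_dev.c hunk above adds `fallocate()` support for block devices: a mode whitelist, a range check against the device size (with clamping only under `FALLOC_FL_KEEP_SIZE`), and a logical-block-size alignment check before dispatching to zeroout or discard. A hedged userspace model of just those validation steps — the `FL_*` flag values and `check_fallocate()` are illustrative stand-ins, not the real `<linux/falloc.h>` constants:

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative flag values; the real ones live in <linux/falloc.h>. */
#define FL_KEEP_SIZE     0x01
#define FL_PUNCH_HOLE    0x02
#define FL_NO_HIDE_STALE 0x04
#define FL_ZERO_RANGE    0x10
#define FL_SUPPORTED \
    (FL_KEEP_SIZE | FL_PUNCH_HOLE | FL_ZERO_RANGE | FL_NO_HIDE_STALE)

/* Model of the validation at the top of blkdev_fallocate().
 * lbs is the logical block size (a power of two); *len may be
 * clamped to the device size when FL_KEEP_SIZE is set. */
static int check_fallocate(int mode, int64_t start, int64_t *len,
                           int64_t isize, unsigned int lbs)
{
    int64_t end = start + *len - 1;

    if (mode & ~FL_SUPPORTED)
        return -EOPNOTSUPP;             /* unrecognized flags */
    if (start >= isize)
        return -EINVAL;                 /* starts past the device */
    if (end >= isize) {
        if (!(mode & FL_KEEP_SIZE))
            return -EINVAL;             /* can't grow a block device */
        *len = isize - start;           /* clamp to the device size */
    }
    /* Reject I/O not aligned to the logical block size. */
    if ((start | *len) & (lbs - 1))
        return -EINVAL;
    return 0;
}
```

The `(start | len) & (lbs - 1)` trick checks both values against the power-of-two block size in one expression: any low bit set in either operand makes the AND nonzero.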
+2 -2
fs/buffer.c
··· 351 351 set_buffer_uptodate(bh); 352 352 } else { 353 353 buffer_io_error(bh, ", lost async page write"); 354 - set_bit(AS_EIO, &page->mapping->flags); 354 + mapping_set_error(page->mapping, -EIO); 355 355 set_buffer_write_io_error(bh); 356 356 clear_buffer_uptodate(bh); 357 357 SetPageError(page); ··· 3249 3249 bh = head; 3250 3250 do { 3251 3251 if (buffer_write_io_error(bh) && page->mapping) 3252 - set_bit(AS_EIO, &page->mapping->flags); 3252 + mapping_set_error(page->mapping, -EIO); 3253 3253 if (buffer_busy(bh)) 3254 3254 goto failed; 3255 3255 bh = bh->b_this_page;
+1 -1
fs/exofs/inode.c
··· 778 778 fail: 779 779 EXOFS_DBGMSG("Error: writepage_strip(0x%lx, 0x%lx)=>%d\n", 780 780 inode->i_ino, page->index, ret); 781 - set_bit(AS_EIO, &page->mapping->flags); 781 + mapping_set_error(page->mapping, -EIO); 782 782 unlock_page(page); 783 783 return ret; 784 784 }
+1 -1
fs/ext4/page-io.c
··· 88 88 89 89 if (bio->bi_error) { 90 90 SetPageError(page); 91 - set_bit(AS_EIO, &page->mapping->flags); 91 + mapping_set_error(page->mapping, -EIO); 92 92 } 93 93 bh = head = page_buffers(page); 94 94 /*
+1 -1
fs/f2fs/data.c
··· 75 75 fscrypt_pullback_bio_page(&page, true); 76 76 77 77 if (unlikely(bio->bi_error)) { 78 - set_bit(AS_EIO, &page->mapping->flags); 78 + mapping_set_error(page->mapping, -EIO); 79 79 f2fs_stop_checkpoint(sbi, true); 80 80 } 81 81 end_page_writeback(page);
+1 -2
fs/jbd2/commit.c
··· 269 269 * filemap_fdatawait_range(), set it again so 270 270 * that user process can get -EIO from fsync(). 271 271 */ 272 - set_bit(AS_EIO, 273 - &jinode->i_vfs_inode->i_mapping->flags); 272 + mapping_set_error(jinode->i_vfs_inode->i_mapping, -EIO); 274 273 275 274 if (!ret) 276 275 ret = err;
-2
fs/lockd/procfs.h
··· 6 6 #ifndef _LOCKD_PROCFS_H 7 7 #define _LOCKD_PROCFS_H 8 8 9 - #include <linux/kconfig.h> 10 - 11 9 #if IS_ENABLED(CONFIG_PROC_FS) 12 10 int lockd_create_procfs(void); 13 11 void lockd_remove_procfs(void);
+3
fs/ocfs2/dlm/dlmmaster.c
··· 3188 3188 migrate->new_master, 3189 3189 migrate->master); 3190 3190 3191 + if (ret < 0) 3192 + kmem_cache_free(dlm_mle_cache, mle); 3193 + 3191 3194 spin_unlock(&dlm->master_lock); 3192 3195 unlock: 3193 3196 spin_unlock(&dlm->spinlock);
+2 -1
fs/open.c
··· 300 300 * Let individual file system decide if it supports preallocation 301 301 * for directories or not. 302 302 */ 303 - if (!S_ISREG(inode->i_mode) && !S_ISDIR(inode->i_mode)) 303 + if (!S_ISREG(inode->i_mode) && !S_ISDIR(inode->i_mode) && 304 + !S_ISBLK(inode->i_mode)) 304 305 return -ENODEV; 305 306 306 307 /* Check for wrap through zero too */
+95 -69
fs/pipe.c
··· 601 601 return retval; 602 602 } 603 603 604 - static void account_pipe_buffers(struct pipe_inode_info *pipe, 604 + static unsigned long account_pipe_buffers(struct user_struct *user, 605 605 unsigned long old, unsigned long new) 606 606 { 607 - atomic_long_add(new - old, &pipe->user->pipe_bufs); 607 + return atomic_long_add_return(new - old, &user->pipe_bufs); 608 608 } 609 609 610 - static bool too_many_pipe_buffers_soft(struct user_struct *user) 610 + static bool too_many_pipe_buffers_soft(unsigned long user_bufs) 611 611 { 612 - return pipe_user_pages_soft && 613 - atomic_long_read(&user->pipe_bufs) >= pipe_user_pages_soft; 612 + return pipe_user_pages_soft && user_bufs >= pipe_user_pages_soft; 614 613 } 615 614 616 - static bool too_many_pipe_buffers_hard(struct user_struct *user) 615 + static bool too_many_pipe_buffers_hard(unsigned long user_bufs) 617 616 { 618 - return pipe_user_pages_hard && 619 - atomic_long_read(&user->pipe_bufs) >= pipe_user_pages_hard; 617 + return pipe_user_pages_hard && user_bufs >= pipe_user_pages_hard; 620 618 } 621 619 622 620 struct pipe_inode_info *alloc_pipe_info(void) 623 621 { 624 622 struct pipe_inode_info *pipe; 623 + unsigned long pipe_bufs = PIPE_DEF_BUFFERS; 624 + struct user_struct *user = get_current_user(); 625 + unsigned long user_bufs; 625 626 626 627 pipe = kzalloc(sizeof(struct pipe_inode_info), GFP_KERNEL_ACCOUNT); 627 - if (pipe) { 628 - unsigned long pipe_bufs = PIPE_DEF_BUFFERS; 629 - struct user_struct *user = get_current_user(); 628 + if (pipe == NULL) 629 + goto out_free_uid; 630 630 631 - if (!too_many_pipe_buffers_hard(user)) { 632 - if (too_many_pipe_buffers_soft(user)) 633 - pipe_bufs = 1; 634 - pipe->bufs = kcalloc(pipe_bufs, 635 - sizeof(struct pipe_buffer), 636 - GFP_KERNEL_ACCOUNT); 637 - } 631 + if (pipe_bufs * PAGE_SIZE > pipe_max_size && !capable(CAP_SYS_RESOURCE)) 632 + pipe_bufs = pipe_max_size >> PAGE_SHIFT; 638 633 639 - if (pipe->bufs) { 640 - init_waitqueue_head(&pipe->wait); 641 - 
pipe->r_counter = pipe->w_counter = 1; 642 - pipe->buffers = pipe_bufs; 643 - pipe->user = user; 644 - account_pipe_buffers(pipe, 0, pipe_bufs); 645 - mutex_init(&pipe->mutex); 646 - return pipe; 647 - } 648 - free_uid(user); 649 - kfree(pipe); 634 + user_bufs = account_pipe_buffers(user, 0, pipe_bufs); 635 + 636 + if (too_many_pipe_buffers_soft(user_bufs)) { 637 + user_bufs = account_pipe_buffers(user, pipe_bufs, 1); 638 + pipe_bufs = 1; 650 639 } 651 640 641 + if (too_many_pipe_buffers_hard(user_bufs)) 642 + goto out_revert_acct; 643 + 644 + pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer), 645 + GFP_KERNEL_ACCOUNT); 646 + 647 + if (pipe->bufs) { 648 + init_waitqueue_head(&pipe->wait); 649 + pipe->r_counter = pipe->w_counter = 1; 650 + pipe->buffers = pipe_bufs; 651 + pipe->user = user; 652 + mutex_init(&pipe->mutex); 653 + return pipe; 654 + } 655 + 656 + out_revert_acct: 657 + (void) account_pipe_buffers(user, pipe_bufs, 0); 658 + kfree(pipe); 659 + out_free_uid: 660 + free_uid(user); 652 661 return NULL; 653 662 } 654 663 ··· 665 656 { 666 657 int i; 667 658 668 - account_pipe_buffers(pipe, pipe->buffers, 0); 659 + (void) account_pipe_buffers(pipe->user, pipe->buffers, 0); 669 660 free_uid(pipe->user); 670 661 for (i = 0; i < pipe->buffers; i++) { 671 662 struct pipe_buffer *buf = pipe->bufs + i; ··· 1017 1008 }; 1018 1009 1019 1010 /* 1011 + * Currently we rely on the pipe array holding a power-of-2 number 1012 + * of pages. 1013 + */ 1014 + static inline unsigned int round_pipe_size(unsigned int size) 1015 + { 1016 + unsigned long nr_pages; 1017 + 1018 + nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; 1019 + return roundup_pow_of_two(nr_pages) << PAGE_SHIFT; 1020 + } 1021 + 1022 + /* 1020 1023 * Allocate a new array of pipe buffers and copy the info over. Returns the 1021 1024 * pipe size if successful, or return -ERROR on error. 
1022 1025 */ 1023 - static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long nr_pages) 1026 + static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg) 1024 1027 { 1025 1028 struct pipe_buffer *bufs; 1029 + unsigned int size, nr_pages; 1030 + unsigned long user_bufs; 1031 + long ret = 0; 1032 + 1033 + size = round_pipe_size(arg); 1034 + nr_pages = size >> PAGE_SHIFT; 1035 + 1036 + if (!nr_pages) 1037 + return -EINVAL; 1038 + 1039 + /* 1040 + * If trying to increase the pipe capacity, check that an 1041 + * unprivileged user is not trying to exceed various limits 1042 + * (soft limit check here, hard limit check just below). 1043 + * Decreasing the pipe capacity is always permitted, even 1044 + * if the user is currently over a limit. 1045 + */ 1046 + if (nr_pages > pipe->buffers && 1047 + size > pipe_max_size && !capable(CAP_SYS_RESOURCE)) 1048 + return -EPERM; 1049 + 1050 + user_bufs = account_pipe_buffers(pipe->user, pipe->buffers, nr_pages); 1051 + 1052 + if (nr_pages > pipe->buffers && 1053 + (too_many_pipe_buffers_hard(user_bufs) || 1054 + too_many_pipe_buffers_soft(user_bufs)) && 1055 + !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) { 1056 + ret = -EPERM; 1057 + goto out_revert_acct; 1058 + } 1026 1059 1027 1060 /* 1028 1061 * We can shrink the pipe, if arg >= pipe->nrbufs. Since we don't ··· 1072 1021 * again like we would do for growing. If the pipe currently 1073 1022 * contains more buffers than arg, then return busy. 
1074 1023 */ 1075 - if (nr_pages < pipe->nrbufs) 1076 - return -EBUSY; 1024 + if (nr_pages < pipe->nrbufs) { 1025 + ret = -EBUSY; 1026 + goto out_revert_acct; 1027 + } 1077 1028 1078 1029 bufs = kcalloc(nr_pages, sizeof(*bufs), 1079 1030 GFP_KERNEL_ACCOUNT | __GFP_NOWARN); 1080 - if (unlikely(!bufs)) 1081 - return -ENOMEM; 1031 + if (unlikely(!bufs)) { 1032 + ret = -ENOMEM; 1033 + goto out_revert_acct; 1034 + } 1082 1035 1083 1036 /* 1084 1037 * The pipe array wraps around, so just start the new one at zero ··· 1105 1050 memcpy(bufs + head, pipe->bufs, tail * sizeof(struct pipe_buffer)); 1106 1051 } 1107 1052 1108 - account_pipe_buffers(pipe, pipe->buffers, nr_pages); 1109 1053 pipe->curbuf = 0; 1110 1054 kfree(pipe->bufs); 1111 1055 pipe->bufs = bufs; 1112 1056 pipe->buffers = nr_pages; 1113 1057 return nr_pages * PAGE_SIZE; 1114 - } 1115 1058 1116 - /* 1117 - * Currently we rely on the pipe array holding a power-of-2 number 1118 - * of pages. 1119 - */ 1120 - static inline unsigned int round_pipe_size(unsigned int size) 1121 - { 1122 - unsigned long nr_pages; 1123 - 1124 - nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; 1125 - return roundup_pow_of_two(nr_pages) << PAGE_SHIFT; 1059 + out_revert_acct: 1060 + (void) account_pipe_buffers(pipe->user, nr_pages, pipe->buffers); 1061 + return ret; 1126 1062 } 1127 1063 1128 1064 /* ··· 1155 1109 __pipe_lock(pipe); 1156 1110 1157 1111 switch (cmd) { 1158 - case F_SETPIPE_SZ: { 1159 - unsigned int size, nr_pages; 1160 - 1161 - size = round_pipe_size(arg); 1162 - nr_pages = size >> PAGE_SHIFT; 1163 - 1164 - ret = -EINVAL; 1165 - if (!nr_pages) 1166 - goto out; 1167 - 1168 - if (!capable(CAP_SYS_RESOURCE) && size > pipe_max_size) { 1169 - ret = -EPERM; 1170 - goto out; 1171 - } else if ((too_many_pipe_buffers_hard(pipe->user) || 1172 - too_many_pipe_buffers_soft(pipe->user)) && 1173 - !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) { 1174 - ret = -EPERM; 1175 - goto out; 1176 - } 1177 - ret = pipe_set_size(pipe, 
nr_pages); 1112 + case F_SETPIPE_SZ: 1113 + ret = pipe_set_size(pipe, arg); 1178 1114 break; 1179 - } 1180 1115 case F_GETPIPE_SZ: 1181 1116 ret = pipe->buffers * PAGE_SIZE; 1182 1117 break; ··· 1166 1139 break; 1167 1140 } 1168 1141 1169 - out: 1170 1142 __pipe_unlock(pipe); 1171 1143 return ret; 1172 1144 }
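The pipe.c hunks above move `round_pipe_size()` ahead of `pipe_set_size()` and make `pipe_set_size()` take the raw `F_SETPIPE_SZ` argument, so the rounding and the capability/limit checks all live in one place. A hedged userspace model of the rounding itself, assuming 4 KiB pages (the kernel uses `roundup_pow_of_two()`; the loop here is a portable stand-in, valid for nonzero page counts):

```c
#define MODEL_PAGE_SIZE  4096u
#define MODEL_PAGE_SHIFT 12

/* Portable roundup_pow_of_two() stand-in, for nonzero n. */
static unsigned long roundup_pow2(unsigned long n)
{
    unsigned long p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/* Model of fs/pipe.c round_pipe_size(): convert bytes to pages,
 * round the page count up to a power of two, convert back. */
static unsigned int model_round_pipe_size(unsigned int size)
{
    unsigned long nr_pages = (size + MODEL_PAGE_SIZE - 1) >> MODEL_PAGE_SHIFT;

    return roundup_pow2(nr_pages) << MODEL_PAGE_SHIFT;
}
```

The power-of-two constraint exists because the pipe buffer ring indexes with a mask; the caller separately rejects a zero page count (`!nr_pages`) before this result is used.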
+11 -3
fs/select.c
··· 29 29 #include <linux/sched/rt.h> 30 30 #include <linux/freezer.h> 31 31 #include <net/busy_poll.h> 32 + #include <linux/vmalloc.h> 32 33 33 34 #include <asm/uaccess.h> 34 35 ··· 555 554 fd_set_bits fds; 556 555 void *bits; 557 556 int ret, max_fds; 558 - unsigned int size; 557 + size_t size, alloc_size; 559 558 struct fdtable *fdt; 560 559 /* Allocate small arguments on the stack to save memory and be faster */ 561 560 long stack_fds[SELECT_STACK_ALLOC/sizeof(long)]; ··· 582 581 if (size > sizeof(stack_fds) / 6) { 583 582 /* Not enough space in on-stack array; must use kmalloc */ 584 583 ret = -ENOMEM; 585 - bits = kmalloc(6 * size, GFP_KERNEL); 584 + if (size > (SIZE_MAX / 6)) 585 + goto out_nofds; 586 + 587 + alloc_size = 6 * size; 588 + bits = kmalloc(alloc_size, GFP_KERNEL|__GFP_NOWARN); 589 + if (!bits && alloc_size > PAGE_SIZE) 590 + bits = vmalloc(alloc_size); 591 + 586 592 if (!bits) 587 593 goto out_nofds; 588 594 } ··· 626 618 627 619 out: 628 620 if (bits != stack_fds) 629 - kfree(bits); 621 + kvfree(bits); 630 622 out_nofds: 631 623 return ret; 632 624 }
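The select.c hunk above does three things: it widens `size` to `size_t`, guards the `6 * size` multiplication against overflow, and falls back from `kmalloc()` (with `__GFP_NOWARN`) to `vmalloc()` for large fd bitmaps, freeing with `kvfree()` which handles either. A userspace sketch of the same shape — `alloc_fd_bits` is an invented name, and `malloc()` stands in for both kernel allocators, so only the overflow guard and control flow carry over:

```c
#include <stdint.h>
#include <stdlib.h>

#define MODEL_PAGE_SIZE 4096u

/* Model of the core_sys_select() allocation path: reject sizes whose
 * 6x multiple would overflow, try the cheap allocator first, then
 * fall back for big requests.  malloc() stands in for both
 * kmalloc(..., __GFP_NOWARN) and vmalloc(). */
static void *alloc_fd_bits(size_t size)
{
    size_t alloc_size;
    void *bits;

    if (size > SIZE_MAX / 6)        /* 6 bitmaps: in/out/ex, result x2 */
        return NULL;
    alloc_size = 6 * size;
    bits = malloc(alloc_size);          /* kmalloc(), warnings suppressed */
    if (!bits && alloc_size > MODEL_PAGE_SIZE)
        bits = malloc(alloc_size);      /* vmalloc() fallback */
    return bits;
}
```

The fallback matters because a huge `RLIMIT_NOFILE` can make the bitmap allocation larger than the kernel can satisfy contiguously; vmalloc only needs virtually contiguous pages.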
+1 -210
include/linux/auto_dev-ioctl.h
··· 10 10 #ifndef _LINUX_AUTO_DEV_IOCTL_H 11 11 #define _LINUX_AUTO_DEV_IOCTL_H 12 12 13 - #include <linux/auto_fs.h> 14 - #include <linux/string.h> 15 - 16 - #define AUTOFS_DEVICE_NAME "autofs" 17 - 18 - #define AUTOFS_DEV_IOCTL_VERSION_MAJOR 1 19 - #define AUTOFS_DEV_IOCTL_VERSION_MINOR 0 20 - 21 - #define AUTOFS_DEVID_LEN 16 22 - 23 - #define AUTOFS_DEV_IOCTL_SIZE sizeof(struct autofs_dev_ioctl) 24 - 25 - /* 26 - * An ioctl interface for autofs mount point control. 27 - */ 28 - 29 - struct args_protover { 30 - __u32 version; 31 - }; 32 - 33 - struct args_protosubver { 34 - __u32 sub_version; 35 - }; 36 - 37 - struct args_openmount { 38 - __u32 devid; 39 - }; 40 - 41 - struct args_ready { 42 - __u32 token; 43 - }; 44 - 45 - struct args_fail { 46 - __u32 token; 47 - __s32 status; 48 - }; 49 - 50 - struct args_setpipefd { 51 - __s32 pipefd; 52 - }; 53 - 54 - struct args_timeout { 55 - __u64 timeout; 56 - }; 57 - 58 - struct args_requester { 59 - __u32 uid; 60 - __u32 gid; 61 - }; 62 - 63 - struct args_expire { 64 - __u32 how; 65 - }; 66 - 67 - struct args_askumount { 68 - __u32 may_umount; 69 - }; 70 - 71 - struct args_ismountpoint { 72 - union { 73 - struct args_in { 74 - __u32 type; 75 - } in; 76 - struct args_out { 77 - __u32 devid; 78 - __u32 magic; 79 - } out; 80 - }; 81 - }; 82 - 83 - /* 84 - * All the ioctls use this structure. 85 - * When sending a path size must account for the total length 86 - * of the chunk of memory otherwise is is the size of the 87 - * structure. 
88 - */ 89 - 90 - struct autofs_dev_ioctl { 91 - __u32 ver_major; 92 - __u32 ver_minor; 93 - __u32 size; /* total size of data passed in 94 - * including this struct */ 95 - __s32 ioctlfd; /* automount command fd */ 96 - 97 - /* Command parameters */ 98 - 99 - union { 100 - struct args_protover protover; 101 - struct args_protosubver protosubver; 102 - struct args_openmount openmount; 103 - struct args_ready ready; 104 - struct args_fail fail; 105 - struct args_setpipefd setpipefd; 106 - struct args_timeout timeout; 107 - struct args_requester requester; 108 - struct args_expire expire; 109 - struct args_askumount askumount; 110 - struct args_ismountpoint ismountpoint; 111 - }; 112 - 113 - char path[0]; 114 - }; 115 - 116 - static inline void init_autofs_dev_ioctl(struct autofs_dev_ioctl *in) 117 - { 118 - memset(in, 0, sizeof(struct autofs_dev_ioctl)); 119 - in->ver_major = AUTOFS_DEV_IOCTL_VERSION_MAJOR; 120 - in->ver_minor = AUTOFS_DEV_IOCTL_VERSION_MINOR; 121 - in->size = sizeof(struct autofs_dev_ioctl); 122 - in->ioctlfd = -1; 123 - } 124 - 125 - /* 126 - * If you change this make sure you make the corresponding change 127 - * to autofs-dev-ioctl.c:lookup_ioctl() 128 - */ 129 - enum { 130 - /* Get various version info */ 131 - AUTOFS_DEV_IOCTL_VERSION_CMD = 0x71, 132 - AUTOFS_DEV_IOCTL_PROTOVER_CMD, 133 - AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD, 134 - 135 - /* Open mount ioctl fd */ 136 - AUTOFS_DEV_IOCTL_OPENMOUNT_CMD, 137 - 138 - /* Close mount ioctl fd */ 139 - AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD, 140 - 141 - /* Mount/expire status returns */ 142 - AUTOFS_DEV_IOCTL_READY_CMD, 143 - AUTOFS_DEV_IOCTL_FAIL_CMD, 144 - 145 - /* Activate/deactivate autofs mount */ 146 - AUTOFS_DEV_IOCTL_SETPIPEFD_CMD, 147 - AUTOFS_DEV_IOCTL_CATATONIC_CMD, 148 - 149 - /* Expiry timeout */ 150 - AUTOFS_DEV_IOCTL_TIMEOUT_CMD, 151 - 152 - /* Get mount last requesting uid and gid */ 153 - AUTOFS_DEV_IOCTL_REQUESTER_CMD, 154 - 155 - /* Check for eligible expire candidates */ 156 - 
AUTOFS_DEV_IOCTL_EXPIRE_CMD, 157 - 158 - /* Request busy status */ 159 - AUTOFS_DEV_IOCTL_ASKUMOUNT_CMD, 160 - 161 - /* Check if path is a mountpoint */ 162 - AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD, 163 - }; 164 - 165 - #define AUTOFS_IOCTL 0x93 166 - 167 - #define AUTOFS_DEV_IOCTL_VERSION \ 168 - _IOWR(AUTOFS_IOCTL, \ 169 - AUTOFS_DEV_IOCTL_VERSION_CMD, struct autofs_dev_ioctl) 170 - 171 - #define AUTOFS_DEV_IOCTL_PROTOVER \ 172 - _IOWR(AUTOFS_IOCTL, \ 173 - AUTOFS_DEV_IOCTL_PROTOVER_CMD, struct autofs_dev_ioctl) 174 - 175 - #define AUTOFS_DEV_IOCTL_PROTOSUBVER \ 176 - _IOWR(AUTOFS_IOCTL, \ 177 - AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD, struct autofs_dev_ioctl) 178 - 179 - #define AUTOFS_DEV_IOCTL_OPENMOUNT \ 180 - _IOWR(AUTOFS_IOCTL, \ 181 - AUTOFS_DEV_IOCTL_OPENMOUNT_CMD, struct autofs_dev_ioctl) 182 - 183 - #define AUTOFS_DEV_IOCTL_CLOSEMOUNT \ 184 - _IOWR(AUTOFS_IOCTL, \ 185 - AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD, struct autofs_dev_ioctl) 186 - 187 - #define AUTOFS_DEV_IOCTL_READY \ 188 - _IOWR(AUTOFS_IOCTL, \ 189 - AUTOFS_DEV_IOCTL_READY_CMD, struct autofs_dev_ioctl) 190 - 191 - #define AUTOFS_DEV_IOCTL_FAIL \ 192 - _IOWR(AUTOFS_IOCTL, \ 193 - AUTOFS_DEV_IOCTL_FAIL_CMD, struct autofs_dev_ioctl) 194 - 195 - #define AUTOFS_DEV_IOCTL_SETPIPEFD \ 196 - _IOWR(AUTOFS_IOCTL, \ 197 - AUTOFS_DEV_IOCTL_SETPIPEFD_CMD, struct autofs_dev_ioctl) 198 - 199 - #define AUTOFS_DEV_IOCTL_CATATONIC \ 200 - _IOWR(AUTOFS_IOCTL, \ 201 - AUTOFS_DEV_IOCTL_CATATONIC_CMD, struct autofs_dev_ioctl) 202 - 203 - #define AUTOFS_DEV_IOCTL_TIMEOUT \ 204 - _IOWR(AUTOFS_IOCTL, \ 205 - AUTOFS_DEV_IOCTL_TIMEOUT_CMD, struct autofs_dev_ioctl) 206 - 207 - #define AUTOFS_DEV_IOCTL_REQUESTER \ 208 - _IOWR(AUTOFS_IOCTL, \ 209 - AUTOFS_DEV_IOCTL_REQUESTER_CMD, struct autofs_dev_ioctl) 210 - 211 - #define AUTOFS_DEV_IOCTL_EXPIRE \ 212 - _IOWR(AUTOFS_IOCTL, \ 213 - AUTOFS_DEV_IOCTL_EXPIRE_CMD, struct autofs_dev_ioctl) 214 - 215 - #define AUTOFS_DEV_IOCTL_ASKUMOUNT \ 216 - _IOWR(AUTOFS_IOCTL, \ 217 - 
AUTOFS_DEV_IOCTL_ASKUMOUNT_CMD, struct autofs_dev_ioctl) 218 - 219 - #define AUTOFS_DEV_IOCTL_ISMOUNTPOINT \ 220 - _IOWR(AUTOFS_IOCTL, \ 221 - AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD, struct autofs_dev_ioctl) 222 - 13 + #include <uapi/linux/auto_dev-ioctl.h> 223 14 #endif /* _LINUX_AUTO_DEV_IOCTL_H */
-1
include/linux/auto_fs.h
··· 10 10 #define _LINUX_AUTO_FS_H 11 11 12 12 #include <linux/fs.h> 13 - #include <linux/limits.h> 14 13 #include <linux/ioctl.h> 15 14 #include <uapi/linux/auto_fs.h> 16 15 #endif /* _LINUX_AUTO_FS_H */
+4 -1
include/linux/ctype.h
··· 22 22 #define isalnum(c) ((__ismask(c)&(_U|_L|_D)) != 0) 23 23 #define isalpha(c) ((__ismask(c)&(_U|_L)) != 0) 24 24 #define iscntrl(c) ((__ismask(c)&(_C)) != 0) 25 - #define isdigit(c) ((__ismask(c)&(_D)) != 0) 25 + static inline int isdigit(int c) 26 + { 27 + return '0' <= c && c <= '9'; 28 + } 26 29 #define isgraph(c) ((__ismask(c)&(_P|_U|_L|_D)) != 0) 27 30 #define islower(c) ((__ismask(c)&(_L)) != 0) 28 31 #define isprint(c) ((__ismask(c)&(_P|_U|_L|_D|_SP)) != 0)
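The ctype.h hunk above turns `isdigit()` from a `__ismask` table lookup into a static inline range check. Unlike the macro, the inline takes a typed `int`, involves no load from the `_ctype[]` table, and is trivially constant-folded by the compiler. A userspace copy of the new definition (renamed here to avoid clashing with libc's `isdigit`):

```c
/* The new kernel definition in shape: a plain range comparison
 * instead of a lookup in the _ctype[] mask table. */
static inline int kernel_isdigit(int c)
{
    return '0' <= c && c <= '9';
}
```

Note that it is also well defined for any `int` argument, including negative values, whereas indexing a lookup table with an out-of-range value would be undefined behavior.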
+5
include/linux/dma-mapping.h
··· 56 56 * that gives better TLB efficiency. 57 57 */ 58 58 #define DMA_ATTR_ALLOC_SINGLE_PAGES (1UL << 7) 59 + /* 60 + * DMA_ATTR_NO_WARN: This tells the DMA-mapping subsystem to suppress 61 + * allocation failure reports (similarly to __GFP_NOWARN). 62 + */ 63 + #define DMA_ATTR_NO_WARN (1UL << 8) 59 64 60 65 /* 61 66 * A dma_addr_t can hold any valid DMA or bus address for the platform.
-1
include/linux/export.h
··· 78 78 79 79 #elif defined(CONFIG_TRIM_UNUSED_KSYMS) 80 80 81 - #include <linux/kconfig.h> 82 81 #include <generated/autoksyms.h> 83 82 84 83 #define __EXPORT_SYMBOL(sym, sec) \
+2 -1
include/linux/fs.h
··· 440 440 unsigned long nrexceptional; 441 441 pgoff_t writeback_index;/* writeback starts here */ 442 442 const struct address_space_operations *a_ops; /* methods */ 443 - unsigned long flags; /* error bits/gfp mask */ 443 + unsigned long flags; /* error bits */ 444 444 spinlock_t private_lock; /* for use by the address_space */ 445 + gfp_t gfp_mask; /* implicit gfp mask for allocations */ 445 446 struct list_head private_list; /* ditto */ 446 447 void *private_data; /* ditto */ 447 448 } __attribute__((aligned(sizeof(long))));
-1
include/linux/gpio/driver.h
··· 8 8 #include <linux/irqdomain.h> 9 9 #include <linux/lockdep.h> 10 10 #include <linux/pinctrl/pinctrl.h> 11 - #include <linux/kconfig.h> 12 11 13 12 struct gpio_desc; 14 13 struct of_phandle_args;
+6
include/linux/kexec.h
··· 259 259 vmcoreinfo_append_str("NUMBER(%s)=%ld\n", #name, (long)name) 260 260 #define VMCOREINFO_CONFIG(name) \ 261 261 vmcoreinfo_append_str("CONFIG_%s=y\n", #name) 262 + #define VMCOREINFO_PAGE_OFFSET(value) \ 263 + vmcoreinfo_append_str("PAGE_OFFSET=%lx\n", (unsigned long)value) 264 + #define VMCOREINFO_VMALLOC_START(value) \ 265 + vmcoreinfo_append_str("VMALLOC_START=%lx\n", (unsigned long)value) 266 + #define VMCOREINFO_VMEMMAP_START(value) \ 267 + vmcoreinfo_append_str("VMEMMAP_START=%lx\n", (unsigned long)value) 262 268 263 269 extern struct kimage *kexec_image; 264 270 extern struct kimage *kexec_crash_image;
+18
include/linux/kmemleak.h
··· 38 38 extern void kmemleak_ignore(const void *ptr) __ref; 39 39 extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref; 40 40 extern void kmemleak_no_scan(const void *ptr) __ref; 41 + extern void kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count, 42 + gfp_t gfp) __ref; 43 + extern void kmemleak_free_part_phys(phys_addr_t phys, size_t size) __ref; 44 + extern void kmemleak_not_leak_phys(phys_addr_t phys) __ref; 45 + extern void kmemleak_ignore_phys(phys_addr_t phys) __ref; 41 46 42 47 static inline void kmemleak_alloc_recursive(const void *ptr, size_t size, 43 48 int min_count, unsigned long flags, ··· 109 104 { 110 105 } 111 106 static inline void kmemleak_no_scan(const void *ptr) 107 + { 108 + } 109 + static inline void kmemleak_alloc_phys(phys_addr_t phys, size_t size, 110 + int min_count, gfp_t gfp) 111 + { 112 + } 113 + static inline void kmemleak_free_part_phys(phys_addr_t phys, size_t size) 114 + { 115 + } 116 + static inline void kmemleak_not_leak_phys(phys_addr_t phys) 117 + { 118 + } 119 + static inline void kmemleak_ignore_phys(phys_addr_t phys) 112 120 { 113 121 } 114 122
+78 -10
include/linux/kthread.h
··· 10 10 int node, 11 11 const char namefmt[], ...); 12 12 13 + /** 14 + * kthread_create - create a kthread on the current node 15 + * @threadfn: the function to run in the thread 16 + * @data: data pointer for @threadfn() 17 + * @namefmt: printf-style format string for the thread name 18 + * @...: arguments for @namefmt. 19 + * 20 + * This macro will create a kthread on the current node, leaving it in 21 + * the stopped state. This is just a helper for kthread_create_on_node(); 22 + * see the documentation there for more details. 23 + */ 13 24 #define kthread_create(threadfn, data, namefmt, arg...) \ 14 25 kthread_create_on_node(threadfn, data, NUMA_NO_NODE, namefmt, ##arg) 15 26 ··· 55 44 bool kthread_should_park(void); 56 45 bool kthread_freezable_should_stop(bool *was_frozen); 57 46 void *kthread_data(struct task_struct *k); 58 - void *probe_kthread_data(struct task_struct *k); 47 + void *kthread_probe_data(struct task_struct *k); 59 48 int kthread_park(struct task_struct *k); 60 49 void kthread_unpark(struct task_struct *k); 61 50 void kthread_parkme(void); ··· 68 57 * Simple work processor based on kthread. 69 58 * 70 59 * This provides easier way to make use of kthreads. A kthread_work 71 - * can be queued and flushed using queue/flush_kthread_work() 60 + * can be queued and flushed using queue/kthread_flush_work() 72 61 * respectively. Queued kthread_works are processed by a kthread 73 62 * running kthread_worker_fn(). 
74 63 */ 75 64 struct kthread_work; 76 65 typedef void (*kthread_work_func_t)(struct kthread_work *work); 66 + void kthread_delayed_work_timer_fn(unsigned long __data); 67 + 68 + enum { 69 + KTW_FREEZABLE = 1 << 0, /* freeze during suspend */ 70 + }; 77 71 78 72 struct kthread_worker { 73 + unsigned int flags; 79 74 spinlock_t lock; 80 75 struct list_head work_list; 76 + struct list_head delayed_work_list; 81 77 struct task_struct *task; 82 78 struct kthread_work *current_work; 83 79 }; ··· 93 75 struct list_head node; 94 76 kthread_work_func_t func; 95 77 struct kthread_worker *worker; 78 + /* Number of canceling calls that are running at the moment. */ 79 + int canceling; 80 + }; 81 + 82 + struct kthread_delayed_work { 83 + struct kthread_work work; 84 + struct timer_list timer; 96 85 }; 97 86 98 87 #define KTHREAD_WORKER_INIT(worker) { \ 99 88 .lock = __SPIN_LOCK_UNLOCKED((worker).lock), \ 100 89 .work_list = LIST_HEAD_INIT((worker).work_list), \ 90 + .delayed_work_list = LIST_HEAD_INIT((worker).delayed_work_list),\ 101 91 } 102 92 103 93 #define KTHREAD_WORK_INIT(work, fn) { \ 104 94 .node = LIST_HEAD_INIT((work).node), \ 105 95 .func = (fn), \ 96 + } 97 + 98 + #define KTHREAD_DELAYED_WORK_INIT(dwork, fn) { \ 99 + .work = KTHREAD_WORK_INIT((dwork).work, (fn)), \ 100 + .timer = __TIMER_INITIALIZER(kthread_delayed_work_timer_fn, \ 101 + 0, (unsigned long)&(dwork), \ 102 + TIMER_IRQSAFE), \ 106 103 } 107 104 108 105 #define DEFINE_KTHREAD_WORKER(worker) \ ··· 126 93 #define DEFINE_KTHREAD_WORK(work, fn) \ 127 94 struct kthread_work work = KTHREAD_WORK_INIT(work, fn) 128 95 96 + #define DEFINE_KTHREAD_DELAYED_WORK(dwork, fn) \ 97 + struct kthread_delayed_work dwork = \ 98 + KTHREAD_DELAYED_WORK_INIT(dwork, fn) 99 + 129 100 /* 130 101 * kthread_worker.lock needs its own lockdep class key when defined on 131 102 * stack with lockdep enabled. Use the following macros in such cases. 
132 103 */ 133 104 #ifdef CONFIG_LOCKDEP 134 105 # define KTHREAD_WORKER_INIT_ONSTACK(worker) \ 135 - ({ init_kthread_worker(&worker); worker; }) 106 + ({ kthread_init_worker(&worker); worker; }) 136 107 # define DEFINE_KTHREAD_WORKER_ONSTACK(worker) \ 137 108 struct kthread_worker worker = KTHREAD_WORKER_INIT_ONSTACK(worker) 138 109 #else 139 110 # define DEFINE_KTHREAD_WORKER_ONSTACK(worker) DEFINE_KTHREAD_WORKER(worker) 140 111 #endif 141 112 142 - extern void __init_kthread_worker(struct kthread_worker *worker, 113 + extern void __kthread_init_worker(struct kthread_worker *worker, 143 114 const char *name, struct lock_class_key *key); 144 115 145 - #define init_kthread_worker(worker) \ 116 + #define kthread_init_worker(worker) \ 146 117 do { \ 147 118 static struct lock_class_key __key; \ 148 - __init_kthread_worker((worker), "("#worker")->lock", &__key); \ 119 + __kthread_init_worker((worker), "("#worker")->lock", &__key); \ 149 120 } while (0) 150 121 151 - #define init_kthread_work(work, fn) \ 122 + #define kthread_init_work(work, fn) \ 152 123 do { \ 153 124 memset((work), 0, sizeof(struct kthread_work)); \ 154 125 INIT_LIST_HEAD(&(work)->node); \ 155 126 (work)->func = (fn); \ 156 127 } while (0) 157 128 129 + #define kthread_init_delayed_work(dwork, fn) \ 130 + do { \ 131 + kthread_init_work(&(dwork)->work, (fn)); \ 132 + __setup_timer(&(dwork)->timer, \ 133 + kthread_delayed_work_timer_fn, \ 134 + (unsigned long)(dwork), \ 135 + TIMER_IRQSAFE); \ 136 + } while (0) 137 + 158 138 int kthread_worker_fn(void *worker_ptr); 159 139 160 - bool queue_kthread_work(struct kthread_worker *worker, 140 + __printf(2, 3) 141 + struct kthread_worker * 142 + kthread_create_worker(unsigned int flags, const char namefmt[], ...); 143 + 144 + struct kthread_worker * 145 + kthread_create_worker_on_cpu(int cpu, unsigned int flags, 146 + const char namefmt[], ...); 147 + 148 + bool kthread_queue_work(struct kthread_worker *worker, 161 149 struct kthread_work *work); 162 - void 
flush_kthread_work(struct kthread_work *work); 163 - void flush_kthread_worker(struct kthread_worker *worker); 150 + 151 + bool kthread_queue_delayed_work(struct kthread_worker *worker, 152 + struct kthread_delayed_work *dwork, 153 + unsigned long delay); 154 + 155 + bool kthread_mod_delayed_work(struct kthread_worker *worker, 156 + struct kthread_delayed_work *dwork, 157 + unsigned long delay); 158 + 159 + void kthread_flush_work(struct kthread_work *work); 160 + void kthread_flush_worker(struct kthread_worker *worker); 161 + 162 + bool kthread_cancel_work_sync(struct kthread_work *work); 163 + bool kthread_cancel_delayed_work_sync(struct kthread_delayed_work *work); 164 + 165 + void kthread_destroy_worker(struct kthread_worker *worker); 164 166 165 167 #endif /* _LINUX_KTHREAD_H */
+9 -11
include/linux/pagemap.h
··· 16 16 #include <linux/hugetlb_inline.h> 17 17 18 18 /* 19 - * Bits in mapping->flags. The lower __GFP_BITS_SHIFT bits are the page 20 - * allocation mode flags. 19 + * Bits in mapping->flags. 21 20 */ 22 21 enum mapping_flags { 23 - AS_EIO = __GFP_BITS_SHIFT + 0, /* IO error on async write */ 24 - AS_ENOSPC = __GFP_BITS_SHIFT + 1, /* ENOSPC on async write */ 25 - AS_MM_ALL_LOCKS = __GFP_BITS_SHIFT + 2, /* under mm_take_all_locks() */ 26 - AS_UNEVICTABLE = __GFP_BITS_SHIFT + 3, /* e.g., ramdisk, SHM_LOCK */ 27 - AS_EXITING = __GFP_BITS_SHIFT + 4, /* final truncate in progress */ 22 + AS_EIO = 0, /* IO error on async write */ 23 + AS_ENOSPC = 1, /* ENOSPC on async write */ 24 + AS_MM_ALL_LOCKS = 2, /* under mm_take_all_locks() */ 25 + AS_UNEVICTABLE = 3, /* e.g., ramdisk, SHM_LOCK */ 26 + AS_EXITING = 4, /* final truncate in progress */ 28 27 /* writeback related tags are not used */ 29 - AS_NO_WRITEBACK_TAGS = __GFP_BITS_SHIFT + 5, 28 + AS_NO_WRITEBACK_TAGS = 5, 30 29 }; 31 30 32 31 static inline void mapping_set_error(struct address_space *mapping, int error) ··· 77 78 78 79 static inline gfp_t mapping_gfp_mask(struct address_space * mapping) 79 80 { 80 - return (__force gfp_t)mapping->flags & __GFP_BITS_MASK; 81 + return mapping->gfp_mask; 81 82 } 82 83 83 84 /* Restricts the given gfp_mask to what the mapping allows. */ ··· 93 94 */ 94 95 static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask) 95 96 { 96 - m->flags = (m->flags & ~(__force unsigned long)__GFP_BITS_MASK) | 97 - (__force unsigned long)mask; 97 + m->gfp_mask = mask; 98 98 } 99 99 100 100 void release_pages(struct page **pages, int nr, bool cold);
+8
include/linux/radix-tree.h
··· 461 461 * 462 462 * This function updates @iter->index in the case of a successful lookup. 463 463 * For tagged lookup it also eats @iter->tags. 464 + * 465 + * There are several cases where 'slot' can be passed in as NULL to this 466 + * function. These cases result from the use of radix_tree_iter_next() or 467 + * radix_tree_iter_retry(). In these cases we don't end up dereferencing 468 + * 'slot' because either: 469 + * a) we are doing tagged iteration and iter->tags has been set to 0, or 470 + * b) we are doing non-tagged iteration, and iter->index and iter->next_index 471 + * have been set up so that radix_tree_chunk_size() returns 1 or 0. 464 472 */ 465 473 static __always_inline void ** 466 474 radix_tree_next_slot(void **slot, struct radix_tree_iter *iter, unsigned flags)
+1 -1
include/linux/random.h
··· 34 34 35 35 unsigned int get_random_int(void); 36 36 unsigned long get_random_long(void); 37 - unsigned long randomize_range(unsigned long start, unsigned long end, unsigned long len); 37 + unsigned long randomize_page(unsigned long start, unsigned long range); 38 38 39 39 u32 prandom_u32(void); 40 40 void prandom_bytes(void *buf, size_t nbytes);
+2 -1
include/linux/relay.h
··· 15 15 #include <linux/timer.h> 16 16 #include <linux/wait.h> 17 17 #include <linux/list.h> 18 + #include <linux/irq_work.h> 18 19 #include <linux/bug.h> 19 20 #include <linux/fs.h> 20 21 #include <linux/poll.h> ··· 39 38 size_t subbufs_consumed; /* count of sub-buffers consumed */ 40 39 struct rchan *chan; /* associated channel */ 41 40 wait_queue_head_t read_wait; /* reader wait queue */ 42 - struct timer_list timer; /* reader wake-up timer */ 41 + struct irq_work wakeup_work; /* reader wakeup */ 43 42 struct dentry *dentry; /* channel file dentry */ 44 43 struct kref kref; /* channel buffer refcount */ 45 44 struct page **page_array; /* array of current buffer pages */
+1
include/linux/sem.h
··· 21 21 struct list_head list_id; /* undo requests on this array */ 22 22 int sem_nsems; /* no. of semaphores in array */ 23 23 int complex_count; /* pending complex operations */ 24 + bool complex_mode; /* no parallel simple ops */ 24 25 }; 25 26 26 27 #ifdef CONFIG_SYSVIPC
+221
include/uapi/linux/auto_dev-ioctl.h
··· 1 + /* 2 + * Copyright 2008 Red Hat, Inc. All rights reserved. 3 + * Copyright 2008 Ian Kent <raven@themaw.net> 4 + * 5 + * This file is part of the Linux kernel and is made available under 6 + * the terms of the GNU General Public License, version 2, or at your 7 + * option, any later version, incorporated herein by reference. 8 + */ 9 + 10 + #ifndef _UAPI_LINUX_AUTO_DEV_IOCTL_H 11 + #define _UAPI_LINUX_AUTO_DEV_IOCTL_H 12 + 13 + #include <linux/auto_fs.h> 14 + #include <linux/string.h> 15 + 16 + #define AUTOFS_DEVICE_NAME "autofs" 17 + 18 + #define AUTOFS_DEV_IOCTL_VERSION_MAJOR 1 19 + #define AUTOFS_DEV_IOCTL_VERSION_MINOR 0 20 + 21 + #define AUTOFS_DEV_IOCTL_SIZE sizeof(struct autofs_dev_ioctl) 22 + 23 + /* 24 + * An ioctl interface for autofs mount point control. 25 + */ 26 + 27 + struct args_protover { 28 + __u32 version; 29 + }; 30 + 31 + struct args_protosubver { 32 + __u32 sub_version; 33 + }; 34 + 35 + struct args_openmount { 36 + __u32 devid; 37 + }; 38 + 39 + struct args_ready { 40 + __u32 token; 41 + }; 42 + 43 + struct args_fail { 44 + __u32 token; 45 + __s32 status; 46 + }; 47 + 48 + struct args_setpipefd { 49 + __s32 pipefd; 50 + }; 51 + 52 + struct args_timeout { 53 + __u64 timeout; 54 + }; 55 + 56 + struct args_requester { 57 + __u32 uid; 58 + __u32 gid; 59 + }; 60 + 61 + struct args_expire { 62 + __u32 how; 63 + }; 64 + 65 + struct args_askumount { 66 + __u32 may_umount; 67 + }; 68 + 69 + struct args_ismountpoint { 70 + union { 71 + struct args_in { 72 + __u32 type; 73 + } in; 74 + struct args_out { 75 + __u32 devid; 76 + __u32 magic; 77 + } out; 78 + }; 79 + }; 80 + 81 + /* 82 + * All the ioctls use this structure. 83 + * When sending a path size must account for the total length 84 + * of the chunk of memory otherwise is is the size of the 85 + * structure. 
86 + */ 87 + 88 + struct autofs_dev_ioctl { 89 + __u32 ver_major; 90 + __u32 ver_minor; 91 + __u32 size; /* total size of data passed in 92 + * including this struct */ 93 + __s32 ioctlfd; /* automount command fd */ 94 + 95 + /* Command parameters */ 96 + 97 + union { 98 + struct args_protover protover; 99 + struct args_protosubver protosubver; 100 + struct args_openmount openmount; 101 + struct args_ready ready; 102 + struct args_fail fail; 103 + struct args_setpipefd setpipefd; 104 + struct args_timeout timeout; 105 + struct args_requester requester; 106 + struct args_expire expire; 107 + struct args_askumount askumount; 108 + struct args_ismountpoint ismountpoint; 109 + }; 110 + 111 + char path[0]; 112 + }; 113 + 114 + static inline void init_autofs_dev_ioctl(struct autofs_dev_ioctl *in) 115 + { 116 + memset(in, 0, sizeof(struct autofs_dev_ioctl)); 117 + in->ver_major = AUTOFS_DEV_IOCTL_VERSION_MAJOR; 118 + in->ver_minor = AUTOFS_DEV_IOCTL_VERSION_MINOR; 119 + in->size = sizeof(struct autofs_dev_ioctl); 120 + in->ioctlfd = -1; 121 + } 122 + 123 + /* 124 + * If you change this make sure you make the corresponding change 125 + * to autofs-dev-ioctl.c:lookup_ioctl() 126 + */ 127 + enum { 128 + /* Get various version info */ 129 + AUTOFS_DEV_IOCTL_VERSION_CMD = 0x71, 130 + AUTOFS_DEV_IOCTL_PROTOVER_CMD, 131 + AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD, 132 + 133 + /* Open mount ioctl fd */ 134 + AUTOFS_DEV_IOCTL_OPENMOUNT_CMD, 135 + 136 + /* Close mount ioctl fd */ 137 + AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD, 138 + 139 + /* Mount/expire status returns */ 140 + AUTOFS_DEV_IOCTL_READY_CMD, 141 + AUTOFS_DEV_IOCTL_FAIL_CMD, 142 + 143 + /* Activate/deactivate autofs mount */ 144 + AUTOFS_DEV_IOCTL_SETPIPEFD_CMD, 145 + AUTOFS_DEV_IOCTL_CATATONIC_CMD, 146 + 147 + /* Expiry timeout */ 148 + AUTOFS_DEV_IOCTL_TIMEOUT_CMD, 149 + 150 + /* Get mount last requesting uid and gid */ 151 + AUTOFS_DEV_IOCTL_REQUESTER_CMD, 152 + 153 + /* Check for eligible expire candidates */ 154 + 
AUTOFS_DEV_IOCTL_EXPIRE_CMD, 155 + 156 + /* Request busy status */ 157 + AUTOFS_DEV_IOCTL_ASKUMOUNT_CMD, 158 + 159 + /* Check if path is a mountpoint */ 160 + AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD, 161 + }; 162 + 163 + #define AUTOFS_IOCTL 0x93 164 + 165 + #define AUTOFS_DEV_IOCTL_VERSION \ 166 + _IOWR(AUTOFS_IOCTL, \ 167 + AUTOFS_DEV_IOCTL_VERSION_CMD, struct autofs_dev_ioctl) 168 + 169 + #define AUTOFS_DEV_IOCTL_PROTOVER \ 170 + _IOWR(AUTOFS_IOCTL, \ 171 + AUTOFS_DEV_IOCTL_PROTOVER_CMD, struct autofs_dev_ioctl) 172 + 173 + #define AUTOFS_DEV_IOCTL_PROTOSUBVER \ 174 + _IOWR(AUTOFS_IOCTL, \ 175 + AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD, struct autofs_dev_ioctl) 176 + 177 + #define AUTOFS_DEV_IOCTL_OPENMOUNT \ 178 + _IOWR(AUTOFS_IOCTL, \ 179 + AUTOFS_DEV_IOCTL_OPENMOUNT_CMD, struct autofs_dev_ioctl) 180 + 181 + #define AUTOFS_DEV_IOCTL_CLOSEMOUNT \ 182 + _IOWR(AUTOFS_IOCTL, \ 183 + AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD, struct autofs_dev_ioctl) 184 + 185 + #define AUTOFS_DEV_IOCTL_READY \ 186 + _IOWR(AUTOFS_IOCTL, \ 187 + AUTOFS_DEV_IOCTL_READY_CMD, struct autofs_dev_ioctl) 188 + 189 + #define AUTOFS_DEV_IOCTL_FAIL \ 190 + _IOWR(AUTOFS_IOCTL, \ 191 + AUTOFS_DEV_IOCTL_FAIL_CMD, struct autofs_dev_ioctl) 192 + 193 + #define AUTOFS_DEV_IOCTL_SETPIPEFD \ 194 + _IOWR(AUTOFS_IOCTL, \ 195 + AUTOFS_DEV_IOCTL_SETPIPEFD_CMD, struct autofs_dev_ioctl) 196 + 197 + #define AUTOFS_DEV_IOCTL_CATATONIC \ 198 + _IOWR(AUTOFS_IOCTL, \ 199 + AUTOFS_DEV_IOCTL_CATATONIC_CMD, struct autofs_dev_ioctl) 200 + 201 + #define AUTOFS_DEV_IOCTL_TIMEOUT \ 202 + _IOWR(AUTOFS_IOCTL, \ 203 + AUTOFS_DEV_IOCTL_TIMEOUT_CMD, struct autofs_dev_ioctl) 204 + 205 + #define AUTOFS_DEV_IOCTL_REQUESTER \ 206 + _IOWR(AUTOFS_IOCTL, \ 207 + AUTOFS_DEV_IOCTL_REQUESTER_CMD, struct autofs_dev_ioctl) 208 + 209 + #define AUTOFS_DEV_IOCTL_EXPIRE \ 210 + _IOWR(AUTOFS_IOCTL, \ 211 + AUTOFS_DEV_IOCTL_EXPIRE_CMD, struct autofs_dev_ioctl) 212 + 213 + #define AUTOFS_DEV_IOCTL_ASKUMOUNT \ 214 + _IOWR(AUTOFS_IOCTL, \ 215 + 
AUTOFS_DEV_IOCTL_ASKUMOUNT_CMD, struct autofs_dev_ioctl) 216 + 217 + #define AUTOFS_DEV_IOCTL_ISMOUNTPOINT \ 218 + _IOWR(AUTOFS_IOCTL, \ 219 + AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD, struct autofs_dev_ioctl) 220 + 221 + #endif /* _UAPI_LINUX_AUTO_DEV_IOCTL_H */
+1
include/uapi/linux/auto_fs.h
··· 12 12 #define _UAPI_LINUX_AUTO_FS_H 13 13 14 14 #include <linux/types.h> 15 + #include <linux/limits.h> 15 16 #ifndef __KERNEL__ 16 17 #include <sys/ioctl.h> 17 18 #endif /* __KERNEL__ */
+1
init/Kconfig
··· 1288 1288 1289 1289 config RELAY 1290 1290 bool "Kernel->user space relay support (formerly relayfs)" 1291 + select IRQ_WORK 1291 1292 help 1292 1293 This option enables support for relay interface support in 1293 1294 certain file systems (such as debugfs).
+97 -107
ipc/msg.c
··· 51 51 long r_msgtype; 52 52 long r_maxsize; 53 53 54 - /* 55 - * Mark r_msg volatile so that the compiler 56 - * does not try to get smart and optimize 57 - * it. We rely on this for the lockless 58 - * receive algorithm. 59 - */ 60 - struct msg_msg *volatile r_msg; 54 + struct msg_msg *r_msg; 61 55 }; 62 56 63 57 /* one msg_sender for each sleeping sender */ 64 58 struct msg_sender { 65 59 struct list_head list; 66 60 struct task_struct *tsk; 61 + size_t msgsz; 67 62 }; 68 63 69 64 #define SEARCH_ANY 1 ··· 154 159 return msq->q_perm.id; 155 160 } 156 161 157 - static inline void ss_add(struct msg_queue *msq, struct msg_sender *mss) 162 + static inline bool msg_fits_inqueue(struct msg_queue *msq, size_t msgsz) 163 + { 164 + return msgsz + msq->q_cbytes <= msq->q_qbytes && 165 + 1 + msq->q_qnum <= msq->q_qbytes; 166 + } 167 + 168 + static inline void ss_add(struct msg_queue *msq, 169 + struct msg_sender *mss, size_t msgsz) 158 170 { 159 171 mss->tsk = current; 172 + mss->msgsz = msgsz; 160 173 __set_current_state(TASK_INTERRUPTIBLE); 161 174 list_add_tail(&mss->list, &msq->q_senders); 162 175 } 163 176 164 177 static inline void ss_del(struct msg_sender *mss) 165 178 { 166 - if (mss->list.next != NULL) 179 + if (mss->list.next) 167 180 list_del(&mss->list); 168 181 } 169 182 170 - static void ss_wakeup(struct list_head *h, int kill) 183 + static void ss_wakeup(struct msg_queue *msq, 184 + struct wake_q_head *wake_q, bool kill) 171 185 { 172 186 struct msg_sender *mss, *t; 187 + struct task_struct *stop_tsk = NULL; 188 + struct list_head *h = &msq->q_senders; 173 189 174 190 list_for_each_entry_safe(mss, t, h, list) { 175 191 if (kill) 176 192 mss->list.next = NULL; 177 - wake_up_process(mss->tsk); 193 + 194 + /* 195 + * Stop at the first task we don't wakeup, 196 + * we've already iterated the original 197 + * sender queue. 
198 + */ 199 + else if (stop_tsk == mss->tsk) 200 + break; 201 + /* 202 + * We are not in an EIDRM scenario here, therefore 203 + * verify that we really need to wakeup the task. 204 + * To maintain current semantics and wakeup order, 205 + * move the sender to the tail on behalf of the 206 + * blocked task. 207 + */ 208 + else if (!msg_fits_inqueue(msq, mss->msgsz)) { 209 + if (!stop_tsk) 210 + stop_tsk = mss->tsk; 211 + 212 + list_move_tail(&mss->list, &msq->q_senders); 213 + continue; 214 + } 215 + 216 + wake_q_add(wake_q, mss->tsk); 178 217 } 179 218 } 180 219 181 - static void expunge_all(struct msg_queue *msq, int res) 220 + static void expunge_all(struct msg_queue *msq, int res, 221 + struct wake_q_head *wake_q) 182 222 { 183 223 struct msg_receiver *msr, *t; 184 224 185 225 list_for_each_entry_safe(msr, t, &msq->q_receivers, r_list) { 186 - msr->r_msg = NULL; /* initialize expunge ordering */ 187 - wake_up_process(msr->r_tsk); 188 - /* 189 - * Ensure that the wakeup is visible before setting r_msg as 190 - * the receiving end depends on it: either spinning on a nil, 191 - * or dealing with -EAGAIN cases. See lockless receive part 1 192 - * and 2 in do_msgrcv(). 
193 - */ 194 - smp_wmb(); /* barrier (B) */ 195 - msr->r_msg = ERR_PTR(res); 226 + wake_q_add(wake_q, msr->r_tsk); 227 + WRITE_ONCE(msr->r_msg, ERR_PTR(res)); 196 228 } 197 229 } 198 230 ··· 235 213 { 236 214 struct msg_msg *msg, *t; 237 215 struct msg_queue *msq = container_of(ipcp, struct msg_queue, q_perm); 216 + WAKE_Q(wake_q); 238 217 239 - expunge_all(msq, -EIDRM); 240 - ss_wakeup(&msq->q_senders, 1); 218 + expunge_all(msq, -EIDRM, &wake_q); 219 + ss_wakeup(msq, &wake_q, true); 241 220 msg_rmid(ns, msq); 242 221 ipc_unlock_object(&msq->q_perm); 222 + wake_up_q(&wake_q); 243 223 rcu_read_unlock(); 244 224 245 225 list_for_each_entry_safe(msg, t, &msq->q_messages, m_list) { ··· 396 372 freeque(ns, ipcp); 397 373 goto out_up; 398 374 case IPC_SET: 375 + { 376 + WAKE_Q(wake_q); 377 + 399 378 if (msqid64.msg_qbytes > ns->msg_ctlmnb && 400 379 !capable(CAP_SYS_RESOURCE)) { 401 380 err = -EPERM; ··· 413 386 msq->q_qbytes = msqid64.msg_qbytes; 414 387 415 388 msq->q_ctime = get_seconds(); 416 - /* sleeping receivers might be excluded by 389 + /* 390 + * Sleeping receivers might be excluded by 417 391 * stricter permissions. 418 392 */ 419 - expunge_all(msq, -EAGAIN); 420 - /* sleeping senders might be able to send 393 + expunge_all(msq, -EAGAIN, &wake_q); 394 + /* 395 + * Sleeping senders might be able to send 421 396 * due to a larger queue size. 
422 397 */ 423 - ss_wakeup(&msq->q_senders, 0); 424 - break; 398 + ss_wakeup(msq, &wake_q, false); 399 + ipc_unlock_object(&msq->q_perm); 400 + wake_up_q(&wake_q); 401 + 402 + goto out_unlock1; 403 + } 425 404 default: 426 405 err = -EINVAL; 427 406 goto out_unlock1; ··· 599 566 return 0; 600 567 } 601 568 602 - static inline int pipelined_send(struct msg_queue *msq, struct msg_msg *msg) 569 + static inline int pipelined_send(struct msg_queue *msq, struct msg_msg *msg, 570 + struct wake_q_head *wake_q) 603 571 { 604 572 struct msg_receiver *msr, *t; 605 573 ··· 611 577 612 578 list_del(&msr->r_list); 613 579 if (msr->r_maxsize < msg->m_ts) { 614 - /* initialize pipelined send ordering */ 615 - msr->r_msg = NULL; 616 - wake_up_process(msr->r_tsk); 617 - /* barrier (B) see barrier comment below */ 618 - smp_wmb(); 619 - msr->r_msg = ERR_PTR(-E2BIG); 580 + wake_q_add(wake_q, msr->r_tsk); 581 + WRITE_ONCE(msr->r_msg, ERR_PTR(-E2BIG)); 620 582 } else { 621 - msr->r_msg = NULL; 622 583 msq->q_lrpid = task_pid_vnr(msr->r_tsk); 623 584 msq->q_rtime = get_seconds(); 624 - wake_up_process(msr->r_tsk); 625 - /* 626 - * Ensure that the wakeup is visible before 627 - * setting r_msg, as the receiving can otherwise 628 - * exit - once r_msg is set, the receiver can 629 - * continue. See lockless receive part 1 and 2 630 - * in do_msgrcv(). Barrier (B). 
631 - */ 632 - smp_wmb(); 633 - msr->r_msg = msg; 634 585 586 + wake_q_add(wake_q, msr->r_tsk); 587 + WRITE_ONCE(msr->r_msg, msg); 635 588 return 1; 636 589 } 637 590 } ··· 634 613 struct msg_msg *msg; 635 614 int err; 636 615 struct ipc_namespace *ns; 616 + WAKE_Q(wake_q); 637 617 638 618 ns = current->nsproxy->ipc_ns; 639 619 ··· 676 654 if (err) 677 655 goto out_unlock0; 678 656 679 - if (msgsz + msq->q_cbytes <= msq->q_qbytes && 680 - 1 + msq->q_qnum <= msq->q_qbytes) { 657 + if (msg_fits_inqueue(msq, msgsz)) 681 658 break; 682 - } 683 659 684 660 /* queue full, wait: */ 685 661 if (msgflg & IPC_NOWAIT) { ··· 686 666 } 687 667 688 668 /* enqueue the sender and prepare to block */ 689 - ss_add(msq, &s); 669 + ss_add(msq, &s, msgsz); 690 670 691 671 if (!ipc_rcu_getref(msq)) { 692 672 err = -EIDRM; ··· 706 686 err = -EIDRM; 707 687 goto out_unlock0; 708 688 } 709 - 710 689 ss_del(&s); 711 690 712 691 if (signal_pending(current)) { ··· 714 695 } 715 696 716 697 } 698 + 717 699 msq->q_lspid = task_tgid_vnr(current); 718 700 msq->q_stime = get_seconds(); 719 701 720 - if (!pipelined_send(msq, msg)) { 702 + if (!pipelined_send(msq, msg, &wake_q)) { 721 703 /* no one is waiting for this message, enqueue it */ 722 704 list_add_tail(&msg->m_list, &msq->q_messages); 723 705 msq->q_cbytes += msgsz; ··· 732 712 733 713 out_unlock0: 734 714 ipc_unlock_object(&msq->q_perm); 715 + wake_up_q(&wake_q); 735 716 out_unlock1: 736 717 rcu_read_unlock(); 737 718 if (msg != NULL) ··· 850 829 struct msg_queue *msq; 851 830 struct ipc_namespace *ns; 852 831 struct msg_msg *msg, *copy = NULL; 832 + WAKE_Q(wake_q); 853 833 854 834 ns = current->nsproxy->ipc_ns; 855 835 ··· 915 893 msq->q_cbytes -= msg->m_ts; 916 894 atomic_sub(msg->m_ts, &ns->msg_bytes); 917 895 atomic_dec(&ns->msg_hdrs); 918 - ss_wakeup(&msq->q_senders, 0); 896 + ss_wakeup(msq, &wake_q, false); 919 897 920 898 goto out_unlock0; 921 899 } ··· 941 919 rcu_read_unlock(); 942 920 schedule(); 943 921 944 - /* Lockless 
receive, part 1: 945 - * Disable preemption. We don't hold a reference to the queue 946 - * and getting a reference would defeat the idea of a lockless 947 - * operation, thus the code relies on rcu to guarantee the 948 - * existence of msq: 922 + /* 923 + * Lockless receive, part 1: 924 + * We don't hold a reference to the queue and getting a 925 + * reference would defeat the idea of a lockless operation, 926 + * thus the code relies on rcu to guarantee the existence of 927 + * msq: 949 928 * Prior to destruction, expunge_all(-EIRDM) changes r_msg. 950 929 * Thus if r_msg is -EAGAIN, then the queue not yet destroyed. 951 - * rcu_read_lock() prevents preemption between reading r_msg 952 - * and acquiring the q_perm.lock in ipc_lock_object(). 953 930 */ 954 931 rcu_read_lock(); 955 932 956 - /* Lockless receive, part 2: 957 - * Wait until pipelined_send or expunge_all are outside of 958 - * wake_up_process(). There is a race with exit(), see 959 - * ipc/mqueue.c for the details. The correct serialization 960 - * ensures that a receiver cannot continue without the wakeup 961 - * being visibible _before_ setting r_msg: 933 + /* 934 + * Lockless receive, part 2: 935 + * The work in pipelined_send() and expunge_all(): 936 + * - Set pointer to message 937 + * - Queue the receiver task for later wakeup 938 + * - Wake up the process after the lock is dropped. 962 939 * 963 - * CPU 0 CPU 1 964 - * <loop receiver> 965 - * smp_rmb(); (A) <-- pair -. <waker thread> 966 - * <load ->r_msg> | msr->r_msg = NULL; 967 - * | wake_up_process(); 968 - * <continue> `------> smp_wmb(); (B) 969 - * msr->r_msg = msg; 970 - * 971 - * Where (A) orders the message value read and where (B) orders 972 - * the write to the r_msg -- done in both pipelined_send and 973 - * expunge_all. 940 + * Should the process wake up before this wakeup (due to a 941 + * signal) it will either see the message and continue ... 
974 942 */ 975 - for (;;) { 976 - /* 977 - * Pairs with writer barrier in pipelined_send 978 - * or expunge_all. 979 - */ 980 - smp_rmb(); /* barrier (A) */ 981 - msg = (struct msg_msg *)msr_d.r_msg; 982 - if (msg) 983 - break; 984 - 985 - /* 986 - * The cpu_relax() call is a compiler barrier 987 - * which forces everything in this loop to be 988 - * re-loaded. 989 - */ 990 - cpu_relax(); 991 - } 992 - 993 - /* Lockless receive, part 3: 994 - * If there is a message or an error then accept it without 995 - * locking. 996 - */ 943 + msg = READ_ONCE(msr_d.r_msg); 997 944 if (msg != ERR_PTR(-EAGAIN)) 998 945 goto out_unlock1; 999 946 1000 - /* Lockless receive, part 3: 1001 - * Acquire the queue spinlock. 1002 - */ 947 + /* 948 + * ... or see -EAGAIN, acquire the lock to check the message 949 + * again. 950 + */ 1003 951 ipc_lock_object(&msq->q_perm); 1004 952 1005 - /* Lockless receive, part 4: 1006 - * Repeat test after acquiring the spinlock. 1007 - */ 1008 - msg = (struct msg_msg *)msr_d.r_msg; 953 + msg = msr_d.r_msg; 1009 954 if (msg != ERR_PTR(-EAGAIN)) 1010 955 goto out_unlock0; 1011 956 ··· 987 998 988 999 out_unlock0: 989 1000 ipc_unlock_object(&msq->q_perm); 1001 + wake_up_q(&wake_q); 990 1002 out_unlock1: 991 1003 rcu_read_unlock(); 992 1004 if (IS_ERR(msg)) {
+85 -55
ipc/sem.c
··· 162 162 163 163 /* 164 164 * Locking: 165 + * a) global sem_lock() for read/write 165 166 * sem_undo.id_next, 166 167 * sem_array.complex_count, 167 - * sem_array.pending{_alter,_cont}, 168 - * sem_array.sem_undo: global sem_lock() for read/write 169 - * sem_undo.proc_next: only "current" is allowed to read/write that field. 168 + * sem_array.complex_mode 169 + * sem_array.pending{_alter,_const}, 170 + * sem_array.sem_undo 170 171 * 172 + * b) global or semaphore sem_lock() for read/write: 171 173 * sem_array.sem_base[i].pending_{const,alter}: 172 - * global or semaphore sem_lock() for read/write 174 + * sem_array.complex_mode (for read) 175 + * 176 + * c) special: 177 + * sem_undo_list.list_proc: 178 + * * undo_list->lock for write 179 + * * rcu for read 173 180 */ 174 181 175 182 #define sc_semmsl sem_ctls[0] ··· 267 260 } 268 261 269 262 /* 270 - * Wait until all currently ongoing simple ops have completed. 263 + * Enter the mode suitable for non-simple operations: 271 264 * Caller must own sem_perm.lock. 272 - * New simple ops cannot start, because simple ops first check 273 - * that sem_perm.lock is free. 274 - * that a) sem_perm.lock is free and b) complex_count is 0. 275 265 */ 276 - static void sem_wait_array(struct sem_array *sma) 266 + static void complexmode_enter(struct sem_array *sma) 277 267 { 278 268 int i; 279 269 struct sem *sem; 280 270 281 - if (sma->complex_count) { 282 - /* The thread that increased sma->complex_count waited on 283 - * all sem->lock locks. Thus we don't need to wait again. 284 - */ 271 + if (sma->complex_mode) { 272 + /* We are already in complex_mode. Nothing to do */ 285 273 return; 286 274 } 275 + 276 + /* We need a full barrier after seting complex_mode: 277 + * The write to complex_mode must be visible 278 + * before we read the first sem->lock spinlock state. 
279 + */ 280 + smp_store_mb(sma->complex_mode, true); 287 281 288 282 for (i = 0; i < sma->sem_nsems; i++) { 289 283 sem = sma->sem_base + i; 290 284 spin_unlock_wait(&sem->lock); 291 285 } 286 + /* 287 + * spin_unlock_wait() is not a memory barriers, it is only a 288 + * control barrier. The code must pair with spin_unlock(&sem->lock), 289 + * thus just the control barrier is insufficient. 290 + * 291 + * smp_rmb() is sufficient, as writes cannot pass the control barrier. 292 + */ 293 + smp_rmb(); 292 294 } 293 295 296 + /* 297 + * Try to leave the mode that disallows simple operations: 298 + * Caller must own sem_perm.lock. 299 + */ 300 + static void complexmode_tryleave(struct sem_array *sma) 301 + { 302 + if (sma->complex_count) { 303 + /* Complex ops are sleeping. 304 + * We must stay in complex mode 305 + */ 306 + return; 307 + } 308 + /* 309 + * Immediately after setting complex_mode to false, 310 + * a simple op can start. Thus: all memory writes 311 + * performed by the current operation must be visible 312 + * before we set complex_mode to false. 313 + */ 314 + smp_store_release(&sma->complex_mode, false); 315 + } 316 + 317 + #define SEM_GLOBAL_LOCK (-1) 294 318 /* 295 319 * If the request contains only one semaphore operation, and there are 296 320 * no complex transactions pending, lock only the semaphore involved. ··· 338 300 /* Complex operation - acquire a full lock */ 339 301 ipc_lock_object(&sma->sem_perm); 340 302 341 - /* And wait until all simple ops that are processed 342 - * right now have dropped their locks. 343 - */ 344 - sem_wait_array(sma); 345 - return -1; 303 + /* Prevent parallel simple ops */ 304 + complexmode_enter(sma); 305 + return SEM_GLOBAL_LOCK; 346 306 } 347 307 348 308 /* 349 309 * Only one semaphore affected - try to optimize locking. 350 - * The rules are: 351 - * - optimized locking is possible if no complex operation 352 - * is either enqueued or processed right now. 
353 - * - The test for enqueued complex ops is simple: 354 - * sma->complex_count != 0 355 - * - Testing for complex ops that are processed right now is 356 - * a bit more difficult. Complex ops acquire the full lock 357 - * and first wait that the running simple ops have completed. 358 - * (see above) 359 - * Thus: If we own a simple lock and the global lock is free 360 - * and complex_count is now 0, then it will stay 0 and 361 - * thus just locking sem->lock is sufficient. 310 + * Optimized locking is possible if no complex operation 311 + * is either enqueued or processed right now. 312 + * 313 + * Both facts are tracked by complex_mode. 362 314 */ 363 315 sem = sma->sem_base + sops->sem_num; 364 316 365 - if (sma->complex_count == 0) { 317 + /* 318 + * Initial check for complex_mode. Just an optimization, 319 + * no locking, no memory barrier. 320 + */ 321 + if (!sma->complex_mode) { 366 322 /* 367 323 * It appears that no complex operation is around. 368 324 * Acquire the per-semaphore lock. 369 325 */ 370 326 spin_lock(&sem->lock); 371 327 372 - /* Then check that the global lock is free */ 373 - if (!spin_is_locked(&sma->sem_perm.lock)) { 374 - /* 375 - * We need a memory barrier with acquire semantics, 376 - * otherwise we can race with another thread that does: 377 - * complex_count++; 378 - * spin_unlock(sem_perm.lock); 379 - */ 380 - smp_acquire__after_ctrl_dep(); 328 + /* 329 + * See 51d7d5205d33 330 + * ("powerpc: Add smp_mb() to arch_spin_is_locked()"): 331 + * A full barrier is required: the write of sem->lock 332 + * must be visible before the read is executed 333 + */ 334 + smp_mb(); 381 335 382 - /* 383 - * Now repeat the test of complex_count: 384 - * It can't change anymore until we drop sem->lock. 385 - * Thus: if is now 0, then it will stay 0. 386 - */ 387 - if (sma->complex_count == 0) { 388 - /* fast path successful! */ 389 - return sops->sem_num; 390 - } 336 + if (!smp_load_acquire(&sma->complex_mode)) { 337 + /* fast path successful! 
*/ 338 + return sops->sem_num; 391 339 } 392 340 spin_unlock(&sem->lock); 393 341 } ··· 393 369 /* Not a false alarm, thus complete the sequence for a 394 370 * full lock. 395 371 */ 396 - sem_wait_array(sma); 397 - return -1; 372 + complexmode_enter(sma); 373 + return SEM_GLOBAL_LOCK; 398 374 } 399 375 } 400 376 401 377 static inline void sem_unlock(struct sem_array *sma, int locknum) 402 378 { 403 - if (locknum == -1) { 379 + if (locknum == SEM_GLOBAL_LOCK) { 404 380 unmerge_queues(sma); 381 + complexmode_tryleave(sma); 405 382 ipc_unlock_object(&sma->sem_perm); 406 383 } else { 407 384 struct sem *sem = sma->sem_base + locknum; ··· 554 529 } 555 530 556 531 sma->complex_count = 0; 532 + sma->complex_mode = true; /* dropped by sem_unlock below */ 557 533 INIT_LIST_HEAD(&sma->pending_alter); 558 534 INIT_LIST_HEAD(&sma->pending_const); 559 535 INIT_LIST_HEAD(&sma->list_id); ··· 2105 2079 struct list_head tasks; 2106 2080 int semid, i; 2107 2081 2082 + cond_resched(); 2083 + 2108 2084 rcu_read_lock(); 2109 2085 un = list_entry_rcu(ulp->list_proc.next, 2110 2086 struct sem_undo, list_proc); ··· 2212 2184 /* 2213 2185 * The proc interface isn't aware of sem_lock(), it calls 2214 2186 * ipc_lock_object() directly (in sysvipc_find_ipc). 2215 - * In order to stay compatible with sem_lock(), we must wait until 2216 - * all simple semop() calls have left their critical regions. 2187 + * In order to stay compatible with sem_lock(), we must 2188 + * enter / leave complex_mode. 2217 2189 */ 2218 - sem_wait_array(sma); 2190 + complexmode_enter(sma); 2219 2191 2220 2192 sem_otime = get_semotime(sma); 2221 2193 ··· 2231 2203 from_kgid_munged(user_ns, sma->sem_perm.cgid), 2232 2204 sem_otime, 2233 2205 sma->sem_ctime); 2206 + 2207 + complexmode_tryleave(sma); 2234 2208 2235 2209 return 0; 2236 2210 }
+2 -5
kernel/configs/android-base.config
··· 11 11 CONFIG_ARMV8_DEPRECATED=y 12 12 CONFIG_ASHMEM=y 13 13 CONFIG_AUDIT=y 14 - CONFIG_BLK_DEV_DM=y 15 14 CONFIG_BLK_DEV_INITRD=y 16 15 CONFIG_CGROUPS=y 17 16 CONFIG_CGROUP_CPUACCT=y ··· 18 19 CONFIG_CGROUP_FREEZER=y 19 20 CONFIG_CGROUP_SCHED=y 20 21 CONFIG_CP15_BARRIER_EMULATION=y 21 - CONFIG_DM_CRYPT=y 22 - CONFIG_DM_VERITY=y 23 - CONFIG_DM_VERITY_FEC=y 22 + CONFIG_DEFAULT_SECURITY_SELINUX=y 24 23 CONFIG_EMBEDDED=y 25 24 CONFIG_FB=y 26 25 CONFIG_HIGH_RES_TIMERS=y ··· 38 41 CONFIG_IPV6_MIP6=y 39 42 CONFIG_IPV6_MULTIPLE_TABLES=y 40 43 CONFIG_IPV6_OPTIMISTIC_DAD=y 41 - CONFIG_IPV6_PRIVACY=y 42 44 CONFIG_IPV6_ROUTER_PREF=y 43 45 CONFIG_IPV6_ROUTE_INFO=y 44 46 CONFIG_IP_ADVANCED_ROUTER=y ··· 131 135 CONFIG_QUOTA=y 132 136 CONFIG_RTC_CLASS=y 133 137 CONFIG_RT_GROUP_SCHED=y 138 + CONFIG_SECCOMP=y 134 139 CONFIG_SECURITY=y 135 140 CONFIG_SECURITY_NETWORK=y 136 141 CONFIG_SECURITY_SELINUX=y
+4
kernel/configs/android-recommended.config
··· 6 6 # CONFIG_PM_WAKELOCKS_GC is not set 7 7 # CONFIG_VT is not set 8 8 CONFIG_BACKLIGHT_LCD_SUPPORT=y 9 + CONFIG_BLK_DEV_DM=y 9 10 CONFIG_BLK_DEV_LOOP=y 10 11 CONFIG_BLK_DEV_RAM=y 11 12 CONFIG_BLK_DEV_RAM_SIZE=8192 12 13 CONFIG_COMPACTION=y 13 14 CONFIG_DEBUG_RODATA=y 15 + CONFIG_DM_CRYPT=y 14 16 CONFIG_DM_UEVENT=y 17 + CONFIG_DM_VERITY=y 18 + CONFIG_DM_VERITY_FEC=y 15 19 CONFIG_DRAGONRISE_FF=y 16 20 CONFIG_ENABLE_DEFAULT_TRACERS=y 17 21 CONFIG_EXT4_FS=y
+14 -14
kernel/hung_task.c
··· 98 98 99 99 trace_sched_process_hang(t); 100 100 101 - if (!sysctl_hung_task_warnings) 101 + if (!sysctl_hung_task_warnings && !sysctl_hung_task_panic) 102 102 return; 103 - 104 - if (sysctl_hung_task_warnings > 0) 105 - sysctl_hung_task_warnings--; 106 103 107 104 /* 108 105 * Ok, the task did not get scheduled for more than 2 minutes, 109 106 * complain: 110 107 */ 111 - pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n", 112 - t->comm, t->pid, timeout); 113 - pr_err(" %s %s %.*s\n", 114 - print_tainted(), init_utsname()->release, 115 - (int)strcspn(init_utsname()->version, " "), 116 - init_utsname()->version); 117 - pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\"" 118 - " disables this message.\n"); 119 - sched_show_task(t); 120 - debug_show_all_locks(); 108 + if (sysctl_hung_task_warnings) { 109 + sysctl_hung_task_warnings--; 110 + pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n", 111 + t->comm, t->pid, timeout); 112 + pr_err(" %s %s %.*s\n", 113 + print_tainted(), init_utsname()->release, 114 + (int)strcspn(init_utsname()->version, " "), 115 + init_utsname()->version); 116 + pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\"" 117 + " disables this message.\n"); 118 + sched_show_task(t); 119 + debug_show_all_locks(); 120 + } 121 121 122 122 touch_nmi_watchdog(); 123 123
+1 -1
kernel/kprobes.c
··· 49 49 #include <linux/cpu.h> 50 50 #include <linux/jump_label.h> 51 51 52 - #include <asm-generic/sections.h> 52 + #include <asm/sections.h> 53 53 #include <asm/cacheflush.h> 54 54 #include <asm/errno.h> 55 55 #include <asm/uaccess.h>
+507 -70
kernel/kthread.c
··· 138 138 } 139 139 140 140 /** 141 - * probe_kthread_data - speculative version of kthread_data() 141 + * kthread_probe_data - speculative version of kthread_data() 142 142 * @task: possible kthread task in question 143 143 * 144 144 * @task could be a kthread task. Return the data value specified when it ··· 146 146 * inaccessible for any reason, %NULL is returned. This function requires 147 147 * that @task itself is safe to dereference. 148 148 */ 149 - void *probe_kthread_data(struct task_struct *task) 149 + void *kthread_probe_data(struct task_struct *task) 150 150 { 151 151 struct kthread *kthread = to_kthread(task); 152 152 void *data = NULL; ··· 244 244 } 245 245 } 246 246 247 - /** 248 - * kthread_create_on_node - create a kthread. 249 - * @threadfn: the function to run until signal_pending(current). 250 - * @data: data ptr for @threadfn. 251 - * @node: task and thread structures for the thread are allocated on this node 252 - * @namefmt: printf-style name for the thread. 253 - * 254 - * Description: This helper function creates and names a kernel 255 - * thread. The thread will be stopped: use wake_up_process() to start 256 - * it. See also kthread_run(). The new thread has SCHED_NORMAL policy and 257 - * is affine to all CPUs. 258 - * 259 - * If thread is going to be bound on a particular cpu, give its node 260 - * in @node, to get NUMA affinity for kthread stack, or else give NUMA_NO_NODE. 261 - * When woken, the thread will run @threadfn() with @data as its 262 - * argument. @threadfn() can either call do_exit() directly if it is a 263 - * standalone thread for which no one will call kthread_stop(), or 264 - * return when 'kthread_should_stop()' is true (which means 265 - * kthread_stop() has been called). The return value should be zero 266 - * or a negative error number; it will be passed to kthread_stop(). 267 - * 268 - * Returns a task_struct or ERR_PTR(-ENOMEM) or ERR_PTR(-EINTR). 
269 - */ 270 - struct task_struct *kthread_create_on_node(int (*threadfn)(void *data), 271 - void *data, int node, 272 - const char namefmt[], 273 - ...) 247 + static struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data), 248 + void *data, int node, 249 + const char namefmt[], 250 + va_list args) 274 251 { 275 252 DECLARE_COMPLETION_ONSTACK(done); 276 253 struct task_struct *task; ··· 288 311 task = create->result; 289 312 if (!IS_ERR(task)) { 290 313 static const struct sched_param param = { .sched_priority = 0 }; 291 - va_list args; 292 314 293 - va_start(args, namefmt); 294 315 vsnprintf(task->comm, sizeof(task->comm), namefmt, args); 295 - va_end(args); 296 316 /* 297 317 * root may have changed our (kthreadd's) priority or CPU mask. 298 318 * The kernel thread should not inherit these properties. ··· 298 324 set_cpus_allowed_ptr(task, cpu_all_mask); 299 325 } 300 326 kfree(create); 327 + return task; 328 + } 329 + 330 + /** 331 + * kthread_create_on_node - create a kthread. 332 + * @threadfn: the function to run until signal_pending(current). 333 + * @data: data ptr for @threadfn. 334 + * @node: task and thread structures for the thread are allocated on this node 335 + * @namefmt: printf-style name for the thread. 336 + * 337 + * Description: This helper function creates and names a kernel 338 + * thread. The thread will be stopped: use wake_up_process() to start 339 + * it. See also kthread_run(). The new thread has SCHED_NORMAL policy and 340 + * is affine to all CPUs. 341 + * 342 + * If thread is going to be bound on a particular cpu, give its node 343 + * in @node, to get NUMA affinity for kthread stack, or else give NUMA_NO_NODE. 344 + * When woken, the thread will run @threadfn() with @data as its 345 + * argument. 
@threadfn() can either call do_exit() directly if it is a 346 + * standalone thread for which no one will call kthread_stop(), or 347 + * return when 'kthread_should_stop()' is true (which means 348 + * kthread_stop() has been called). The return value should be zero 349 + * or a negative error number; it will be passed to kthread_stop(). 350 + * 351 + * Returns a task_struct or ERR_PTR(-ENOMEM) or ERR_PTR(-EINTR). 352 + */ 353 + struct task_struct *kthread_create_on_node(int (*threadfn)(void *data), 354 + void *data, int node, 355 + const char namefmt[], 356 + ...) 357 + { 358 + struct task_struct *task; 359 + va_list args; 360 + 361 + va_start(args, namefmt); 362 + task = __kthread_create_on_node(threadfn, data, node, namefmt, args); 363 + va_end(args); 364 + 301 365 return task; 302 366 } 303 367 EXPORT_SYMBOL(kthread_create_on_node); ··· 402 390 cpu); 403 391 if (IS_ERR(p)) 404 392 return p; 393 + kthread_bind(p, cpu); 394 + /* CPU hotplug need to bind once again when unparking the thread. */ 405 395 set_bit(KTHREAD_IS_PER_CPU, &to_kthread(p)->flags); 406 396 to_kthread(p)->cpu = cpu; 407 - /* Park the thread to get it out of TASK_UNINTERRUPTIBLE state */ 408 - kthread_park(p); 409 397 return p; 410 398 } 411 399 ··· 419 407 * which might be about to be cleared. 420 408 */ 421 409 if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) { 410 + /* 411 + * Newly created kthread was parked when the CPU was offline. 412 + * The binding was lost and we need to set it again. 
413 + */ 422 414 if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags)) 423 415 __kthread_bind(k, kthread->cpu, TASK_PARKED); 424 416 wake_up_state(k, TASK_PARKED); ··· 556 540 return 0; 557 541 } 558 542 559 - void __init_kthread_worker(struct kthread_worker *worker, 543 + void __kthread_init_worker(struct kthread_worker *worker, 560 544 const char *name, 561 545 struct lock_class_key *key) 562 546 { 547 + memset(worker, 0, sizeof(struct kthread_worker)); 563 548 spin_lock_init(&worker->lock); 564 549 lockdep_set_class_and_name(&worker->lock, key, name); 565 550 INIT_LIST_HEAD(&worker->work_list); 566 - worker->task = NULL; 551 + INIT_LIST_HEAD(&worker->delayed_work_list); 567 552 } 568 - EXPORT_SYMBOL_GPL(__init_kthread_worker); 553 + EXPORT_SYMBOL_GPL(__kthread_init_worker); 569 554 570 555 /** 571 556 * kthread_worker_fn - kthread function to process kthread_worker 572 557 * @worker_ptr: pointer to initialized kthread_worker 573 558 * 574 - * This function can be used as @threadfn to kthread_create() or 575 - * kthread_run() with @worker_ptr argument pointing to an initialized 576 - * kthread_worker. The started kthread will process work_list until 577 - * the it is stopped with kthread_stop(). A kthread can also call 578 - * this function directly after extra initialization. 559 + * This function implements the main cycle of kthread worker. It processes 560 + * work_list until it is stopped with kthread_stop(). It sleeps when the queue 561 + * is empty. 579 562 * 580 - * Different kthreads can be used for the same kthread_worker as long 581 - * as there's only one kthread attached to it at any given time. A 582 - * kthread_worker without an attached kthread simply collects queued 583 - * kthread_works. 563 + * The works are not allowed to keep any locks, disable preemption or interrupts 564 + * when they finish. There is defined a safe point for freezing when one work 565 + * finishes and before a new one is started. 
566 + * 567 + * Also the works must not be handled by more than one worker at the same time, 568 + * see also kthread_queue_work(). 584 569 */ 585 570 int kthread_worker_fn(void *worker_ptr) 586 571 { 587 572 struct kthread_worker *worker = worker_ptr; 588 573 struct kthread_work *work; 589 574 590 - WARN_ON(worker->task); 575 + /* 576 + * FIXME: Update the check and remove the assignment when all kthread 577 + * worker users are created using kthread_create_worker*() functions. 578 + */ 579 + WARN_ON(worker->task && worker->task != current); 591 580 worker->task = current; 581 + 582 + if (worker->flags & KTW_FREEZABLE) 583 + set_freezable(); 584 + 592 585 repeat: 593 586 set_current_state(TASK_INTERRUPTIBLE); /* mb paired w/ kthread_stop */ 594 587 ··· 630 605 } 631 606 EXPORT_SYMBOL_GPL(kthread_worker_fn); 632 607 633 - /* insert @work before @pos in @worker */ 634 - static void insert_kthread_work(struct kthread_worker *worker, 635 - struct kthread_work *work, 636 - struct list_head *pos) 608 + static struct kthread_worker * 609 + __kthread_create_worker(int cpu, unsigned int flags, 610 + const char namefmt[], va_list args) 611 + { 612 + struct kthread_worker *worker; 613 + struct task_struct *task; 614 + 615 + worker = kzalloc(sizeof(*worker), GFP_KERNEL); 616 + if (!worker) 617 + return ERR_PTR(-ENOMEM); 618 + 619 + kthread_init_worker(worker); 620 + 621 + if (cpu >= 0) { 622 + char name[TASK_COMM_LEN]; 623 + 624 + /* 625 + * kthread_create_worker_on_cpu() allows to pass a generic 626 + * namefmt in compare with kthread_create_on_cpu. We need 627 + * to format it here. 
628 + */ 629 + vsnprintf(name, sizeof(name), namefmt, args); 630 + task = kthread_create_on_cpu(kthread_worker_fn, worker, 631 + cpu, name); 632 + } else { 633 + task = __kthread_create_on_node(kthread_worker_fn, worker, 634 + -1, namefmt, args); 635 + } 636 + 637 + if (IS_ERR(task)) 638 + goto fail_task; 639 + 640 + worker->flags = flags; 641 + worker->task = task; 642 + wake_up_process(task); 643 + return worker; 644 + 645 + fail_task: 646 + kfree(worker); 647 + return ERR_CAST(task); 648 + } 649 + 650 + /** 651 + * kthread_create_worker - create a kthread worker 652 + * @flags: flags modifying the default behavior of the worker 653 + * @namefmt: printf-style name for the kthread worker (task). 654 + * 655 + * Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) 656 + * when the needed structures could not get allocated, and ERR_PTR(-EINTR) 657 + * when the worker was SIGKILLed. 658 + */ 659 + struct kthread_worker * 660 + kthread_create_worker(unsigned int flags, const char namefmt[], ...) 661 + { 662 + struct kthread_worker *worker; 663 + va_list args; 664 + 665 + va_start(args, namefmt); 666 + worker = __kthread_create_worker(-1, flags, namefmt, args); 667 + va_end(args); 668 + 669 + return worker; 670 + } 671 + EXPORT_SYMBOL(kthread_create_worker); 672 + 673 + /** 674 + * kthread_create_worker_on_cpu - create a kthread worker and bind it 675 + * to a given CPU and the associated NUMA node. 676 + * @cpu: CPU number 677 + * @flags: flags modifying the default behavior of the worker 678 + * @namefmt: printf-style name for the kthread worker (task). 679 + * 680 + * Use a valid CPU number if you want to bind the kthread worker 681 + * to the given CPU and the associated NUMA node. 682 + * 683 + * A good practice is to add the cpu number also into the worker name. 684 + * For example, use kthread_create_worker_on_cpu(cpu, "helper/%d", cpu). 
685 + * 686 + * Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) 687 + * when the needed structures could not get allocated, and ERR_PTR(-EINTR) 688 + * when the worker was SIGKILLed. 689 + */ 690 + struct kthread_worker * 691 + kthread_create_worker_on_cpu(int cpu, unsigned int flags, 692 + const char namefmt[], ...) 693 + { 694 + struct kthread_worker *worker; 695 + va_list args; 696 + 697 + va_start(args, namefmt); 698 + worker = __kthread_create_worker(cpu, flags, namefmt, args); 699 + va_end(args); 700 + 701 + return worker; 702 + } 703 + EXPORT_SYMBOL(kthread_create_worker_on_cpu); 704 + 705 + /* 706 + * Returns true when the work could not be queued at the moment. 707 + * It happens when it is already pending in a worker list 708 + * or when it is being cancelled. 709 + */ 710 + static inline bool queuing_blocked(struct kthread_worker *worker, 711 + struct kthread_work *work) 637 712 { 638 713 lockdep_assert_held(&worker->lock); 714 + 715 + return !list_empty(&work->node) || work->canceling; 716 + } 717 + 718 + static void kthread_insert_work_sanity_check(struct kthread_worker *worker, 719 + struct kthread_work *work) 720 + { 721 + lockdep_assert_held(&worker->lock); 722 + WARN_ON_ONCE(!list_empty(&work->node)); 723 + /* Do not use a work with >1 worker, see kthread_queue_work() */ 724 + WARN_ON_ONCE(work->worker && work->worker != worker); 725 + } 726 + 727 + /* insert @work before @pos in @worker */ 728 + static void kthread_insert_work(struct kthread_worker *worker, 729 + struct kthread_work *work, 730 + struct list_head *pos) 731 + { 732 + kthread_insert_work_sanity_check(worker, work); 639 733 640 734 list_add_tail(&work->node, pos); 641 735 work->worker = worker; ··· 763 619 } 764 620 765 621 /** 766 - * queue_kthread_work - queue a kthread_work 622 + * kthread_queue_work - queue a kthread_work 767 623 * @worker: target kthread_worker 768 624 * @work: kthread_work to queue 769 625 * 770 626 * Queue @work to work processor @task 
for async execution. @task 771 627 * must have been created with kthread_worker_create(). Returns %true 772 628 * if @work was successfully queued, %false if it was already pending. 629 + * 630 + * Reinitialize the work if it needs to be used by another worker. 631 + * For example, when the worker was stopped and started again. 773 632 */ 774 - bool queue_kthread_work(struct kthread_worker *worker, 633 + bool kthread_queue_work(struct kthread_worker *worker, 775 634 struct kthread_work *work) 776 635 { 777 636 bool ret = false; 778 637 unsigned long flags; 779 638 780 639 spin_lock_irqsave(&worker->lock, flags); 781 - if (list_empty(&work->node)) { 782 - insert_kthread_work(worker, work, &worker->work_list); 640 + if (!queuing_blocked(worker, work)) { 641 + kthread_insert_work(worker, work, &worker->work_list); 783 642 ret = true; 784 643 } 785 644 spin_unlock_irqrestore(&worker->lock, flags); 786 645 return ret; 787 646 } 788 - EXPORT_SYMBOL_GPL(queue_kthread_work); 647 + EXPORT_SYMBOL_GPL(kthread_queue_work); 648 + 649 + /** 650 + * kthread_delayed_work_timer_fn - callback that queues the associated kthread 651 + * delayed work when the timer expires. 652 + * @__data: pointer to the data associated with the timer 653 + * 654 + * The format of the function is defined by struct timer_list. 655 + * It should have been called from irqsafe timer with irq already off. 656 + */ 657 + void kthread_delayed_work_timer_fn(unsigned long __data) 658 + { 659 + struct kthread_delayed_work *dwork = 660 + (struct kthread_delayed_work *)__data; 661 + struct kthread_work *work = &dwork->work; 662 + struct kthread_worker *worker = work->worker; 663 + 664 + /* 665 + * This might happen when a pending work is reinitialized. 666 + * It means that it is used a wrong way. 667 + */ 668 + if (WARN_ON_ONCE(!worker)) 669 + return; 670 + 671 + spin_lock(&worker->lock); 672 + /* Work must not be used with >1 worker, see kthread_queue_work(). 
*/ 673 + WARN_ON_ONCE(work->worker != worker); 674 + 675 + /* Move the work from worker->delayed_work_list. */ 676 + WARN_ON_ONCE(list_empty(&work->node)); 677 + list_del_init(&work->node); 678 + kthread_insert_work(worker, work, &worker->work_list); 679 + 680 + spin_unlock(&worker->lock); 681 + } 682 + EXPORT_SYMBOL(kthread_delayed_work_timer_fn); 683 + 684 + void __kthread_queue_delayed_work(struct kthread_worker *worker, 685 + struct kthread_delayed_work *dwork, 686 + unsigned long delay) 687 + { 688 + struct timer_list *timer = &dwork->timer; 689 + struct kthread_work *work = &dwork->work; 690 + 691 + WARN_ON_ONCE(timer->function != kthread_delayed_work_timer_fn || 692 + timer->data != (unsigned long)dwork); 693 + 694 + /* 695 + * If @delay is 0, queue @dwork->work immediately. This is for 696 + * both optimization and correctness. The earliest @timer can 697 + * expire is on the closest next tick and delayed_work users depend 698 + * on that there's no such delay when @delay is 0. 699 + */ 700 + if (!delay) { 701 + kthread_insert_work(worker, work, &worker->work_list); 702 + return; 703 + } 704 + 705 + /* Be paranoid and try to detect possible races already now. */ 706 + kthread_insert_work_sanity_check(worker, work); 707 + 708 + list_add(&work->node, &worker->delayed_work_list); 709 + work->worker = worker; 710 + timer_stats_timer_set_start_info(&dwork->timer); 711 + timer->expires = jiffies + delay; 712 + add_timer(timer); 713 + } 714 + 715 + /** 716 + * kthread_queue_delayed_work - queue the associated kthread work 717 + * after a delay. 718 + * @worker: target kthread_worker 719 + * @dwork: kthread_delayed_work to queue 720 + * @delay: number of jiffies to wait before queuing 721 + * 722 + * If the work has not been pending it starts a timer that will queue 723 + * the work after the given @delay. If @delay is zero, it queues the 724 + * work immediately. 725 + * 726 + * Return: %false if the @work has already been pending. 
It means that 727 + * either the timer was running or the work was queued. It returns %true 728 + * otherwise. 729 + */ 730 + bool kthread_queue_delayed_work(struct kthread_worker *worker, 731 + struct kthread_delayed_work *dwork, 732 + unsigned long delay) 733 + { 734 + struct kthread_work *work = &dwork->work; 735 + unsigned long flags; 736 + bool ret = false; 737 + 738 + spin_lock_irqsave(&worker->lock, flags); 739 + 740 + if (!queuing_blocked(worker, work)) { 741 + __kthread_queue_delayed_work(worker, dwork, delay); 742 + ret = true; 743 + } 744 + 745 + spin_unlock_irqrestore(&worker->lock, flags); 746 + return ret; 747 + } 748 + EXPORT_SYMBOL_GPL(kthread_queue_delayed_work); 789 749 790 750 struct kthread_flush_work { 791 751 struct kthread_work work; ··· 904 656 } 905 657 906 658 /** 907 - * flush_kthread_work - flush a kthread_work 659 + * kthread_flush_work - flush a kthread_work 908 660 * @work: work to flush 909 661 * 910 662 * If @work is queued or executing, wait for it to finish execution. 911 663 */ 912 - void flush_kthread_work(struct kthread_work *work) 664 + void kthread_flush_work(struct kthread_work *work) 913 665 { 914 666 struct kthread_flush_work fwork = { 915 667 KTHREAD_WORK_INIT(fwork.work, kthread_flush_work_fn), ··· 918 670 struct kthread_worker *worker; 919 671 bool noop = false; 920 672 921 - retry: 922 673 worker = work->worker; 923 674 if (!worker) 924 675 return; 925 676 926 677 spin_lock_irq(&worker->lock); 927 - if (work->worker != worker) { 928 - spin_unlock_irq(&worker->lock); 929 - goto retry; 930 - } 678 + /* Work must not be used with >1 worker, see kthread_queue_work(). 
*/ 679 + WARN_ON_ONCE(work->worker != worker); 931 680 932 681 if (!list_empty(&work->node)) 933 - insert_kthread_work(worker, &fwork.work, work->node.next); 682 + kthread_insert_work(worker, &fwork.work, work->node.next); 934 683 else if (worker->current_work == work) 935 - insert_kthread_work(worker, &fwork.work, worker->work_list.next); 684 + kthread_insert_work(worker, &fwork.work, 685 + worker->work_list.next); 936 686 else 937 687 noop = true; 938 688 ··· 939 693 if (!noop) 940 694 wait_for_completion(&fwork.done); 941 695 } 942 - EXPORT_SYMBOL_GPL(flush_kthread_work); 696 + EXPORT_SYMBOL_GPL(kthread_flush_work); 697 + 698 + /* 699 + * This function removes the work from the worker queue. Also it makes sure 700 + * that it won't get queued later via the delayed work's timer. 701 + * 702 + * The work might still be in use when this function finishes. See the 703 + * current_work processed by the worker. 704 + * 705 + * Return: %true if @work was pending and successfully canceled, 706 + * %false if @work was not pending 707 + */ 708 + static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork, 709 + unsigned long *flags) 710 + { 711 + /* Try to cancel the timer if it exists. */ 712 + if (is_dwork) { 713 + struct kthread_delayed_work *dwork = 714 + container_of(work, struct kthread_delayed_work, work); 715 + struct kthread_worker *worker = work->worker; 716 + 717 + /* 718 + * del_timer_sync() must be called to make sure that the timer 719 + * callback is not running. The lock must be temporarily released 720 + * to avoid a deadlock with the callback. In the meantime, 721 + * any queuing is blocked by setting the canceling counter. 722 + */ 723 + work->canceling++; 724 + spin_unlock_irqrestore(&worker->lock, *flags); 725 + del_timer_sync(&dwork->timer); 726 + spin_lock_irqsave(&worker->lock, *flags); 727 + work->canceling--; 728 + 729 + /* 730 + * Try to remove the work from a worker list. 
It might either 732 + * be from worker->work_list or from worker->delayed_work_list. 733 + */ 734 + if (!list_empty(&work->node)) { 735 + list_del_init(&work->node); 736 + return true; 737 + } 738 + 739 + return false; 740 + } 943 741 944 742 /** 945 - * flush_kthread_worker - flush all current works on a kthread_worker 743 + * kthread_mod_delayed_work - modify delay of or queue a kthread delayed work 744 + * @worker: kthread worker to use 745 + * @dwork: kthread delayed work to queue 746 + * @delay: number of jiffies to wait before queuing 747 + * 748 + * If @dwork is idle, equivalent to kthread_queue_delayed_work(). Otherwise, 749 + * modify @dwork's timer so that it expires after @delay. If @delay is zero, 750 + * @work is guaranteed to be queued immediately. 751 + * 752 + * Return: %true if @dwork was pending and its timer was modified, 753 + * %false otherwise. 754 + * 755 + * A special case is when the work is being canceled in parallel. 756 + * It might be caused either by the real kthread_cancel_delayed_work_sync() 757 + * or yet another kthread_mod_delayed_work() call. We let the other command 758 + * win and return %false here. The caller is supposed to synchronize these 759 + * operations a reasonable way. 760 + * 761 + * This function is safe to call from any context including IRQ handler. 762 + * See __kthread_cancel_work() and kthread_delayed_work_timer_fn() 763 + * for details. 764 + */ 765 + bool kthread_mod_delayed_work(struct kthread_worker *worker, 766 + struct kthread_delayed_work *dwork, 767 + unsigned long delay) 768 + { 769 + struct kthread_work *work = &dwork->work; 770 + unsigned long flags; 771 + int ret = false; 772 + 773 + spin_lock_irqsave(&worker->lock, flags); 774 + 775 + /* Do not bother with canceling when never queued. 
*/ 776 + if (!work->worker) 777 + goto fast_queue; 778 + 779 + /* Work must not be used with >1 worker, see kthread_queue_work() */ 780 + WARN_ON_ONCE(work->worker != worker); 781 + 782 + /* Do not fight with another command that is canceling this work. */ 783 + if (work->canceling) 784 + goto out; 785 + 786 + ret = __kthread_cancel_work(work, true, &flags); 787 + fast_queue: 788 + __kthread_queue_delayed_work(worker, dwork, delay); 789 + out: 790 + spin_unlock_irqrestore(&worker->lock, flags); 791 + return ret; 792 + } 793 + EXPORT_SYMBOL_GPL(kthread_mod_delayed_work); 794 + 795 + static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork) 796 + { 797 + struct kthread_worker *worker = work->worker; 798 + unsigned long flags; 799 + int ret = false; 800 + 801 + if (!worker) 802 + goto out; 803 + 804 + spin_lock_irqsave(&worker->lock, flags); 805 + /* Work must not be used with >1 worker, see kthread_queue_work(). */ 806 + WARN_ON_ONCE(work->worker != worker); 807 + 808 + ret = __kthread_cancel_work(work, is_dwork, &flags); 809 + 810 + if (worker->current_work != work) 811 + goto out_fast; 812 + 813 + /* 814 + * The work is in progress and we need to wait with the lock released. 815 + * In the meantime, block any queuing by setting the canceling counter. 816 + */ 817 + work->canceling++; 818 + spin_unlock_irqrestore(&worker->lock, flags); 819 + kthread_flush_work(work); 820 + spin_lock_irqsave(&worker->lock, flags); 821 + work->canceling--; 822 + 823 + out_fast: 824 + spin_unlock_irqrestore(&worker->lock, flags); 825 + out: 826 + return ret; 827 + } 828 + 829 + /** 830 + * kthread_cancel_work_sync - cancel a kthread work and wait for it to finish 831 + * @work: the kthread work to cancel 832 + * 833 + * Cancel @work and wait for its execution to finish. This function 834 + * can be used even if the work re-queues itself. On return from this 835 + * function, @work is guaranteed to be not pending or executing on any CPU. 
836 + * 837 + * kthread_cancel_work_sync(&delayed_work->work) must not be used for 838 + * delayed_work's. Use kthread_cancel_delayed_work_sync() instead. 839 + * 840 + * The caller must ensure that the worker on which @work was last 841 + * queued can't be destroyed before this function returns. 842 + * 843 + * Return: %true if @work was pending, %false otherwise. 844 + */ 845 + bool kthread_cancel_work_sync(struct kthread_work *work) 846 + { 847 + return __kthread_cancel_work_sync(work, false); 848 + } 849 + EXPORT_SYMBOL_GPL(kthread_cancel_work_sync); 850 + 851 + /** 852 + * kthread_cancel_delayed_work_sync - cancel a kthread delayed work and 853 + * wait for it to finish. 854 + * @dwork: the kthread delayed work to cancel 855 + * 856 + * This is kthread_cancel_work_sync() for delayed works. 857 + * 858 + * Return: %true if @dwork was pending, %false otherwise. 859 + */ 860 + bool kthread_cancel_delayed_work_sync(struct kthread_delayed_work *dwork) 861 + { 862 + return __kthread_cancel_work_sync(&dwork->work, true); 863 + } 864 + EXPORT_SYMBOL_GPL(kthread_cancel_delayed_work_sync); 865 + 866 + /** 867 + * kthread_flush_worker - flush all current works on a kthread_worker 946 868 * @worker: worker to flush 947 869 * 948 870 * Wait until all currently executing or pending works on @worker are 949 871 * finished. 
950 872 */ 951 - void flush_kthread_worker(struct kthread_worker *worker) 873 + void kthread_flush_worker(struct kthread_worker *worker) 952 874 { 953 875 struct kthread_flush_work fwork = { 954 876 KTHREAD_WORK_INIT(fwork.work, kthread_flush_work_fn), 955 877 COMPLETION_INITIALIZER_ONSTACK(fwork.done), 956 878 }; 957 879 958 - queue_kthread_work(worker, &fwork.work); 880 + kthread_queue_work(worker, &fwork.work); 959 881 wait_for_completion(&fwork.done); 960 882 } 961 - EXPORT_SYMBOL_GPL(flush_kthread_worker); 883 + EXPORT_SYMBOL_GPL(kthread_flush_worker); 884 + 885 + /** 886 + * kthread_destroy_worker - destroy a kthread worker 887 + * @worker: worker to be destroyed 888 + * 889 + * Flush and destroy @worker. The simple flush is enough because the kthread 890 + * worker API is used only in trivial scenarios. There are no multi-step state 891 + * machines needed. 892 + */ 893 + void kthread_destroy_worker(struct kthread_worker *worker) 894 + { 895 + struct task_struct *task; 896 + 897 + task = worker->task; 898 + if (WARN_ON(!task)) 899 + return; 900 + 901 + kthread_flush_worker(worker); 902 + kthread_stop(task); 903 + WARN_ON(!list_empty(&worker->work_list)); 904 + kfree(worker); 905 + } 906 + EXPORT_SYMBOL(kthread_destroy_worker);
+40 -7
kernel/panic.c
··· 71 71 panic_smp_self_stop(); 72 72 } 73 73 74 + /* 75 + * Stop other CPUs in panic. Architecture dependent code may override this 76 + * with more suitable version. For example, if the architecture supports 77 + * crash dump, it should save registers of each stopped CPU and disable 78 + * per-CPU features such as virtualization extensions. 79 + */ 80 + void __weak crash_smp_send_stop(void) 81 + { 82 + static int cpus_stopped; 83 + 84 + /* 85 + * This function can be called twice in panic path, but obviously 86 + * we execute this only once. 87 + */ 88 + if (cpus_stopped) 89 + return; 90 + 91 + /* 92 + * Note smp_send_stop is the usual smp shutdown function, which 93 + * unfortunately means it may not be hardened to work in a panic 94 + * situation. 95 + */ 96 + smp_send_stop(); 97 + cpus_stopped = 1; 98 + } 99 + 74 100 atomic_t panic_cpu = ATOMIC_INIT(PANIC_CPU_INVALID); 75 101 76 102 /* ··· 190 164 if (!_crash_kexec_post_notifiers) { 191 165 printk_nmi_flush_on_panic(); 192 166 __crash_kexec(NULL); 193 - } 194 167 195 - /* 196 - * Note smp_send_stop is the usual smp shutdown function, which 197 - * unfortunately means it may not be hardened to work in a panic 198 - * situation. 199 - */ 200 - smp_send_stop(); 168 + /* 169 + * Note smp_send_stop is the usual smp shutdown function, which 170 + * unfortunately means it may not be hardened to work in a 171 + * panic situation. 172 + */ 173 + smp_send_stop(); 174 + } else { 175 + /* 176 + * If we want to do crash dump after notifier calls and 177 + * kmsg_dump, we will need architecture dependent extra 178 + * works in addition to stopping other CPUs. 179 + */ 180 + crash_smp_send_stop(); 181 + } 201 182 202 183 /* 203 184 * Run any panic handlers, including those that might need to
+2 -1
kernel/ptrace.c
··· 73 73 { 74 74 BUG_ON(!child->ptrace); 75 75 76 + clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE); 77 + 76 78 child->parent = child->real_parent; 77 79 list_del_init(&child->ptrace_entry); 78 80 ··· 491 489 492 490 /* Architecture-specific hardware disable .. */ 493 491 ptrace_disable(child); 494 - clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE); 495 492 496 493 write_lock_irq(&tasklist_lock); 497 494 /*
+14 -10
kernel/relay.c
··· 328 328 329 329 /** 330 330 * wakeup_readers - wake up readers waiting on a channel 331 - * @data: contains the channel buffer 331 + * @work: contains the channel buffer 332 332 * 333 - * This is the timer function used to defer reader waking. 333 + * This is the function used to defer reader waking 334 334 */ 335 - static void wakeup_readers(unsigned long data) 335 + static void wakeup_readers(struct irq_work *work) 336 336 { 337 - struct rchan_buf *buf = (struct rchan_buf *)data; 337 + struct rchan_buf *buf; 338 + 339 + buf = container_of(work, struct rchan_buf, wakeup_work); 338 340 wake_up_interruptible(&buf->read_wait); 339 341 } 340 342 ··· 354 352 if (init) { 355 353 init_waitqueue_head(&buf->read_wait); 356 354 kref_init(&buf->kref); 357 - setup_timer(&buf->timer, wakeup_readers, (unsigned long)buf); 358 - } else 359 - del_timer_sync(&buf->timer); 355 + init_irq_work(&buf->wakeup_work, wakeup_readers); 356 + } else { 357 + irq_work_sync(&buf->wakeup_work); 358 + } 360 359 361 360 buf->subbufs_produced = 0; 362 361 buf->subbufs_consumed = 0; ··· 490 487 static void relay_close_buf(struct rchan_buf *buf) 491 488 { 492 489 buf->finalized = 1; 493 - del_timer_sync(&buf->timer); 490 + irq_work_sync(&buf->wakeup_work); 494 491 buf->chan->cb->remove_buf_file(buf->dentry); 495 492 kref_put(&buf->kref, relay_remove_buf); 496 493 } ··· 757 754 buf->early_bytes += buf->chan->subbuf_size - 758 755 buf->padding[old_subbuf]; 759 756 smp_mb(); 760 - if (waitqueue_active(&buf->read_wait)) 757 + if (waitqueue_active(&buf->read_wait)) { 761 758 /* 762 759 * Calling wake_up_interruptible() from here 763 760 * will deadlock if we happen to be logging 764 761 * from the scheduler (trying to re-grab 765 762 * rq->lock), so defer it. 766 763 */ 767 - mod_timer(&buf->timer, jiffies + 1); 764 + irq_work_queue(&buf->wakeup_work); 765 + } 768 766 } 769 767 770 768 old = buf->data;
+5
kernel/smpboot.c
··· 186 186 kfree(td); 187 187 return PTR_ERR(tsk); 188 188 } 189 + /* 190 + * Park the thread so that it could start right on the CPU 191 + * when it is available. 192 + */ 193 + kthread_park(tsk); 189 194 get_task_struct(tsk); 190 195 *per_cpu_ptr(ht->store, cpu) = tsk; 191 196 if (ht->create) {
+1 -1
kernel/workqueue.c
··· 4261 4261 * This function is called without any synchronization and @task 4262 4262 * could be in any state. Be careful with dereferences. 4263 4263 */ 4264 - worker = probe_kthread_data(task); 4264 + worker = kthread_probe_data(task); 4265 4265 4266 4266 /* 4267 4267 * Carefully copy the associated workqueue's workfn and name. Keep
+1
lib/Makefile
··· 180 180 181 181 obj-$(CONFIG_STACKDEPOT) += stackdepot.o 182 182 KASAN_SANITIZE_stackdepot.o := n 183 + KCOV_INSTRUMENT_stackdepot.o := n 183 184 184 185 libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o \ 185 186 fdt_empty_tree.o
+46 -4
lib/bitmap.c
··· 496 496 * ranges. Consecutively set bits are shown as two hyphen-separated 497 497 * decimal numbers, the smallest and largest bit numbers set in 498 498 * the range. 499 + * Optionally each range can be postfixed to denote that only parts of it 500 + * should be set. The range will divided to groups of specific size. 501 + * From each group will be used only defined amount of bits. 502 + * Syntax: range:used_size/group_size 503 + * Example: 0-1023:2/256 ==> 0,1,256,257,512,513,768,769 499 504 * 500 505 * Returns 0 on success, -errno on invalid input strings. 501 506 * Error values: ··· 512 507 int is_user, unsigned long *maskp, 513 508 int nmaskbits) 514 509 { 515 - unsigned a, b; 510 + unsigned int a, b, old_a, old_b; 511 + unsigned int group_size, used_size; 516 512 int c, old_c, totaldigits, ndigits; 517 513 const char __user __force *ubuf = (const char __user __force *)buf; 518 - int at_start, in_range; 514 + int at_start, in_range, in_partial_range; 519 515 520 516 totaldigits = c = 0; 517 + old_a = old_b = 0; 518 + group_size = used_size = 0; 521 519 bitmap_zero(maskp, nmaskbits); 522 520 do { 523 521 at_start = 1; 524 522 in_range = 0; 523 + in_partial_range = 0; 525 524 a = b = 0; 526 525 ndigits = totaldigits; 527 526 ··· 556 547 if ((totaldigits != ndigits) && isspace(old_c)) 557 548 return -EINVAL; 558 549 550 + if (c == '/') { 551 + used_size = a; 552 + at_start = 1; 553 + in_range = 0; 554 + a = b = 0; 555 + continue; 556 + } 557 + 558 + if (c == ':') { 559 + old_a = a; 560 + old_b = b; 561 + at_start = 1; 562 + in_range = 0; 563 + in_partial_range = 1; 564 + a = b = 0; 565 + continue; 566 + } 567 + 559 568 if (c == '-') { 560 569 if (at_start || in_range) 561 570 return -EINVAL; ··· 594 567 } 595 568 if (ndigits == totaldigits) 596 569 continue; 570 + if (in_partial_range) { 571 + group_size = a; 572 + a = old_a; 573 + b = old_b; 574 + old_a = old_b = 0; 575 + } 597 576 /* if no digit is after '-', it's wrong*/ 598 577 if (at_start && in_range) 
599 578 return -EINVAL; 600 - if (!(a <= b)) 579 + if (!(a <= b) || !(used_size <= group_size)) 601 580 return -EINVAL; 602 581 if (b >= nmaskbits) 603 582 return -ERANGE; 604 583 while (a <= b) { 605 - set_bit(a, maskp); 584 + if (in_partial_range) { 585 + static int pos_in_group = 1; 586 + 587 + if (pos_in_group <= used_size) 588 + set_bit(a, maskp); 589 + 590 + if (a == b || ++pos_in_group > group_size) 591 + pos_in_group = 1; 592 + } else 593 + set_bit(a, maskp); 606 594 a++; 607 595 } 608 596 } while (buflen && c == ',');
+1 -5
lib/kstrtox.c
··· 48 48 { 49 49 unsigned long long res; 50 50 unsigned int rv; 51 - int overflow; 52 51 53 52 res = 0; 54 53 rv = 0; 55 - overflow = 0; 56 54 while (*s) { 57 55 unsigned int val; 58 56 ··· 69 71 */ 70 72 if (unlikely(res & (~0ull << 60))) { 71 73 if (res > div_u64(ULLONG_MAX - val, base)) 72 - overflow = 1; 74 + rv |= KSTRTOX_OVERFLOW; 73 75 } 74 76 res = res * base + val; 75 77 rv++; 76 78 s++; 77 79 } 78 80 *p = res; 79 - if (overflow) 80 - rv |= KSTRTOX_OVERFLOW; 81 81 return rv; 82 82 } 83 83
+2
lib/strncpy_from_user.c
··· 1 1 #include <linux/compiler.h> 2 2 #include <linux/export.h> 3 3 #include <linux/kasan-checks.h> 4 + #include <linux/thread_info.h> 4 5 #include <linux/uaccess.h> 5 6 #include <linux/kernel.h> 6 7 #include <linux/errno.h> ··· 112 111 long retval; 113 112 114 113 kasan_check_write(dst, count); 114 + check_object_size(dst, count, false); 115 115 user_access_begin(); 116 116 retval = do_strncpy_from_user(dst, src, count, max); 117 117 user_access_end();
+3 -3
mm/bootmem.c
··· 155 155 { 156 156 unsigned long cursor, end; 157 157 158 - kmemleak_free_part(__va(physaddr), size); 158 + kmemleak_free_part_phys(physaddr, size); 159 159 160 160 cursor = PFN_UP(physaddr); 161 161 end = PFN_DOWN(physaddr + size); ··· 399 399 { 400 400 unsigned long start, end; 401 401 402 - kmemleak_free_part(__va(physaddr), size); 402 + kmemleak_free_part_phys(physaddr, size); 403 403 404 404 start = PFN_UP(physaddr); 405 405 end = PFN_DOWN(physaddr + size); ··· 420 420 { 421 421 unsigned long start, end; 422 422 423 - kmemleak_free_part(__va(physaddr), size); 423 + kmemleak_free_part_phys(physaddr, size); 424 424 425 425 start = PFN_UP(physaddr); 426 426 end = PFN_DOWN(physaddr + size);
+1 -1
mm/cma.c
··· 336 336 * kmemleak scans/reads tracked objects for pointers to other 337 337 * objects but this address isn't mapped and accessible 338 338 */ 339 - kmemleak_ignore(phys_to_virt(addr)); 339 + kmemleak_ignore_phys(addr); 340 340 base = addr; 341 341 } 342 342
+47
mm/kmemleak.c
··· 90 90 #include <linux/cache.h> 91 91 #include <linux/percpu.h> 92 92 #include <linux/hardirq.h> 93 + #include <linux/bootmem.h> 94 + #include <linux/pfn.h> 93 95 #include <linux/mmzone.h> 94 96 #include <linux/slab.h> 95 97 #include <linux/thread_info.h> ··· 1122 1120 log_early(KMEMLEAK_NO_SCAN, ptr, 0, 0); 1123 1121 } 1124 1122 EXPORT_SYMBOL(kmemleak_no_scan); 1123 + 1124 + /** 1125 + * kmemleak_alloc_phys - similar to kmemleak_alloc but taking a physical 1126 + * address argument 1127 + */ 1128 + void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count, 1129 + gfp_t gfp) 1130 + { 1131 + if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn) 1132 + kmemleak_alloc(__va(phys), size, min_count, gfp); 1133 + } 1134 + EXPORT_SYMBOL(kmemleak_alloc_phys); 1135 + 1136 + /** 1137 + * kmemleak_free_part_phys - similar to kmemleak_free_part but taking a 1138 + * physical address argument 1139 + */ 1140 + void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size) 1141 + { 1142 + if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn) 1143 + kmemleak_free_part(__va(phys), size); 1144 + } 1145 + EXPORT_SYMBOL(kmemleak_free_part_phys); 1146 + 1147 + /** 1148 + * kmemleak_not_leak_phys - similar to kmemleak_not_leak but taking a physical 1149 + * address argument 1150 + */ 1151 + void __ref kmemleak_not_leak_phys(phys_addr_t phys) 1152 + { 1153 + if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn) 1154 + kmemleak_not_leak(__va(phys)); 1155 + } 1156 + EXPORT_SYMBOL(kmemleak_not_leak_phys); 1157 + 1158 + /** 1159 + * kmemleak_ignore_phys - similar to kmemleak_ignore but taking a physical 1160 + * address argument 1161 + */ 1162 + void __ref kmemleak_ignore_phys(phys_addr_t phys) 1163 + { 1164 + if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn) 1165 + kmemleak_ignore(__va(phys)); 1166 + } 1167 + EXPORT_SYMBOL(kmemleak_ignore_phys); 1125 1168 1126 1169 /* 1127 1170 * Update an object's checksum and return 
true if it was modified.
+4 -4
mm/memblock.c
··· 723 723 (unsigned long long)base + size - 1, 724 724 (void *)_RET_IP_); 725 725 726 - kmemleak_free_part(__va(base), size); 726 + kmemleak_free_part_phys(base, size); 727 727 return memblock_remove_range(&memblock.reserved, base, size); 728 728 } 729 729 ··· 1152 1152 * The min_count is set to 0 so that memblock allocations are 1153 1153 * never reported as leaks. 1154 1154 */ 1155 - kmemleak_alloc(__va(found), size, 0, 0); 1155 + kmemleak_alloc_phys(found, size, 0, 0); 1156 1156 return found; 1157 1157 } 1158 1158 return 0; ··· 1399 1399 memblock_dbg("%s: [%#016llx-%#016llx] %pF\n", 1400 1400 __func__, (u64)base, (u64)base + size - 1, 1401 1401 (void *)_RET_IP_); 1402 - kmemleak_free_part(__va(base), size); 1402 + kmemleak_free_part_phys(base, size); 1403 1403 memblock_remove_range(&memblock.reserved, base, size); 1404 1404 } 1405 1405 ··· 1419 1419 memblock_dbg("%s: [%#016llx-%#016llx] %pF\n", 1420 1420 __func__, (u64)base, (u64)base + size - 1, 1421 1421 (void *)_RET_IP_); 1422 - kmemleak_free_part(__va(base), size); 1422 + kmemleak_free_part_phys(base, size); 1423 1423 cursor = PFN_UP(base); 1424 1424 end = PFN_DOWN(base + size); 1425 1425
+1 -1
mm/nobootmem.c
··· 84 84 { 85 85 unsigned long cursor, end; 86 86 87 - kmemleak_free_part(__va(addr), size); 87 + kmemleak_free_part_phys(addr, size); 88 88 89 89 cursor = PFN_UP(addr); 90 90 end = PFN_DOWN(addr + size);
-2
net/batman-adv/debugfs.h
··· 20 20 21 21 #include "main.h" 22 22 23 - #include <linux/kconfig.h> 24 - 25 23 struct net_device; 26 24 27 25 #define BATADV_DEBUGFS_SUBDIR "batman_adv"
+245 -93
scripts/checkpatch.pl
··· 54 54 my $spelling_file = "$D/spelling.txt"; 55 55 my $codespell = 0; 56 56 my $codespellfile = "/usr/share/codespell/dictionary.txt"; 57 + my $conststructsfile = "$D/const_structs.checkpatch"; 57 58 my $color = 1; 58 59 my $allow_c99_comments = 1; 59 60 ··· 524 523 ["module_param_array_named", 5], 525 524 ["debugfs_create_(?:file|u8|u16|u32|u64|x8|x16|x32|x64|size_t|atomic_t|bool|blob|regset32|u32_array)", 2], 526 525 ["proc_create(?:_data|)", 2], 527 - ["(?:CLASS|DEVICE|SENSOR)_ATTR", 2], 526 + ["(?:CLASS|DEVICE|SENSOR|SENSOR_DEVICE|IIO_DEVICE)_ATTR", 2], 527 + ["IIO_DEV_ATTR_[A-Z_]+", 1], 528 + ["SENSOR_(?:DEVICE_|)ATTR_2", 2], 529 + ["SENSOR_TEMPLATE(?:_2|)", 3], 530 + ["__ATTR", 2], 528 531 ); 529 532 530 533 #Create a search pattern for all these functions to speed up a loop below ··· 545 540 S_IALLUGO | 546 541 0[0-7][0-7][2367] 547 542 }x; 543 + 544 + our %mode_permission_string_types = ( 545 + "S_IRWXU" => 0700, 546 + "S_IRUSR" => 0400, 547 + "S_IWUSR" => 0200, 548 + "S_IXUSR" => 0100, 549 + "S_IRWXG" => 0070, 550 + "S_IRGRP" => 0040, 551 + "S_IWGRP" => 0020, 552 + "S_IXGRP" => 0010, 553 + "S_IRWXO" => 0007, 554 + "S_IROTH" => 0004, 555 + "S_IWOTH" => 0002, 556 + "S_IXOTH" => 0001, 557 + "S_IRWXUGO" => 0777, 558 + "S_IRUGO" => 0444, 559 + "S_IWUGO" => 0222, 560 + "S_IXUGO" => 0111, 561 + ); 562 + 563 + #Create a search pattern for all these strings to speed up a loop below 564 + our $mode_perms_string_search = ""; 565 + foreach my $entry (keys %mode_permission_string_types) { 566 + $mode_perms_string_search .= '|' if ($mode_perms_string_search ne ""); 567 + $mode_perms_string_search .= $entry; 568 + } 548 569 549 570 our $allowed_asm_includes = qr{(?x: 550 571 irq| ··· 628 597 } 629 598 630 599 $misspellings = join("|", sort keys %spelling_fix) if keys %spelling_fix; 600 + 601 + my $const_structs = ""; 602 + if (open(my $conststructs, '<', $conststructsfile)) { 603 + while (<$conststructs>) { 604 + my $line = $_; 605 + 606 + $line =~ s/\s*\n?$//g; 607 
+ $line =~ s/^\s*//g; 608 + 609 + next if ($line =~ m/^\s*#/); 610 + next if ($line =~ m/^\s*$/); 611 + if ($line =~ /\s/) { 612 + print("$conststructsfile: '$line' invalid - ignored\n"); 613 + next; 614 + } 615 + 616 + $const_structs .= '|' if ($const_structs ne ""); 617 + $const_structs .= $line; 618 + } 619 + close($conststructsfile); 620 + } else { 621 + warn "No structs that should be const will be found - file '$conststructsfile': $!\n"; 622 + } 631 623 632 624 sub build_types { 633 625 my $mods = "(?x: \n" . join("|\n ", (@modifierList, @modifierListFile)) . "\n)"; ··· 756 702 $camelcase{$1} = 1; 757 703 } 758 704 } 705 + } 706 + 707 + sub is_maintained_obsolete { 708 + my ($filename) = @_; 709 + 710 + return 0 if (!(-e "$root/scripts/get_maintainer.pl")); 711 + 712 + my $status = `perl $root/scripts/get_maintainer.pl --status --nom --nol --nogit --nogit-fallback -f $filename 2>&1`; 713 + 714 + return $status =~ /obsolete/i; 759 715 } 760 716 761 717 my $camelcase_seeded = 0; ··· 2353 2289 } 2354 2290 2355 2291 if ($found_file) { 2292 + if (is_maintained_obsolete($realfile)) { 2293 + WARN("OBSOLETE", 2294 + "$realfile is marked as 'obsolete' in the MAINTAINERS hierarchy. No unnecessary modifications please.\n"); 2295 + } 2356 2296 if ($realfile =~ m@^(?:drivers/net/|net/|drivers/staging/)@) { 2357 2297 $check = 1; 2358 2298 } else { ··· 3005 2937 $rawline =~ m@^\+[ \t]*.+\*\/[ \t]*$@) { #non blank */ 3006 2938 WARN("BLOCK_COMMENT_STYLE", 3007 2939 "Block comments use a trailing */ on a separate line\n" . 
$herecurr); 2940 + } 2941 + 2942 + # Block comment * alignment 2943 + if ($prevline =~ /$;[ \t]*$/ && #ends in comment 2944 + $line =~ /^\+[ \t]*$;/ && #leading comment 2945 + $rawline =~ /^\+[ \t]*\*/ && #leading * 2946 + (($prevrawline =~ /^\+.*?\/\*/ && #leading /* 2947 + $prevrawline !~ /\*\/[ \t]*$/) || #no trailing */ 2948 + $prevrawline =~ /^\+[ \t]*\*/)) { #leading * 2949 + my $oldindent; 2950 + $prevrawline =~ m@^\+([ \t]*/?)\*@; 2951 + if (defined($1)) { 2952 + $oldindent = expand_tabs($1); 2953 + } else { 2954 + $prevrawline =~ m@^\+(.*/?)\*@; 2955 + $oldindent = expand_tabs($1); 2956 + } 2957 + $rawline =~ m@^\+([ \t]*)\*@; 2958 + my $newindent = $1; 2959 + $newindent = expand_tabs($newindent); 2960 + if (length($oldindent) ne length($newindent)) { 2961 + WARN("BLOCK_COMMENT_STYLE", 2962 + "Block comments should align the * on each line\n" . $hereprev); 2963 + } 3008 2964 } 3009 2965 3010 2966 # check for missing blank lines after struct/union declarations ··· 4757 4665 $has_flow_statement = 1 if ($ctx =~ /\b(goto|return)\b/); 4758 4666 $has_arg_concat = 1 if ($ctx =~ /\#\#/ && $ctx !~ /\#\#\s*(?:__VA_ARGS__|args)\b/); 4759 4667 4760 - $dstat =~ s/^.\s*\#\s*define\s+$Ident(?:\([^\)]*\))?\s*//; 4668 + $dstat =~ s/^.\s*\#\s*define\s+$Ident(\([^\)]*\))?\s*//; 4669 + my $define_args = $1; 4670 + my $define_stmt = $dstat; 4671 + my @def_args = (); 4672 + 4673 + if (defined $define_args && $define_args ne "") { 4674 + $define_args = substr($define_args, 1, length($define_args) - 2); 4675 + $define_args =~ s/\s*//g; 4676 + @def_args = split(",", $define_args); 4677 + } 4678 + 4761 4679 $dstat =~ s/$;//g; 4762 4680 $dstat =~ s/\\\n.//g; 4763 4681 $dstat =~ s/^\s*//s; ··· 4803 4701 ^\[ 4804 4702 }x; 4805 4703 #print "REST<$rest> dstat<$dstat> ctx<$ctx>\n"; 4704 + 4705 + $ctx =~ s/\n*$//; 4706 + my $herectx = $here . 
"\n"; 4707 + my $stmt_cnt = statement_rawlines($ctx); 4708 + 4709 + for (my $n = 0; $n < $stmt_cnt; $n++) { 4710 + $herectx .= raw_line($linenr, $n) . "\n"; 4711 + } 4712 + 4806 4713 if ($dstat ne '' && 4807 4714 $dstat !~ /^(?:$Ident|-?$Constant),$/ && # 10, // foo(), 4808 4715 $dstat !~ /^(?:$Ident|-?$Constant);$/ && # foo(); ··· 4827 4716 $dstat !~ /^\(\{/ && # ({... 4828 4717 $ctx !~ /^.\s*#\s*define\s+TRACE_(?:SYSTEM|INCLUDE_FILE|INCLUDE_PATH)\b/) 4829 4718 { 4830 - $ctx =~ s/\n*$//; 4831 - my $herectx = $here . "\n"; 4832 - my $cnt = statement_rawlines($ctx); 4833 - 4834 - for (my $n = 0; $n < $cnt; $n++) { 4835 - $herectx .= raw_line($linenr, $n) . "\n"; 4836 - } 4837 4719 4838 4720 if ($dstat =~ /;/) { 4839 4721 ERROR("MULTISTATEMENT_MACRO_USE_DO_WHILE", ··· 4834 4730 } else { 4835 4731 ERROR("COMPLEX_MACRO", 4836 4732 "Macros with complex values should be enclosed in parentheses\n" . "$herectx"); 4733 + } 4734 + 4735 + } 4736 + 4737 + # Make $define_stmt single line, comment-free, etc 4738 + my @stmt_array = split('\n', $define_stmt); 4739 + my $first = 1; 4740 + $define_stmt = ""; 4741 + foreach my $l (@stmt_array) { 4742 + $l =~ s/\\$//; 4743 + if ($first) { 4744 + $define_stmt = $l; 4745 + $first = 0; 4746 + } elsif ($l =~ /^[\+ ]/) { 4747 + $define_stmt .= substr($l, 1); 4748 + } 4749 + } 4750 + $define_stmt =~ s/$;//g; 4751 + $define_stmt =~ s/\s+/ /g; 4752 + $define_stmt = trim($define_stmt); 4753 + 4754 + # check if any macro arguments are reused (ignore '...' and 'type') 4755 + foreach my $arg (@def_args) { 4756 + next if ($arg =~ /\.\.\./); 4757 + next if ($arg =~ /^type$/i); 4758 + my $tmp = $define_stmt; 4759 + $tmp =~ s/\b(typeof|__typeof__|__builtin\w+|typecheck\s*\(\s*$Type\s*,|\#+)\s*\(*\s*$arg\s*\)*\b//g; 4760 + $tmp =~ s/\#+\s*$arg\b//g; 4761 + $tmp =~ s/\b$arg\s*\#\#//g; 4762 + my $use_cnt = $tmp =~ s/\b$arg\b//g; 4763 + if ($use_cnt > 1) { 4764 + CHK("MACRO_ARG_REUSE", 4765 + "Macro argument reuse '$arg' - possible side-effects?\n" . 
"$herectx"); 4766 + } 4767 + # check if any macro arguments may have other precedence issues 4768 + if ($define_stmt =~ m/($Operators)?\s*\b$arg\b\s*($Operators)?/m && 4769 + ((defined($1) && $1 ne ',') || 4770 + (defined($2) && $2 ne ','))) { 4771 + CHK("MACRO_ARG_PRECEDENCE", 4772 + "Macro argument '$arg' may be better as '($arg)' to avoid precedence issues\n" . "$herectx"); 4837 4773 } 4838 4774 } 4839 4775 ··· 5639 5495 } 5640 5496 5641 5497 # Check for memcpy(foo, bar, ETH_ALEN) that could be ether_addr_copy(foo, bar) 5642 - if ($^V && $^V ge 5.10.0 && 5643 - defined $stat && 5644 - $stat =~ /^\+(?:.*?)\bmemcpy\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/) { 5645 - if (WARN("PREFER_ETHER_ADDR_COPY", 5646 - "Prefer ether_addr_copy() over memcpy() if the Ethernet addresses are __aligned(2)\n" . "$here\n$stat\n") && 5647 - $fix) { 5648 - $fixed[$fixlinenr] =~ s/\bmemcpy\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/ether_addr_copy($2, $7)/; 5649 - } 5650 - } 5498 + # if ($^V && $^V ge 5.10.0 && 5499 + # defined $stat && 5500 + # $stat =~ /^\+(?:.*?)\bmemcpy\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/) { 5501 + # if (WARN("PREFER_ETHER_ADDR_COPY", 5502 + # "Prefer ether_addr_copy() over memcpy() if the Ethernet addresses are __aligned(2)\n" . "$here\n$stat\n") && 5503 + # $fix) { 5504 + # $fixed[$fixlinenr] =~ s/\bmemcpy\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/ether_addr_copy($2, $7)/; 5505 + # } 5506 + # } 5651 5507 5652 5508 # Check for memcmp(foo, bar, ETH_ALEN) that could be ether_addr_equal*(foo, bar) 5653 - if ($^V && $^V ge 5.10.0 && 5654 - defined $stat && 5655 - $stat =~ /^\+(?:.*?)\bmemcmp\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/) { 5656 - WARN("PREFER_ETHER_ADDR_EQUAL", 5657 - "Prefer ether_addr_equal() or ether_addr_equal_unaligned() over memcmp()\n" . 
"$here\n$stat\n") 5658 - } 5509 + # if ($^V && $^V ge 5.10.0 && 5510 + # defined $stat && 5511 + # $stat =~ /^\+(?:.*?)\bmemcmp\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/) { 5512 + # WARN("PREFER_ETHER_ADDR_EQUAL", 5513 + # "Prefer ether_addr_equal() or ether_addr_equal_unaligned() over memcmp()\n" . "$here\n$stat\n") 5514 + # } 5659 5515 5660 5516 # check for memset(foo, 0x0, ETH_ALEN) that could be eth_zero_addr 5661 5517 # check for memset(foo, 0xFF, ETH_ALEN) that could be eth_broadcast_addr 5662 - if ($^V && $^V ge 5.10.0 && 5663 - defined $stat && 5664 - $stat =~ /^\+(?:.*?)\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/) { 5665 - 5666 - my $ms_val = $7; 5667 - 5668 - if ($ms_val =~ /^(?:0x|)0+$/i) { 5669 - if (WARN("PREFER_ETH_ZERO_ADDR", 5670 - "Prefer eth_zero_addr over memset()\n" . "$here\n$stat\n") && 5671 - $fix) { 5672 - $fixed[$fixlinenr] =~ s/\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*,\s*ETH_ALEN\s*\)/eth_zero_addr($2)/; 5673 - } 5674 - } elsif ($ms_val =~ /^(?:0xff|255)$/i) { 5675 - if (WARN("PREFER_ETH_BROADCAST_ADDR", 5676 - "Prefer eth_broadcast_addr() over memset()\n" . "$here\n$stat\n") && 5677 - $fix) { 5678 - $fixed[$fixlinenr] =~ s/\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*,\s*ETH_ALEN\s*\)/eth_broadcast_addr($2)/; 5679 - } 5680 - } 5681 - } 5518 + # if ($^V && $^V ge 5.10.0 && 5519 + # defined $stat && 5520 + # $stat =~ /^\+(?:.*?)\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*ETH_ALEN\s*\)/) { 5521 + # 5522 + # my $ms_val = $7; 5523 + # 5524 + # if ($ms_val =~ /^(?:0x|)0+$/i) { 5525 + # if (WARN("PREFER_ETH_ZERO_ADDR", 5526 + # "Prefer eth_zero_addr over memset()\n" . "$here\n$stat\n") && 5527 + # $fix) { 5528 + # $fixed[$fixlinenr] =~ s/\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*,\s*ETH_ALEN\s*\)/eth_zero_addr($2)/; 5529 + # } 5530 + # } elsif ($ms_val =~ /^(?:0xff|255)$/i) { 5531 + # if (WARN("PREFER_ETH_BROADCAST_ADDR", 5532 + # "Prefer eth_broadcast_addr() over memset()\n" . 
"$here\n$stat\n") && 5533 + # $fix) { 5534 + # $fixed[$fixlinenr] =~ s/\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*,\s*ETH_ALEN\s*\)/eth_broadcast_addr($2)/; 5535 + # } 5536 + # } 5537 + # } 5682 5538 5683 5539 # typecasts on min/max could be min_t/max_t 5684 5540 if ($^V && $^V ge 5.10.0 && ··· 5796 5652 { 5797 5653 WARN("AVOID_EXTERNS", 5798 5654 "externs should be avoided in .c files\n" . $herecurr); 5655 + } 5656 + 5657 + if ($realfile =~ /\.[ch]$/ && defined $stat && 5658 + $stat =~ /^.\s*(?:extern\s+)?$Type\s*$Ident\s*\(\s*([^{]+)\s*\)\s*;/s && 5659 + $1 ne "void") { 5660 + my $args = trim($1); 5661 + while ($args =~ m/\s*($Type\s*(?:$Ident|\(\s*\*\s*$Ident?\s*\)\s*$balanced_parens)?)/g) { 5662 + my $arg = trim($1); 5663 + if ($arg =~ /^$Type$/ && $arg !~ /enum\s+$Ident$/) { 5664 + WARN("FUNCTION_ARGUMENTS", 5665 + "function definition argument '$arg' should also have an identifier name\n" . $herecurr); 5666 + } 5667 + } 5799 5668 } 5800 5669 5801 5670 # checks for new __setup's ··· 6010 5853 } 6011 5854 6012 5855 # check for various structs that are normally const (ops, kgdb, device_tree) 6013 - my $const_structs = qr{ 6014 - acpi_dock_ops| 6015 - address_space_operations| 6016 - backlight_ops| 6017 - block_device_operations| 6018 - dentry_operations| 6019 - dev_pm_ops| 6020 - dma_map_ops| 6021 - extent_io_ops| 6022 - file_lock_operations| 6023 - file_operations| 6024 - hv_ops| 6025 - ide_dma_ops| 6026 - intel_dvo_dev_ops| 6027 - item_operations| 6028 - iwl_ops| 6029 - kgdb_arch| 6030 - kgdb_io| 6031 - kset_uevent_ops| 6032 - lock_manager_operations| 6033 - microcode_ops| 6034 - mtrr_ops| 6035 - neigh_ops| 6036 - nlmsvc_binding| 6037 - of_device_id| 6038 - pci_raw_ops| 6039 - pipe_buf_operations| 6040 - platform_hibernation_ops| 6041 - platform_suspend_ops| 6042 - proto_ops| 6043 - rpc_pipe_ops| 6044 - seq_operations| 6045 - snd_ac97_build_ops| 6046 - soc_pcmcia_socket_ops| 6047 - stacktrace_ops| 6048 - sysfs_ops| 6049 - tty_operations| 6050 - uart_ops| 6051 - 
usb_mon_operations| 6052 - wd_ops}x; 6053 5856 if ($line !~ /\bconst\b/ && 6054 5857 $line =~ /\bstruct\s+($const_structs)\b/) { 6055 5858 WARN("CONST_STRUCT", ··· 6096 5979 # Mode permission misuses where it seems decimal should be octal 6097 5980 # This uses a shortcut match to avoid unnecessary uses of a slow foreach loop 6098 5981 if ($^V && $^V ge 5.10.0 && 5982 + defined $stat && 6099 5983 $line =~ /$mode_perms_search/) { 6100 5984 foreach my $entry (@mode_permission_funcs) { 6101 5985 my $func = $entry->[0]; 6102 5986 my $arg_pos = $entry->[1]; 5987 + 5988 + my $lc = $stat =~ tr@\n@@; 5989 + $lc = $lc + $linenr; 5990 + my $stat_real = raw_line($linenr, 0); 5991 + for (my $count = $linenr + 1; $count <= $lc; $count++) { 5992 + $stat_real = $stat_real . "\n" . raw_line($count, 0); 5993 + } 6103 5994 6104 5995 my $skip_args = ""; 6105 5996 if ($arg_pos > 1) { 6106 5997 $arg_pos--; 6107 5998 $skip_args = "(?:\\s*$FuncArg\\s*,\\s*){$arg_pos,$arg_pos}"; 6108 5999 } 6109 - my $test = "\\b$func\\s*\\(${skip_args}([\\d]+)\\s*[,\\)]"; 6110 - if ($line =~ /$test/) { 6000 + my $test = "\\b$func\\s*\\(${skip_args}($FuncArg(?:\\|\\s*$FuncArg)*)\\s*[,\\)]"; 6001 + if ($stat =~ /$test/) { 6111 6002 my $val = $1; 6112 6003 $val = $6 if ($skip_args ne ""); 6113 - 6114 - if ($val !~ /^0$/ && 6115 - (($val =~ /^$Int$/ && $val !~ /^$Octal$/) || 6116 - length($val) ne 4)) { 6004 + if (($val =~ /^$Int$/ && $val !~ /^$Octal$/) || 6005 + ($val =~ /^$Octal$/ && length($val) ne 4)) { 6117 6006 ERROR("NON_OCTAL_PERMISSIONS", 6118 - "Use 4 digit octal (0777) not decimal permissions\n" . $herecurr); 6119 - } elsif ($val =~ /^$Octal$/ && (oct($val) & 02)) { 6007 + "Use 4 digit octal (0777) not decimal permissions\n" . "$here\n" . $stat_real); 6008 + } 6009 + if ($val =~ /^$Octal$/ && (oct($val) & 02)) { 6120 6010 ERROR("EXPORTED_WORLD_WRITABLE", 6121 - "Exporting writable files is usually an error. Consider more restrictive permissions.\n" . 
$herecurr); 6011 + "Exporting writable files is usually an error. Consider more restrictive permissions.\n" . "$here\n" . $stat_real); 6122 6012 } 6123 6013 } 6014 + } 6015 + } 6016 + 6017 + # check for uses of S_<PERMS> that could be octal for readability 6018 + if ($line =~ /\b$mode_perms_string_search\b/) { 6019 + my $val = ""; 6020 + my $oval = ""; 6021 + my $to = 0; 6022 + my $curpos = 0; 6023 + my $lastpos = 0; 6024 + while ($line =~ /\b(($mode_perms_string_search)\b(?:\s*\|\s*)?\s*)/g) { 6025 + $curpos = pos($line); 6026 + my $match = $2; 6027 + my $omatch = $1; 6028 + last if ($lastpos > 0 && ($curpos - length($omatch) != $lastpos)); 6029 + $lastpos = $curpos; 6030 + $to |= $mode_permission_string_types{$match}; 6031 + $val .= '\s*\|\s*' if ($val ne ""); 6032 + $val .= $match; 6033 + $oval .= $omatch; 6034 + } 6035 + $oval =~ s/^\s*\|\s*//; 6036 + $oval =~ s/\s*\|\s*$//; 6037 + my $octal = sprintf("%04o", $to); 6038 + if (WARN("SYMBOLIC_PERMS", 6039 + "Symbolic permissions '$oval' are not preferred. Consider using octal permissions '$octal'.\n" . $herecurr) && 6040 + $fix) { 6041 + $fixed[$fixlinenr] =~ s/$val/$octal/; 6124 6042 } 6125 6043 } 6126 6044
+64
scripts/const_structs.checkpatch
··· 1 + acpi_dock_ops 2 + address_space_operations 3 + backlight_ops 4 + block_device_operations 5 + clk_ops 6 + comedi_lrange 7 + component_ops 8 + dentry_operations 9 + dev_pm_ops 10 + dma_map_ops 11 + driver_info 12 + drm_connector_funcs 13 + drm_encoder_funcs 14 + drm_encoder_helper_funcs 15 + ethtool_ops 16 + extent_io_ops 17 + file_lock_operations 18 + file_operations 19 + hv_ops 20 + ide_dma_ops 21 + ide_port_ops 22 + inode_operations 23 + intel_dvo_dev_ops 24 + irq_domain_ops 25 + item_operations 26 + iwl_cfg 27 + iwl_ops 28 + kgdb_arch 29 + kgdb_io 30 + kset_uevent_ops 31 + lock_manager_operations 32 + machine_desc 33 + microcode_ops 34 + mlxsw_reg_info 35 + mtrr_ops 36 + neigh_ops 37 + net_device_ops 38 + nlmsvc_binding 39 + nvkm_device_chip 40 + of_device_id 41 + pci_raw_ops 42 + pipe_buf_operations 43 + platform_hibernation_ops 44 + platform_suspend_ops 45 + proto_ops 46 + regmap_access_table 47 + rpc_pipe_ops 48 + rtc_class_ops 49 + sd_desc 50 + seq_operations 51 + sirfsoc_padmux 52 + snd_ac97_build_ops 53 + snd_soc_component_driver 54 + soc_pcmcia_socket_ops 55 + stacktrace_ops 56 + sysfs_ops 57 + tty_operations 58 + uart_ops 59 + usb_mon_operations 60 + v4l2_ctrl_ops 61 + v4l2_ioctl_ops 62 + vm_operations_struct 63 + wacom_features 64 + wd_ops
+2 -1
scripts/tags.sh
··· 263 263 -I EXPORT_SYMBOL,EXPORT_SYMBOL_GPL,ACPI_EXPORT_SYMBOL \ 264 264 -I DEFINE_TRACE,EXPORT_TRACEPOINT_SYMBOL,EXPORT_TRACEPOINT_SYMBOL_GPL \ 265 265 -I static,const \ 266 - --extra=+f --c-kinds=+px --langmap=c:+.h "${regex[@]}" 266 + --extra=+fq --c-kinds=+px --fields=+iaS --langmap=c:+.h \ 267 + "${regex[@]}" 267 268 268 269 setup_regex exuberant kconfig 269 270 all_kconfigs | xargs $1 -a \
+1 -1
sound/soc/intel/baytrail/sst-baytrail-ipc.c
··· 338 338 spin_unlock_irqrestore(&sst->spinlock, flags); 339 339 340 340 /* continue to send any remaining messages... */ 341 - queue_kthread_work(&ipc->kworker, &ipc->kwork); 341 + kthread_queue_work(&ipc->kworker, &ipc->kwork); 342 342 343 343 return IRQ_HANDLED; 344 344 }
-1
sound/soc/intel/common/sst-acpi.h
··· 12 12 * 13 13 */ 14 14 15 - #include <linux/kconfig.h> 16 15 #include <linux/stddef.h> 17 16 #include <linux/acpi.h> 18 17
+3 -3
sound/soc/intel/common/sst-ipc.c
··· 111 111 list_add_tail(&msg->list, &ipc->tx_list); 112 112 spin_unlock_irqrestore(&ipc->dsp->spinlock, flags); 113 113 114 - queue_kthread_work(&ipc->kworker, &ipc->kwork); 114 + kthread_queue_work(&ipc->kworker, &ipc->kwork); 115 115 116 116 if (wait) 117 117 return tx_wait_done(ipc, msg, rx_data); ··· 281 281 return -ENOMEM; 282 282 283 283 /* start the IPC message thread */ 284 - init_kthread_worker(&ipc->kworker); 284 + kthread_init_worker(&ipc->kworker); 285 285 ipc->tx_thread = kthread_run(kthread_worker_fn, 286 286 &ipc->kworker, "%s", 287 287 dev_name(ipc->dev)); ··· 292 292 return ret; 293 293 } 294 294 295 - init_kthread_work(&ipc->kwork, ipc_tx_msgs); 295 + kthread_init_work(&ipc->kwork, ipc_tx_msgs); 296 296 return 0; 297 297 } 298 298 EXPORT_SYMBOL_GPL(sst_ipc_init);
+1 -1
sound/soc/intel/haswell/sst-haswell-ipc.c
··· 818 818 spin_unlock_irqrestore(&sst->spinlock, flags); 819 819 820 820 /* continue to send any remaining messages... */ 821 - queue_kthread_work(&ipc->kworker, &ipc->kwork); 821 + kthread_queue_work(&ipc->kworker, &ipc->kwork); 822 822 823 823 return IRQ_HANDLED; 824 824 }
+1 -1
sound/soc/intel/skylake/skl-sst-ipc.c
··· 464 464 skl_ipc_int_enable(dsp); 465 465 466 466 /* continue to send any remaining messages... */ 467 - queue_kthread_work(&ipc->kworker, &ipc->kwork); 467 + kthread_queue_work(&ipc->kworker, &ipc->kwork); 468 468 469 469 return IRQ_HANDLED; 470 470 }
-1
tools/testing/nvdimm/config_check.c
··· 1 - #include <linux/kconfig.h> 2 1 #include <linux/bug.h> 3 2 4 3 void check(void)
+2 -1
tools/testing/radix-tree/Makefile
··· 3 3 LDFLAGS += -lpthread -lurcu 4 4 TARGETS = main 5 5 OFILES = main.o radix-tree.o linux.o test.o tag_check.o find_next_bit.o \ 6 - regression1.o regression2.o regression3.o multiorder.o 6 + regression1.o regression2.o regression3.o multiorder.o \ 7 + iteration_check.o 7 8 8 9 targets: $(TARGETS) 9 10
+180
tools/testing/radix-tree/iteration_check.c
··· 1 + /* 2 + * iteration_check.c: test races having to do with radix tree iteration 3 + * Copyright (c) 2016 Intel Corporation 4 + * Author: Ross Zwisler <ross.zwisler@linux.intel.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + */ 15 + #include <linux/radix-tree.h> 16 + #include <pthread.h> 17 + #include "test.h" 18 + 19 + #define NUM_THREADS 4 20 + #define TAG 0 21 + static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER; 22 + static pthread_t threads[NUM_THREADS]; 23 + RADIX_TREE(tree, GFP_KERNEL); 24 + bool test_complete; 25 + 26 + /* relentlessly fill the tree with tagged entries */ 27 + static void *add_entries_fn(void *arg) 28 + { 29 + int pgoff; 30 + 31 + while (!test_complete) { 32 + for (pgoff = 0; pgoff < 100; pgoff++) { 33 + pthread_mutex_lock(&tree_lock); 34 + if (item_insert(&tree, pgoff) == 0) 35 + item_tag_set(&tree, pgoff, TAG); 36 + pthread_mutex_unlock(&tree_lock); 37 + } 38 + } 39 + 40 + return NULL; 41 + } 42 + 43 + /* 44 + * Iterate over the tagged entries, doing a radix_tree_iter_retry() as we find 45 + * things that have been removed and randomly resetting our iteration to the 46 + * next chunk with radix_tree_iter_next(). Both radix_tree_iter_retry() and 47 + * radix_tree_iter_next() cause radix_tree_next_slot() to be called with a 48 + * NULL 'slot' variable. 
49 + */ 50 + static void *tagged_iteration_fn(void *arg) 51 + { 52 + struct radix_tree_iter iter; 53 + void **slot; 54 + 55 + while (!test_complete) { 56 + rcu_read_lock(); 57 + radix_tree_for_each_tagged(slot, &tree, &iter, 0, TAG) { 58 + void *entry; 59 + int i; 60 + 61 + /* busy wait to let removals happen */ 62 + for (i = 0; i < 1000000; i++) 63 + ; 64 + 65 + entry = radix_tree_deref_slot(slot); 66 + if (unlikely(!entry)) 67 + continue; 68 + 69 + if (radix_tree_deref_retry(entry)) { 70 + slot = radix_tree_iter_retry(&iter); 71 + continue; 72 + } 73 + 74 + if (rand() % 50 == 0) 75 + slot = radix_tree_iter_next(&iter); 76 + } 77 + rcu_read_unlock(); 78 + } 79 + 80 + return NULL; 81 + } 82 + 83 + /* 84 + * Iterate over the entries, doing a radix_tree_iter_retry() as we find things 85 + * that have been removed and randomly resetting our iteration to the next 86 + * chunk with radix_tree_iter_next(). Both radix_tree_iter_retry() and 87 + * radix_tree_iter_next() cause radix_tree_next_slot() to be called with a 88 + * NULL 'slot' variable. 89 + */ 90 + static void *untagged_iteration_fn(void *arg) 91 + { 92 + struct radix_tree_iter iter; 93 + void **slot; 94 + 95 + while (!test_complete) { 96 + rcu_read_lock(); 97 + radix_tree_for_each_slot(slot, &tree, &iter, 0) { 98 + void *entry; 99 + int i; 100 + 101 + /* busy wait to let removals happen */ 102 + for (i = 0; i < 1000000; i++) 103 + ; 104 + 105 + entry = radix_tree_deref_slot(slot); 106 + if (unlikely(!entry)) 107 + continue; 108 + 109 + if (radix_tree_deref_retry(entry)) { 110 + slot = radix_tree_iter_retry(&iter); 111 + continue; 112 + } 113 + 114 + if (rand() % 50 == 0) 115 + slot = radix_tree_iter_next(&iter); 116 + } 117 + rcu_read_unlock(); 118 + } 119 + 120 + return NULL; 121 + } 122 + 123 + /* 124 + * Randomly remove entries to help induce radix_tree_iter_retry() calls in the 125 + * two iteration functions. 
126 + */ 127 + static void *remove_entries_fn(void *arg) 128 + { 129 + while (!test_complete) { 130 + int pgoff; 131 + 132 + pgoff = rand() % 100; 133 + 134 + pthread_mutex_lock(&tree_lock); 135 + item_delete(&tree, pgoff); 136 + pthread_mutex_unlock(&tree_lock); 137 + } 138 + 139 + return NULL; 140 + } 141 + 142 + /* This is a unit test for a bug found by the syzkaller tester */ 143 + void iteration_test(void) 144 + { 145 + int i; 146 + 147 + printf("Running iteration tests for 10 seconds\n"); 148 + 149 + srand(time(0)); 150 + test_complete = false; 151 + 152 + if (pthread_create(&threads[0], NULL, tagged_iteration_fn, NULL)) { 153 + perror("pthread_create"); 154 + exit(1); 155 + } 156 + if (pthread_create(&threads[1], NULL, untagged_iteration_fn, NULL)) { 157 + perror("pthread_create"); 158 + exit(1); 159 + } 160 + if (pthread_create(&threads[2], NULL, add_entries_fn, NULL)) { 161 + perror("pthread_create"); 162 + exit(1); 163 + } 164 + if (pthread_create(&threads[3], NULL, remove_entries_fn, NULL)) { 165 + perror("pthread_create"); 166 + exit(1); 167 + } 168 + 169 + sleep(10); 170 + test_complete = true; 171 + 172 + for (i = 0; i < NUM_THREADS; i++) { 173 + if (pthread_join(threads[i], NULL)) { 174 + perror("pthread_join"); 175 + exit(1); 176 + } 177 + } 178 + 179 + item_kill_tree(&tree); 180 + }
+1
tools/testing/radix-tree/main.c
··· 332 332 regression1_test(); 333 333 regression2_test(); 334 334 regression3_test(); 335 + iteration_test(); 335 336 single_thread_tests(long_run); 336 337 337 338 sleep(1);
+1 -1
tools/testing/radix-tree/regression1.c
··· 43 43 #include "regression.h" 44 44 45 45 static RADIX_TREE(mt_tree, GFP_KERNEL); 46 - static pthread_mutex_t mt_lock; 46 + static pthread_mutex_t mt_lock = PTHREAD_MUTEX_INITIALIZER; 47 47 48 48 struct page { 49 49 pthread_mutex_t lock;
+1
tools/testing/radix-tree/test.h
··· 27 27 28 28 void tag_check(void); 29 29 void multiorder_checks(void); 30 + void iteration_test(void); 30 31 31 32 struct item * 32 33 item_tag_set(struct radix_tree_root *root, unsigned long index, int tag);