Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v5.2-rc5' into patchwork

Linux 5.2-rc5

There are some media fixes in -rc5, so merge it into the media
devel tree.

* tag 'v5.2-rc5': (210 commits)
Linux 5.2-rc5
x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callback
Smack: Restore the smackfsdef mount option and add missing prefixes
ftrace: Fix NULL pointer dereference in free_ftrace_func_mapper()
module: Fix livepatch/ftrace module text permissions race
tracing/uprobe: Fix obsolete comment on trace_uprobe_create()
tracing/uprobe: Fix NULL pointer dereference in trace_uprobe_create()
tracing: Make two symbols static
tracing: avoid build warning with HAVE_NOP_MCOUNT
tracing: Fix out-of-range read in trace_stack_print()
gfs2: Fix rounding error in gfs2_iomap_page_prepare
x86/kasan: Fix boot with 5-level paging and KASAN
timekeeping: Repair ktime_get_coarse*() granularity
Revert "ALSA: hda/realtek - Improve the headset mic for Acer Aspire laptops"
mm/devm_memremap_pages: fix final page put race
PCI/P2PDMA: track pgmap references per resource, not globally
lib/genalloc: introduce chunk owners
PCI/P2PDMA: fix the gen_pool_add_virt() failure path
mm/devm_memremap_pages: introduce devm_memunmap_pages
drivers/base/devres: introduce devm_release_action()
...

+2009 -1165
+16
Documentation/arm64/sve.txt
···
 is to connect to a target process first and then attempt a
 ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov).
 
+* Whenever SVE scalable register values (Zn, Pn, FFR) are exchanged in memory
+  between userspace and the kernel, the register value is encoded in memory in
+  an endianness-invariant layout, with bits [(8 * i + 7) : (8 * i)] encoded at
+  byte offset i from the start of the memory representation.  This affects for
+  example the signal frame (struct sve_context) and ptrace interface
+  (struct user_sve_header) and associated data.
+
+  Beware that on big-endian systems this results in a different byte order than
+  for the FPSIMD V-registers, which are stored as single host-endian 128-bit
+  values, with bits [(127 - 8 * i) : (120 - 8 * i)] of the register encoded at
+  byte offset i.  (struct fpsimd_context, struct user_fpsimd_state).
+
 
 2.  Vector length terminology
 -----------------------------
···
 * If the registers are present, the remainder of the record has a vl-dependent
   size and layout.  Macros SVE_SIG_* are defined [1] to facilitate access to
   the members.
+
+* Each scalable register (Zn, Pn, FFR) is stored in an endianness-invariant
+  layout, with bits [(8 * i + 7) : (8 * i)] stored at byte offset i from the
+  start of the register's representation in memory.
 
 * If the SVE context is too big to fit in sigcontext.__reserved[], then extra
   space is allocated on the stack, an extra_context record is written in
+8 -10
Documentation/block/switching-sched.txt
···
 
 # mount none /sys -t sysfs
 
-As of the Linux 2.6.10 kernel, it is now possible to change the
-IO scheduler for a given block device on the fly (thus making it possible,
-for instance, to set the CFQ scheduler for the system default, but
-set a specific device to use the deadline or noop schedulers - which
-can improve that device's throughput).
+It is possible to change the IO scheduler for a given block device on
+the fly to select one of mq-deadline, none, bfq, or kyber schedulers -
+which can improve that device's throughput.
 
 To set a specific scheduler, simply do this:
 
···
 a "cat /sys/block/DEV/queue/scheduler" - the list of valid names
 will be displayed, with the currently selected scheduler in brackets:
 
-# cat /sys/block/hda/queue/scheduler
-noop deadline [cfq]
-# echo deadline > /sys/block/hda/queue/scheduler
-# cat /sys/block/hda/queue/scheduler
-noop [deadline] cfq
+# cat /sys/block/sda/queue/scheduler
+[mq-deadline] kyber bfq none
+# echo none >/sys/block/sda/queue/scheduler
+# cat /sys/block/sda/queue/scheduler
+[none] mq-deadline kyber bfq
+7 -89
Documentation/cgroup-v1/blkio-controller.txt
···
 Plan is to use the same cgroup based management interface for blkio controller
 and based on user options switch IO policies in the background.
 
-Currently two IO control policies are implemented.  First one is proportional
-weight time based division of disk policy.  It is implemented in CFQ.  Hence
-this policy takes effect only on leaf nodes when CFQ is being used.  The second
-one is throttling policy which can be used to specify upper IO rate limits
-on devices.  This policy is implemented in generic block layer and can be
-used on leaf nodes as well as higher level logical devices like device mapper.
+One IO control policy is throttling policy which can be used to
+specify upper IO rate limits on devices.  This policy is implemented in
+generic block layer and can be used on leaf nodes as well as higher
+level logical devices like device mapper.
 
 HOWTO
 =====
-Proportional Weight division of bandwidth
------------------------------------------
-You can do a very simple testing of running two dd threads in two different
-cgroups.  Here is what you can do.
-
-- Enable Block IO controller
-        CONFIG_BLK_CGROUP=y
-
-- Enable group scheduling in CFQ
-        CONFIG_CFQ_GROUP_IOSCHED=y
-
-- Compile and boot into kernel and mount IO controller (blkio); see
-  cgroups.txt, Why are cgroups needed?.
-
-        mount -t tmpfs cgroup_root /sys/fs/cgroup
-        mkdir /sys/fs/cgroup/blkio
-        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
-
-- Create two cgroups
-        mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2
-
-- Set weights of group test1 and test2
-        echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
-        echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight
-
-- Create two same size files (say 512MB each) on same disk (file1, file2) and
-  launch two dd threads in different cgroup to read those files.
-
-        sync
-        echo 3 > /proc/sys/vm/drop_caches
-
-        dd if=/mnt/sdb/zerofile1 of=/dev/null &
-        echo $! > /sys/fs/cgroup/blkio/test1/tasks
-        cat /sys/fs/cgroup/blkio/test1/tasks
-
-        dd if=/mnt/sdb/zerofile2 of=/dev/null &
-        echo $! > /sys/fs/cgroup/blkio/test2/tasks
-        cat /sys/fs/cgroup/blkio/test2/tasks
-
-- At macro level, first dd should finish first.  To get more precise data, keep
-  on looking at (with the help of script), at blkio.disk_time and
-  blkio.disk_sectors files of both test1 and test2 groups.  This will tell how
-  much disk time (in milliseconds), each group got and how many sectors each
-  group dispatched to the disk.  We provide fairness in terms of disk time, so
-  ideally io.disk_time of cgroups should be in proportion to the weight.
-
 Throttling/Upper Limit policy
 -----------------------------
 - Enable Block IO controller
···
 Hierarchical Cgroups
 ====================
 
-Both CFQ and throttling implement hierarchy support; however,
+Throttling implements hierarchy support; however,
 throttling's hierarchy support is enabled iff "sane_behavior" is
 enabled from cgroup side, which currently is a development option and
 not publicly available.
···
                         |
                      test3
 
-CFQ by default and throttling with "sane_behavior" will handle the
-hierarchy correctly.  For details on CFQ hierarchy support, refer to
-Documentation/block/cfq-iosched.txt.  For throttling, all limits apply
+Throttling with "sane_behavior" will handle the
+hierarchy correctly.  For throttling, all limits apply
 to the whole subtree while all statistics are local to the IOs
 directly generated by tasks in that cgroup.
···
 CONFIG_DEBUG_BLK_CGROUP
         - Debug help.  Right now some additional stats file show up in cgroup
           if this option is enabled.
-
-CONFIG_CFQ_GROUP_IOSCHED
-        - Enables group scheduling in CFQ.  Currently only 1 level of group
-          creation is allowed.
 
 CONFIG_BLK_DEV_THROTTLING
         - Enable block device throttling support in block layer.
···
 - blkio.reset_stats
         - Writing an int to this file will result in resetting all the stats
           for that cgroup.
-
-CFQ sysfs tunable
-=================
-/sys/block/<disk>/queue/iosched/slice_idle
-------------------------------------------
-On a faster hardware CFQ can be slow, especially with sequential workload.
-This happens because CFQ idles on a single queue and single queue might not
-drive deeper request queue depths to keep the storage busy.  In such scenarios
-one can try setting slice_idle=0 and that would switch CFQ to IOPS
-(IO operations per second) mode on NCQ supporting hardware.
-
-That means CFQ will not idle between cfq queues of a cfq group and hence be
-able to driver higher queue depth and achieve better throughput.  That also
-means that cfq provides fairness among groups in terms of IOPS and not in
-terms of disk time.
-
-/sys/block/<disk>/queue/iosched/group_idle
-------------------------------------------
-If one disables idling on individual cfq queues and cfq service trees by
-setting slice_idle=0, group_idle kicks in.  That means CFQ will still idle
-on the group in an attempt to provide fairness among groups.
-
-By default group_idle is same as slice_idle and does not do anything if
-slice_idle is enabled.
-
-One can experience an overall throughput drop if you have created multiple
-groups and put applications in that group which are not driving enough
-IO to keep disk busy.  In that case set group_idle=0, and CFQ will not idle
-on individual groups and throughput should improve.
+13 -9
Documentation/cgroup-v1/hugetlb.txt
···
 hugetlb.<hugepagesize>.usage_in_bytes     # show current usage for "hugepagesize" hugetlb
 hugetlb.<hugepagesize>.failcnt            # show the number of allocation failure due to HugeTLB limit
 
-For a system supporting two hugepage size (16M and 16G) the control
+For a system supporting three hugepage sizes (64k, 32M and 1G), the control
 files include:
 
-hugetlb.16GB.limit_in_bytes
-hugetlb.16GB.max_usage_in_bytes
-hugetlb.16GB.usage_in_bytes
-hugetlb.16GB.failcnt
-hugetlb.16MB.limit_in_bytes
-hugetlb.16MB.max_usage_in_bytes
-hugetlb.16MB.usage_in_bytes
-hugetlb.16MB.failcnt
+hugetlb.1GB.limit_in_bytes
+hugetlb.1GB.max_usage_in_bytes
+hugetlb.1GB.usage_in_bytes
+hugetlb.1GB.failcnt
+hugetlb.64KB.limit_in_bytes
+hugetlb.64KB.max_usage_in_bytes
+hugetlb.64KB.usage_in_bytes
+hugetlb.64KB.failcnt
+hugetlb.32MB.limit_in_bytes
+hugetlb.32MB.max_usage_in_bytes
+hugetlb.32MB.usage_in_bytes
+hugetlb.32MB.failcnt
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Golden Lions
 
 # *DOCUMENTATION*
+1 -1
arch/arm64/Makefile
···
 
 KBUILD_CFLAGS   += -mgeneral-regs-only $(lseinstr) $(brokengasinst)
 KBUILD_CFLAGS   += -fno-asynchronous-unwind-tables
-KBUILD_CFLAGS   += -Wno-psabi
+KBUILD_CFLAGS   += $(call cc-disable-warning, psabi)
 KBUILD_AFLAGS   += $(lseinstr) $(brokengasinst)
 
 KBUILD_CFLAGS   += $(call cc-option,-mabi=lp64)
+3
arch/arm64/include/asm/tlbflush.h
···
         unsigned long asid = ASID(vma->vm_mm);
         unsigned long addr;
 
+        start = round_down(start, stride);
+        end = round_up(end, stride);
+
         if ((end - start) >= (MAX_TLBI_OPS * stride)) {
                 flush_tlb_mm(vma->vm_mm);
                 return;
+7
arch/arm64/include/uapi/asm/kvm.h
···
                                          KVM_REG_SIZE_U256 |            \
                                          ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
 
+/*
+ * Register values for KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() and
+ * KVM_REG_ARM64_SVE_FFR() are represented in memory in an endianness-
+ * invariant layout which differs from the layout used for the FPSIMD
+ * V-registers on big-endian systems: see sigcontext.h for more explanation.
+ */
+
 #define KVM_ARM64_SVE_VQ_MIN __SVE_VQ_MIN
 #define KVM_ARM64_SVE_VQ_MAX __SVE_VQ_MAX
 
+4
arch/arm64/include/uapi/asm/ptrace.h
···
  *      FPCR    uint32_t        FPCR
  *
  * Additional data might be appended in the future.
+ *
+ * The Z-, P- and FFR registers are represented in memory in an endianness-
+ * invariant layout which differs from the layout used for the FPSIMD
+ * V-registers on big-endian systems: see sigcontext.h for more explanation.
  */
 
 #define SVE_PT_SVE_ZREG_SIZE(vq)        __SVE_ZREG_SIZE(vq)
+14
arch/arm64/include/uapi/asm/sigcontext.h
···
         __uint128_t vregs[32];
 };
 
+/*
+ * Note: similarly to all other integer fields, each V-register is stored in an
+ * endianness-dependent format, with the byte at offset i from the start of the
+ * in-memory representation of the register value containing
+ *
+ *      bits [(7 + 8 * i) : (8 * i)] of the register on little-endian hosts; or
+ *      bits [(127 - 8 * i) : (120 - 8 * i)] on big-endian hosts.
+ */
+
 /* ESR_EL1 context */
 #define ESR_MAGIC 0x45535201
 
···
  *      FFR     uint16_t[vq]    first-fault status register
  *
  * Additional data might be appended in the future.
+ *
+ * Unlike vregs[] in fpsimd_context, each SVE scalable register (Z-, P- or FFR)
+ * is encoded in memory in an endianness-invariant format, with the byte at
+ * offset i from the start of the in-memory representation containing bits
+ * [(7 + 8 * i) : (8 * i)] of the register value.
  */
 
 #define SVE_SIG_ZREG_SIZE(vq)   __SVE_ZREG_SIZE(vq)
+33 -9
arch/arm64/kernel/fpsimd.c
···
 #include <linux/slab.h>
 #include <linux/stddef.h>
 #include <linux/sysctl.h>
+#include <linux/swab.h>
 
 #include <asm/esr.h>
 #include <asm/fpsimd.h>
···
 #define ZREG(sve_state, vq, n) ((char *)(sve_state) +           \
         (SVE_SIG_ZREG_OFFSET(vq, n) - SVE_SIG_REGS_OFFSET))
 
+#ifdef CONFIG_CPU_BIG_ENDIAN
+static __uint128_t arm64_cpu_to_le128(__uint128_t x)
+{
+        u64 a = swab64(x);
+        u64 b = swab64(x >> 64);
+
+        return ((__uint128_t)a << 64) | b;
+}
+#else
+static __uint128_t arm64_cpu_to_le128(__uint128_t x)
+{
+        return x;
+}
+#endif
+
+#define arm64_le128_to_cpu(x) arm64_cpu_to_le128(x)
+
 /*
  * Transfer the FPSIMD state in task->thread.uw.fpsimd_state to
  * task->thread.sve_state.
···
         void *sst = task->thread.sve_state;
         struct user_fpsimd_state const *fst = &task->thread.uw.fpsimd_state;
         unsigned int i;
+        __uint128_t *p;
 
         if (!system_supports_sve())
                 return;
 
         vq = sve_vq_from_vl(task->thread.sve_vl);
-        for (i = 0; i < 32; ++i)
-                memcpy(ZREG(sst, vq, i), &fst->vregs[i],
-                       sizeof(fst->vregs[i]));
+        for (i = 0; i < 32; ++i) {
+                p = (__uint128_t *)ZREG(sst, vq, i);
+                *p = arm64_cpu_to_le128(fst->vregs[i]);
+        }
 }
 
 /*
···
         void const *sst = task->thread.sve_state;
         struct user_fpsimd_state *fst = &task->thread.uw.fpsimd_state;
         unsigned int i;
+        __uint128_t const *p;
 
         if (!system_supports_sve())
                 return;
 
         vq = sve_vq_from_vl(task->thread.sve_vl);
-        for (i = 0; i < 32; ++i)
-                memcpy(&fst->vregs[i], ZREG(sst, vq, i),
-                       sizeof(fst->vregs[i]));
+        for (i = 0; i < 32; ++i) {
+                p = (__uint128_t const *)ZREG(sst, vq, i);
+                fst->vregs[i] = arm64_le128_to_cpu(*p);
+        }
 }
 
 #ifdef CONFIG_ARM64_SVE
···
         void *sst = task->thread.sve_state;
         struct user_fpsimd_state const *fst = &task->thread.uw.fpsimd_state;
         unsigned int i;
+        __uint128_t *p;
 
         if (!test_tsk_thread_flag(task, TIF_SVE))
                 return;
···
 
         memset(sst, 0, SVE_SIG_REGS_SIZE(vq));
 
-        for (i = 0; i < 32; ++i)
-                memcpy(ZREG(sst, vq, i), &fst->vregs[i],
-                       sizeof(fst->vregs[i]));
+        for (i = 0; i < 32; ++i) {
+                p = (__uint128_t *)ZREG(sst, vq, i);
+                *p = arm64_cpu_to_le128(fst->vregs[i]);
+        }
 }
 
 int sve_set_vector_length(struct task_struct *task,
+30
arch/powerpc/include/asm/book3s/64/pgtable.h
···
         return false;
 }
 
+static inline int pmd_is_serializing(pmd_t pmd)
+{
+        /*
+         * If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
+         * and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
+         *
+         * This condition may also occur when flushing a pmd while flushing
+         * it (see ptep_modify_prot_start), so callers must ensure this
+         * case is fine as well.
+         */
+        if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
+                                cpu_to_be64(_PAGE_INVALID))
+                return true;
+
+        return false;
+}
+
 static inline int pmd_bad(pmd_t pmd)
 {
         if (radix_enabled())
···
 #define pmd_access_permitted pmd_access_permitted
 static inline bool pmd_access_permitted(pmd_t pmd, bool write)
 {
+        /*
+         * pmdp_invalidate sets this combination (which is not caught by
+         * !pte_present() check in pte_access_permitted), to prevent
+         * lock-free lookups, as part of the serialize_against_pte_lookup()
+         * synchronisation.
+         *
+         * This also catches the case where the PTE's hardware PRESENT bit is
+         * cleared while TLB is flushed, which is suboptimal but should not
+         * be frequent.
+         */
+        if (pmd_is_serializing(pmd))
+                return false;
+
         return pte_access_permitted(pmd_pte(pmd), write);
 }
 
+4
arch/powerpc/include/asm/btext.h
···
                          int depth, int pitch);
 extern void btext_setup_display(int width, int height, int depth, int pitch,
                                 unsigned long address);
+#ifdef CONFIG_PPC32
 extern void btext_prepare_BAT(void);
+#else
+static inline void btext_prepare_BAT(void) { }
+#endif
 extern void btext_map(void);
 extern void btext_unmap(void);
 
+3
arch/powerpc/include/asm/kexec.h
···
         return crashing_cpu >= 0;
 }
 
+void relocate_new_kernel(unsigned long indirection_page, unsigned long reboot_code_buffer,
+                         unsigned long start_address) __noreturn;
+
 #ifdef CONFIG_KEXEC_FILE
 extern const struct kexec_file_ops kexec_elf64_ops;
 
+3 -1
arch/powerpc/kernel/machine_kexec_32.c
···
  */
 void default_machine_kexec(struct kimage *image)
 {
-        extern const unsigned char relocate_new_kernel[];
         extern const unsigned int relocate_new_kernel_size;
         unsigned long page_list;
         unsigned long reboot_code_buffer, reboot_code_buffer_phys;
···
         flush_icache_range(reboot_code_buffer,
                                 reboot_code_buffer + KEXEC_CONTROL_PAGE_SIZE);
         printk(KERN_INFO "Bye!\n");
+
+        if (!IS_ENABLED(CONFIG_FSL_BOOKE) && !IS_ENABLED(CONFIG_44x))
+                relocate_new_kernel(page_list, reboot_code_buffer_phys, image->start);
 
         /* now call it */
         rnk = (relocate_new_kernel_t) reboot_code_buffer;
+1
arch/powerpc/kernel/prom_init.c
···
                 prom_printf("W=%d H=%d LB=%d addr=0x%x\n",
                             width, height, pitch, addr);
                 btext_setup_display(width, height, 8, pitch, addr);
+                btext_prepare_BAT();
         }
 #endif /* CONFIG_PPC_EARLY_DEBUG_BOOTX */
 }
+1 -1
arch/powerpc/kernel/prom_init_check.sh
···
 WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
 _end enter_prom $MEM_FUNCS reloc_offset __secondary_hold
 __secondary_hold_acknowledge __secondary_hold_spinloop __start
-logo_linux_clut224
+logo_linux_clut224 btext_prepare_BAT
 reloc_got2 kernstart_addr memstart_addr linux_banner _stext
 __prom_init_toc_start __prom_init_toc_end btext_setup_display TOC."
 
+3
arch/powerpc/mm/book3s64/pgtable.c
···
         /*
          * This ensures that generic code that rely on IRQ disabling
          * to prevent a parallel THP split work as expected.
+         *
+         * Marking the entry with _PAGE_INVALID && ~_PAGE_PRESENT requires
+         * a special case check in pmd_access_permitted.
          */
         serialize_against_pte_lookup(vma->vm_mm);
         return __pmd(old_pmd);
+14 -2
arch/powerpc/mm/pgtable.c
···
                 pdshift = PMD_SHIFT;
                 pmdp = pmd_offset(&pud, ea);
                 pmd  = READ_ONCE(*pmdp);
+
                 /*
-                 * A hugepage collapse is captured by pmd_none, because
-                 * it mark the pmd none and do a hpte invalidate.
+                 * A hugepage collapse is captured by this condition, see
+                 * pmdp_collapse_flush.
                  */
                 if (pmd_none(pmd))
                         return NULL;
+
+#ifdef CONFIG_PPC_BOOK3S_64
+                /*
+                 * A hugepage split is captured by this condition, see
+                 * pmdp_invalidate.
+                 *
+                 * Huge page modification can be caught here too.
+                 */
+                if (pmd_is_serializing(pmd))
+                        return NULL;
+#endif
 
                 if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) {
                         if (is_thp)
+3 -3
arch/x86/include/asm/fpu/internal.h
···
         struct fpu *fpu = &current->thread.fpu;
         int cpu = smp_processor_id();
 
-        if (WARN_ON_ONCE(current->mm == NULL))
+        if (WARN_ON_ONCE(current->flags & PF_KTHREAD))
                 return;
 
         if (!fpregs_state_valid(fpu, cpu)) {
···
  * otherwise.
  *
  * The FPU context is only stored/restored for a user task and
- * ->mm is used to distinguish between kernel and user threads.
+ * PF_KTHREAD is used to distinguish between kernel and user threads.
  */
 static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
 {
-        if (static_cpu_has(X86_FEATURE_FPU) && current->mm) {
+        if (static_cpu_has(X86_FEATURE_FPU) && !(current->flags & PF_KTHREAD)) {
                 if (!copy_fpregs_to_fpstate(old_fpu))
                         old_fpu->last_cpu = -1;
                 else
+3
arch/x86/include/asm/intel-family.h
···
 
 #define INTEL_FAM6_CANNONLAKE_MOBILE    0x66
 
+#define INTEL_FAM6_ICELAKE_X            0x6A
+#define INTEL_FAM6_ICELAKE_XEON_D       0x6C
+#define INTEL_FAM6_ICELAKE_DESKTOP      0x7D
 #define INTEL_FAM6_ICELAKE_MOBILE       0x7E
 
 /* "Small Core" Processors (Atom) */
+1 -1
arch/x86/kernel/cpu/microcode/core.c
···
                 goto out_ucode_group;
 
         register_syscore_ops(&mc_syscore_ops);
-        cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
+        cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:online",
                                   mc_cpu_online, mc_cpu_down_prep);
 
         pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION);
+3
arch/x86/kernel/cpu/resctrl/monitor.c
···
         struct list_head *head;
         struct rdtgroup *entry;
 
+        if (!is_mbm_local_enabled())
+                return;
+
         r_mba = &rdt_resources_all[RDT_RESOURCE_MBA];
         closid = rgrp->closid;
         rmid = rgrp->mon.rmid;
+6 -1
arch/x86/kernel/cpu/resctrl/rdtgroup.c
···
                 if (closid_allocated(i) && i != closid) {
                         mode = rdtgroup_mode_by_closid(i);
                         if (mode == RDT_MODE_PSEUDO_LOCKSETUP)
-                                break;
+                                /*
+                                 * ctrl values for locksetup aren't relevant
+                                 * until the schemata is written, and the mode
+                                 * becomes RDT_MODE_PSEUDO_LOCKED.
+                                 */
+                                continue;
                         /*
                          * If CDP is active include peer domain's
                          * usage to ensure there is no overlap
+1 -1
arch/x86/kernel/fpu/core.c
···
 
         kernel_fpu_disable();
 
-        if (current->mm) {
+        if (!(current->flags & PF_KTHREAD)) {
                 if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
                         set_thread_flag(TIF_NEED_FPU_LOAD);
                         /*
+7 -9
arch/x86/kernel/fpu/signal.c
···
 
 #include <linux/compat.h>
 #include <linux/cpu.h>
+#include <linux/pagemap.h>
 
 #include <asm/fpu/internal.h>
 #include <asm/fpu/signal.h>
···
         struct xregs_state *xsave = &tsk->thread.fpu.state.xsave;
         struct user_i387_ia32_struct env;
         struct _fpstate_32 __user *fp = buf;
+
+        fpregs_lock();
+        if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+                copy_fxregs_to_kernel(&tsk->thread.fpu);
+        fpregs_unlock();
 
         convert_from_fxsr(&env, tsk);
 
···
         fpregs_unlock();
 
         if (ret) {
-                int aligned_size;
-                int nr_pages;
-
-                aligned_size = offset_in_page(buf_fx) + fpu_user_xstate_size;
-                nr_pages = DIV_ROUND_UP(aligned_size, PAGE_SIZE);
-
-                ret = get_user_pages_unlocked((unsigned long)buf_fx, nr_pages,
-                                              NULL, FOLL_WRITE);
-                if (ret == nr_pages)
+                if (!fault_in_pages_writeable(buf_fx, fpu_user_xstate_size))
                         goto retry;
                 return -EFAULT;
         }
+1 -1
arch/x86/kernel/kgdb.c
···
                BREAK_INSTR_SIZE);
         bpt->type = BP_POKE_BREAKPOINT;
 
-        return err;
+        return 0;
 }
 
 int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
+1 -1
arch/x86/mm/kasan_init_64.c
···
         if (!pgtable_l5_enabled())
                 return (p4d_t *)pgd;
 
-        p4d = __pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK;
+        p4d = pgd_val(*pgd) & PTE_PFN_MASK;
         p4d += __START_KERNEL_map - phys_base;
         return (p4d_t *)p4d + p4d_index(addr);
 }
+10 -1
arch/x86/mm/kaslr.c
···
 } kaslr_regions[] = {
         { &page_offset_base, 0 },
         { &vmalloc_base, 0 },
-        { &vmemmap_base, 1 },
+        { &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
···
         unsigned long rand, memory_tb;
         struct rnd_state rand_state;
         unsigned long remain_entropy;
+        unsigned long vmemmap_size;
 
         vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
         vaddr = vaddr_start;
···
         /* Adapt phyiscal memory region size based on available memory */
         if (memory_tb < kaslr_regions[0].size_tb)
                 kaslr_regions[0].size_tb = memory_tb;
+
+        /*
+         * Calculate the vmemmap region size in TBs, aligned to a TB
+         * boundary.
+         */
+        vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+                        sizeof(struct page);
+        kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
 
         /* Calculate entropy available between regions */
         remain_entropy = vaddr_end - vaddr_start;
+1
block/Kconfig
···
 
 config BLK_DEV_ZONED
         bool "Zoned block device support"
+        select MQ_IOSCHED_DEADLINE
         ---help---
         Block layer zoned block device support. This option enables
         support for ZAC/ZBC host-managed and host-aware zoned block devices.
+2 -4
block/bfq-cgroup.c
···
 struct cftype bfq_blkcg_legacy_files[] = {
         {
                 .name = "bfq.weight",
-                .link_name = "weight",
-                .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_SYMLINKED,
+                .flags = CFTYPE_NOT_ON_ROOT,
                 .seq_show = bfq_io_show_weight,
                 .write_u64 = bfq_io_set_weight_legacy,
         },
···
 struct cftype bfq_blkg_files[] = {
         {
                 .name = "bfq.weight",
-                .link_name = "weight",
-                .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_SYMLINKED,
+                .flags = CFTYPE_NOT_ON_ROOT,
                 .seq_show = bfq_io_show_weight,
                 .write = bfq_io_set_weight,
         },
+34 -111
block/blk-mq-debugfs.c
···
         {},
 };
 
-static bool debugfs_create_files(struct dentry *parent, void *data,
+static void debugfs_create_files(struct dentry *parent, void *data,
                                  const struct blk_mq_debugfs_attr *attr)
 {
         if (IS_ERR_OR_NULL(parent))
-                return false;
+                return;
 
         d_inode(parent)->i_private = data;
 
-        for (; attr->name; attr++) {
-                if (!debugfs_create_file(attr->name, attr->mode, parent,
-                                         (void *)attr, &blk_mq_debugfs_fops))
-                        return false;
-        }
-        return true;
+        for (; attr->name; attr++)
+                debugfs_create_file(attr->name, attr->mode, parent,
+                                    (void *)attr, &blk_mq_debugfs_fops);
 }
 
-int blk_mq_debugfs_register(struct request_queue *q)
+void blk_mq_debugfs_register(struct request_queue *q)
 {
         struct blk_mq_hw_ctx *hctx;
         int i;
 
-        if (!blk_debugfs_root)
-                return -ENOENT;
-
         q->debugfs_dir = debugfs_create_dir(kobject_name(q->kobj.parent),
                                             blk_debugfs_root);
-        if (!q->debugfs_dir)
-                return -ENOMEM;
 
-        if (!debugfs_create_files(q->debugfs_dir, q,
-                                  blk_mq_debugfs_queue_attrs))
-                goto err;
+        debugfs_create_files(q->debugfs_dir, q, blk_mq_debugfs_queue_attrs);
 
         /*
          * blk_mq_init_sched() attempted to do this already, but q->debugfs_dir
···
         /* Similarly, blk_mq_init_hctx() couldn't do this previously. */
         queue_for_each_hw_ctx(q, hctx, i) {
-                if (!hctx->debugfs_dir && blk_mq_debugfs_register_hctx(q, hctx))
-                        goto err;
-                if (q->elevator && !hctx->sched_debugfs_dir &&
-                    blk_mq_debugfs_register_sched_hctx(q, hctx))
-                        goto err;
+                if (!hctx->debugfs_dir)
+                        blk_mq_debugfs_register_hctx(q, hctx);
+                if (q->elevator && !hctx->sched_debugfs_dir)
+                        blk_mq_debugfs_register_sched_hctx(q, hctx);
         }
 
         if (q->rq_qos) {
···
                         rqos = rqos->next;
                 }
         }
-
-        return 0;
-
-err:
-        blk_mq_debugfs_unregister(q);
-        return -ENOMEM;
 }
 
 void blk_mq_debugfs_unregister(struct request_queue *q)
···
         q->debugfs_dir = NULL;
 }
 
-static int blk_mq_debugfs_register_ctx(struct blk_mq_hw_ctx *hctx,
-                                       struct blk_mq_ctx *ctx)
+static void blk_mq_debugfs_register_ctx(struct blk_mq_hw_ctx *hctx,
+                                        struct blk_mq_ctx *ctx)
 {
         struct dentry *ctx_dir;
         char name[20];
 
         snprintf(name, sizeof(name), "cpu%u", ctx->cpu);
         ctx_dir = debugfs_create_dir(name, hctx->debugfs_dir);
-        if (!ctx_dir)
-                return -ENOMEM;
 
-        if (!debugfs_create_files(ctx_dir, ctx, blk_mq_debugfs_ctx_attrs))
-                return -ENOMEM;
-
-        return 0;
+        debugfs_create_files(ctx_dir, ctx, blk_mq_debugfs_ctx_attrs);
 }
 
-int blk_mq_debugfs_register_hctx(struct request_queue *q,
-                                 struct blk_mq_hw_ctx *hctx)
+void blk_mq_debugfs_register_hctx(struct request_queue *q,
+                                  struct blk_mq_hw_ctx *hctx)
 {
         struct blk_mq_ctx *ctx;
         char name[20];
         int i;
 
-        if (!q->debugfs_dir)
-                return -ENOENT;
-
         snprintf(name, sizeof(name), "hctx%u", hctx->queue_num);
         hctx->debugfs_dir = debugfs_create_dir(name, q->debugfs_dir);
-        if (!hctx->debugfs_dir)
-                return -ENOMEM;
 
-        if (!debugfs_create_files(hctx->debugfs_dir, hctx,
-                                  blk_mq_debugfs_hctx_attrs))
-                goto err;
+        debugfs_create_files(hctx->debugfs_dir, hctx, blk_mq_debugfs_hctx_attrs);
 
-        hctx_for_each_ctx(hctx, ctx, i) {
-                if (blk_mq_debugfs_register_ctx(hctx, ctx))
-                        goto err;
-        }
-
-        return 0;
-
-err:
-        blk_mq_debugfs_unregister_hctx(hctx);
-        return -ENOMEM;
+        hctx_for_each_ctx(hctx, ctx, i)
+                blk_mq_debugfs_register_ctx(hctx, ctx);
 }
 
 void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx)
···
         hctx->debugfs_dir = NULL;
 }
 
-int blk_mq_debugfs_register_hctxs(struct request_queue *q)
+void blk_mq_debugfs_register_hctxs(struct request_queue *q)
 {
         struct blk_mq_hw_ctx *hctx;
         int i;
 
-        queue_for_each_hw_ctx(q, hctx, i) {
-                if (blk_mq_debugfs_register_hctx(q, hctx))
-                        return -ENOMEM;
-        }
-
-        return 0;
+        queue_for_each_hw_ctx(q, hctx, i)
+                blk_mq_debugfs_register_hctx(q, hctx);
 }
 
 void blk_mq_debugfs_unregister_hctxs(struct request_queue *q)
···
                 blk_mq_debugfs_unregister_hctx(hctx);
 }
 
-int blk_mq_debugfs_register_sched(struct request_queue *q)
+void blk_mq_debugfs_register_sched(struct request_queue *q)
 {
         struct elevator_type *e = q->elevator->type;
 
-        if (!q->debugfs_dir)
-                return -ENOENT;
-
         if (!e->queue_debugfs_attrs)
-                return 0;
+                return;
 
         q->sched_debugfs_dir = debugfs_create_dir("sched", q->debugfs_dir);
-        if (!q->sched_debugfs_dir)
-                return -ENOMEM;
 
-        if (!debugfs_create_files(q->sched_debugfs_dir, q,
-                                  e->queue_debugfs_attrs))
-                goto err;
-
-        return 0;
-
-err:
-        blk_mq_debugfs_unregister_sched(q);
-        return -ENOMEM;
+        debugfs_create_files(q->sched_debugfs_dir, q, e->queue_debugfs_attrs);
 }
 
 void blk_mq_debugfs_unregister_sched(struct request_queue *q)
···
         rqos->debugfs_dir = NULL;
 }
 
-int blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
+void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 {
         struct request_queue *q = rqos->q;
         const char *dir_name = rq_qos_id_to_name(rqos->id);
 
-        if (!q->debugfs_dir)
-                return -ENOENT;
-
         if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
-                return 0;
+                return;
 
-        if (!q->rqos_debugfs_dir) {
+        if (!q->rqos_debugfs_dir)
                 q->rqos_debugfs_dir = debugfs_create_dir("rqos",
                                                          q->debugfs_dir);
-                if (!q->rqos_debugfs_dir)
-                        return -ENOMEM;
-        }
 
         rqos->debugfs_dir = debugfs_create_dir(dir_name,
                                                rqos->q->rqos_debugfs_dir);
-        if (!rqos->debugfs_dir)
-                return -ENOMEM;
 
-        if (!debugfs_create_files(rqos->debugfs_dir, rqos,
-                                  rqos->ops->debugfs_attrs))
-                goto err;
-        return 0;
-err:
-        blk_mq_debugfs_unregister_rqos(rqos);
-        return -ENOMEM;
+        debugfs_create_files(rqos->debugfs_dir, rqos, rqos->ops->debugfs_attrs);
 }
 
 void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q)
···
         q->rqos_debugfs_dir = NULL;
 }
 
-int blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
-                                       struct blk_mq_hw_ctx *hctx)
+void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
+                                        struct blk_mq_hw_ctx *hctx)
 {
         struct elevator_type *e = q->elevator->type;
 
-        if (!hctx->debugfs_dir)
-                return -ENOENT;
-
         if (!e->hctx_debugfs_attrs)
-                return 0;
+                return;
 
         hctx->sched_debugfs_dir = debugfs_create_dir("sched",
                                                      hctx->debugfs_dir);
-        if (!hctx->sched_debugfs_dir)
-                return -ENOMEM;
-
-        if (!debugfs_create_files(hctx->sched_debugfs_dir, hctx,
-                                  e->hctx_debugfs_attrs))
-                return -ENOMEM;
-
-        return 0;
+        debugfs_create_files(hctx->sched_debugfs_dir, hctx,
+                             e->hctx_debugfs_attrs);
 }
 
 void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
+15 -21
block/blk-mq-debugfs.h
···
 int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq);
 int blk_mq_debugfs_rq_show(struct seq_file *m, void *v);
 
-int blk_mq_debugfs_register(struct request_queue *q);
+void blk_mq_debugfs_register(struct request_queue *q);
 void blk_mq_debugfs_unregister(struct request_queue *q);
-int blk_mq_debugfs_register_hctx(struct request_queue *q,
-				 struct blk_mq_hw_ctx *hctx);
+void blk_mq_debugfs_register_hctx(struct request_queue *q,
+				  struct blk_mq_hw_ctx *hctx);
 void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx);
-int blk_mq_debugfs_register_hctxs(struct request_queue *q);
+void blk_mq_debugfs_register_hctxs(struct request_queue *q);
 void blk_mq_debugfs_unregister_hctxs(struct request_queue *q);
 
-int blk_mq_debugfs_register_sched(struct request_queue *q);
+void blk_mq_debugfs_register_sched(struct request_queue *q);
 void blk_mq_debugfs_unregister_sched(struct request_queue *q);
-int blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
+void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
 				       struct blk_mq_hw_ctx *hctx);
 void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx);
 
-int blk_mq_debugfs_register_rqos(struct rq_qos *rqos);
+void blk_mq_debugfs_register_rqos(struct rq_qos *rqos);
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos);
 void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q);
 #else
-static inline int blk_mq_debugfs_register(struct request_queue *q)
+static inline void blk_mq_debugfs_register(struct request_queue *q)
 {
-	return 0;
 }
 
 static inline void blk_mq_debugfs_unregister(struct request_queue *q)
 {
 }
 
-static inline int blk_mq_debugfs_register_hctx(struct request_queue *q,
-					       struct blk_mq_hw_ctx *hctx)
+static inline void blk_mq_debugfs_register_hctx(struct request_queue *q,
+						struct blk_mq_hw_ctx *hctx)
 {
-	return 0;
 }
 
 static inline void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx)
 {
 }
 
-static inline int blk_mq_debugfs_register_hctxs(struct request_queue *q)
+static inline void blk_mq_debugfs_register_hctxs(struct request_queue *q)
 {
-	return 0;
 }
 
 static inline void blk_mq_debugfs_unregister_hctxs(struct request_queue *q)
 {
 }
 
-static inline int blk_mq_debugfs_register_sched(struct request_queue *q)
+static inline void blk_mq_debugfs_register_sched(struct request_queue *q)
 {
-	return 0;
 }
 
 static inline void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 {
 }
 
-static inline int blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
-						     struct blk_mq_hw_ctx *hctx)
+static inline void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
+						      struct blk_mq_hw_ctx *hctx)
 {
-	return 0;
 }
 
 static inline void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
 {
 }
 
-static inline int blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
+static inline void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 {
-	return 0;
 }
 
 static inline void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
-1
block/blk-mq-sched.c
···
 	int i;
 
 	lockdep_assert_held(&q->sysfs_lock);
-	WARN_ON(!q->elevator);
 
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (hctx->sched_tags)
+6 -3
drivers/ata/libata-core.c
···
 	{ "ST3320[68]13AS",	"SD1[5-9]",	ATA_HORKAGE_NONCQ |
 						ATA_HORKAGE_FIRMWARE_WARN },
 
-	/* drives which fail FPDMA_AA activation (some may freeze afterwards) */
-	{ "ST1000LM024 HN-M101MBB", "2AR10001",	ATA_HORKAGE_BROKEN_FPDMA_AA },
-	{ "ST1000LM024 HN-M101MBB", "2BA30001",	ATA_HORKAGE_BROKEN_FPDMA_AA },
+	/* drives which fail FPDMA_AA activation (some may freeze afterwards)
+	   the ST disks also have LPM issues */
+	{ "ST1000LM024 HN-M101MBB", "2AR10001",	ATA_HORKAGE_BROKEN_FPDMA_AA |
+						ATA_HORKAGE_NOLPM, },
+	{ "ST1000LM024 HN-M101MBB", "2BA30001",	ATA_HORKAGE_BROKEN_FPDMA_AA |
+						ATA_HORKAGE_NOLPM, },
 	{ "VB0250EAVER",	"HPG7",		ATA_HORKAGE_BROKEN_FPDMA_AA },
 
 	/* Blacklist entries taken from Silicon Image 3124/3132
+23 -1
drivers/base/devres.c
···
 
 	WARN_ON(devres_destroy(dev, devm_action_release, devm_action_match,
 			       &devres));
-
 }
 EXPORT_SYMBOL_GPL(devm_remove_action);
+
+/**
+ * devm_release_action() - release previously added custom action
+ * @dev: Device that owns the action
+ * @action: Function implementing the action
+ * @data: Pointer to data passed to @action implementation
+ *
+ * Releases and removes instance of @action previously added by
+ * devm_add_action(). Both action and data should match one of the
+ * existing entries.
+ */
+void devm_release_action(struct device *dev, void (*action)(void *), void *data)
+{
+	struct action_devres devres = {
+		.data = data,
+		.action = action,
+	};
+
+	WARN_ON(devres_release(dev, devm_action_release, devm_action_match,
+			       &devres));
+
+}
+EXPORT_SYMBOL_GPL(devm_release_action);
 
 /*
  * Managed kmalloc/kfree
-4
drivers/block/null_blk_zoned.c
···
 	struct nullb_device *dev = nullb->dev;
 	unsigned int zno, nrz = 0;
 
-	if (!dev->zoned)
-		/* Not a zoned null device */
-		return -EOPNOTSUPP;
-
 	zno = null_zone_no(dev, sector);
 	if (zno < dev->nr_zones) {
 		nrz = min_t(unsigned int, *nr_zones, dev->nr_zones - zno);
+1 -1
drivers/block/ps3vram.c
···
 	strlcpy(gendisk->disk_name, DEVICE_NAME, sizeof(gendisk->disk_name));
 	set_capacity(gendisk, priv->size >> 9);
 
-	dev_info(&dev->core, "%s: Using %lu MiB of GPU memory\n",
+	dev_info(&dev->core, "%s: Using %llu MiB of GPU memory\n",
 		 gendisk->disk_name, get_capacity(gendisk) >> 11);
 
 	device_add_disk(&dev->core, gendisk, NULL);
+4 -4
drivers/clocksource/arm_arch_timer.c
···
 	return val;
 }
 
-static u64 arch_counter_get_cntpct_stable(void)
+static notrace u64 arch_counter_get_cntpct_stable(void)
 {
 	return __arch_counter_get_cntpct_stable();
 }
 
-static u64 arch_counter_get_cntpct(void)
+static notrace u64 arch_counter_get_cntpct(void)
 {
 	return __arch_counter_get_cntpct();
 }
 
-static u64 arch_counter_get_cntvct_stable(void)
+static notrace u64 arch_counter_get_cntvct_stable(void)
 {
 	return __arch_counter_get_cntvct_stable();
 }
 
-static u64 arch_counter_get_cntvct(void)
+static notrace u64 arch_counter_get_cntvct(void)
 {
 	return __arch_counter_get_cntvct();
 }
+1 -1
drivers/clocksource/timer-ti-dm.c
···
 	return ret;
 }
 
-const static struct omap_dm_timer_ops dmtimer_ops = {
+static const struct omap_dm_timer_ops dmtimer_ops = {
 	.request_by_node = omap_dm_timer_request_by_node,
 	.request_specific = omap_dm_timer_request_specific,
 	.request = omap_dm_timer_request,
+3 -10
drivers/dax/device.c
···
 	complete(&dev_dax->cmp);
 }
 
-static void dev_dax_percpu_exit(void *data)
+static void dev_dax_percpu_exit(struct percpu_ref *ref)
 {
-	struct percpu_ref *ref = data;
 	struct dev_dax *dev_dax = ref_to_dev_dax(ref);
 
 	dev_dbg(&dev_dax->dev, "%s\n", __func__);
···
 	if (rc)
 		return rc;
 
-	rc = devm_add_action_or_reset(dev, dev_dax_percpu_exit, &dev_dax->ref);
-	if (rc)
-		return rc;
-
 	dev_dax->pgmap.ref = &dev_dax->ref;
 	dev_dax->pgmap.kill = dev_dax_percpu_kill;
+	dev_dax->pgmap.cleanup = dev_dax_percpu_exit;
 	addr = devm_memremap_pages(dev, &dev_dax->pgmap);
-	if (IS_ERR(addr)) {
-		devm_remove_action(dev, dev_dax_percpu_exit, &dev_dax->ref);
-		percpu_ref_exit(&dev_dax->ref);
+	if (IS_ERR(addr))
 		return PTR_ERR(addr);
-	}
 
 	inode = dax_inode(dax_dev);
 	cdev = inode->i_cdev;
+2 -1
drivers/gpio/gpio-pca953x.c
···
 	.volatile_reg = pca953x_volatile_register,
 
 	.cache_type = REGCACHE_RBTREE,
-	.max_register = 0x7f,
+	/* REVISIT: should be 0x7f but some 24 bit chips use REG_ADDR_AI */
+	.max_register = 0xff,
 };
 
 static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
···
 
 int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version)
 {
-	int r = -EINVAL;
+	int r;
 
 	if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->load_firmware) {
 		r = adev->powerplay.pp_funcs->load_firmware(adev->powerplay.pp_handle);
···
 		}
 		*smu_version = adev->pm.fw_version;
 	}
-	return r;
+	return 0;
 }
 
 int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
···
 {
 	struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
 
+	if (block >= AMDGPU_RAS_BLOCK_COUNT)
+		return 0;
 	return ras && (ras->supported & (1 << block));
 }
 
+3 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
···
 int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
+	uint32_t rptr;
 	unsigned i;
 	int r;
 
 	r = amdgpu_ring_alloc(ring, 16);
 	if (r)
 		return r;
+
+	rptr = amdgpu_ring_get_rptr(ring);
 
 	amdgpu_ring_write(ring, VCN_ENC_CMD_END);
 	amdgpu_ring_commit(ring);
+4 -1
drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
···
 static int uvd_v6_0_enc_ring_test_ring(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
+	uint32_t rptr;
 	unsigned i;
 	int r;
 
 	r = amdgpu_ring_alloc(ring, 16);
 	if (r)
 		return r;
+
+	rptr = amdgpu_ring_get_rptr(ring);
+
 	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
 	amdgpu_ring_commit(ring);
 
+4 -1
drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
···
 static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
+	uint32_t rptr;
 	unsigned i;
 	int r;
 
···
 	r = amdgpu_ring_alloc(ring, 16);
 	if (r)
 		return r;
+
+	rptr = amdgpu_ring_get_rptr(ring);
+
 	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
 	amdgpu_ring_commit(ring);
 
+47 -8
drivers/gpu/drm/drm_edid.c
···
 	}
 }
 
+/* Get override or firmware EDID */
+static struct edid *drm_get_override_edid(struct drm_connector *connector)
+{
+	struct edid *override = NULL;
+
+	if (connector->override_edid)
+		override = drm_edid_duplicate(connector->edid_blob_ptr->data);
+
+	if (!override)
+		override = drm_load_edid_firmware(connector);
+
+	return IS_ERR(override) ? NULL : override;
+}
+
+/**
+ * drm_add_override_edid_modes - add modes from override/firmware EDID
+ * @connector: connector we're probing
+ *
+ * Add modes from the override/firmware EDID, if available. Only to be used from
+ * drm_helper_probe_single_connector_modes() as a fallback for when DDC probe
+ * failed during drm_get_edid() and caused the override/firmware EDID to be
+ * skipped.
+ *
+ * Return: The number of modes added or 0 if we couldn't find any.
+ */
+int drm_add_override_edid_modes(struct drm_connector *connector)
+{
+	struct edid *override;
+	int num_modes = 0;
+
+	override = drm_get_override_edid(connector);
+	if (override) {
+		drm_connector_update_edid_property(connector, override);
+		num_modes = drm_add_edid_modes(connector, override);
+		kfree(override);
+
+		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] adding %d modes via fallback override/firmware EDID\n",
+			      connector->base.id, connector->name, num_modes);
+	}
+
+	return num_modes;
+}
+EXPORT_SYMBOL(drm_add_override_edid_modes);
+
 /**
  * drm_do_get_edid - get EDID data using a custom EDID block read function
  * @connector: connector we're probing
···
 {
 	int i, j = 0, valid_extensions = 0;
 	u8 *edid, *new;
-	struct edid *override = NULL;
+	struct edid *override;
 
-	if (connector->override_edid)
-		override = drm_edid_duplicate(connector->edid_blob_ptr->data);
-
-	if (!override)
-		override = drm_load_edid_firmware(connector);
-
-	if (!IS_ERR_OR_NULL(override))
+	override = drm_get_override_edid(connector);
+	if (override)
 		return override;
 
 	if ((edid = kmalloc(EDID_LENGTH, GFP_KERNEL)) == NULL)
+2 -1
drivers/gpu/drm/drm_gem_shmem_helper.c
···
 	if (obj->import_attach)
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
 	else
-		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
+		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
+				    VM_MAP, pgprot_writecombine(PAGE_KERNEL));
 
 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
+32
drivers/gpu/drm/drm_panel_orientation_quirks.c
···
 	.orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
 };
 
+static const struct drm_dmi_panel_orientation_data gpd_micropc = {
+	.width = 720,
+	.height = 1280,
+	.bios_dates = (const char * const []){ "04/26/2019",
+		NULL },
+	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+};
+
 static const struct drm_dmi_panel_orientation_data gpd_pocket = {
 	.width = 1200,
 	.height = 1920,
 	.bios_dates = (const char * const []){ "05/26/2017", "06/28/2017",
 		"07/05/2017", "08/07/2017", NULL },
+	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+};
+
+static const struct drm_dmi_panel_orientation_data gpd_pocket2 = {
+	.width = 1200,
+	.height = 1920,
+	.bios_dates = (const char * const []){ "06/28/2018", "08/28/2018",
+		"12/07/2018", NULL },
 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
 };
 
···
 		DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100HAN"),
 	  },
 	  .driver_data = (void *)&asus_t100ha,
+	}, {	/* GPD MicroPC (generic strings, also match on bios date) */
+	  .matches = {
+	    DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+	    DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
+	    DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
+	    DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+	  },
+	  .driver_data = (void *)&gpd_micropc,
 	}, {	/*
 		 * GPD Pocket, note that the the DMI data is less generic then
 		 * it seems, devices with a board-vendor of "AMI Corporation"
···
 	    DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
 	  },
 	  .driver_data = (void *)&gpd_pocket,
+	}, {	/* GPD Pocket 2 (generic strings, also match on bios date) */
+	  .matches = {
+	    DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+	    DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
+	    DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
+	    DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+	  },
+	  .driver_data = (void *)&gpd_pocket2,
 	}, {	/* GPD Win (same note on DMI match as GPD Pocket) */
 	  .matches = {
 	    DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+7
drivers/gpu/drm/drm_probe_helper.c
···
 
 	count = (*connector_funcs->get_modes)(connector);
 
+	/*
+	 * Fallback for when DDC probe failed in drm_get_edid() and thus skipped
+	 * override/firmware EDID.
+	 */
+	if (count == 0 && connector->status == connector_status_connected)
+		count = drm_add_override_edid_modes(connector);
+
 	if (count == 0 && connector->status == connector_status_connected)
 		count = drm_add_modes_noedid(connector, 1024, 768);
 	count += drm_helper_probe_add_cmdline_mode(connector);
+1
drivers/gpu/drm/i915/i915_perf.c
···
 static bool gen10_is_valid_mux_addr(struct drm_i915_private *dev_priv, u32 addr)
 {
 	return gen8_is_valid_mux_addr(dev_priv, addr) ||
+		addr == i915_mmio_reg_offset(GEN10_NOA_WRITE_HIGH) ||
 		(addr >= i915_mmio_reg_offset(OA_PERFCNT3_LO) &&
 		 addr <= i915_mmio_reg_offset(OA_PERFCNT4_HI));
 }
+1
drivers/gpu/drm/i915/i915_reg.h
···
 
 #define NOA_DATA	_MMIO(0x986C)
 #define NOA_WRITE	_MMIO(0x9888)
+#define GEN10_NOA_WRITE_HIGH	_MMIO(0x9884)
 
 #define _GEN7_PIPEA_DE_LOAD_SL	0x70068
 #define _GEN7_PIPEB_DE_LOAD_SL	0x71068
+18
drivers/gpu/drm/i915/intel_csr.c
···
 	u32 dmc_offset = CSR_DEFAULT_FW_OFFSET, readcount = 0, nbytes;
 	u32 i;
 	u32 *dmc_payload;
+	size_t fsize;
 
 	if (!fw)
 		return NULL;
+
+	fsize = sizeof(struct intel_css_header) +
+		sizeof(struct intel_package_header) +
+		sizeof(struct intel_dmc_header);
+	if (fsize > fw->size)
+		goto error_truncated;
 
 	/* Extract CSS Header information*/
 	css_header = (struct intel_css_header *)fw->data;
···
 	/* Convert dmc_offset into number of bytes. By default it is in dwords*/
 	dmc_offset *= 4;
 	readcount += dmc_offset;
+	fsize += dmc_offset;
+	if (fsize > fw->size)
+		goto error_truncated;
 
 	/* Extract dmc_header information. */
 	dmc_header = (struct intel_dmc_header *)&fw->data[readcount];
···
 
 	/* fw_size is in dwords, so multiplied by 4 to convert into bytes. */
 	nbytes = dmc_header->fw_size * 4;
+	fsize += nbytes;
+	if (fsize > fw->size)
+		goto error_truncated;
+
 	if (nbytes > csr->max_fw_size) {
 		DRM_ERROR("DMC FW too big (%u bytes)\n", nbytes);
 		return NULL;
···
 	}
 
 	return memcpy(dmc_payload, &fw->data[readcount], nbytes);
+
+error_truncated:
+	DRM_ERROR("Truncated DMC firmware, rejecting.\n");
+	return NULL;
 }
 
 static void intel_csr_runtime_pm_get(struct drm_i915_private *dev_priv)
+9 -5
drivers/gpu/drm/i915/intel_display.c
···
  * main surface.
  */
 static const struct drm_format_info ccs_formats[] = {
-	{ .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
-	{ .format = DRM_FORMAT_XBGR8888, .depth = 24, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
-	{ .format = DRM_FORMAT_ARGB8888, .depth = 32, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
-	{ .format = DRM_FORMAT_ABGR8888, .depth = 32, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
+	{ .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 2,
+	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
+	{ .format = DRM_FORMAT_XBGR8888, .depth = 24, .num_planes = 2,
+	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
+	{ .format = DRM_FORMAT_ARGB8888, .depth = 32, .num_planes = 2,
+	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, .has_alpha = true, },
+	{ .format = DRM_FORMAT_ABGR8888, .depth = 32, .num_planes = 2,
+	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, .has_alpha = true, },
 };
 
 static const struct drm_format_info *
···
 	return 0;
 }
 
-static bool intel_fuzzy_clock_check(int clock1, int clock2)
+bool intel_fuzzy_clock_check(int clock1, int clock2)
 {
 	int diff;
 
+1
drivers/gpu/drm/i915/intel_drv.h
···
 		     const struct dpll *dpll);
 void vlv_force_pll_off(struct drm_i915_private *dev_priv, enum pipe pipe);
 int lpt_get_iclkip(struct drm_i915_private *dev_priv);
+bool intel_fuzzy_clock_check(int clock1, int clock2);
 
 /* modesetting asserts */
 void assert_panel_unlocked(struct drm_i915_private *dev_priv,
+11
drivers/gpu/drm/i915/intel_dsi_vbt.c
···
 	if (mipi_config->target_burst_mode_freq) {
 		u32 bitrate = intel_dsi_bitrate(intel_dsi);
 
+		/*
+		 * Sometimes the VBT contains a slightly lower clock,
+		 * then the bitrate we have calculated, in this case
+		 * just replace it with the calculated bitrate.
+		 */
+		if (mipi_config->target_burst_mode_freq < bitrate &&
+		    intel_fuzzy_clock_check(
+					mipi_config->target_burst_mode_freq,
+					bitrate))
+			mipi_config->target_burst_mode_freq = bitrate;
+
 		if (mipi_config->target_burst_mode_freq < bitrate) {
 			DRM_ERROR("Burst mode freq is less than computed\n");
 			return false;
+47 -11
drivers/gpu/drm/i915/intel_sdvo.c
···
 	return intel_sdvo_set_value(intel_sdvo, SDVO_CMD_SET_COLORIMETRY, &mode, 1);
 }
 
+static bool intel_sdvo_set_audio_state(struct intel_sdvo *intel_sdvo,
+				       u8 audio_state)
+{
+	return intel_sdvo_set_value(intel_sdvo, SDVO_CMD_SET_AUDIO_STAT,
+				    &audio_state, 1);
+}
+
 #if 0
 static void intel_sdvo_dump_hdmi_buf(struct intel_sdvo *intel_sdvo)
 {
···
 	else
 		sdvox |= SDVO_PIPE_SEL(crtc->pipe);
 
-	if (crtc_state->has_audio) {
-		WARN_ON_ONCE(INTEL_GEN(dev_priv) < 4);
-		sdvox |= SDVO_AUDIO_ENABLE;
-	}
-
 	if (INTEL_GEN(dev_priv) >= 4) {
 		/* done in crtc_mode_set as the dpll_md reg must be written early */
 	} else if (IS_I945G(dev_priv) || IS_I945GM(dev_priv) ||
···
 	if (sdvox & HDMI_COLOR_RANGE_16_235)
 		pipe_config->limited_color_range = true;
 
-	if (sdvox & SDVO_AUDIO_ENABLE)
-		pipe_config->has_audio = true;
+	if (intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_AUDIO_STAT,
+				 &val, 1)) {
+		u8 mask = SDVO_AUDIO_ELD_VALID | SDVO_AUDIO_PRESENCE_DETECT;
+
+		if ((val & mask) == mask)
+			pipe_config->has_audio = true;
+	}
 
 	if (intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_ENCODE,
 				 &val, 1)) {
···
 	intel_sdvo_get_avi_infoframe(intel_sdvo, pipe_config);
 }
 
+static void intel_sdvo_disable_audio(struct intel_sdvo *intel_sdvo)
+{
+	intel_sdvo_set_audio_state(intel_sdvo, 0);
+}
+
+static void intel_sdvo_enable_audio(struct intel_sdvo *intel_sdvo,
+				    const struct intel_crtc_state *crtc_state,
+				    const struct drm_connector_state *conn_state)
+{
+	const struct drm_display_mode *adjusted_mode =
+		&crtc_state->base.adjusted_mode;
+	struct drm_connector *connector = conn_state->connector;
+	u8 *eld = connector->eld;
+
+	eld[6] = drm_av_sync_delay(connector, adjusted_mode) / 2;
+
+	intel_sdvo_set_audio_state(intel_sdvo, 0);
+
+	intel_sdvo_write_infoframe(intel_sdvo, SDVO_HBUF_INDEX_ELD,
+				   SDVO_HBUF_TX_DISABLED,
+				   eld, drm_eld_size(eld));
+
+	intel_sdvo_set_audio_state(intel_sdvo, SDVO_AUDIO_ELD_VALID |
+				   SDVO_AUDIO_PRESENCE_DETECT);
+}
+
 static void intel_disable_sdvo(struct intel_encoder *encoder,
 			       const struct intel_crtc_state *old_crtc_state,
 			       const struct drm_connector_state *conn_state)
···
 	struct intel_sdvo *intel_sdvo = to_sdvo(encoder);
 	struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
 	u32 temp;
+
+	if (old_crtc_state->has_audio)
+		intel_sdvo_disable_audio(intel_sdvo);
 
 	intel_sdvo_set_active_outputs(intel_sdvo, 0);
 	if (0)
···
 		intel_sdvo_set_encoder_power_state(intel_sdvo,
 						   DRM_MODE_DPMS_ON);
 	intel_sdvo_set_active_outputs(intel_sdvo, intel_sdvo->attached_output);
+
+	if (pipe_config->has_audio)
+		intel_sdvo_enable_audio(intel_sdvo, pipe_config, conn_state);
 }
 
 static enum drm_mode_status
···
 intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
 {
 	struct drm_encoder *encoder = &intel_sdvo->base.base;
-	struct drm_i915_private *dev_priv = to_i915(encoder->dev);
 	struct drm_connector *connector;
 	struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
 	struct intel_connector *intel_connector;
···
 	encoder->encoder_type = DRM_MODE_ENCODER_TMDS;
 	connector->connector_type = DRM_MODE_CONNECTOR_DVID;
 
-	/* gen3 doesn't do the hdmi bits in the SDVO register */
-	if (INTEL_GEN(dev_priv) >= 4 &&
-	    intel_sdvo_is_hdmi_connector(intel_sdvo, device)) {
+	if (intel_sdvo_is_hdmi_connector(intel_sdvo, device)) {
 		connector->connector_type = DRM_MODE_CONNECTOR_HDMIA;
 		intel_sdvo_connector->is_hdmi = true;
 	}
+3
drivers/gpu/drm/i915/intel_sdvo_regs.h
···
 #define SDVO_CMD_GET_AUDIO_ENCRYPT_PREFER	0x90
 #define SDVO_CMD_SET_AUDIO_STAT			0x91
 #define SDVO_CMD_GET_AUDIO_STAT			0x92
+  #define SDVO_AUDIO_ELD_VALID			(1 << 0)
+  #define SDVO_AUDIO_PRESENCE_DETECT		(1 << 1)
+  #define SDVO_AUDIO_CP_READY			(1 << 2)
 #define SDVO_CMD_SET_HBUF_INDEX			0x93
   #define SDVO_HBUF_INDEX_ELD			0
   #define SDVO_HBUF_INDEX_AVI_IF		1
+6 -24
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
···
 static void mtk_drm_crtc_destroy(struct drm_crtc *crtc)
 {
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
-	int i;
-
-	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
-		clk_unprepare(mtk_crtc->ddp_comp[i]->clk);
 
 	mtk_disp_mutex_put(mtk_crtc->mutex);
 
···
 
 	DRM_DEBUG_DRIVER("%s\n", __func__);
 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
-		ret = clk_enable(mtk_crtc->ddp_comp[i]->clk);
+		ret = clk_prepare_enable(mtk_crtc->ddp_comp[i]->clk);
 		if (ret) {
 			DRM_ERROR("Failed to enable clock %d: %d\n", i, ret);
 			goto err;
···
 	return 0;
 err:
 	while (--i >= 0)
-		clk_disable(mtk_crtc->ddp_comp[i]->clk);
+		clk_disable_unprepare(mtk_crtc->ddp_comp[i]->clk);
 	return ret;
 }
 
···
 
 	DRM_DEBUG_DRIVER("%s\n", __func__);
 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
-		clk_disable(mtk_crtc->ddp_comp[i]->clk);
+		clk_disable_unprepare(mtk_crtc->ddp_comp[i]->clk);
 }
 
 static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
···
 		if (!comp) {
 			dev_err(dev, "Component %pOF not initialized\n", node);
 			ret = -ENODEV;
-			goto unprepare;
-		}
-
-		ret = clk_prepare(comp->clk);
-		if (ret) {
-			dev_err(dev,
-				"Failed to prepare clock for component %pOF: %d\n",
-				node, ret);
-			goto unprepare;
+			return ret;
 		}
 
 		mtk_crtc->ddp_comp[i] = comp;
···
 		ret = mtk_plane_init(drm_dev, &mtk_crtc->planes[zpos],
 				     BIT(pipe), type);
 		if (ret)
-			goto unprepare;
+			return ret;
 	}
 
 	ret = mtk_drm_crtc_init(drm_dev, mtk_crtc, &mtk_crtc->planes[0],
 				mtk_crtc->layer_nr > 1 ? &mtk_crtc->planes[1] :
 				NULL, pipe);
 	if (ret < 0)
-		goto unprepare;
+		return ret;
 	drm_mode_crtc_set_gamma_size(&mtk_crtc->base, MTK_LUT_SIZE);
 	drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, false, MTK_LUT_SIZE);
 	priv->num_pipes++;
 
 	return 0;
-
-unprepare:
-	while (--i >= 0)
-		clk_unprepare(mtk_crtc->ddp_comp[i]->clk);
-
-	return ret;
 }
+3 -5
drivers/gpu/drm/mediatek/mtk_drm_drv.c
···
 static void mtk_drm_kms_deinit(struct drm_device *drm)
 {
 	drm_kms_helper_poll_fini(drm);
+	drm_atomic_helper_shutdown(drm);
 
 	component_unbind_all(drm->dev, drm);
 	drm_mode_config_cleanup(drm);
···
 	struct mtk_drm_private *private = dev_get_drvdata(dev);
 
 	drm_dev_unregister(private->drm);
+	mtk_drm_kms_deinit(private->drm);
 	drm_dev_put(private->drm);
+	private->num_pipes = 0;
 	private->drm = NULL;
 }
 
···
 static int mtk_drm_remove(struct platform_device *pdev)
 {
 	struct mtk_drm_private *private = platform_get_drvdata(pdev);
-	struct drm_device *drm = private->drm;
 	int i;
-
-	drm_dev_unregister(drm);
-	mtk_drm_kms_deinit(drm);
-	drm_dev_put(drm);
 
 	component_master_del(&pdev->dev, &mtk_drm_ops);
 	pm_runtime_disable(&pdev->dev);
+6 -1
drivers/gpu/drm/mediatek/mtk_drm_gem.c
···
 	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
 	 */
 	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_pgoff = 0;
 
 	ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie,
 			     mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs);
···
 		return ret;
 
 	obj = vma->vm_private_data;
+
+	/*
+	 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the
+	 * whole buffer from the start.
+	 */
+	vma->vm_pgoff = 0;
 
 	return mtk_drm_gem_object_mmap(obj, vma);
 }
+11 -1
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 622 622 if (--dsi->refcount != 0) 623 623 return; 624 624 625 + /* 626 + * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since 627 + * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(), 628 + * which needs irq for vblank, and mtk_dsi_stop() will disable irq. 629 + * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(), 630 + * after dsi is fully set. 631 + */ 632 + mtk_dsi_stop(dsi); 633 + 625 634 if (!mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500)) { 626 635 if (dsi->panel) { 627 636 if (drm_panel_unprepare(dsi->panel)) { ··· 697 688 } 698 689 } 699 690 700 - mtk_dsi_stop(dsi); 701 691 mtk_dsi_poweroff(dsi); 702 692 703 693 dsi->enabled = false; ··· 844 836 /* Skip connector cleanup if creation was delegated to the bridge */ 845 837 if (dsi->conn.dev) 846 838 drm_connector_cleanup(&dsi->conn); 839 + if (dsi->panel) 840 + drm_panel_detach(dsi->panel); 847 841 } 848 842 849 843 static void mtk_dsi_ddp_start(struct mtk_ddp_comp *comp)
+2 -4
drivers/gpu/drm/meson/meson_crtc.c
··· 107 107 priv->io_base + _REG(VPP_OUT_H_V_SIZE)); 108 108 109 109 drm_crtc_vblank_on(crtc); 110 - 111 - priv->viu.osd1_enabled = true; 112 110 } 113 111 114 112 static void meson_crtc_atomic_enable(struct drm_crtc *crtc, ··· 135 137 priv->io_base + _REG(VPP_MISC)); 136 138 137 139 drm_crtc_vblank_on(crtc); 138 - 139 - priv->viu.osd1_enabled = true; 140 140 } 141 141 142 142 static void meson_g12a_crtc_atomic_disable(struct drm_crtc *crtc, ··· 252 256 writel_relaxed(priv->viu.osb_blend1_size, 253 257 priv->io_base + 254 258 _REG(VIU_OSD_BLEND_BLEND1_SIZE)); 259 + writel_bits_relaxed(3 << 8, 3 << 8, 260 + priv->io_base + _REG(OSD1_BLEND_SRC_CTRL)); 255 261 } 256 262 257 263 static void meson_crtc_enable_vd1(struct meson_drm *priv)
+5 -3
drivers/gpu/drm/meson/meson_plane.c
··· 305 305 meson_plane->enabled = true; 306 306 } 307 307 308 + priv->viu.osd1_enabled = true; 309 + 308 310 spin_unlock_irqrestore(&priv->drm->event_lock, flags); 309 311 } 310 312 ··· 318 316 319 317 /* Disable OSD1 */ 320 318 if (meson_vpu_is_compatible(priv, "amlogic,meson-g12a-vpu")) 321 - writel_bits_relaxed(BIT(0) | BIT(21), 0, 322 - priv->io_base + _REG(VIU_OSD1_CTRL_STAT)); 319 + writel_bits_relaxed(3 << 8, 0, 320 + priv->io_base + _REG(OSD1_BLEND_SRC_CTRL)); 323 321 else 324 322 writel_bits_relaxed(VPP_OSD1_POSTBLEND, 0, 325 323 priv->io_base + _REG(VPP_MISC)); 326 324 327 325 meson_plane->enabled = false; 328 - 326 + priv->viu.osd1_enabled = false; 329 327 } 330 328 331 329 static const struct drm_plane_helper_funcs meson_plane_helper_funcs = {
+11 -2
drivers/gpu/drm/meson/meson_vclk.c
··· 503 503 504 504 /* G12A HDMI PLL Needs specific parameters for 5.4GHz */ 505 505 if (m >= 0xf7) { 506 - regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL4, 0xea68dc00); 507 - regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL5, 0x65771290); 506 + if (frac < 0x10000) { 507 + regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL4, 508 + 0x6a685c00); 509 + regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL5, 510 + 0x11551293); 511 + } else { 512 + regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL4, 513 + 0xea68dc00); 514 + regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL5, 515 + 0x65771290); 516 + } 508 517 regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL6, 0x39272000); 509 518 regmap_write(priv->hhi, HHI_HDMI_PLL_CNTL7, 0x55540000); 510 519 } else {
+1 -2
drivers/gpu/drm/meson/meson_viu.c
··· 405 405 0 << 16 | 406 406 1, 407 407 priv->io_base + _REG(VIU_OSD_BLEND_CTRL)); 408 - writel_relaxed(3 << 8 | 409 - 1 << 20, 408 + writel_relaxed(1 << 20, 410 409 priv->io_base + _REG(OSD1_BLEND_SRC_CTRL)); 411 410 writel_relaxed(1 << 20, 412 411 priv->io_base + _REG(OSD2_BLEND_SRC_CTRL));
+1
drivers/gpu/drm/panfrost/Kconfig
··· 10 10 select IOMMU_IO_PGTABLE_LPAE 11 11 select DRM_GEM_SHMEM_HELPER 12 12 select PM_DEVFREQ 13 + select DEVFREQ_GOV_SIMPLE_ONDEMAND 13 14 help 14 15 DRM driver for ARM Mali Midgard (T6xx, T7xx, T8xx) and 15 16 Bifrost (G3x, G5x, G7x) GPUs.
+12 -1
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 140 140 return 0; 141 141 142 142 ret = dev_pm_opp_of_add_table(&pfdev->pdev->dev); 143 - if (ret) 143 + if (ret == -ENODEV) /* Optional, continue without devfreq */ 144 + return 0; 145 + else if (ret) 144 146 return ret; 145 147 146 148 panfrost_devfreq_reset(pfdev); ··· 172 170 { 173 171 int i; 174 172 173 + if (!pfdev->devfreq.devfreq) 174 + return; 175 + 175 176 panfrost_devfreq_reset(pfdev); 176 177 for (i = 0; i < NUM_JOB_SLOTS; i++) 177 178 pfdev->devfreq.slot[i].busy = false; ··· 184 179 185 180 void panfrost_devfreq_suspend(struct panfrost_device *pfdev) 186 181 { 182 + if (!pfdev->devfreq.devfreq) 183 + return; 184 + 187 185 devfreq_suspend_device(pfdev->devfreq.devfreq); 188 186 } 189 187 ··· 195 187 struct panfrost_devfreq_slot *devfreq_slot = &pfdev->devfreq.slot[slot]; 196 188 ktime_t now; 197 189 ktime_t last; 190 + 191 + if (!pfdev->devfreq.devfreq) 192 + return; 198 193 199 194 now = ktime_get(); 200 195 last = pfdev->devfreq.slot[slot].time_last_update;
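The panfrost hunk above treats a missing OPP table (`-ENODEV`) as "run without devfreq" and then makes every later entry point (`panfrost_devfreq_reset`, `_suspend`, ...) bail out when the handle was never created. The pattern in isolation, as a hedged userspace sketch — the names and the stand-in probe function are illustrative, not the driver's:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-in for the driver's per-device state. */
struct devfreq_state {
	void *devfreq;            /* NULL when the feature is absent */
};

/* Pretend OPP-table lookup: returns 0, -ENODEV, or another error. */
static int add_opp_table(int outcome) { return outcome; }

/*
 * Mirrors the patch: -ENODEV means "optional dependency missing,
 * continue without devfreq"; any other error stays fatal.
 */
static int devfreq_init(struct devfreq_state *st, int probe_outcome)
{
	int ret = add_opp_table(probe_outcome);

	if (ret == -ENODEV) {		/* optional: run without devfreq */
		st->devfreq = NULL;
		return 0;
	}
	if (ret)
		return ret;		/* real failure */

	st->devfreq = &st->devfreq;	/* any non-NULL handle */
	return 0;
}

/* Every later hook bails out early when the feature never came up. */
static int devfreq_suspend(struct devfreq_state *st)
{
	if (!st->devfreq)
		return 0;		/* nothing to suspend */
	return 1;			/* "really suspended" marker */
}
```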
+8 -3
drivers/hid/hid-a4tech.c
··· 35 35 { 36 36 struct a4tech_sc *a4 = hid_get_drvdata(hdev); 37 37 38 - if (usage->type == EV_REL && usage->code == REL_WHEEL) 38 + if (usage->type == EV_REL && usage->code == REL_WHEEL_HI_RES) { 39 39 set_bit(REL_HWHEEL, *bit); 40 + set_bit(REL_HWHEEL_HI_RES, *bit); 41 + } 40 42 41 43 if ((a4->quirks & A4_2WHEEL_MOUSE_HACK_7) && usage->hid == 0x00090007) 42 44 return -1; ··· 59 57 input = field->hidinput->input; 60 58 61 59 if (a4->quirks & A4_2WHEEL_MOUSE_HACK_B8) { 62 - if (usage->type == EV_REL && usage->code == REL_WHEEL) { 60 + if (usage->type == EV_REL && usage->code == REL_WHEEL_HI_RES) { 63 61 a4->delayed_value = value; 64 62 return 1; 65 63 } ··· 67 65 if (usage->hid == 0x000100b8) { 68 66 input_event(input, EV_REL, value ? REL_HWHEEL : 69 67 REL_WHEEL, a4->delayed_value); 68 + input_event(input, EV_REL, value ? REL_HWHEEL_HI_RES : 69 + REL_WHEEL_HI_RES, a4->delayed_value * 120); 70 70 return 1; 71 71 } 72 72 } ··· 78 74 return 1; 79 75 } 80 76 81 - if (usage->code == REL_WHEEL && a4->hw_wheel) { 77 + if (usage->code == REL_WHEEL_HI_RES && a4->hw_wheel) { 82 78 input_event(input, usage->type, REL_HWHEEL, value); 79 + input_event(input, usage->type, REL_HWHEEL_HI_RES, value * 120); 83 80 return 1; 84 81 } 85 82
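The hid-a4tech hunk forwards high-resolution wheel events alongside the classic ones; in the Linux input ABI one logical detent equals 120 units on the `REL_WHEEL_HI_RES` / `REL_HWHEEL_HI_RES` axes, which is why the patch multiplies by 120. A minimal sketch of that scaling — the struct and function names here are illustrative, not the driver's:

```c
#include <assert.h>

/* Input ABI: one wheel detent == 120 units on the *_HI_RES axes. */
#define HI_RES_UNITS_PER_DETENT 120

struct wheel_events {
	int low_res;	/* value for REL_WHEEL / REL_HWHEEL */
	int hi_res;	/* value for the *_HI_RES counterpart */
};

/*
 * For a device that only reports whole detents (like the A4Tech mice in
 * the patch), each detent is emitted on both axes, with the
 * high-resolution value scaled by 120.
 */
static struct wheel_events wheel_from_detents(int detents)
{
	struct wheel_events ev = {
		.low_res = detents,
		.hi_res  = detents * HI_RES_UNITS_PER_DETENT,
	};
	return ev;
}
```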
+3 -13
drivers/hid/hid-core.c
··· 27 27 #include <linux/vmalloc.h> 28 28 #include <linux/sched.h> 29 29 #include <linux/semaphore.h> 30 - #include <linux/async.h> 31 30 32 31 #include <linux/hid.h> 33 32 #include <linux/hiddev.h> ··· 1310 1311 u32 hid_field_extract(const struct hid_device *hid, u8 *report, 1311 1312 unsigned offset, unsigned n) 1312 1313 { 1313 - if (n > 256) { 1314 - hid_warn(hid, "hid_field_extract() called with n (%d) > 256! (%s)\n", 1314 + if (n > 32) { 1315 + hid_warn(hid, "hid_field_extract() called with n (%d) > 32! (%s)\n", 1315 1316 n, current->comm); 1316 - n = 256; 1317 + n = 32; 1317 1318 } 1318 1319 1319 1320 return __extract(report, offset, n); ··· 2360 2361 * is converted to allow more than 20 bytes as the device name? */ 2361 2362 dev_set_name(&hdev->dev, "%04X:%04X:%04X.%04X", hdev->bus, 2362 2363 hdev->vendor, hdev->product, atomic_inc_return(&id)); 2363 - 2364 - /* 2365 - * Try loading the module for the device before the add, so that we do 2366 - * not first have hid-generic binding only to have it replaced 2367 - * immediately afterwards with a specialized driver. 2368 - */ 2369 - if (!current_is_async()) 2370 - request_module("hid:b%04Xg%04Xv%08Xp%08X", hdev->bus, 2371 - hdev->group, hdev->vendor, hdev->product); 2372 2364 2373 2365 hid_debug_register(hdev, dev_name(&hdev->dev)); 2374 2366 ret = device_add(&hdev->dev);
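The hid-core hunk tightens the clamp in `hid_field_extract()` from 256 to 32 because the function returns a `u32`, which can never hold more than 32 bits. A simplified little-endian bit extraction in the same spirit — this is a sketch, not the kernel's `__extract()`, and it assumes at least five readable bytes past the starting byte:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Extract n bits starting at bit `offset` from a little-endian byte
 * buffer. The result is a uint32_t, so n is clamped to 32 -- the
 * analogue of the hid_field_extract() fix above.
 */
static uint32_t field_extract(const uint8_t *report, size_t offset,
			      unsigned int n)
{
	uint64_t acc = 0;
	size_t byte = offset / 8;
	unsigned int shift = offset % 8;
	unsigned int i;

	if (n > 32)	/* a u32 cannot hold more */
		n = 32;

	/* Gather 5 bytes: enough for shift (<= 7) + n (<= 32) bits. */
	for (i = 0; i < 5; i++)
		acc |= (uint64_t)report[byte + i] << (8 * i);

	acc >>= shift;
	return (uint32_t)(acc & ((1ULL << n) - 1));
}
```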
+2
drivers/hid/hid-hyperv.c
··· 606 606 } 607 607 608 608 MODULE_LICENSE("GPL"); 609 + MODULE_DESCRIPTION("Microsoft Hyper-V Synthetic HID Driver"); 610 + 609 611 module_init(mousevsc_init); 610 612 module_exit(mousevsc_exit);
+1
drivers/hid/hid-ids.h
··· 1086 1086 #define USB_DEVICE_ID_SYNAPTICS_HD 0x0ac3 1087 1087 #define USB_DEVICE_ID_SYNAPTICS_QUAD_HD 0x1ac3 1088 1088 #define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710 1089 + #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7 1089 1090 1090 1091 #define USB_VENDOR_ID_TEXAS_INSTRUMENTS 0x2047 1091 1092 #define USB_DEVICE_ID_TEXAS_INSTRUMENTS_LENOVO_YOGA 0x0855
+35 -15
drivers/hid/hid-logitech-dj.c
··· 113 113 recvr_type_dj, 114 114 recvr_type_hidpp, 115 115 recvr_type_gaming_hidpp, 116 + recvr_type_mouse_only, 116 117 recvr_type_27mhz, 117 118 recvr_type_bluetooth, 118 119 }; ··· 865 864 schedule_work(&djrcv_dev->work); 866 865 } 867 866 868 - static void logi_hidpp_dev_conn_notif_equad(struct hidpp_event *hidpp_report, 867 + static void logi_hidpp_dev_conn_notif_equad(struct hid_device *hdev, 868 + struct hidpp_event *hidpp_report, 869 869 struct dj_workitem *workitem) 870 870 { 871 + struct dj_receiver_dev *djrcv_dev = hid_get_drvdata(hdev); 872 + 871 873 workitem->type = WORKITEM_TYPE_PAIRED; 872 874 workitem->device_type = hidpp_report->params[HIDPP_PARAM_DEVICE_INFO] & 873 875 HIDPP_DEVICE_TYPE_MASK; ··· 884 880 break; 885 881 case REPORT_TYPE_MOUSE: 886 882 workitem->reports_supported |= STD_MOUSE | HIDPP; 883 + if (djrcv_dev->type == recvr_type_mouse_only) 884 + workitem->reports_supported |= MULTIMEDIA; 887 885 break; 888 886 } 889 887 } ··· 929 923 case 0x01: 930 924 device_type = "Bluetooth"; /* Bluetooth connect packet contents is the same as (e)QUAD */ 932 - logi_hidpp_dev_conn_notif_equad(hidpp_report, &workitem); 926 + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 933 927 if (!(hidpp_report->params[HIDPP_PARAM_DEVICE_INFO] & 934 928 HIDPP_MANUFACTURER_MASK)) { 935 929 hid_info(hdev, "Non Logitech device connected on slot %d\n", ··· 943 937 break; 944 938 case 0x03: 945 939 device_type = "QUAD or eQUAD"; 946 - logi_hidpp_dev_conn_notif_equad(hidpp_report, &workitem); 940 + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 947 941 break; 948 942 case 0x04: 949 943 device_type = "eQUAD step 4 DJ"; 950 - logi_hidpp_dev_conn_notif_equad(hidpp_report, &workitem); 944 + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 951 945 break; 952 946 case 0x05: 953 947 device_type = "DFU Lite"; 954 948 break; 955 949 case 0x06: 956 950 device_type = "eQUAD step 4 Lite"; 957 - logi_hidpp_dev_conn_notif_equad(hidpp_report, &workitem); 951 + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 958 952 break; 959 953 case 0x07: 960 954 device_type = "eQUAD step 4 Gaming"; ··· 964 958 break; 965 959 case 0x0a: 966 960 device_type = "eQUAD nano Lite"; 967 - logi_hidpp_dev_conn_notif_equad(hidpp_report, &workitem); 961 + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 968 962 break; 969 963 case 0x0c: 970 964 device_type = "eQUAD Lightspeed"; 971 - logi_hidpp_dev_conn_notif_equad(hidpp_report, &workitem); 965 + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 972 966 workitem.reports_supported |= STD_KEYBOARD; 973 967 break; 974 968 } ··· 1319 1313 if (djdev->reports_supported & STD_MOUSE) { 1320 1314 dbg_hid("%s: sending a mouse descriptor, reports_supported: %llx\n", 1321 1315 __func__, djdev->reports_supported); 1322 - if (djdev->dj_receiver_dev->type == recvr_type_gaming_hidpp) 1316 + if (djdev->dj_receiver_dev->type == recvr_type_gaming_hidpp || 1317 + djdev->dj_receiver_dev->type == recvr_type_mouse_only) 1323 1318 rdcat(rdesc, &rsize, mse_high_res_descriptor, 1324 1319 sizeof(mse_high_res_descriptor)); 1325 1320 else if (djdev->dj_receiver_dev->type == recvr_type_27mhz) ··· 1563 1556 data[0] = data[1]; 1564 1557 data[1] = 0; 1565 1558 } 1566 - /* The 27 MHz mouse-only receiver sends unnumbered mouse data */ 1559 + /* 1560 + * Mouse-only receivers send unnumbered mouse data. The 27 MHz 1561 + * receiver uses 6 byte packets, the nano receiver 8 bytes. 1562 + */ 1567 1563 if (djrcv_dev->unnumbered_application == HID_GD_MOUSE && 1568 - size == 6) { 1569 - u8 mouse_report[7]; 1564 + size <= 8) { 1565 + u8 mouse_report[9]; 1570 1566 1571 1567 /* Prepend report id */ 1572 1568 mouse_report[0] = REPORT_TYPE_MOUSE; 1573 - memcpy(mouse_report + 1, data, 6); 1574 - logi_dj_recv_forward_input_report(hdev, mouse_report, 7); 1569 + memcpy(mouse_report + 1, data, size); 1570 + logi_dj_recv_forward_input_report(hdev, mouse_report, 1571 + size + 1); 1575 1572 } 1576 1573 1577 1574 return false; ··· 1646 1635 case recvr_type_dj: no_dj_interfaces = 3; break; 1647 1636 case recvr_type_hidpp: no_dj_interfaces = 2; break; 1648 1637 case recvr_type_gaming_hidpp: no_dj_interfaces = 3; break; 1638 + case recvr_type_mouse_only: no_dj_interfaces = 2; break; 1649 1639 case recvr_type_27mhz: no_dj_interfaces = 2; break; 1650 1640 case recvr_type_bluetooth: no_dj_interfaces = 2; break; 1651 1641 } ··· 1820 1808 {HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1821 1809 USB_DEVICE_ID_LOGITECH_UNIFYING_RECEIVER_2), 1822 1810 .driver_data = recvr_type_dj}, 1823 - { /* Logitech Nano (non DJ) receiver */ 1811 + { /* Logitech Nano mouse only receiver */ 1824 1812 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1825 1813 USB_DEVICE_ID_LOGITECH_NANO_RECEIVER), 1826 - .driver_data = recvr_type_hidpp}, 1814 + .driver_data = recvr_type_mouse_only}, 1827 1815 { /* Logitech Nano (non DJ) receiver */ 1828 1816 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1829 1817 USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_2), ··· 1847 1835 { /* Logitech MX5000 HID++ / bluetooth receiver mouse intf. */ 1848 1836 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1849 1837 0xc70a), 1838 + .driver_data = recvr_type_bluetooth}, 1839 + { /* Logitech MX5500 HID++ / bluetooth receiver keyboard intf. */ 1840 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1841 + 0xc71b), 1842 + .driver_data = recvr_type_bluetooth}, 1843 + { /* Logitech MX5500 HID++ / bluetooth receiver mouse intf. */ 1844 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1845 + 0xc71c), 1850 1846 .driver_data = recvr_type_bluetooth}, 1851 1847 {} 1852 1848 };
+9
drivers/hid/hid-logitech-hidpp.c
··· 3728 3728 { /* Keyboard MX5000 (Bluetooth-receiver in HID proxy mode) */ 3729 3729 LDJ_DEVICE(0xb305), 3730 3730 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 3731 + { /* Keyboard MX5500 (Bluetooth-receiver in HID proxy mode) */ 3732 + LDJ_DEVICE(0xb30b), 3733 + .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 3731 3734 3732 3735 { LDJ_DEVICE(HID_ANY_ID) }, 3733 3736 ··· 3743 3740 { /* Keyboard MX3200 (Y-RAV80) */ 3744 3741 L27MHZ_DEVICE(0x005c), 3745 3742 .driver_data = HIDPP_QUIRK_KBD_ZOOM_WHEEL }, 3743 + { /* S510 Media Remote */ 3744 + L27MHZ_DEVICE(0x00fe), 3745 + .driver_data = HIDPP_QUIRK_KBD_SCROLL_WHEEL }, 3746 3746 3747 3747 { L27MHZ_DEVICE(HID_ANY_ID) }, 3748 3748 ··· 3761 3755 3762 3756 { /* MX5000 keyboard over Bluetooth */ 3763 3757 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb305), 3758 + .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 3759 + { /* MX5500 keyboard over Bluetooth */ 3760 + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb30b), 3764 3761 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 3765 3762 {} 3766 3763 };
+7
drivers/hid/hid-multitouch.c
··· 637 637 if (*target != DEFAULT_TRUE && 638 638 *target != DEFAULT_FALSE && 639 639 *target != DEFAULT_ZERO) { 640 + if (usage->contactid == DEFAULT_ZERO || 641 + usage->x == DEFAULT_ZERO || 642 + usage->y == DEFAULT_ZERO) { 643 + hid_dbg(hdev, 644 + "ignoring duplicate usage on incomplete"); 645 + return; 646 + } 640 647 usage = mt_allocate_usage(hdev, application); 641 648 if (!usage) 642 649 return;
+14 -1
drivers/hid/hid-rmi.c
··· 35 35 /* device flags */ 36 36 #define RMI_DEVICE BIT(0) 37 37 #define RMI_DEVICE_HAS_PHYS_BUTTONS BIT(1) 38 + #define RMI_DEVICE_OUTPUT_SET_REPORT BIT(2) 38 39 39 40 /* 40 41 * retrieve the ctrl registers ··· 164 163 165 164 static int rmi_write_report(struct hid_device *hdev, u8 *report, int len) 166 165 { 166 + struct rmi_data *data = hid_get_drvdata(hdev); 167 167 int ret; 168 168 169 - ret = hid_hw_output_report(hdev, (void *)report, len); 169 + if (data->device_flags & RMI_DEVICE_OUTPUT_SET_REPORT) { 170 + /* 171 + * Talk to device by using SET_REPORT requests instead. 172 + */ 173 + ret = hid_hw_raw_request(hdev, report[0], report, 174 + len, HID_OUTPUT_REPORT, HID_REQ_SET_REPORT); 175 + } else { 176 + ret = hid_hw_output_report(hdev, (void *)report, len); 177 + } 178 + 170 179 if (ret < 0) { 171 180 dev_err(&hdev->dev, "failed to write hid report (%d)\n", ret); 172 181 return ret; ··· 758 747 .driver_data = RMI_DEVICE_HAS_PHYS_BUTTONS }, 759 748 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_COVER) }, 760 749 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_REZEL) }, 750 + { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5), 751 + .driver_data = RMI_DEVICE_OUTPUT_SET_REPORT }, 761 752 { HID_DEVICE(HID_BUS_ANY, HID_GROUP_RMI, HID_ANY_ID, HID_ANY_ID) }, 762 753 { } 763 754 };
+8
drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
··· 354 354 }, 355 355 .driver_data = (void *)&sipodev_desc 356 356 }, 357 + { 358 + .ident = "iBall Aer3", 359 + .matches = { 360 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "iBall"), 361 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Aer3"), 362 + }, 363 + .driver_data = (void *)&sipodev_desc 364 + }, 357 365 { } /* Terminate list */ 358 366 }; 359 367
+50 -23
drivers/hid/wacom_wac.c
··· 1232 1232 /* Add back in missing bits of ID for non-USI pens */ 1233 1233 wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF; 1234 1234 } 1235 - wacom->tool[0] = wacom_intuos_get_tool_type(wacom_intuos_id_mangle(wacom->id[0])); 1236 1235 1237 1236 for (i = 0; i < pen_frames; i++) { 1238 1237 unsigned char *frame = &data[i*pen_frame_len + 1]; 1239 1238 bool valid = frame[0] & 0x80; 1240 1239 bool prox = frame[0] & 0x40; 1241 1240 bool range = frame[0] & 0x20; 1241 + bool invert = frame[0] & 0x10; 1242 1242 1243 1243 if (!valid) 1244 1244 continue; ··· 1247 1247 wacom->shared->stylus_in_proximity = false; 1248 1248 wacom_exit_report(wacom); 1249 1249 input_sync(pen_input); 1250 + 1251 + wacom->tool[0] = 0; 1252 + wacom->id[0] = 0; 1253 + wacom->serial[0] = 0; 1250 1254 return; 1251 1255 } 1256 + 1252 1257 if (range) { 1258 + if (!wacom->tool[0]) { /* first in range */ 1259 + /* Going into range select tool */ 1260 + if (invert) 1261 + wacom->tool[0] = BTN_TOOL_RUBBER; 1262 + else if (wacom->id[0]) 1263 + wacom->tool[0] = wacom_intuos_get_tool_type(wacom->id[0]); 1264 + else 1265 + wacom->tool[0] = BTN_TOOL_PEN; 1266 + } 1267 + 1253 1268 input_report_abs(pen_input, ABS_X, get_unaligned_le16(&frame[1])); 1254 1269 input_report_abs(pen_input, ABS_Y, get_unaligned_le16(&frame[3])); 1255 1270 ··· 1286 1271 get_unaligned_le16(&frame[11])); 1287 1272 } 1288 1273 } 1289 - input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5])); 1290 - if (wacom->features.type == INTUOSP2_BT) { 1291 - input_report_abs(pen_input, ABS_DISTANCE, 1292 - range ? frame[13] : wacom->features.distance_max); 1293 - } else { 1294 - input_report_abs(pen_input, ABS_DISTANCE, 1295 - range ? frame[7] : wacom->features.distance_max); 1274 + 1275 + if (wacom->tool[0]) { 1276 + input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5])); 1277 + if (wacom->features.type == INTUOSP2_BT) { 1278 + input_report_abs(pen_input, ABS_DISTANCE, 1279 + range ? frame[13] : wacom->features.distance_max); 1280 + } else { 1281 + input_report_abs(pen_input, ABS_DISTANCE, 1282 + range ? frame[7] : wacom->features.distance_max); 1283 + } 1284 + 1285 + input_report_key(pen_input, BTN_TOUCH, frame[0] & 0x09); 1286 + input_report_key(pen_input, BTN_STYLUS, frame[0] & 0x02); 1287 + input_report_key(pen_input, BTN_STYLUS2, frame[0] & 0x04); 1288 + 1289 + input_report_key(pen_input, wacom->tool[0], prox); 1290 + input_event(pen_input, EV_MSC, MSC_SERIAL, wacom->serial[0]); 1291 + input_report_abs(pen_input, ABS_MISC, 1292 + wacom_intuos_id_mangle(wacom->id[0])); /* report tool id */ 1296 1293 } 1297 - 1298 - input_report_key(pen_input, BTN_TOUCH, frame[0] & 0x01); 1299 - input_report_key(pen_input, BTN_STYLUS, frame[0] & 0x02); 1300 - input_report_key(pen_input, BTN_STYLUS2, frame[0] & 0x04); 1301 - 1302 - input_report_key(pen_input, wacom->tool[0], prox); 1303 - input_event(pen_input, EV_MSC, MSC_SERIAL, wacom->serial[0]); 1304 - input_report_abs(pen_input, ABS_MISC, 1305 - wacom_intuos_id_mangle(wacom->id[0])); /* report tool id */ 1306 1294 1307 1295 wacom->shared->stylus_in_proximity = prox; ··· 1367 1349 if (wacom->num_contacts_left <= 0) { 1368 1350 wacom->num_contacts_left = 0; 1369 1351 wacom->shared->touch_down = wacom_wac_finger_count_touches(wacom); 1352 + input_sync(touch_input); 1370 1353 } 1371 1354 } 1372 1355 1373 - input_report_switch(touch_input, SW_MUTE_DEVICE, !(data[281] >> 7)); 1374 - input_sync(touch_input); 1356 + if (wacom->num_contacts_left == 0) { 1357 + // Be careful that we don't accidentally call input_sync with 1358 + // only a partial set of fingers of processed 1359 + input_report_switch(touch_input, SW_MUTE_DEVICE, !(data[281] >> 7)); 1360 + input_sync(touch_input); 1361 + } 1362 + 1375 1363 } 1376 1364 1377 1365 static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom) ··· 1385 1361 struct input_dev *pad_input = wacom->pad_input; 1386 1362 unsigned char *data = wacom->data; 1388 - int buttons = (data[282] << 1) | ((data[281] >> 6) & 0x01); 1364 + int buttons = data[282] | ((data[281] & 0x40) << 2); 1389 1365 int ring = data[285] & 0x7F; 1390 1366 bool ringstatus = data[285] & 0x80; 1391 1367 bool prox = buttons || ringstatus; ··· 3834 3810 static bool wacom_is_led_toggled(struct wacom *wacom, int button_count, 3835 3811 int mask, int group) 3836 3812 { 3837 - int button_per_group; 3813 + int group_button; 3838 3814 3839 3815 /* 3840 3816 * 21UX2 has LED group 1 to the left and LED group 0 ··· 3844 3820 if (wacom->wacom_wac.features.type == WACOM_21UX2) 3845 3821 group = 1 - group; 3846 3822 3847 - button_per_group = button_count/wacom->led.count; 3823 + group_button = group * (button_count/wacom->led.count); 3848 3824 3849 - return mask & (1 << (group * button_per_group)); 3825 + if (wacom->wacom_wac.features.type == INTUOSP2_BT) 3826 + group_button = 8; 3827 + 3828 + return mask & (1 << group_button); 3850 3829 } 3851 3830 3852 3831 static void wacom_update_led(struct wacom *wacom, int button_count, int mask,
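In the wacom pad hunk, the nine-button state is now assembled as `data[282]` carrying buttons 0-7 directly, with bit 6 of `data[281]` promoted to bit 8; the old expression instead shifted the whole byte up by one and put the extra bit at position 0, numbering every button differently. A sketch of just that bit packing (the byte values in the test are stand-ins, not captured device traffic):

```c
#include <assert.h>
#include <stdint.h>

/*
 * New mapping from the patch: b282 supplies buttons 0-7 unshifted, and
 * bit 6 of b281 supplies button 8 (0x40 << 2 == 0x100).
 */
static int pad_buttons(uint8_t b281, uint8_t b282)
{
	return b282 | ((b281 & 0x40) << 2);
}
```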
+1
drivers/i2c/busses/i2c-acorn.c
··· 81 81 82 82 static struct i2c_adapter ioc_ops = { 83 83 .nr = 0, 84 + .name = "ioc", 84 85 .algo_data = &ioc_data, 85 86 }; 86 87
+1 -2
drivers/i2c/busses/i2c-pca-platform.c
··· 21 21 #include <linux/platform_device.h> 22 22 #include <linux/i2c-algo-pca.h> 23 23 #include <linux/platform_data/i2c-pca-platform.h> 24 - #include <linux/gpio.h> 25 24 #include <linux/gpio/consumer.h> 26 25 #include <linux/io.h> 27 26 #include <linux/of.h> ··· 172 173 i2c->adap.dev.parent = &pdev->dev; 173 174 i2c->adap.dev.of_node = np; 174 175 175 - i2c->gpio = devm_gpiod_get_optional(&pdev->dev, "reset-gpios", GPIOD_OUT_LOW); 176 + i2c->gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW); 176 177 if (IS_ERR(i2c->gpio)) 177 178 return PTR_ERR(i2c->gpio); 178 179
+12 -3
drivers/iommu/arm-smmu.c
··· 47 47 48 48 #include "arm-smmu-regs.h" 49 49 50 + /* 51 + * Apparently, some Qualcomm arm64 platforms which appear to expose their SMMU 52 + * global register space are still, in fact, using a hypervisor to mediate it 53 + * by trapping and emulating register accesses. Sadly, some deployed versions 54 + * of said trapping code have bugs wherein they go horribly wrong for stores 55 + * using r31 (i.e. XZR/WZR) as the source register. 56 + */ 57 + #define QCOM_DUMMY_VAL -1 58 + 50 59 #define ARM_MMU500_ACTLR_CPRE (1 << 1) 51 60 52 61 #define ARM_MMU500_ACR_CACHE_LOCK (1 << 26) ··· 420 411 { 421 412 unsigned int spin_cnt, delay; 422 413 423 - writel_relaxed(0, sync); 414 + writel_relaxed(QCOM_DUMMY_VAL, sync); 424 415 for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) { 425 416 for (spin_cnt = TLB_SPIN_COUNT; spin_cnt > 0; spin_cnt--) { 426 417 if (!(readl_relaxed(status) & sTLBGSTATUS_GSACTIVE)) ··· 1760 1751 } 1761 1752 1762 1753 /* Invalidate the TLB, just in case */ 1763 - writel_relaxed(0, gr0_base + ARM_SMMU_GR0_TLBIALLH); 1764 - writel_relaxed(0, gr0_base + ARM_SMMU_GR0_TLBIALLNSNH); 1754 + writel_relaxed(QCOM_DUMMY_VAL, gr0_base + ARM_SMMU_GR0_TLBIALLH); 1755 + writel_relaxed(QCOM_DUMMY_VAL, gr0_base + ARM_SMMU_GR0_TLBIALLNSNH); 1765 1756 1766 1757 reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0); 1767 1758
+4 -3
drivers/iommu/intel-iommu.c
··· 2504 2504 } 2505 2505 } 2506 2506 2507 + spin_lock(&iommu->lock); 2507 2508 spin_lock_irqsave(&device_domain_lock, flags); 2508 2509 if (dev) 2509 2510 found = find_domain(dev); ··· 2520 2519 2521 2520 if (found) { 2522 2521 spin_unlock_irqrestore(&device_domain_lock, flags); 2522 + spin_unlock(&iommu->lock); 2523 2523 free_devinfo_mem(info); 2524 2524 /* Caller must free the original domain */ 2525 2525 return found; 2526 2526 } 2527 2527 2528 - spin_lock(&iommu->lock); 2529 2528 ret = domain_attach_iommu(domain, iommu); 2530 - spin_unlock(&iommu->lock); 2531 - 2532 2529 if (ret) { 2533 2530 spin_unlock_irqrestore(&device_domain_lock, flags); 2531 + spin_unlock(&iommu->lock); 2534 2532 free_devinfo_mem(info); 2535 2533 return NULL; 2536 2534 } ··· 2539 2539 if (dev) 2540 2540 dev->archdata.iommu = info; 2541 2541 spin_unlock_irqrestore(&device_domain_lock, flags); 2542 + spin_unlock(&iommu->lock); 2542 2543 2543 2544 /* PASID table is mandatory for a PCI device in scalable mode. */ 2544 2545 if (dev && dev_is_pci(dev) && sm_supported(iommu)) {
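The intel-iommu hunk widens `iommu->lock` to cover the whole device-info update and keeps one fixed order throughout: `iommu->lock` outer, `device_domain_lock` nested inside, released in reverse on every exit path (success, "already present", and error alike) — the discipline that rules out ABBA deadlocks. A thread-free sketch of that shape, with a toy lock model standing in for spinlocks (all names illustrative):

```c
#include <assert.h>

/* Toy lock model: records acquisition order so nesting can be checked
 * without real threads. */
static int next_rank;
struct lock { int held; int rank; };

static void lock_acquire(struct lock *l) { l->held = 1; l->rank = ++next_rank; }
static void lock_release(struct lock *l) { l->held = 0; }

static struct lock iommu_lock, device_domain_lock;
static int attached;

static int domain_add_dev_info(int fail)
{
	int ret = 0;

	lock_acquire(&iommu_lock);		/* coarse lock first... */
	lock_acquire(&device_domain_lock);	/* ...finer lock nests inside */

	/* The inner lock really was taken after the outer one. */
	assert(device_domain_lock.rank > iommu_lock.rank);

	if (fail)
		ret = -1;
	else
		attached++;

	lock_release(&device_domain_lock);	/* inner first */
	lock_release(&iommu_lock);		/* outer last, on every path */
	return ret;
}
```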
+1 -1
drivers/iommu/intel-pasid.c
··· 389 389 */ 390 390 static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value) 391 391 { 392 - pasid_set_bits(&pe->val[1], 1 << 23, value); 392 + pasid_set_bits(&pe->val[1], 1 << 23, value << 23); 393 393 } 394 394 395 395 /*
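The intel-pasid one-liner fixes a classic mask/value mismatch: a set-bits helper clears `mask` and ORs in `value`, so the value must already be shifted into the mask's bit position (`value << 23`, not bare `value`). A minimal model of such a helper, exercising both the buggy and the fixed call — this is a sketch of the pattern, not the kernel's `pasid_set_bits()`:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Read-modify-write: clear the bits in `mask`, OR in `value`. Because
 * `value` is masked, an unshifted value silently sets nothing -- the
 * bug the hunk above repairs.
 */
static void set_bits(uint64_t *dst, uint64_t mask, uint64_t value)
{
	*dst = (*dst & ~mask) | (value & mask);
}
```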
+1 -1
drivers/iommu/iommu.c
··· 329 329 type = "unmanaged\n"; 330 330 break; 331 331 case IOMMU_DOMAIN_DMA: 332 - type = "DMA"; 332 + type = "DMA\n"; 333 333 break; 334 334 } 335 335 }
+13 -3
drivers/md/bcache/bset.c
··· 887 887 struct bset *i = bset_tree_last(b)->data; 888 888 struct bkey *m, *prev = NULL; 889 889 struct btree_iter iter; 890 + struct bkey preceding_key_on_stack = ZERO_KEY; 891 + struct bkey *preceding_key_p = &preceding_key_on_stack; 890 892 891 893 BUG_ON(b->ops->is_extents && !KEY_SIZE(k)); 892 894 893 - m = bch_btree_iter_init(b, &iter, b->ops->is_extents 894 - ? PRECEDING_KEY(&START_KEY(k)) 895 - : PRECEDING_KEY(k)); 895 + /* 896 + * If k has preceding key, preceding_key_p will be set to address 897 + * of k's preceding key; otherwise preceding_key_p will be set 898 + * to NULL inside preceding_key(). 899 + */ 900 + if (b->ops->is_extents) 901 + preceding_key(&START_KEY(k), &preceding_key_p); 902 + else 903 + preceding_key(k, &preceding_key_p); 904 + 905 + m = bch_btree_iter_init(b, &iter, preceding_key_p); 896 906 897 907 if (b->ops->insert_fixup(b, k, &iter, replace_key)) 898 908 return status;
+20 -14
drivers/md/bcache/bset.h
··· 434 434 return __bch_cut_back(where, k); 435 435 } 436 436 437 - #define PRECEDING_KEY(_k) \ 438 - ({ \ 439 - struct bkey *_ret = NULL; \ 440 - \ 441 - if (KEY_INODE(_k) || KEY_OFFSET(_k)) { \ 442 - _ret = &KEY(KEY_INODE(_k), KEY_OFFSET(_k), 0); \ 443 - \ 444 - if (!_ret->low) \ 445 - _ret->high--; \ 446 - _ret->low--; \ 447 - } \ 448 - \ 449 - _ret; \ 450 - }) 437 + /* 438 + * Pointer '*preceding_key_p' points to a memory object to store preceding 439 + * key of k. If the preceding key does not exist, set '*preceding_key_p' to 440 + * NULL. So the caller of preceding_key() needs to take care of memory 441 + * which '*preceding_key_p' pointed to before calling preceding_key(). 442 + * Currently the only caller of preceding_key() is bch_btree_insert_key(), 443 + * and it points to an on-stack variable, so the memory release is handled 444 + * by stackframe itself. 445 + */ 446 + static inline void preceding_key(struct bkey *k, struct bkey **preceding_key_p) 447 + { 448 + if (KEY_INODE(k) || KEY_OFFSET(k)) { 449 + (**preceding_key_p) = KEY(KEY_INODE(k), KEY_OFFSET(k), 0); 450 + if (!(*preceding_key_p)->low) 451 + (*preceding_key_p)->high--; 452 + (*preceding_key_p)->low--; 453 + } else { 454 + (*preceding_key_p) = NULL; 455 + } 456 + } 451 457 452 458 static inline bool bch_ptr_invalid(struct btree_keys *b, const struct bkey *k) 453 459 {
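The replaced `PRECEDING_KEY()` macro handed back the address of a temporary built inside the expression, which is why the bcache patch moves to caller-owned storage: the result is written through `*preceding_key_p`, and the pointer is NULLed when no predecessor exists. A simplified userspace rendition of that pattern — keys reduced to two 64-bit words; the real helper also zeroes the size field via `KEY()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified bkey: a 128-bit value split into high/low halves. */
struct bkey { uint64_t high, low; };

/*
 * Caller-owned-storage pattern from the patch: *preceding_key_p starts
 * out pointing at memory the caller provides (typically on its stack);
 * we either fill that memory with k - 1 across the 128-bit value, or
 * set the pointer to NULL when k is already zero. Returning the
 * address of a local temporary instead would dangle.
 */
static void preceding_key(const struct bkey *k, struct bkey **preceding_key_p)
{
	if (k->high || k->low) {
		**preceding_key_p = *k;
		if (!(*preceding_key_p)->low)
			(*preceding_key_p)->high--;
		(*preceding_key_p)->low--;
	} else {
		*preceding_key_p = NULL;	/* nothing precedes zero */
	}
}
```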
+6 -1
drivers/md/bcache/sysfs.c
··· 431 431 bch_writeback_queue(dc); 432 432 } 433 433 434 + /* 435 + * Only set BCACHE_DEV_WB_RUNNING when cached device attached to 436 + * a cache set, otherwise it doesn't make sense. 437 + */ 434 438 if (attr == &sysfs_writeback_percent) 435 - if (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) 439 + if ((dc->disk.c != NULL) && 440 + (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))) 436 441 schedule_delayed_work(&dc->writeback_rate_update, 437 442 dc->writeback_rate_update_seconds * HZ); 438 443
+1 -1
drivers/media/dvb-core/dvb_frontend.c
··· 905 905 "DVB: adapter %i frontend %u frequency limits undefined - fix the driver\n", 906 906 fe->dvb->num, fe->id); 907 907 908 - dprintk("frequency interval: tuner: %u...%u, frontend: %u...%u", 908 + dev_dbg(fe->dvb->device, "frequency interval: tuner: %u...%u, frontend: %u...%u", 909 909 tuner_min, tuner_max, frontend_min, frontend_max); 910 910 911 911 /* If the standard is for satellite, convert frequencies to kHz */
+2 -2
drivers/media/platform/qcom/venus/hfi_helper.h
··· 560 560 561 561 struct hfi_capabilities { 562 562 u32 num_capabilities; 563 - struct hfi_capability *data; 563 + struct hfi_capability data[]; 564 564 }; 565 565 566 566 #define HFI_DEBUG_MSG_LOW 0x01 ··· 717 717 718 718 struct hfi_profile_level_supported { 719 719 u32 profile_count; 720 - struct hfi_profile_level *profile_level; 720 + struct hfi_profile_level profile_level[]; 721 721 }; 722 722 723 723 struct hfi_quality_vs_speed {
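The venus hunk swaps pointer members for C99 flexible array members: these firmware messages are a count followed immediately by the entries in the same buffer, which a flexible array describes directly, whereas the old pointer member only reinterpreted 4 or 8 bytes of whatever the firmware put there. A minimal userspace sketch of the layout (field names simplified from the header):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct hfi_capability { uint32_t type, min, max, step; };

struct hfi_capabilities {
	uint32_t num_capabilities;
	struct hfi_capability data[];	/* entries follow the header inline */
};

/* Allocate a message with n inline entries, zeroed. */
static struct hfi_capabilities *caps_alloc(uint32_t n)
{
	size_t sz = sizeof(struct hfi_capabilities) +
		    n * sizeof(struct hfi_capability);
	struct hfi_capabilities *c = malloc(sz);

	if (c) {
		memset(c, 0, sz);
		c->num_capabilities = n;
	}
	return c;
}
```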
+13 -4
drivers/nvdimm/pmem.c
··· 303 303 NULL, 304 304 }; 305 305 306 - static void pmem_release_queue(void *q) 306 + static void __pmem_release_queue(struct percpu_ref *ref) 307 307 { 308 + struct request_queue *q; 309 + 310 + q = container_of(ref, typeof(*q), q_usage_counter); 308 311 blk_cleanup_queue(q); 312 + } 313 + 314 + static void pmem_release_queue(void *ref) 315 + { 316 + __pmem_release_queue(ref); 309 317 } 310 318 311 319 static void pmem_freeze_queue(struct percpu_ref *ref) ··· 407 399 if (!q) 408 400 return -ENOMEM; 409 401 410 - if (devm_add_action_or_reset(dev, pmem_release_queue, q)) 411 - return -ENOMEM; 412 - 413 402 pmem->pfn_flags = PFN_DEV; 414 403 pmem->pgmap.ref = &q->q_usage_counter; 415 404 pmem->pgmap.kill = pmem_freeze_queue; 405 + pmem->pgmap.cleanup = __pmem_release_queue; 416 406 if (is_nd_pfn(dev)) { 417 407 if (setup_pagemap_fsdax(dev, &pmem->pgmap)) 418 408 return -ENOMEM; ··· 431 425 pmem->pfn_flags |= PFN_MAP; 432 426 memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res)); 433 427 } else { 428 + if (devm_add_action_or_reset(dev, pmem_release_queue, 429 + &q->q_usage_counter)) 430 + return -ENOMEM; 434 431 addr = devm_memremap(dev, pmem->phys_addr, 435 432 pmem->size, ARCH_MEMREMAP_PMEM); 436 433 memcpy(&bb_res, &nsio->res, sizeof(bb_res));
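In the pmem hunk, `__pmem_release_queue()` recovers the `request_queue` from its embedded `q_usage_counter` with `container_of()` — the standard kernel idiom for walking from a member back to its enclosing object. A userspace rendition of the idiom, with trivial stand-in types:

```c
#include <assert.h>
#include <stddef.h>

/* container_of(): subtract the member's offset from the member's
 * address to get the enclosing object. (Kernel version in
 * include/linux/kernel.h; this is the plain-C equivalent.) */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct percpu_ref { int dummy; };

struct request_queue {
	int id;
	struct percpu_ref q_usage_counter;	/* embedded, not a pointer */
};

static struct request_queue *queue_from_ref(struct percpu_ref *ref)
{
	return container_of(ref, struct request_queue, q_usage_counter);
}
```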
+73 -44
drivers/pci/p2pdma.c
··· 20 20 #include <linux/seq_buf.h> 21 21 22 22 struct pci_p2pdma { 23 - struct percpu_ref devmap_ref; 24 - struct completion devmap_ref_done; 25 23 struct gen_pool *pool; 26 24 bool p2pmem_published; 25 + }; 26 + 27 + struct p2pdma_pagemap { 28 + struct dev_pagemap pgmap; 29 + struct percpu_ref ref; 30 + struct completion ref_done; 27 31 }; 28 32 29 33 static ssize_t size_show(struct device *dev, struct device_attribute *attr, ··· 78 74 .name = "p2pmem", 79 75 }; 80 76 77 + static struct p2pdma_pagemap *to_p2p_pgmap(struct percpu_ref *ref) 78 + { 79 + return container_of(ref, struct p2pdma_pagemap, ref); 80 + } 81 + 81 82 static void pci_p2pdma_percpu_release(struct percpu_ref *ref) 82 83 { 83 - struct pci_p2pdma *p2p = 84 - container_of(ref, struct pci_p2pdma, devmap_ref); 84 + struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref); 85 85 86 - complete_all(&p2p->devmap_ref_done); 86 + complete(&p2p_pgmap->ref_done); 87 87 } 88 88 89 89 static void pci_p2pdma_percpu_kill(struct percpu_ref *ref) 90 90 { 91 - /* 92 - * pci_p2pdma_add_resource() may be called multiple times 93 - * by a driver and may register the percpu_kill devm action multiple 94 - * times. We only want the first action to actually kill the 95 - * percpu_ref. 96 - */ 97 - if (percpu_ref_is_dying(ref)) 98 - return; 99 - 100 91 percpu_ref_kill(ref); 92 + } 93 + 94 + static void pci_p2pdma_percpu_cleanup(struct percpu_ref *ref) 95 + { 96 + struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref); 97 + 98 + wait_for_completion(&p2p_pgmap->ref_done); 99 + percpu_ref_exit(&p2p_pgmap->ref); 101 100 } 102 101 103 102 static void pci_p2pdma_release(void *data) 104 103 { 105 104 struct pci_dev *pdev = data; 105 + struct pci_p2pdma *p2pdma = pdev->p2pdma; 106 106 107 - if (!pdev->p2pdma) 107 + if (!p2pdma) 108 108 return; 109 109 110 - wait_for_completion(&pdev->p2pdma->devmap_ref_done); 111 - percpu_ref_exit(&pdev->p2pdma->devmap_ref); 112 - 113 - gen_pool_destroy(pdev->p2pdma->pool); 114 - sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); 110 + /* Flush and disable pci_alloc_p2p_mem() */ 115 111 pdev->p2pdma = NULL; 112 + synchronize_rcu(); 113 + 114 + gen_pool_destroy(p2pdma->pool); 115 + sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); 116 116 } 117 117 118 118 static int pci_p2pdma_setup(struct pci_dev *pdev) ··· 131 123 p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev)); 132 124 if (!p2p->pool) 133 125 goto out; 134 - 135 - init_completion(&p2p->devmap_ref_done); 136 - error = percpu_ref_init(&p2p->devmap_ref, 137 - pci_p2pdma_percpu_release, 0, GFP_KERNEL); 138 - if (error) 139 - goto out_pool_destroy; 140 126 141 127 error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev); 142 128 if (error) ··· 165 163 int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, 166 164 u64 offset) 167 165 { 166 + struct p2pdma_pagemap *p2p_pgmap; 168 167 struct dev_pagemap *pgmap; 169 168 void *addr; 170 169 int error; ··· 188 185 return error; 189 186 } 190 187 191 - pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL); 192 - if (!pgmap) 188 + p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL); 189 + if (!p2p_pgmap) 193 190 return -ENOMEM; 191 + 192 + init_completion(&p2p_pgmap->ref_done); 193 + error = percpu_ref_init(&p2p_pgmap->ref, 194 + pci_p2pdma_percpu_release, 0, GFP_KERNEL); 195 + if (error) 196 + goto pgmap_free; 197 + 198 + pgmap = &p2p_pgmap->pgmap; 194 199 195 200 pgmap->res.start = pci_resource_start(pdev, bar) + offset; 196 201 pgmap->res.end = pgmap->res.start + size - 1; 197 202 pgmap->res.flags = pci_resource_flags(pdev, bar); 198 - pgmap->ref = &pdev->p2pdma->devmap_ref; 203 + pgmap->ref = &p2p_pgmap->ref; 199 204 pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; 200 205 pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - 201 206 pci_resource_start(pdev, bar); 202 207 pgmap->kill = pci_p2pdma_percpu_kill; 208 + pgmap->cleanup = pci_p2pdma_percpu_cleanup; 203 209 204 210 addr = devm_memremap_pages(&pdev->dev, pgmap); 205 211 if (IS_ERR(addr)) { ··· 216 204 goto pgmap_free; 217 205 } 218 206 219 - error = gen_pool_add_virt(pdev->p2pdma->pool, (unsigned long)addr, 207 + error = gen_pool_add_owner(pdev->p2pdma->pool, (unsigned long)addr, 220 208 pci_bus_address(pdev, bar) + offset, 221 - resource_size(&pgmap->res), dev_to_node(&pdev->dev)); 209 + resource_size(&pgmap->res), dev_to_node(&pdev->dev), 210 + &p2p_pgmap->ref); 222 211 if (error) 223 - goto pgmap_free; 212 + goto pages_free; 224 213 225 214 pci_info(pdev, "added peer-to-peer DMA memory %pR\n", 226 215 &pgmap->res); 227 216 228 217 return 0; 229 218 219 + pages_free: 220 + devm_memunmap_pages(&pdev->dev, pgmap); 230 221 pgmap_free: 231 - devm_kfree(&pdev->dev, pgmap); 222 + devm_kfree(&pdev->dev, p2p_pgmap); 232 223 return error; 233 224 } 234 225 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); ··· 600 585 */ 601 586 void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size) 602 587 { 603 - void *ret; 588 + void *ret = NULL; 589 + struct percpu_ref *ref; 604 590 591 + /* 592 + * Pairs with synchronize_rcu() in pci_p2pdma_release() to 593 + * ensure pdev->p2pdma is non-NULL for the duration of the 594 + * read-lock.
595 + */ 596 + rcu_read_lock(); 605 597 if (unlikely(!pdev->p2pdma)) 606 - return NULL; 598 + goto out; 607 599 608 - if (unlikely(!percpu_ref_tryget_live(&pdev->p2pdma->devmap_ref))) 609 - return NULL; 600 + ret = (void *)gen_pool_alloc_owner(pdev->p2pdma->pool, size, 601 + (void **) &ref); 602 + if (!ret) 603 + goto out; 610 604 611 - ret = (void *)gen_pool_alloc(pdev->p2pdma->pool, size); 612 - 613 - if (unlikely(!ret)) 614 - percpu_ref_put(&pdev->p2pdma->devmap_ref); 615 - 605 + if (unlikely(!percpu_ref_tryget_live(ref))) { 606 + gen_pool_free(pdev->p2pdma->pool, (unsigned long) ret, size); 607 + ret = NULL; 608 + goto out; 609 + } 610 + out: 611 + rcu_read_unlock(); 616 612 return ret; 617 613 } 618 614 EXPORT_SYMBOL_GPL(pci_alloc_p2pmem); ··· 636 610 */ 637 611 void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size) 638 612 { 639 - gen_pool_free(pdev->p2pdma->pool, (uintptr_t)addr, size); 640 - percpu_ref_put(&pdev->p2pdma->devmap_ref); 613 + struct percpu_ref *ref; 614 + 615 + gen_pool_free_owner(pdev->p2pdma->pool, (uintptr_t)addr, size, 616 + (void **) &ref); 617 + percpu_ref_put(ref); 641 618 } 642 619 EXPORT_SYMBOL_GPL(pci_free_p2pmem); 643 620
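The p2pdma rework above switches from `gen_pool_add_virt()` to `gen_pool_add_owner()`/`gen_pool_alloc_owner()` so that each chunk carries the percpu_ref of the pagemap backing it, and the allocator hands that owner back alongside the memory. A toy first-fit analogue of that owner-cookie idea (fixed-size and purely illustrative, not the genalloc API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy analogue of gen_pool_add_owner()/gen_pool_alloc_owner(): each
 * chunk records an opaque "owner" cookie (the per-pagemap percpu_ref
 * in the patch above), returned on allocation so the caller can take
 * a reference on exactly the resource backing that memory. */
#define TOY_CHUNKS 4

struct toy_chunk {
	void *addr;
	void *owner;
	int in_use;
	int valid;
};

struct toy_pool {
	struct toy_chunk chunks[TOY_CHUNKS];
};

static int toy_pool_add_owner(struct toy_pool *p, void *addr, void *owner)
{
	for (int i = 0; i < TOY_CHUNKS; i++) {
		if (!p->chunks[i].valid) {
			p->chunks[i] = (struct toy_chunk){ addr, owner, 0, 1 };
			return 0;
		}
	}
	return -1;	/* pool full */
}

static void *toy_pool_alloc_owner(struct toy_pool *p, void **owner)
{
	for (int i = 0; i < TOY_CHUNKS; i++) {
		if (p->chunks[i].valid && !p->chunks[i].in_use) {
			p->chunks[i].in_use = 1;
			*owner = p->chunks[i].owner;
			return p->chunks[i].addr;
		}
	}
	*owner = NULL;
	return NULL;
}
```

This is what lets `pci_alloc_p2pmem()` above call `percpu_ref_tryget_live()` on the specific pagemap that produced the allocation, instead of one global ref per device.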
+1
drivers/platform/mellanox/mlxreg-hotplug.c
··· 694 694 695 695 /* Clean interrupts setup. */ 696 696 mlxreg_hotplug_unset_irq(priv); 697 + devm_free_irq(&pdev->dev, priv->irq, priv); 697 698 698 699 return 0; 699 700 }
+8
drivers/platform/x86/asus-nb-wmi.c
··· 65 65 66 66 static struct quirk_entry quirk_asus_unknown = { 67 67 .wapf = 0, 68 + .wmi_backlight_set_devstate = true, 68 69 }; 69 70 70 71 static struct quirk_entry quirk_asus_q500a = { 71 72 .i8042_filter = asus_q500a_i8042_filter, 73 + .wmi_backlight_set_devstate = true, 72 74 }; 73 75 74 76 /* ··· 81 79 static struct quirk_entry quirk_asus_x55u = { 82 80 .wapf = 4, 83 81 .wmi_backlight_power = true, 82 + .wmi_backlight_set_devstate = true, 84 83 .no_display_toggle = true, 85 84 }; 86 85 87 86 static struct quirk_entry quirk_asus_wapf4 = { 88 87 .wapf = 4, 88 + .wmi_backlight_set_devstate = true, 89 89 }; 90 90 91 91 static struct quirk_entry quirk_asus_x200ca = { 92 92 .wapf = 2, 93 + .wmi_backlight_set_devstate = true, 93 94 }; 94 95 95 96 static struct quirk_entry quirk_asus_ux303ub = { 96 97 .wmi_backlight_native = true, 98 + .wmi_backlight_set_devstate = true, 97 99 }; 98 100 99 101 static struct quirk_entry quirk_asus_x550lb = { 102 + .wmi_backlight_set_devstate = true, 100 103 .xusb2pr = 0x01D9, 101 104 }; 102 105 103 106 static struct quirk_entry quirk_asus_forceals = { 107 + .wmi_backlight_set_devstate = true, 104 108 .wmi_force_als_set = true, 105 109 }; 106 110
+1 -1
drivers/platform/x86/asus-wmi.c
··· 2146 2146 err = asus_wmi_backlight_init(asus); 2147 2147 if (err && err != -ENODEV) 2148 2148 goto fail_backlight; 2149 - } else 2149 + } else if (asus->driver->quirks->wmi_backlight_set_devstate) 2150 2150 err = asus_wmi_set_devstate(ASUS_WMI_DEVID_BACKLIGHT, 2, NULL); 2151 2151 2152 2152 if (asus_wmi_has_fnlock_key(asus)) {
+1
drivers/platform/x86/asus-wmi.h
··· 31 31 bool store_backlight_power; 32 32 bool wmi_backlight_power; 33 33 bool wmi_backlight_native; 34 + bool wmi_backlight_set_devstate; 34 35 bool wmi_force_als_set; 35 36 int wapf; 36 37 /*
+14 -2
drivers/platform/x86/intel-vbtn.c
··· 76 76 struct platform_device *device = context; 77 77 struct intel_vbtn_priv *priv = dev_get_drvdata(&device->dev); 78 78 unsigned int val = !(event & 1); /* Even=press, Odd=release */ 79 - const struct key_entry *ke_rel; 79 + const struct key_entry *ke, *ke_rel; 80 80 bool autorelease; 81 81 82 82 if (priv->wakeup_mode) { 83 - if (sparse_keymap_entry_from_scancode(priv->input_dev, event)) { 83 + ke = sparse_keymap_entry_from_scancode(priv->input_dev, event); 84 + if (ke) { 84 85 pm_wakeup_hard_event(&device->dev); 86 + 87 + /* 88 + * Switch events like tablet mode will wake the device 89 + * and report the new switch position to the input 90 + * subsystem. 91 + */ 92 + if (ke->type == KE_SW) 93 + sparse_keymap_report_event(priv->input_dev, 94 + event, 95 + val, 96 + 0); 85 97 return; 86 98 } 87 99 goto out_unknown;
+1 -1
drivers/platform/x86/mlx-platform.c
··· 2032 2032 2033 2033 for (i = 0; i < ARRAY_SIZE(mlxplat_mux_data); i++) { 2034 2034 priv->pdev_mux[i] = platform_device_register_resndata( 2035 - &mlxplat_dev->dev, 2035 + &priv->pdev_i2c->dev, 2036 2036 "i2c-mux-reg", i, NULL, 2037 2037 0, &mlxplat_mux_data[i], 2038 2038 sizeof(mlxplat_mux_data[i]));
+42 -38
drivers/ras/cec.c
··· 2 2 #include <linux/mm.h> 3 3 #include <linux/gfp.h> 4 4 #include <linux/kernel.h> 5 + #include <linux/workqueue.h> 5 6 6 7 #include <asm/mce.h> 7 8 ··· 124 123 /* Amount of errors after which we offline */ 125 124 static unsigned int count_threshold = COUNT_MASK; 126 125 127 - /* 128 - * The timer "decays" element count each timer_interval which is 24hrs by 129 - * default. 130 - */ 131 - 132 - #define CEC_TIMER_DEFAULT_INTERVAL 24 * 60 * 60 /* 24 hrs */ 133 - #define CEC_TIMER_MIN_INTERVAL 1 * 60 * 60 /* 1h */ 134 - #define CEC_TIMER_MAX_INTERVAL 30 * 24 * 60 * 60 /* one month */ 135 - static struct timer_list cec_timer; 136 - static u64 timer_interval = CEC_TIMER_DEFAULT_INTERVAL; 126 + /* Each element "decays" each decay_interval which is 24hrs by default. */ 127 + #define CEC_DECAY_DEFAULT_INTERVAL 24 * 60 * 60 /* 24 hrs */ 128 + #define CEC_DECAY_MIN_INTERVAL 1 * 60 * 60 /* 1h */ 129 + #define CEC_DECAY_MAX_INTERVAL 30 * 24 * 60 * 60 /* one month */ 130 + static struct delayed_work cec_work; 131 + static u64 decay_interval = CEC_DECAY_DEFAULT_INTERVAL; 137 132 138 133 /* 139 134 * Decrement decay value. 
We're using DECAY_BITS bits to denote decay of an ··· 157 160 /* 158 161 * @interval in seconds 159 162 */ 160 - static void cec_mod_timer(struct timer_list *t, unsigned long interval) 163 + static void cec_mod_work(unsigned long interval) 161 164 { 162 165 unsigned long iv; 163 166 164 - iv = interval * HZ + jiffies; 165 - 166 - mod_timer(t, round_jiffies(iv)); 167 + iv = interval * HZ; 168 + mod_delayed_work(system_wq, &cec_work, round_jiffies(iv)); 167 169 } 168 170 169 - static void cec_timer_fn(struct timer_list *unused) 171 + static void cec_work_fn(struct work_struct *work) 170 172 { 173 + mutex_lock(&ce_mutex); 171 174 do_spring_cleaning(&ce_arr); 175 + mutex_unlock(&ce_mutex); 172 176 173 - cec_mod_timer(&cec_timer, timer_interval); 177 + cec_mod_work(decay_interval); 174 178 } 175 179 176 180 /* ··· 181 183 */ 182 184 static int __find_elem(struct ce_array *ca, u64 pfn, unsigned int *to) 183 185 { 186 + int min = 0, max = ca->n - 1; 184 187 u64 this_pfn; 185 - int min = 0, max = ca->n; 186 188 187 - while (min < max) { 188 - int tmp = (max + min) >> 1; 189 + while (min <= max) { 190 + int i = (min + max) >> 1; 189 191 190 - this_pfn = PFN(ca->array[tmp]); 192 + this_pfn = PFN(ca->array[i]); 191 193 192 194 if (this_pfn < pfn) 193 - min = tmp + 1; 195 + min = i + 1; 194 196 else if (this_pfn > pfn) 195 - max = tmp; 196 - else { 197 - min = tmp; 198 - break; 197 + max = i - 1; 198 + else if (this_pfn == pfn) { 199 + if (to) 200 + *to = i; 201 + 202 + return i; 199 203 } 200 204 } 201 205 206 + /* 207 + * When the loop terminates without finding @pfn, min has the index of 208 + * the element slot where the new @pfn should be inserted. The loop 209 + * terminates when min > max, which means the min index points to the 210 + * bigger element while the max index to the smaller element, in-between 211 + * which the new @pfn belongs to. 212 + * 213 + * For more details, see exercise 1, Section 6.2.1 in TAOCP, vol. 3. 
214 + */ 202 215 if (to) 203 216 *to = min; 204 - 205 - this_pfn = PFN(ca->array[min]); 206 - 207 - if (this_pfn == pfn) 208 - return min; 209 217 210 218 return -ENOKEY; 211 219 } ··· 378 374 { 379 375 *(u64 *)data = val; 380 376 381 - if (val < CEC_TIMER_MIN_INTERVAL) 377 + if (val < CEC_DECAY_MIN_INTERVAL) 382 378 return -EINVAL; 383 379 384 - if (val > CEC_TIMER_MAX_INTERVAL) 380 + if (val > CEC_DECAY_MAX_INTERVAL) 385 381 return -EINVAL; 386 382 387 - timer_interval = val; 383 + decay_interval = val; 388 384 389 - cec_mod_timer(&cec_timer, timer_interval); 385 + cec_mod_work(decay_interval); 390 386 return 0; 391 387 } 392 388 DEFINE_DEBUGFS_ATTRIBUTE(decay_interval_ops, u64_get, decay_interval_set, "%lld\n"); ··· 430 426 431 427 seq_printf(m, "Flags: 0x%x\n", ca->flags); 432 428 433 - seq_printf(m, "Timer interval: %lld seconds\n", timer_interval); 429 + seq_printf(m, "Decay interval: %lld seconds\n", decay_interval); 434 430 seq_printf(m, "Decays: %lld\n", ca->decays_done); 435 431 436 432 seq_printf(m, "Action threshold: %d\n", count_threshold); ··· 476 472 } 477 473 478 474 decay = debugfs_create_file("decay_interval", S_IRUSR | S_IWUSR, d, 479 - &timer_interval, &decay_interval_ops); 475 + &decay_interval, &decay_interval_ops); 480 476 if (!decay) { 481 477 pr_warn("Error creating decay_interval debugfs node!\n"); 482 478 goto err; ··· 512 508 if (create_debugfs_nodes()) 513 509 return; 514 510 515 - timer_setup(&cec_timer, cec_timer_fn, 0); 516 - cec_mod_timer(&cec_timer, CEC_TIMER_DEFAULT_INTERVAL); 511 + INIT_DELAYED_WORK(&cec_work, cec_work_fn); 512 + schedule_delayed_work(&cec_work, CEC_DECAY_DEFAULT_INTERVAL); 517 513 518 514 pr_info("Correctable Errors collector initialized.\n"); 519 515 }
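The cec.c hunk above fixes `__find_elem()` into a textbook binary search: loop while `min <= max`, return the index on a hit, and on a miss `min` is exactly the insertion slot (exercise 1, Section 6.2.1 in TAOCP, vol. 3). A standalone version of the corrected logic, simplified to return -1 instead of -ENOKEY:

```c
#include <assert.h>

/* Standalone version of the corrected __find_elem() search: binary
 * search over a sorted array; on a hit it returns the index, on a
 * miss it returns -1 and writes the insertion slot (the index of the
 * first larger element) to *to. When the loop exits, min > max and
 * min is exactly that slot. */
static int find_elem(const unsigned long long *arr, int n,
		     unsigned long long pfn, int *to)
{
	int min = 0, max = n - 1;

	while (min <= max) {
		int i = (min + max) >> 1;

		if (arr[i] < pfn)
			min = i + 1;
		else if (arr[i] > pfn)
			max = i - 1;
		else {
			if (to)
				*to = i;
			return i;
		}
	}

	if (to)
		*to = min;
	return -1;
}
```

The pre-fix code initialized `max = ca->n` and broke out with `min = tmp`, which could read one element past the array and report the wrong insertion point; the rewrite above avoids both.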
+3 -3
drivers/regulator/tps6507x-regulator.c
··· 403 403 /* common for all regulators */ 404 404 tps->mfd = tps6507x_dev; 405 405 406 - for (i = 0; i < TPS6507X_NUM_REGULATOR; i++, info++, init_data++) { 406 + for (i = 0; i < TPS6507X_NUM_REGULATOR; i++, info++) { 407 407 /* Register the regulators */ 408 408 tps->info[i] = info; 409 - if (init_data && init_data->driver_data) { 409 + if (init_data && init_data[i].driver_data) { 410 410 struct tps6507x_reg_platform_data *data = 411 - init_data->driver_data; 411 + init_data[i].driver_data; 412 412 info->defdcdc_default = data->defdcdc_default; 413 413 } 414 414
+6 -1
drivers/scsi/hpsa.c
··· 4940 4940 curr_sg->reserved[0] = 0; 4941 4941 curr_sg->reserved[1] = 0; 4942 4942 curr_sg->reserved[2] = 0; 4943 - curr_sg->chain_indicator = 0x80; 4943 + curr_sg->chain_indicator = IOACCEL2_CHAIN; 4944 4944 4945 4945 curr_sg = h->ioaccel2_cmd_sg_list[c->cmdindex]; 4946 4946 } ··· 4956 4956 curr_sg->chain_indicator = 0; 4957 4957 curr_sg++; 4958 4958 } 4959 + 4960 + /* 4961 + * Set the last s/g element bit 4962 + */ 4963 + (curr_sg - 1)->chain_indicator = IOACCEL2_LAST_SG; 4959 4964 4960 4965 switch (cmd->sc_data_direction) { 4961 4966 case DMA_TO_DEVICE:
+1
drivers/scsi/hpsa_cmd.h
··· 517 517 u8 reserved[3]; 518 518 u8 chain_indicator; 519 519 #define IOACCEL2_CHAIN 0x80 520 + #define IOACCEL2_LAST_SG 0x40 520 521 }; 521 522 522 523 /*
+1 -1
drivers/spi/spi-bitbang.c
··· 406 406 if (ret) 407 407 spi_master_put(master); 408 408 409 - return 0; 409 + return ret; 410 410 } 411 411 EXPORT_SYMBOL_GPL(spi_bitbang_start); 412 412
+1 -1
drivers/spi/spi-fsl-spi.c
··· 428 428 } 429 429 430 430 m->status = status; 431 - spi_finalize_current_message(master); 432 431 433 432 if (status || !cs_change) { 434 433 ndelay(nsecs); ··· 435 436 } 436 437 437 438 fsl_spi_setup_transfer(spi, NULL); 439 + spi_finalize_current_message(master); 438 440 return 0; 439 441 } 440 442
+8 -3
drivers/spi/spi.c
··· 1181 1181 if (msg->status && ctlr->handle_err) 1182 1182 ctlr->handle_err(ctlr, msg); 1183 1183 1184 - spi_finalize_current_message(ctlr); 1185 - 1186 1184 spi_res_release(ctlr, msg); 1185 + 1186 + spi_finalize_current_message(ctlr); 1187 1187 1188 1188 return ret; 1189 1189 } ··· 1307 1307 ret = ctlr->prepare_transfer_hardware(ctlr); 1308 1308 if (ret) { 1309 1309 dev_err(&ctlr->dev, 1310 - "failed to prepare transfer hardware\n"); 1310 + "failed to prepare transfer hardware: %d\n", 1311 + ret); 1311 1312 1312 1313 if (ctlr->auto_runtime_pm) 1313 1314 pm_runtime_put(ctlr->dev.parent); 1315 + 1316 + ctlr->cur_msg->status = ret; 1317 + spi_finalize_current_message(ctlr); 1318 + 1314 1319 mutex_unlock(&ctlr->io_mutex); 1315 1320 return; 1316 1321 }
+3
drivers/usb/core/quirks.c
··· 215 215 /* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */ 216 216 { USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME }, 217 217 218 + /* Logitech HD Webcam C270 */ 219 + { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME }, 220 + 218 221 /* Logitech HD Pro Webcams C920, C920-C, C925e and C930e */ 219 222 { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT }, 220 223 { USB_DEVICE(0x046d, 0x0841), .driver_info = USB_QUIRK_DELAY_INIT },
+16 -8
drivers/usb/dwc2/gadget.c
··· 835 835 * with corresponding information based on transfer data. 836 836 */ 837 837 static void dwc2_gadget_config_nonisoc_xfer_ddma(struct dwc2_hsotg_ep *hs_ep, 838 - struct usb_request *ureq, 839 - unsigned int offset, 838 + dma_addr_t dma_buff, 840 839 unsigned int len) 841 840 { 841 + struct usb_request *ureq = NULL; 842 842 struct dwc2_dma_desc *desc = hs_ep->desc_list; 843 843 struct scatterlist *sg; 844 844 int i; 845 845 u8 desc_count = 0; 846 846 847 + if (hs_ep->req) 848 + ureq = &hs_ep->req->req; 849 + 847 850 /* non-DMA sg buffer */ 848 - if (!ureq->num_sgs) { 851 + if (!ureq || !ureq->num_sgs) { 849 852 dwc2_gadget_fill_nonisoc_xfer_ddma_one(hs_ep, &desc, 850 - ureq->dma + offset, len, true); 853 + dma_buff, len, true); 851 854 return; 852 855 } 853 856 ··· 1138 1135 offset = ureq->actual; 1139 1136 1140 1137 /* Fill DDMA chain entries */ 1141 - dwc2_gadget_config_nonisoc_xfer_ddma(hs_ep, ureq, offset, 1138 + dwc2_gadget_config_nonisoc_xfer_ddma(hs_ep, ureq->dma + offset, 1142 1139 length); 1143 1140 1144 1141 /* write descriptor chain address to control register */ ··· 2040 2037 dev_dbg(hsotg->dev, "Receiving zero-length packet on ep%d\n", 2041 2038 index); 2042 2039 if (using_desc_dma(hsotg)) { 2040 + /* Not specific buffer needed for ep0 ZLP */ 2041 + dma_addr_t dma = hs_ep->desc_list_dma; 2042 + 2043 2043 if (!index) 2044 2044 dwc2_gadget_set_ep0_desc_chain(hsotg, hs_ep); 2045 2045 2046 - /* Not specific buffer needed for ep0 ZLP */ 2047 - dwc2_gadget_fill_nonisoc_xfer_ddma_one(hs_ep, &hs_ep->desc_list, 2048 - hs_ep->desc_list_dma, 0, true); 2046 + dwc2_gadget_config_nonisoc_xfer_ddma(hs_ep, dma, 0); 2049 2047 } else { 2050 2048 dwc2_writel(hsotg, DXEPTSIZ_MC(1) | DXEPTSIZ_PKTCNT(1) | 2051 2049 DXEPTSIZ_XFERSIZE(0), ··· 2420 2416 else if (hs_ep->isochronous && hs_ep->interval > 1) 2421 2417 dwc2_gadget_incr_frame_num(hs_ep); 2422 2418 } 2419 + 2420 + /* Set actual frame number for completed transfers */ 2421 + if (!using_desc_dma(hsotg) && 
hs_ep->isochronous) 2422 + req->frame_number = hsotg->frame_number; 2423 2423 2424 2424 dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, result); 2425 2425 }
+24 -15
drivers/usb/dwc2/hcd.c
··· 2480 2480 return; 2481 2481 2482 2482 /* Restore urb->transfer_buffer from the end of the allocated area */ 2483 - memcpy(&stored_xfer_buffer, urb->transfer_buffer + 2484 - urb->transfer_buffer_length, sizeof(urb->transfer_buffer)); 2483 + memcpy(&stored_xfer_buffer, 2484 + PTR_ALIGN(urb->transfer_buffer + urb->transfer_buffer_length, 2485 + dma_get_cache_alignment()), 2486 + sizeof(urb->transfer_buffer)); 2485 2487 2486 2488 if (usb_urb_dir_in(urb)) { 2487 2489 if (usb_pipeisoc(urb->pipe)) ··· 2515 2513 * DMA 2516 2514 */ 2517 2515 kmalloc_size = urb->transfer_buffer_length + 2516 + (dma_get_cache_alignment() - 1) + 2518 2517 sizeof(urb->transfer_buffer); 2519 2518 2520 2519 kmalloc_ptr = kmalloc(kmalloc_size, mem_flags); ··· 2526 2523 * Position value of original urb->transfer_buffer pointer to the end 2527 2524 * of allocation for later referencing 2528 2525 */ 2529 - memcpy(kmalloc_ptr + urb->transfer_buffer_length, 2526 + memcpy(PTR_ALIGN(kmalloc_ptr + urb->transfer_buffer_length, 2527 + dma_get_cache_alignment()), 2530 2528 &urb->transfer_buffer, sizeof(urb->transfer_buffer)); 2531 2529 2532 2530 if (usb_urb_dir_out(urb)) ··· 2612 2608 chan->dev_addr = dwc2_hcd_get_dev_addr(&urb->pipe_info); 2613 2609 chan->ep_num = dwc2_hcd_get_ep_num(&urb->pipe_info); 2614 2610 chan->speed = qh->dev_speed; 2615 - chan->max_packet = dwc2_max_packet(qh->maxp); 2611 + chan->max_packet = qh->maxp; 2616 2612 2617 2613 chan->xfer_started = 0; 2618 2614 chan->halt_status = DWC2_HC_XFER_NO_HALT_STATUS; ··· 2690 2686 * This value may be modified when the transfer is started 2691 2687 * to reflect the actual transfer length 2692 2688 */ 2693 - chan->multi_count = dwc2_hb_mult(qh->maxp); 2689 + chan->multi_count = qh->maxp_mult; 2694 2690 2695 2691 if (hsotg->params.dma_desc_enable) { 2696 2692 chan->desc_list_addr = qh->desc_list_dma; ··· 3810 3806 3811 3807 static void dwc2_hcd_urb_set_pipeinfo(struct dwc2_hsotg *hsotg, 3812 3808 struct dwc2_hcd_urb *urb, u8 dev_addr, 3813 - u8 
ep_num, u8 ep_type, u8 ep_dir, u16 mps) 3809 + u8 ep_num, u8 ep_type, u8 ep_dir, 3810 + u16 maxp, u16 maxp_mult) 3814 3811 { 3815 3812 if (dbg_perio() || 3816 3813 ep_type == USB_ENDPOINT_XFER_BULK || 3817 3814 ep_type == USB_ENDPOINT_XFER_CONTROL) 3818 3815 dev_vdbg(hsotg->dev, 3819 - "addr=%d, ep_num=%d, ep_dir=%1x, ep_type=%1x, mps=%d\n", 3820 - dev_addr, ep_num, ep_dir, ep_type, mps); 3816 + "addr=%d, ep_num=%d, ep_dir=%1x, ep_type=%1x, maxp=%d (%d mult)\n", 3817 + dev_addr, ep_num, ep_dir, ep_type, maxp, maxp_mult); 3821 3818 urb->pipe_info.dev_addr = dev_addr; 3822 3819 urb->pipe_info.ep_num = ep_num; 3823 3820 urb->pipe_info.pipe_type = ep_type; 3824 3821 urb->pipe_info.pipe_dir = ep_dir; 3825 - urb->pipe_info.mps = mps; 3822 + urb->pipe_info.maxp = maxp; 3823 + urb->pipe_info.maxp_mult = maxp_mult; 3826 3824 } 3827 3825 3828 3826 /* ··· 3915 3909 dwc2_hcd_is_pipe_in(&urb->pipe_info) ? 3916 3910 "IN" : "OUT"); 3917 3911 dev_dbg(hsotg->dev, 3918 - " Max packet size: %d\n", 3919 - dwc2_hcd_get_mps(&urb->pipe_info)); 3912 + " Max packet size: %d (%d mult)\n", 3913 + dwc2_hcd_get_maxp(&urb->pipe_info), 3914 + dwc2_hcd_get_maxp_mult(&urb->pipe_info)); 3920 3915 dev_dbg(hsotg->dev, 3921 3916 " transfer_buffer: %p\n", 3922 3917 urb->buf); ··· 4517 4510 } 4518 4511 4519 4512 dev_vdbg(hsotg->dev, " Speed: %s\n", speed); 4520 - dev_vdbg(hsotg->dev, " Max packet size: %d\n", 4521 - usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe))); 4513 + dev_vdbg(hsotg->dev, " Max packet size: %d (%d mult)\n", 4514 + usb_endpoint_maxp(&urb->ep->desc), 4515 + usb_endpoint_maxp_mult(&urb->ep->desc)); 4516 + 4522 4517 dev_vdbg(hsotg->dev, " Data buffer length: %d\n", 4523 4518 urb->transfer_buffer_length); 4524 4519 dev_vdbg(hsotg->dev, " Transfer buffer: %p, Transfer DMA: %08lx\n", ··· 4603 4594 dwc2_hcd_urb_set_pipeinfo(hsotg, dwc2_urb, usb_pipedevice(urb->pipe), 4604 4595 usb_pipeendpoint(urb->pipe), ep_type, 4605 4596 usb_pipein(urb->pipe), 4606 - usb_maxpacket(urb->dev, 
urb->pipe, 4607 - !(usb_pipein(urb->pipe)))); 4597 + usb_endpoint_maxp(&ep->desc), 4598 + usb_endpoint_maxp_mult(&ep->desc)); 4608 4599 4609 4600 buf = urb->transfer_buffer; 4610 4601
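The dwc2 series above stops re-deriving packet size and high-bandwidth multiplier from a raw `wMaxPacketSize` word and instead carries `maxp`/`maxp_mult` separately. The retired `dwc2_max_packet()`/`dwc2_hb_mult()` macros implemented the USB 2.0 encoding: bits [10:0] are the packet size and bits [12:11] the number of *additional* transactions per microframe, so the multiplier is that field plus one. A standalone decode of that encoding:

```c
#include <assert.h>

/* USB 2.0 wMaxPacketSize decoding, as done by the removed
 * dwc2_max_packet()/dwc2_hb_mult() macros: bits [10:0] give the
 * packet size, bits [12:11] the additional transactions per
 * microframe (multiplier = field + 1). */
static unsigned int wmaxp_size(unsigned int wmaxpacketsize)
{
	return wmaxpacketsize & 0x7ff;
}

static unsigned int wmaxp_mult(unsigned int wmaxpacketsize)
{
	return 1 + ((wmaxpacketsize >> 11) & 0x03);
}
```

Per-transfer byte budget is then `wmaxp_size(w) * wmaxp_mult(w)`, which is the `bytecount` computed in the hcd_queue.c hunks below.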
+11 -9
drivers/usb/dwc2/hcd.h
··· 171 171 u8 ep_num; 172 172 u8 pipe_type; 173 173 u8 pipe_dir; 174 - u16 mps; 174 + u16 maxp; 175 + u16 maxp_mult; 175 176 }; 176 177 177 178 struct dwc2_hcd_iso_packet_desc { ··· 265 264 * - USB_ENDPOINT_XFER_ISOC 266 265 * @ep_is_in: Endpoint direction 267 266 * @maxp: Value from wMaxPacketSize field of Endpoint Descriptor 267 + * @maxp_mult: Multiplier for maxp 268 268 * @dev_speed: Device speed. One of the following values: 269 269 * - USB_SPEED_LOW 270 270 * - USB_SPEED_FULL ··· 342 340 u8 ep_type; 343 341 u8 ep_is_in; 344 342 u16 maxp; 343 + u16 maxp_mult; 345 344 u8 dev_speed; 346 345 u8 data_toggle; 347 346 u8 ping_state; ··· 506 503 return pipe->pipe_type; 507 504 } 508 505 509 - static inline u16 dwc2_hcd_get_mps(struct dwc2_hcd_pipe_info *pipe) 506 + static inline u16 dwc2_hcd_get_maxp(struct dwc2_hcd_pipe_info *pipe) 510 507 { 511 - return pipe->mps; 508 + return pipe->maxp; 509 + } 510 + 511 + static inline u16 dwc2_hcd_get_maxp_mult(struct dwc2_hcd_pipe_info *pipe) 512 + { 513 + return pipe->maxp_mult; 512 514 } 513 515 514 516 static inline u8 dwc2_hcd_get_dev_addr(struct dwc2_hcd_pipe_info *pipe) ··· 627 619 628 620 static inline bool dbg_perio(void) { return false; } 629 621 #endif 630 - 631 - /* High bandwidth multiplier as encoded in highspeed endpoint descriptors */ 632 - #define dwc2_hb_mult(wmaxpacketsize) (1 + (((wmaxpacketsize) >> 11) & 0x03)) 633 - 634 - /* Packet size for any kind of endpoint descriptor */ 635 - #define dwc2_max_packet(wmaxpacketsize) ((wmaxpacketsize) & 0x07ff) 636 622 637 623 /* 638 624 * Returns true if frame1 index is greater than frame2 index. The comparison
+3 -2
drivers/usb/dwc2/hcd_intr.c
··· 1617 1617 1618 1618 dev_err(hsotg->dev, " Speed: %s\n", speed); 1619 1619 1620 - dev_err(hsotg->dev, " Max packet size: %d\n", 1621 - dwc2_hcd_get_mps(&urb->pipe_info)); 1620 + dev_err(hsotg->dev, " Max packet size: %d (mult %d)\n", 1621 + dwc2_hcd_get_maxp(&urb->pipe_info), 1622 + dwc2_hcd_get_maxp_mult(&urb->pipe_info)); 1622 1623 dev_err(hsotg->dev, " Data buffer length: %d\n", urb->length); 1623 1624 dev_err(hsotg->dev, " Transfer buffer: %p, Transfer DMA: %08lx\n", 1624 1625 urb->buf, (unsigned long)urb->dma);
+6 -4
drivers/usb/dwc2/hcd_queue.c
··· 708 708 static int dwc2_uframe_schedule_split(struct dwc2_hsotg *hsotg, 709 709 struct dwc2_qh *qh) 710 710 { 711 - int bytecount = dwc2_hb_mult(qh->maxp) * dwc2_max_packet(qh->maxp); 711 + int bytecount = qh->maxp_mult * qh->maxp; 712 712 int ls_search_slice; 713 713 int err = 0; 714 714 int host_interval_in_sched; ··· 1332 1332 u32 max_channel_xfer_size; 1333 1333 int status = 0; 1334 1334 1335 - max_xfer_size = dwc2_max_packet(qh->maxp) * dwc2_hb_mult(qh->maxp); 1335 + max_xfer_size = qh->maxp * qh->maxp_mult; 1336 1336 max_channel_xfer_size = hsotg->params.max_transfer_size; 1337 1337 1338 1338 if (max_xfer_size > max_channel_xfer_size) { ··· 1517 1517 u32 prtspd = (hprt & HPRT0_SPD_MASK) >> HPRT0_SPD_SHIFT; 1518 1518 bool do_split = (prtspd == HPRT0_SPD_HIGH_SPEED && 1519 1519 dev_speed != USB_SPEED_HIGH); 1520 - int maxp = dwc2_hcd_get_mps(&urb->pipe_info); 1521 - int bytecount = dwc2_hb_mult(maxp) * dwc2_max_packet(maxp); 1520 + int maxp = dwc2_hcd_get_maxp(&urb->pipe_info); 1521 + int maxp_mult = dwc2_hcd_get_maxp_mult(&urb->pipe_info); 1522 + int bytecount = maxp_mult * maxp; 1522 1523 char *speed, *type; 1523 1524 1524 1525 /* Initialize QH */ ··· 1532 1531 1533 1532 qh->data_toggle = DWC2_HC_PID_DATA0; 1534 1533 qh->maxp = maxp; 1534 + qh->maxp_mult = maxp_mult; 1535 1535 INIT_LIST_HEAD(&qh->qtd_list); 1536 1536 INIT_LIST_HEAD(&qh->qh_list_entry); 1537 1537
+5
drivers/usb/gadget/udc/fusb300_udc.c
··· 1342 1342 static int fusb300_remove(struct platform_device *pdev) 1343 1343 { 1344 1344 struct fusb300 *fusb300 = platform_get_drvdata(pdev); 1345 + int i; 1345 1346 1346 1347 usb_del_gadget_udc(&fusb300->gadget); 1347 1348 iounmap(fusb300->reg); 1348 1349 free_irq(platform_get_irq(pdev, 0), fusb300); 1349 1350 1350 1351 fusb300_free_request(&fusb300->ep[0]->ep, fusb300->ep0_req); 1352 + for (i = 0; i < FUSB300_MAX_NUM_EP; i++) 1353 + kfree(fusb300->ep[i]); 1351 1354 kfree(fusb300); 1352 1355 1353 1356 return 0; ··· 1494 1491 if (fusb300->ep0_req) 1495 1492 fusb300_free_request(&fusb300->ep[0]->ep, 1496 1493 fusb300->ep0_req); 1494 + for (i = 0; i < FUSB300_MAX_NUM_EP; i++) 1495 + kfree(fusb300->ep[i]); 1497 1496 kfree(fusb300); 1498 1497 } 1499 1498 if (reg)
+3 -4
drivers/usb/gadget/udc/lpc32xx_udc.c
··· 937 937 dma_addr_t dma; 938 938 struct lpc32xx_usbd_dd_gad *dd; 939 939 940 - dd = (struct lpc32xx_usbd_dd_gad *) dma_pool_alloc( 941 - udc->dd_cache, (GFP_KERNEL | GFP_DMA), &dma); 940 + dd = dma_pool_alloc(udc->dd_cache, GFP_ATOMIC | GFP_DMA, &dma); 942 941 if (dd) 943 942 dd->this_dma = dma; 944 943 ··· 3069 3070 } 3070 3071 3071 3072 udc->udp_baseaddr = devm_ioremap_resource(dev, res); 3072 - if (!udc->udp_baseaddr) { 3073 + if (IS_ERR(udc->udp_baseaddr)) { 3073 3074 dev_err(udc->dev, "IO map failure\n"); 3074 - return -ENOMEM; 3075 + return PTR_ERR(udc->udp_baseaddr); 3075 3076 } 3076 3077 3077 3078 /* Get USB device clock */
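The lpc32xx fix above matters because `devm_ioremap_resource()` reports failure with an *error pointer*, never NULL, so the old `if (!udc->udp_baseaddr)` test could never fire. A minimal userspace model of the kernel's ERR_PTR convention, where errno values are encoded in the last 4095 addresses (illustrative, not the kernel's exact definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the kernel ERR_PTR/IS_ERR/PTR_ERR scheme:
 * negative errno values are packed into the top 4095 addresses of
 * the pointer space, so a single word can carry either a valid
 * pointer or an error code. */
#define TOY_MAX_ERRNO 4095

static void *toy_err_ptr(long error)
{
	return (void *)error;
}

static long toy_ptr_err(const void *ptr)
{
	return (long)ptr;
}

static int toy_is_err(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-TOY_MAX_ERRNO;
}
```

Note that NULL is *not* in the error range, which is exactly why a bare NULL check misses these failures; the fix checks `IS_ERR()` and propagates `PTR_ERR()` instead of a hard-coded -ENOMEM.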
+14
drivers/usb/phy/phy-mxs-usb.c
··· 63 63 64 64 #define ANADIG_USB1_CHRG_DETECT_SET 0x1b4 65 65 #define ANADIG_USB1_CHRG_DETECT_CLR 0x1b8 66 + #define ANADIG_USB2_CHRG_DETECT_SET 0x214 66 67 #define ANADIG_USB1_CHRG_DETECT_EN_B BIT(20) 67 68 #define ANADIG_USB1_CHRG_DETECT_CHK_CHRG_B BIT(19) 68 69 #define ANADIG_USB1_CHRG_DETECT_CHK_CONTACT BIT(18) ··· 250 249 251 250 if (mxs_phy->data->flags & MXS_PHY_NEED_IP_FIX) 252 251 writel(BM_USBPHY_IP_FIX, base + HW_USBPHY_IP_SET); 252 + 253 + if (mxs_phy->regmap_anatop) { 254 + unsigned int reg = mxs_phy->port_id ? 255 + ANADIG_USB1_CHRG_DETECT_SET : 256 + ANADIG_USB2_CHRG_DETECT_SET; 257 + /* 258 + * The external charger detector needs to be disabled, 259 + * or the signal at DP will be poor 260 + */ 261 + regmap_write(mxs_phy->regmap_anatop, reg, 262 + ANADIG_USB1_CHRG_DETECT_EN_B | 263 + ANADIG_USB1_CHRG_DETECT_CHK_CHRG_B); 264 + } 253 265 254 266 mxs_phy_tx_init(mxs_phy); 255 267
+6
drivers/usb/serial/option.c
··· 1171 1171 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1213, 0xff) }, 1172 1172 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1214), 1173 1173 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, 1174 + { USB_DEVICE(TELIT_VENDOR_ID, 0x1260), 1175 + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1176 + { USB_DEVICE(TELIT_VENDOR_ID, 0x1261), 1177 + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1174 1178 { USB_DEVICE(TELIT_VENDOR_ID, 0x1900), /* Telit LN940 (QMI) */ 1175 1179 .driver_info = NCTRL(0) | RSVD(1) }, 1176 1180 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff), /* Telit LN940 (MBIM) */ ··· 1776 1772 { USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E), 1777 1773 .driver_info = RSVD(5) | RSVD(6) }, 1778 1774 { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9003, 0xff) }, /* Simcom SIM7500/SIM7600 MBIM mode */ 1775 + { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9011, 0xff), /* Simcom SIM7500/SIM7600 RNDIS mode */ 1776 + .driver_info = RSVD(7) }, 1779 1777 { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200), 1780 1778 .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) }, 1781 1779 { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D),
+1
drivers/usb/serial/pl2303.c
··· 106 106 { USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) }, 107 107 { USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) }, 108 108 { USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) }, 109 + { USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) }, 109 110 { } /* Terminating entry */ 110 111 }; 111 112
+3
drivers/usb/serial/pl2303.h
··· 155 155 #define SMART_VENDOR_ID 0x0b8c 156 156 #define SMART_PRODUCT_ID 0x2303 157 157 158 + /* Allied Telesis VT-Kit3 */ 159 + #define AT_VENDOR_ID 0x0caa 160 + #define AT_VTKIT3_PRODUCT_ID 0x3001
+5
drivers/usb/storage/unusual_realtek.h
··· 17 17 "USB Card Reader", 18 18 USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 19 19 20 + UNUSUAL_DEV(0x0bda, 0x0153, 0x0000, 0x9999, 21 + "Realtek", 22 + "USB Card Reader", 23 + USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 24 + 20 25 UNUSUAL_DEV(0x0bda, 0x0158, 0x0000, 0x9999, 21 26 "Realtek", 22 27 "USB Card Reader",
+1 -1
drivers/usb/typec/bus.c
··· 192 192 const struct typec_altmode * 193 193 typec_altmode_get_partner(struct typec_altmode *adev) 194 194 { 195 - return &to_altmode(adev)->partner->adev; 195 + return adev ? &to_altmode(adev)->partner->adev : NULL; 196 196 } 197 197 EXPORT_SYMBOL_GPL(typec_altmode_get_partner); 198 198
+4 -2
drivers/usb/typec/ucsi/ucsi_ccg.c
··· 862 862 863 863 not_signed_fw: 864 864 wr_buf = kzalloc(CCG4_ROW_SIZE + 4, GFP_KERNEL); 865 - if (!wr_buf) 866 - return -ENOMEM; 865 + if (!wr_buf) { 866 + err = -ENOMEM; 867 + goto release_fw; 868 + } 867 869 868 870 err = ccg_cmd_enter_flashing(uc); 869 871 if (err)
+65 -73
drivers/vfio/mdev/mdev_core.c
··· 102 102 kref_put(&parent->ref, mdev_release_parent); 103 103 } 104 104 105 - static int mdev_device_create_ops(struct kobject *kobj, 106 - struct mdev_device *mdev) 105 + /* Caller must hold parent unreg_sem read or write lock */ 106 + static void mdev_device_remove_common(struct mdev_device *mdev) 107 107 { 108 - struct mdev_parent *parent = mdev->parent; 108 + struct mdev_parent *parent; 109 + struct mdev_type *type; 109 110 int ret; 110 111 111 - ret = parent->ops->create(kobj, mdev); 112 - if (ret) 113 - return ret; 114 - 115 - ret = sysfs_create_groups(&mdev->dev.kobj, 116 - parent->ops->mdev_attr_groups); 117 - if (ret) 118 - parent->ops->remove(mdev); 119 - 120 - return ret; 121 - } 122 - 123 - /* 124 - * mdev_device_remove_ops gets called from sysfs's 'remove' and when parent 125 - * device is being unregistered from mdev device framework. 126 - * - 'force_remove' is set to 'false' when called from sysfs's 'remove' which 127 - * indicates that if the mdev device is active, used by VMM or userspace 128 - * application, vendor driver could return error then don't remove the device. 129 - * - 'force_remove' is set to 'true' when called from mdev_unregister_device() 130 - * which indicate that parent device is being removed from mdev device 131 - * framework so remove mdev device forcefully. 132 - */ 133 - static int mdev_device_remove_ops(struct mdev_device *mdev, bool force_remove) 134 - { 135 - struct mdev_parent *parent = mdev->parent; 136 - int ret; 137 - 138 - /* 139 - * Vendor driver can return error if VMM or userspace application is 140 - * using this mdev device. 
141 - */ 112 + type = to_mdev_type(mdev->type_kobj); 113 + mdev_remove_sysfs_files(&mdev->dev, type); 114 + device_del(&mdev->dev); 115 + parent = mdev->parent; 116 + lockdep_assert_held(&parent->unreg_sem); 142 117 ret = parent->ops->remove(mdev); 143 - if (ret && !force_remove) 144 - return ret; 118 + if (ret) 119 + dev_err(&mdev->dev, "Remove failed: err=%d\n", ret); 145 120 146 - sysfs_remove_groups(&mdev->dev.kobj, parent->ops->mdev_attr_groups); 147 - return 0; 121 + /* Balances with device_initialize() */ 122 + put_device(&mdev->dev); 123 + mdev_put_parent(parent); 148 124 } 149 125 150 126 static int mdev_device_remove_cb(struct device *dev, void *data) 151 127 { 152 - if (dev_is_mdev(dev)) 153 - mdev_device_remove(dev, true); 128 + if (dev_is_mdev(dev)) { 129 + struct mdev_device *mdev; 154 130 131 + mdev = to_mdev_device(dev); 132 + mdev_device_remove_common(mdev); 133 + } 155 134 return 0; 156 135 } 157 136 ··· 172 193 } 173 194 174 195 kref_init(&parent->ref); 196 + init_rwsem(&parent->unreg_sem); 175 197 176 198 parent->dev = dev; 177 199 parent->ops = ops; ··· 231 251 dev_info(dev, "MDEV: Unregistering\n"); 232 252 233 253 list_del(&parent->next); 254 + mutex_unlock(&parent_list_lock); 255 + 256 + down_write(&parent->unreg_sem); 257 + 234 258 class_compat_remove_link(mdev_bus_compat_class, dev, NULL); 235 259 236 260 device_for_each_child(dev, NULL, mdev_device_remove_cb); 237 261 238 262 parent_remove_sysfs_files(parent); 263 + up_write(&parent->unreg_sem); 239 264 240 - mutex_unlock(&parent_list_lock); 241 265 mdev_put_parent(parent); 242 266 } 243 267 EXPORT_SYMBOL(mdev_unregister_device); 244 268 245 - static void mdev_device_release(struct device *dev) 269 + static void mdev_device_free(struct mdev_device *mdev) 246 270 { 247 - struct mdev_device *mdev = to_mdev_device(dev); 248 - 249 271 mutex_lock(&mdev_list_lock); 250 272 list_del(&mdev->next); 251 273 mutex_unlock(&mdev_list_lock); 252 274 253 275 dev_dbg(&mdev->dev, "MDEV: destroying\n"); 
254 276 kfree(mdev); 277 + } 278 + 279 + static void mdev_device_release(struct device *dev) 280 + { 281 + struct mdev_device *mdev = to_mdev_device(dev); 282 + 283 + mdev_device_free(mdev); 255 284 } 256 285 257 286 int mdev_device_create(struct kobject *kobj, ··· 299 310 300 311 mdev->parent = parent; 301 312 313 + /* Check if parent unregistration has started */ 314 + if (!down_read_trylock(&parent->unreg_sem)) { 315 + mdev_device_free(mdev); 316 + ret = -ENODEV; 317 + goto mdev_fail; 318 + } 319 + 320 + device_initialize(&mdev->dev); 302 321 mdev->dev.parent = dev; 303 322 mdev->dev.bus = &mdev_bus_type; 304 323 mdev->dev.release = mdev_device_release; 305 324 dev_set_name(&mdev->dev, "%pUl", uuid); 325 + mdev->dev.groups = parent->ops->mdev_attr_groups; 326 + mdev->type_kobj = kobj; 306 327 307 - ret = device_register(&mdev->dev); 308 - if (ret) { 309 - put_device(&mdev->dev); 310 - goto mdev_fail; 311 - } 312 - 313 - ret = mdev_device_create_ops(kobj, mdev); 328 + ret = parent->ops->create(kobj, mdev); 314 329 if (ret) 315 - goto create_fail; 330 + goto ops_create_fail; 331 + 332 + ret = device_add(&mdev->dev); 333 + if (ret) 334 + goto add_fail; 316 335 317 336 ret = mdev_create_sysfs_files(&mdev->dev, type); 318 - if (ret) { 319 - mdev_device_remove_ops(mdev, true); 320 - goto create_fail; 321 - } 337 + if (ret) 338 + goto sysfs_fail; 322 339 323 - mdev->type_kobj = kobj; 324 340 mdev->active = true; 325 341 dev_dbg(&mdev->dev, "MDEV: created\n"); 342 + up_read(&parent->unreg_sem); 326 343 327 344 return 0; 328 345 329 - create_fail: 330 - device_unregister(&mdev->dev); 346 + sysfs_fail: 347 + device_del(&mdev->dev); 348 + add_fail: 349 + parent->ops->remove(mdev); 350 + ops_create_fail: 351 + up_read(&parent->unreg_sem); 352 + put_device(&mdev->dev); 331 353 mdev_fail: 332 354 mdev_put_parent(parent); 333 355 return ret; 334 356 } 335 357 336 - int mdev_device_remove(struct device *dev, bool force_remove) 358 + int mdev_device_remove(struct device *dev) 
337 359 { 338 360 struct mdev_device *mdev, *tmp; 339 361 struct mdev_parent *parent; 340 - struct mdev_type *type; 341 - int ret; 342 362 343 363 mdev = to_mdev_device(dev); 344 364 ··· 370 372 mdev->active = false; 371 373 mutex_unlock(&mdev_list_lock); 372 374 373 - type = to_mdev_type(mdev->type_kobj); 374 375 parent = mdev->parent; 376 + /* Check if parent unregistration has started */ 377 + if (!down_read_trylock(&parent->unreg_sem)) 378 + return -ENODEV; 375 379 376 - ret = mdev_device_remove_ops(mdev, force_remove); 377 - if (ret) { 378 - mdev->active = true; 379 - return ret; 380 - } 381 - 382 - mdev_remove_sysfs_files(dev, type); 383 - device_unregister(dev); 384 - mdev_put_parent(parent); 385 - 380 + mdev_device_remove_common(mdev); 381 + up_read(&parent->unreg_sem); 386 382 return 0; 387 383 } 388 384
+3 -1
drivers/vfio/mdev/mdev_private.h
··· 23 23 struct list_head next; 24 24 struct kset *mdev_types_kset; 25 25 struct list_head type_list; 26 + /* Synchronize device creation/removal with parent unregistration */ 27 + struct rw_semaphore unreg_sem; 26 28 }; 27 29 28 30 struct mdev_device { ··· 62 60 63 61 int mdev_device_create(struct kobject *kobj, 64 62 struct device *dev, const guid_t *uuid); 65 - int mdev_device_remove(struct device *dev, bool force_remove); 63 + int mdev_device_remove(struct device *dev); 66 64 67 65 #endif /* MDEV_PRIVATE_H */
+2 -4
drivers/vfio/mdev/mdev_sysfs.c
··· 236 236 if (val && device_remove_file_self(dev, attr)) { 237 237 int ret; 238 238 239 - ret = mdev_device_remove(dev, false); 240 - if (ret) { 241 - device_create_file(dev, attr); 239 + ret = mdev_device_remove(dev); 240 + if (ret) 242 241 return ret; 243 - } 244 242 } 245 243 246 244 return count;
+11 -1
drivers/xen/swiotlb-xen.c
··· 202 202 retry: 203 203 bytes = xen_set_nslabs(xen_io_tlb_nslabs); 204 204 order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT); 205 + 206 + /* 207 + * IO TLB memory already allocated. Just use it. 208 + */ 209 + if (io_tlb_start != 0) { 210 + xen_io_tlb_start = phys_to_virt(io_tlb_start); 211 + goto end; 212 + } 213 + 205 214 /* 206 215 * Get IO TLB memory from any location. 207 216 */ ··· 240 231 m_ret = XEN_SWIOTLB_ENOMEM; 241 232 goto error; 242 233 } 243 - xen_io_tlb_end = xen_io_tlb_start + bytes; 244 234 /* 245 235 * And replace that memory with pages under 4GB. 246 236 */ ··· 266 258 } else 267 259 rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs); 268 260 261 + end: 262 + xen_io_tlb_end = xen_io_tlb_start + bytes; 269 263 if (!rc) 270 264 swiotlb_set_max_segment(PAGE_SIZE); 271 265
+3 -25
fs/btrfs/extent-tree.c
··· 11137 11137 * it while performing the free space search since we have already 11138 11138 * held back allocations. 11139 11139 */ 11140 - static int btrfs_trim_free_extents(struct btrfs_device *device, 11141 - struct fstrim_range *range, u64 *trimmed) 11140 + static int btrfs_trim_free_extents(struct btrfs_device *device, u64 *trimmed) 11142 11141 { 11143 - u64 start, len = 0, end = 0; 11142 + u64 start = SZ_1M, len = 0, end = 0; 11144 11143 int ret; 11145 11144 11146 - start = max_t(u64, range->start, SZ_1M); 11147 11145 *trimmed = 0; 11148 11146 11149 11147 /* Discard not supported = nothing to do. */ ··· 11184 11186 break; 11185 11187 } 11186 11188 11187 - /* Keep going until we satisfy minlen or reach end of space */ 11188 - if (len < range->minlen) { 11189 - mutex_unlock(&fs_info->chunk_mutex); 11190 - start += len; 11191 - continue; 11192 - } 11193 - 11194 - /* If we are out of the passed range break */ 11195 - if (start > range->start + range->len - 1) { 11196 - mutex_unlock(&fs_info->chunk_mutex); 11197 - break; 11198 - } 11199 - 11200 - start = max(range->start, start); 11201 - len = min(range->len, len); 11202 - 11203 11189 ret = btrfs_issue_discard(device->bdev, start, len, 11204 11190 &bytes); 11205 11191 if (!ret) ··· 11197 11215 11198 11216 start += len; 11199 11217 *trimmed += bytes; 11200 - 11201 - /* We've trimmed enough */ 11202 - if (*trimmed >= range->len) 11203 - break; 11204 11218 11205 11219 if (fatal_signal_pending(current)) { 11206 11220 ret = -ERESTARTSYS; ··· 11281 11303 mutex_lock(&fs_info->fs_devices->device_list_mutex); 11282 11304 devices = &fs_info->fs_devices->devices; 11283 11305 list_for_each_entry(device, devices, dev_list) { 11284 - ret = btrfs_trim_free_extents(device, range, &group_trimmed); 11306 + ret = btrfs_trim_free_extents(device, &group_trimmed); 11285 11307 if (ret) { 11286 11308 dev_failed++; 11287 11309 dev_ret = ret;
+4 -1
fs/gfs2/bmap.c
··· 991 991 static int gfs2_iomap_page_prepare(struct inode *inode, loff_t pos, 992 992 unsigned len, struct iomap *iomap) 993 993 { 994 + unsigned int blockmask = i_blocksize(inode) - 1; 994 995 struct gfs2_sbd *sdp = GFS2_SB(inode); 996 + unsigned int blocks; 995 997 996 - return gfs2_trans_begin(sdp, RES_DINODE + (len >> inode->i_blkbits), 0); 998 + blocks = ((pos & blockmask) + len + blockmask) >> inode->i_blkbits; 999 + return gfs2_trans_begin(sdp, RES_DINODE + blocks, 0); 997 1000 } 998 1001 999 1002 static void gfs2_iomap_page_done(struct inode *inode, loff_t pos,
+3 -1
fs/io_uring.c
··· 2777 2777 io_eventfd_unregister(ctx); 2778 2778 2779 2779 #if defined(CONFIG_UNIX) 2780 - if (ctx->ring_sock) 2780 + if (ctx->ring_sock) { 2781 + ctx->ring_sock->file = NULL; /* so that iput() is called */ 2781 2782 sock_release(ctx->ring_sock); 2783 + } 2782 2784 #endif 2783 2785 2784 2786 io_mem_free(ctx->sq_ring);
+12
fs/ocfs2/dcache.c
··· 296 296 297 297 out_attach: 298 298 spin_lock(&dentry_attach_lock); 299 + if (unlikely(dentry->d_fsdata && !alias)) { 300 + /* d_fsdata is set by a racing thread which is doing 301 + * the same thing as this thread is doing. Leave the racing 302 + * thread going ahead and we return here. 303 + */ 304 + spin_unlock(&dentry_attach_lock); 305 + iput(dl->dl_inode); 306 + ocfs2_lock_res_free(&dl->dl_lockres); 307 + kfree(dl); 308 + return 0; 309 + } 310 + 299 311 dentry->d_fsdata = dl; 300 312 dl->dl_count++; 301 313 spin_unlock(&dentry_attach_lock);
+1
include/drm/drm_edid.h
··· 471 471 struct i2c_adapter *adapter); 472 472 struct edid *drm_edid_duplicate(const struct edid *edid); 473 473 int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid); 474 + int drm_add_override_edid_modes(struct drm_connector *connector); 474 475 475 476 u8 drm_match_cea_mode(const struct drm_display_mode *to_match); 476 477 enum hdmi_picture_aspect drm_get_cea_aspect_ratio(const u8 video_code);
+1 -3
include/linux/cgroup-defs.h
··· 106 106 CFTYPE_WORLD_WRITABLE = (1 << 4), /* (DON'T USE FOR NEW FILES) S_IWUGO */ 107 107 CFTYPE_DEBUG = (1 << 5), /* create when cgroup_debug */ 108 108 109 - CFTYPE_SYMLINKED = (1 << 6), /* pointed to by symlink too */ 110 - 111 109 /* internal flags, do not use outside cgroup core proper */ 112 110 __CFTYPE_ONLY_ON_DFL = (1 << 16), /* only on default hierarchy */ 113 111 __CFTYPE_NOT_ON_DFL = (1 << 17), /* not on default hierarchy */ ··· 221 223 */ 222 224 struct list_head tasks; 223 225 struct list_head mg_tasks; 226 + struct list_head dying_tasks; 224 227 225 228 /* all css_task_iters currently walking this cset */ 226 229 struct list_head task_iters; ··· 544 545 * end of cftype array. 545 546 */ 546 547 char name[MAX_CFTYPE_NAME]; 547 - char link_name[MAX_CFTYPE_NAME]; 548 548 unsigned long private; 549 549 550 550 /*
+12 -2
include/linux/cgroup.h
··· 43 43 /* walk all threaded css_sets in the domain */ 44 44 #define CSS_TASK_ITER_THREADED (1U << 1) 45 45 46 + /* internal flags */ 47 + #define CSS_TASK_ITER_SKIPPED (1U << 16) 48 + 46 49 /* a css_task_iter should be treated as an opaque object */ 47 50 struct css_task_iter { 48 51 struct cgroup_subsys *ss; ··· 60 57 struct list_head *task_pos; 61 58 struct list_head *tasks_head; 62 59 struct list_head *mg_tasks_head; 60 + struct list_head *dying_tasks_head; 63 61 64 62 struct css_set *cur_cset; 65 63 struct css_set *cur_dcset; ··· 491 487 * 492 488 * Find the css for the (@task, @subsys_id) combination, increment a 493 489 * reference on and return it. This function is guaranteed to return a 494 - * valid css. 490 + * valid css. The returned css may already have been offlined. 495 491 */ 496 492 static inline struct cgroup_subsys_state * 497 493 task_get_css(struct task_struct *task, int subsys_id) ··· 501 497 rcu_read_lock(); 502 498 while (true) { 503 499 css = task_css(task, subsys_id); 504 - if (likely(css_tryget_online(css))) 500 + /* 501 + * Can't use css_tryget_online() here. A task which has 502 + * PF_EXITING set may stay associated with an offline css. 503 + * If such task calls this function, css_tryget_online() 504 + * will keep failing. 505 + */ 506 + if (likely(css_tryget(css))) 505 507 break; 506 508 cpu_relax(); 507 509 }
+1
include/linux/cpuhotplug.h
··· 101 101 CPUHP_AP_IRQ_BCM2836_STARTING, 102 102 CPUHP_AP_IRQ_MIPS_GIC_STARTING, 103 103 CPUHP_AP_ARM_MVEBU_COHERENCY, 104 + CPUHP_AP_MICROCODE_LOADER, 104 105 CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING, 105 106 CPUHP_AP_PERF_X86_STARTING, 106 107 CPUHP_AP_PERF_X86_AMD_IBS_STARTING,
+1
include/linux/device.h
··· 713 713 /* allows to add/remove a custom action to devres stack */ 714 714 int devm_add_action(struct device *dev, void (*action)(void *), void *data); 715 715 void devm_remove_action(struct device *dev, void (*action)(void *), void *data); 716 + void devm_release_action(struct device *dev, void (*action)(void *), void *data); 716 717 717 718 static inline int devm_add_action_or_reset(struct device *dev, 718 719 void (*action)(void *), void *data)
+49 -6
include/linux/genalloc.h
··· 75 75 struct list_head next_chunk; /* next chunk in pool */ 76 76 atomic_long_t avail; 77 77 phys_addr_t phys_addr; /* physical starting address of memory chunk */ 78 + void *owner; /* private data to retrieve at alloc time */ 78 79 unsigned long start_addr; /* start address of memory chunk */ 79 80 unsigned long end_addr; /* end address of memory chunk (inclusive) */ 80 81 unsigned long bits[0]; /* bitmap for allocating memory chunk */ ··· 97 96 98 97 extern struct gen_pool *gen_pool_create(int, int); 99 98 extern phys_addr_t gen_pool_virt_to_phys(struct gen_pool *pool, unsigned long); 100 - extern int gen_pool_add_virt(struct gen_pool *, unsigned long, phys_addr_t, 101 - size_t, int); 99 + extern int gen_pool_add_owner(struct gen_pool *, unsigned long, phys_addr_t, 100 + size_t, int, void *); 101 + 102 + static inline int gen_pool_add_virt(struct gen_pool *pool, unsigned long addr, 103 + phys_addr_t phys, size_t size, int nid) 104 + { 105 + return gen_pool_add_owner(pool, addr, phys, size, nid, NULL); 106 + } 107 + 102 108 /** 103 109 * gen_pool_add - add a new chunk of special memory to the pool 104 110 * @pool: pool to add new memory chunk to ··· 124 116 return gen_pool_add_virt(pool, addr, -1, size, nid); 125 117 } 126 118 extern void gen_pool_destroy(struct gen_pool *); 127 - extern unsigned long gen_pool_alloc(struct gen_pool *, size_t); 128 - extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, 129 - genpool_algo_t algo, void *data); 119 + unsigned long gen_pool_alloc_algo_owner(struct gen_pool *pool, size_t size, 120 + genpool_algo_t algo, void *data, void **owner); 121 + 122 + static inline unsigned long gen_pool_alloc_owner(struct gen_pool *pool, 123 + size_t size, void **owner) 124 + { 125 + return gen_pool_alloc_algo_owner(pool, size, pool->algo, pool->data, 126 + owner); 127 + } 128 + 129 + static inline unsigned long gen_pool_alloc_algo(struct gen_pool *pool, 130 + size_t size, genpool_algo_t algo, void *data) 131 + { 132 + return gen_pool_alloc_algo_owner(pool, size, algo, data, NULL); 133 + } 134 + 135 + /** 136 + * gen_pool_alloc - allocate special memory from the pool 137 + * @pool: pool to allocate from 138 + * @size: number of bytes to allocate from the pool 139 + * 140 + * Allocate the requested number of bytes from the specified pool. 141 + * Uses the pool allocation function (with first-fit algorithm by default). 142 + * Can not be used in NMI handler on architectures without 143 + * NMI-safe cmpxchg implementation. 144 + */ 145 + static inline unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size) 146 + { 147 + return gen_pool_alloc_algo(pool, size, pool->algo, pool->data); 148 + } 149 + 130 150 extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, 131 151 dma_addr_t *dma); 132 - extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); 152 + extern void gen_pool_free_owner(struct gen_pool *pool, unsigned long addr, 153 + size_t size, void **owner); 154 + static inline void gen_pool_free(struct gen_pool *pool, unsigned long addr, 155 + size_t size) 156 + { 157 + gen_pool_free_owner(pool, addr, size, NULL); 158 + } 159 + 133 160 extern void gen_pool_for_each_chunk(struct gen_pool *, 134 161 void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); 135 162 extern size_t gen_pool_avail(struct gen_pool *);
+18 -8
include/linux/memcontrol.h
··· 117 117 struct mem_cgroup_per_node { 118 118 struct lruvec lruvec; 119 119 120 + /* Legacy local VM stats */ 121 + struct lruvec_stat __percpu *lruvec_stat_local; 122 + 123 + /* Subtree VM stats (batched updates) */ 120 124 struct lruvec_stat __percpu *lruvec_stat_cpu; 121 125 atomic_long_t lruvec_stat[NR_VM_NODE_STAT_ITEMS]; 122 - atomic_long_t lruvec_stat_local[NR_VM_NODE_STAT_ITEMS]; 123 126 124 127 unsigned long lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS]; 125 128 ··· 268 265 atomic_t moving_account; 269 266 struct task_struct *move_lock_task; 270 267 271 - /* memory.stat */ 268 + /* Legacy local VM stats and events */ 269 + struct memcg_vmstats_percpu __percpu *vmstats_local; 270 + 271 + /* Subtree VM stats and events (batched updates) */ 272 272 struct memcg_vmstats_percpu __percpu *vmstats_percpu; 273 273 274 274 MEMCG_PADDING(_pad2_); 275 275 276 276 atomic_long_t vmstats[MEMCG_NR_STAT]; 277 - atomic_long_t vmstats_local[MEMCG_NR_STAT]; 278 - 279 277 atomic_long_t vmevents[NR_VM_EVENT_ITEMS]; 280 - atomic_long_t vmevents_local[NR_VM_EVENT_ITEMS]; 281 278 279 + /* memory.events */ 282 280 atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS]; 283 281 284 282 unsigned long socket_pressure; ··· 571 567 static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg, 572 568 int idx) 573 569 { 574 - long x = atomic_long_read(&memcg->vmstats_local[idx]); 570 + long x = 0; 571 + int cpu; 572 + 573 + for_each_possible_cpu(cpu) 574 + x += per_cpu(memcg->vmstats_local->stat[idx], cpu); 575 575 #ifdef CONFIG_SMP 576 576 if (x < 0) 577 577 x = 0; ··· 649 641 enum node_stat_item idx) 650 642 { 651 643 struct mem_cgroup_per_node *pn; 652 - long x; 644 + long x = 0; 645 + int cpu; 653 646 654 647 if (mem_cgroup_disabled()) 655 648 return node_page_state(lruvec_pgdat(lruvec), idx); 656 649 657 650 pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec); 658 - x = atomic_long_read(&pn->lruvec_stat_local[idx]); 651 + for_each_possible_cpu(cpu) 652 + x += per_cpu(pn->lruvec_stat_local->count[idx], cpu); 659 653 #ifdef CONFIG_SMP 660 654 if (x < 0) 661 655 x = 0;
+8
include/linux/memremap.h
··· 81 81 * @res: physical address range covered by @ref 82 82 * @ref: reference count that pins the devm_memremap_pages() mapping 83 83 * @kill: callback to transition @ref to the dead state 84 + * @cleanup: callback to wait for @ref to be idle and reap it 84 85 * @dev: host device of the mapping for debug 85 86 * @data: private data pointer for page_free() 86 87 * @type: memory type: see MEMORY_* in memory_hotplug.h ··· 93 92 struct resource res; 94 93 struct percpu_ref *ref; 95 94 void (*kill)(struct percpu_ref *ref); 95 + void (*cleanup)(struct percpu_ref *ref); 96 96 struct device *dev; 97 97 void *data; 98 98 enum memory_type type; ··· 102 100 103 101 #ifdef CONFIG_ZONE_DEVICE 104 102 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); 103 + void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap); 105 104 struct dev_pagemap *get_dev_pagemap(unsigned long pfn, 106 105 struct dev_pagemap *pgmap); 107 106 ··· 119 116 */ 120 117 WARN_ON_ONCE(1); 121 118 return ERR_PTR(-ENXIO); 119 + } 120 + 121 + static inline void devm_memunmap_pages(struct device *dev, 122 + struct dev_pagemap *pgmap) 123 + { 122 124 } 123 125 124 126 static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
+4
include/linux/sched/mm.h
··· 54 54 * followed by taking the mmap_sem for writing before modifying the 55 55 * vmas or anything the coredump pretends not to change from under it. 56 56 * 57 + * It also has to be called when mmgrab() is used in the context of 58 + * the process, but then the mm_count refcount is transferred outside 59 + * the context of the process to run down_write() on that pinned mm. 60 + * 57 61 * NOTE: find_extend_vma() called from GUP context is the only place 58 62 * that can modify the "mm" (notably the vm_start/end) under mmap_sem 59 63 * for reading and outside the context of the process, so it is also
+1
include/sound/sof/dai.h
··· 49 49 SOF_DAI_INTEL_SSP, /**< Intel SSP */ 50 50 SOF_DAI_INTEL_DMIC, /**< Intel DMIC */ 51 51 SOF_DAI_INTEL_HDA, /**< Intel HD/A */ 52 + SOF_DAI_INTEL_SOUNDWIRE, /**< Intel SoundWire */ 52 53 }; 53 54 54 55 /* general purpose DAI configuration */
+23
include/sound/sof/header.h
··· 48 48 #define SOF_IPC_FW_READY SOF_GLB_TYPE(0x7U) 49 49 #define SOF_IPC_GLB_DAI_MSG SOF_GLB_TYPE(0x8U) 50 50 #define SOF_IPC_GLB_TRACE_MSG SOF_GLB_TYPE(0x9U) 51 + #define SOF_IPC_GLB_GDB_DEBUG SOF_GLB_TYPE(0xAU) 51 52 52 53 /* 53 54 * DSP Command Message Types ··· 79 78 #define SOF_IPC_COMP_GET_VALUE SOF_CMD_TYPE(0x002) 80 79 #define SOF_IPC_COMP_SET_DATA SOF_CMD_TYPE(0x003) 81 80 #define SOF_IPC_COMP_GET_DATA SOF_CMD_TYPE(0x004) 81 + #define SOF_IPC_COMP_NOTIFICATION SOF_CMD_TYPE(0x005) 82 82 83 83 /* DAI messages */ 84 84 #define SOF_IPC_DAI_CONFIG SOF_CMD_TYPE(0x001) ··· 153 151 struct sof_ipc_compound_hdr { 154 152 struct sof_ipc_cmd_hdr hdr; 155 153 uint32_t count; /**< count of 0 means end of compound sequence */ 154 + } __packed; 155 + 156 + /** 157 + * OOPS header architecture specific data. 158 + */ 159 + struct sof_ipc_dsp_oops_arch_hdr { 160 + uint32_t arch; /* Identifier of architecture */ 161 + uint32_t totalsize; /* Total size of oops message */ 162 + } __packed; 163 + 164 + /** 165 + * OOPS header platform specific data. 166 + */ 167 + struct sof_ipc_dsp_oops_plat_hdr { 168 + uint32_t configidhi; /* ConfigID hi 32bits */ 169 + uint32_t configidlo; /* ConfigID lo 32bits */ 170 + uint32_t numaregs; /* Special regs num */ 171 + uint32_t stackoffset; /* Offset to stack pointer from beginning of 172 + * oops message 173 + */ 174 + uint32_t stackptr; /* Stack ptr */ 156 175 } __packed; 157 176 158 177 /** @}*/
+10 -10
include/sound/sof/info.h
··· 18 18 19 19 #define SOF_IPC_MAX_ELEMS 16 20 20 21 + /* 22 + * Firmware boot info flag bits (64-bit) 23 + */ 24 + #define SOF_IPC_INFO_BUILD BIT(0) 25 + #define SOF_IPC_INFO_LOCKS BIT(1) 26 + #define SOF_IPC_INFO_LOCKSV BIT(2) 27 + #define SOF_IPC_INFO_GDB BIT(3) 28 + 21 29 /* extended data types that can be appended onto end of sof_ipc_fw_ready */ 22 30 enum sof_ipc_ext_data { 23 31 SOF_IPC_EXT_DMA_BUFFER = 0, ··· 57 49 uint32_t hostbox_size; 58 50 struct sof_ipc_fw_version version; 59 51 60 - /* Miscellaneous debug flags showing build/debug features enabled */ 61 - union { 62 - uint64_t reserved; 63 - struct { 64 - uint64_t build:1; 65 - uint64_t locks:1; 66 - uint64_t locks_verbose:1; 67 - uint64_t gdb:1; 68 - } bits; 69 - } debug; 52 + /* Miscellaneous flags */ 53 + uint64_t flags; 70 54 71 55 /* reserved for future use */ 72 56 uint32_t reserved[4];
+7 -2
include/sound/sof/xtensa.h
··· 17 17 18 18 /* Xtensa Firmware Oops data */ 19 19 struct sof_ipc_dsp_oops_xtensa { 20 - struct sof_ipc_hdr hdr; 20 + struct sof_ipc_dsp_oops_arch_hdr arch_hdr; 21 + struct sof_ipc_dsp_oops_plat_hdr plat_hdr; 21 22 uint32_t exccause; 22 23 uint32_t excvaddr; 23 24 uint32_t ps; ··· 39 38 uint32_t intenable; 40 39 uint32_t interrupt; 41 40 uint32_t sar; 42 - uint32_t stack; 41 + uint32_t debugcause; 42 + uint32_t windowbase; 43 + uint32_t windowstart; 44 + uint32_t excsave1; 45 + uint32_t ar[]; 43 46 } __packed; 44 47 45 48 #endif
+1 -1
include/uapi/sound/sof/abi.h
··· 26 26 27 27 /* SOF ABI version major, minor and patch numbers */ 28 28 #define SOF_ABI_MAJOR 3 29 - #define SOF_ABI_MINOR 4 29 + #define SOF_ABI_MINOR 6 30 30 #define SOF_ABI_PATCH 0 31 31 32 32 /* SOF ABI version number. Format within 32bit word is MMmmmppp */
+81 -60
kernel/cgroup/cgroup.c
··· 215 215 216 216 static int cgroup_apply_control(struct cgroup *cgrp); 217 217 static void cgroup_finalize_control(struct cgroup *cgrp, int ret); 218 - static void css_task_iter_advance(struct css_task_iter *it); 218 + static void css_task_iter_skip(struct css_task_iter *it, 219 + struct task_struct *task); 219 220 static int cgroup_destroy_locked(struct cgroup *cgrp); 220 221 static struct cgroup_subsys_state *css_create(struct cgroup *cgrp, 221 222 struct cgroup_subsys *ss); ··· 739 738 .dom_cset = &init_css_set, 740 739 .tasks = LIST_HEAD_INIT(init_css_set.tasks), 741 740 .mg_tasks = LIST_HEAD_INIT(init_css_set.mg_tasks), 741 + .dying_tasks = LIST_HEAD_INIT(init_css_set.dying_tasks), 742 742 .task_iters = LIST_HEAD_INIT(init_css_set.task_iters), 743 743 .threaded_csets = LIST_HEAD_INIT(init_css_set.threaded_csets), 744 744 .cgrp_links = LIST_HEAD_INIT(init_css_set.cgrp_links), ··· 845 843 cgroup_update_populated(link->cgrp, populated); 846 844 } 847 845 846 + /* 847 + * @task is leaving, advance task iterators which are pointing to it so 848 + * that they can resume at the next position. Advancing an iterator might 849 + * remove it from the list, use safe walk. See css_task_iter_skip() for 850 + * details. 851 + */ 852 + static void css_set_skip_task_iters(struct css_set *cset, 853 + struct task_struct *task) 854 + { 855 + struct css_task_iter *it, *pos; 856 + 857 + list_for_each_entry_safe(it, pos, &cset->task_iters, iters_node) 858 + css_task_iter_skip(it, task); 859 + } 860 + 848 861 /** 849 862 * css_set_move_task - move a task from one css_set to another 850 863 * @task: task being moved ··· 885 868 css_set_update_populated(to_cset, true); 886 869 887 870 if (from_cset) { 888 - struct css_task_iter *it, *pos; 889 - 890 871 WARN_ON_ONCE(list_empty(&task->cg_list)); 891 872 892 - /* 893 - * @task is leaving, advance task iterators which are 894 - * pointing to it so that they can resume at the next 895 - * position. 
Advancing an iterator might remove it from 896 - * the list, use safe walk. See css_task_iter_advance*() 897 - * for details. 898 - */ 899 - list_for_each_entry_safe(it, pos, &from_cset->task_iters, 900 - iters_node) 901 - if (it->task_pos == &task->cg_list) 902 - css_task_iter_advance(it); 903 - 873 + css_set_skip_task_iters(from_cset, task); 904 874 list_del_init(&task->cg_list); 905 875 if (!css_set_populated(from_cset)) 906 876 css_set_update_populated(from_cset, false); ··· 1214 1210 cset->dom_cset = cset; 1215 1211 INIT_LIST_HEAD(&cset->tasks); 1216 1212 INIT_LIST_HEAD(&cset->mg_tasks); 1213 + INIT_LIST_HEAD(&cset->dying_tasks); 1217 1214 INIT_LIST_HEAD(&cset->task_iters); 1218 1215 INIT_LIST_HEAD(&cset->threaded_csets); 1219 1216 INIT_HLIST_NODE(&cset->hlist); ··· 1465 1460 1466 1461 static struct kernfs_syscall_ops cgroup_kf_syscall_ops; 1467 1462 1468 - static char *cgroup_fill_name(struct cgroup *cgrp, const struct cftype *cft, 1469 - char *buf, bool write_link_name) 1463 + static char *cgroup_file_name(struct cgroup *cgrp, const struct cftype *cft, 1464 + char *buf) 1470 1465 { 1471 1466 struct cgroup_subsys *ss = cft->ss; 1472 1467 ··· 1476 1471 1477 1472 snprintf(buf, CGROUP_FILE_NAME_MAX, "%s%s.%s", 1478 1473 dbg, cgroup_on_dfl(cgrp) ? ss->name : ss->legacy_name, 1479 - write_link_name ? cft->link_name : cft->name); 1474 + cft->name); 1480 1475 } else { 1481 - strscpy(buf, write_link_name ? cft->link_name : cft->name, 1482 - CGROUP_FILE_NAME_MAX); 1476 + strscpy(buf, cft->name, CGROUP_FILE_NAME_MAX); 1483 1477 } 1484 1478 return buf; 1485 - } 1486 - 1487 - static char *cgroup_file_name(struct cgroup *cgrp, const struct cftype *cft, 1488 - char *buf) 1489 - { 1490 - return cgroup_fill_name(cgrp, cft, buf, false); 1491 - } 1492 - 1493 - static char *cgroup_link_name(struct cgroup *cgrp, const struct cftype *cft, 1494 - char *buf) 1495 - { 1496 - return cgroup_fill_name(cgrp, cft, buf, true); 1497 1479 } 1498 1480 1499 1481 /** ··· 1641 1649 } 1642 1650 1643 1651 kernfs_remove_by_name(cgrp->kn, cgroup_file_name(cgrp, cft, name)); 1644 - if (cft->flags & CFTYPE_SYMLINKED) 1645 - kernfs_remove_by_name(cgrp->kn, 1646 - cgroup_link_name(cgrp, cft, name)); 1647 1652 } 1648 1653 1649 1654 /** ··· 3826 3837 { 3827 3838 char name[CGROUP_FILE_NAME_MAX]; 3828 3839 struct kernfs_node *kn; 3829 - struct kernfs_node *kn_link; 3830 3840 struct lock_class_key *key = NULL; 3831 3841 int ret; 3832 3842 ··· 3854 3866 spin_lock_irq(&cgroup_file_kn_lock); 3855 3867 cfile->kn = kn; 3856 3868 spin_unlock_irq(&cgroup_file_kn_lock); 3857 - } 3858 - 3859 - if (cft->flags & CFTYPE_SYMLINKED) { 3860 - kn_link = kernfs_create_link(cgrp->kn, 3861 - cgroup_link_name(cgrp, cft, name), 3862 - kn); 3863 - if (IS_ERR(kn_link)) 3864 - return PTR_ERR(kn_link); 3865 3869 } 3866 3870 3867 3871 return 0; ··· 4413 4433 it->task_pos = NULL; 4414 4434 return; 4415 4435 } 4416 - } while (!css_set_populated(cset)); 4436 + } while (!css_set_populated(cset) && list_empty(&cset->dying_tasks)); 4417 4437 4418 4438 if (!list_empty(&cset->tasks)) 4419 4439 it->task_pos = cset->tasks.next; 4420 - else 4440 + else if (!list_empty(&cset->mg_tasks)) 4421 4441 it->task_pos = cset->mg_tasks.next; 4442 + else 4443 + it->task_pos = cset->dying_tasks.next; 4422 4444 4423 4445 it->tasks_head = &cset->tasks; 4424 4446 it->mg_tasks_head = &cset->mg_tasks; 4447 + it->dying_tasks_head = &cset->dying_tasks; 4425 4448 
4426 4449 /* 4427 4450 * We don't keep css_sets locked across iteration steps and thus ··· 4450 4467 list_add(&it->iters_node, &cset->task_iters); 4451 4468 } 4452 4469 4470 + static void css_task_iter_skip(struct css_task_iter *it, 4471 + struct task_struct *task) 4472 + { 4473 + lockdep_assert_held(&css_set_lock); 4474 + 4475 + if (it->task_pos == &task->cg_list) { 4476 + it->task_pos = it->task_pos->next; 4477 + it->flags |= CSS_TASK_ITER_SKIPPED; 4478 + } 4479 + } 4480 + 4453 4481 static void css_task_iter_advance(struct css_task_iter *it) 4454 4482 { 4455 - struct list_head *next; 4483 + struct task_struct *task; 4456 4484 4457 4485 lockdep_assert_held(&css_set_lock); 4458 4486 repeat: ··· 4473 4479 * consumed first and then ->mg_tasks. After ->mg_tasks, 4474 4480 * we move onto the next cset. 4475 4481 */ 4476 - next = it->task_pos->next; 4477 - 4478 - if (next == it->tasks_head) 4479 - next = it->mg_tasks_head->next; 4480 - 4481 - if (next == it->mg_tasks_head) 4482 - css_task_iter_advance_css_set(it); 4482 + if (it->flags & CSS_TASK_ITER_SKIPPED) 4483 + it->flags &= ~CSS_TASK_ITER_SKIPPED; 4483 4484 else 4484 - it->task_pos = next; 4485 + it->task_pos = it->task_pos->next; 4486 + 4487 + if (it->task_pos == it->tasks_head) 4488 + it->task_pos = it->mg_tasks_head->next; 4489 + if (it->task_pos == it->mg_tasks_head) 4490 + it->task_pos = it->dying_tasks_head->next; 4491 + if (it->task_pos == it->dying_tasks_head) 4492 + css_task_iter_advance_css_set(it); 4485 4493 } else { 4486 4494 /* called from start, proceed to the first cset */ 4487 4495 css_task_iter_advance_css_set(it); 4488 4496 } 4489 4497 4490 - /* if PROCS, skip over tasks which aren't group leaders */ 4491 - if ((it->flags & CSS_TASK_ITER_PROCS) && it->task_pos && 4492 - !thread_group_leader(list_entry(it->task_pos, struct task_struct, 4493 - cg_list))) 4494 - goto repeat; 4498 + if (!it->task_pos) 4499 + return; 4500 + 4501 + task = list_entry(it->task_pos, struct task_struct, cg_list); 4502 + 
4503 + if (it->flags & CSS_TASK_ITER_PROCS) { 4504 + /* if PROCS, skip over tasks which aren't group leaders */ 4505 + if (!thread_group_leader(task)) 4506 + goto repeat; 4507 + 4508 + /* and dying leaders w/o live member threads */ 4509 + if (!atomic_read(&task->signal->live)) 4510 + goto repeat; 4511 + } else { 4512 + /* skip all dying ones */ 4513 + if (task->flags & PF_EXITING) 4514 + goto repeat; 4515 + } 4495 4516 } 4496 4517 4497 4518 /** ··· 4561 4552 } 4562 4553 4563 4554 spin_lock_irq(&css_set_lock); 4555 + 4556 + /* @it may be half-advanced by skips, finish advancing */ 4557 + if (it->flags & CSS_TASK_ITER_SKIPPED) 4558 + css_task_iter_advance(it); 4564 4559 4565 4560 if (it->task_pos) { 4566 4561 it->cur_task = list_entry(it->task_pos, struct task_struct, ··· 6047 6034 if (!list_empty(&tsk->cg_list)) { 6048 6035 spin_lock_irq(&css_set_lock); 6049 6036 css_set_move_task(tsk, cset, NULL, false); 6037 + list_add_tail(&tsk->cg_list, &cset->dying_tasks); 6050 6038 cset->nr_tasks--; 6051 6039 6052 6040 WARN_ON_ONCE(cgroup_task_frozen(tsk)); ··· 6073 6059 do_each_subsys_mask(ss, ssid, have_release_callback) { 6074 6060 ss->release(task); 6075 6061 } while_each_subsys_mask(); 6062 + 6063 + if (use_task_css_set_links) { 6064 + spin_lock_irq(&css_set_lock); 6065 + css_set_skip_task_iters(task_css_set(task), task); 6066 + list_del_init(&task->cg_list); 6067 + spin_unlock_irq(&css_set_lock); 6068 + } 6076 6069 } 6077 6070 6078 6071 void cgroup_free(struct task_struct *task)
+14 -1
kernel/cgroup/cpuset.c
···
        spin_unlock_irqrestore(&callback_lock, flags);
 }

+/**
+ * cpuset_cpus_allowed_fallback - final fallback before complete catastrophe.
+ * @tsk: pointer to task_struct with which the scheduler is struggling
+ *
+ * Description: In the case that the scheduler cannot find an allowed cpu in
+ * tsk->cpus_allowed, we fall back to task_cs(tsk)->cpus_allowed. In legacy
+ * mode however, this value is the same as task_cs(tsk)->effective_cpus,
+ * which will not contain a sane cpumask during cases such as cpu hotplugging.
+ * This is the absolute last resort for the scheduler and it is only used if
+ * _every_ other avenue has been traveled.
+ **/
+
 void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
 {
        rcu_read_lock();
-        do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
+        do_set_cpus_allowed(tsk, is_in_v2_mode() ?
+                task_cs(tsk)->cpus_allowed : cpu_possible_mask);
        rcu_read_unlock();

        /*
+9
kernel/cred.c
···
        if (task->mm)
                set_dumpable(task->mm, suid_dumpable);
        task->pdeath_signal = 0;
+        /*
+         * If a task drops privileges and becomes nondumpable,
+         * the dumpability change must become visible before
+         * the credential change; otherwise, a __ptrace_may_access()
+         * racing with this change may be able to attach to a task it
+         * shouldn't be able to attach to (as if the task had dropped
+         * privileges without becoming nondumpable).
+         * Pairs with a read barrier in __ptrace_may_access().
+         */
        smp_wmb();
 }
+1 -1
kernel/exit.c
···
        rcu_read_unlock();

        proc_flush_task(p);
+        cgroup_release(p);

        write_lock_irq(&tasklist_lock);
        ptrace_release_task(p);
···
        }

        write_unlock_irq(&tasklist_lock);
-        cgroup_release(p);
        release_thread(p);
        call_rcu(&p->rcu, delayed_put_task_struct);
+6
kernel/livepatch/core.c
···
 #include <linux/elf.h>
 #include <linux/moduleloader.h>
 #include <linux/completion.h>
+#include <linux/memory.h>
 #include <asm/cacheflush.h>
 #include "core.h"
 #include "patch.h"
···
        struct klp_func *func;
        int ret;

+        mutex_lock(&text_mutex);
+
        module_disable_ro(patch->mod);
        ret = klp_write_object_relocations(patch->mod, obj);
        if (ret) {
                module_enable_ro(patch->mod, true);
+                mutex_unlock(&text_mutex);
                return ret;
        }

        arch_klp_init_object_loaded(patch, obj);
        module_enable_ro(patch->mod, true);
+
+        mutex_unlock(&text_mutex);

        klp_for_each_func(obj, func) {
                ret = klp_find_object_symbol(obj->name, func->old_name,
+18 -5
kernel/memremap.c
···
        pgmap->kill(pgmap->ref);
        for_each_device_pfn(pfn, pgmap)
                put_page(pfn_to_page(pfn));
+        pgmap->cleanup(pgmap->ref);

        /* pages are dead and unused, undo the arch mapping */
        align_start = res->start & ~(SECTION_SIZE - 1);
···
  * 2/ The altmap field may optionally be initialized, in which case altmap_valid
  *    must be set to true
  *
- * 3/ pgmap->ref must be 'live' on entry and will be killed at
- *    devm_memremap_pages_release() time, or if this routine fails.
+ * 3/ pgmap->ref must be 'live' on entry and will be killed and reaped
+ *    at devm_memremap_pages_release() time, or if this routine fails.
  *
  * 4/ res is expected to be a host memory range that could feasibly be
  *    treated as a "System RAM" range, i.e. not a device mmio range, but
···
        pgprot_t pgprot = PAGE_KERNEL;
        int error, nid, is_ram;

-        if (!pgmap->ref || !pgmap->kill)
+        if (!pgmap->ref || !pgmap->kill || !pgmap->cleanup) {
+                WARN(1, "Missing reference count teardown definition\n");
                return ERR_PTR(-EINVAL);
+        }

        align_start = res->start & ~(SECTION_SIZE - 1);
        align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
···
        if (conflict_pgmap) {
                dev_WARN(dev, "Conflicting mapping in same section\n");
                put_dev_pagemap(conflict_pgmap);
-                return ERR_PTR(-ENOMEM);
+                error = -ENOMEM;
+                goto err_array;
        }

        conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_end), NULL);
        if (conflict_pgmap) {
                dev_WARN(dev, "Conflicting mapping in same section\n");
                put_dev_pagemap(conflict_pgmap);
-                return ERR_PTR(-ENOMEM);
+                error = -ENOMEM;
+                goto err_array;
        }

        is_ram = region_intersects(align_start, align_size,
···
        pgmap_array_delete(res);
 err_array:
        pgmap->kill(pgmap->ref);
+        pgmap->cleanup(pgmap->ref);
+
        return ERR_PTR(error);
 }
 EXPORT_SYMBOL_GPL(devm_memremap_pages);
+
+void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap)
+{
+        devm_release_action(dev, devm_memremap_pages_release, pgmap);
+}
+EXPORT_SYMBOL_GPL(devm_memunmap_pages);

 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
 {
+18 -2
kernel/ptrace.c
···
                return -EPERM;
 ok:
        rcu_read_unlock();
+        /*
+         * If a task drops privileges and becomes nondumpable (through a syscall
+         * like setresuid()) while we are trying to access it, we must ensure
+         * that the dumpability is read after the credentials; otherwise,
+         * we may be able to attach to a task that we shouldn't be able to
+         * attach to (as if the task had dropped privileges without becoming
+         * nondumpable).
+         * Pairs with a write barrier in commit_creds().
+         */
+        smp_rmb();
        mm = task->mm;
        if (mm &&
            ((get_dumpable(mm) != SUID_DUMP_USER) &&
···
        if (arg.nr < 0)
                return -EINVAL;

+        /* Ensure arg.off fits in an unsigned long */
+        if (arg.off > ULONG_MAX)
+                return 0;
+
        if (arg.flags & PTRACE_PEEKSIGINFO_SHARED)
                pending = &child->signal->shared_pending;
        else
···
        for (i = 0; i < arg.nr; ) {
                kernel_siginfo_t info;
-                s32 off = arg.off + i;
+                unsigned long off = arg.off + i;
+                bool found = false;

                spin_lock_irq(&child->sighand->siglock);
                list_for_each_entry(q, &pending->list, list) {
                        if (!off--) {
+                                found = true;
                                copy_siginfo(&info, &q->info);
                                break;
                        }
                }
                spin_unlock_irq(&child->sighand->siglock);

-                if (off >= 0) /* beyond the end of the list */
+                if (!found) /* beyond the end of the list */
                        break;

 #ifdef CONFIG_COMPAT
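The move from `s32 off` to `unsigned long off` matters because `arg.off` is a 64-bit value supplied by userspace: truncating it into a signed 32-bit cursor can wrap a huge offset back into the valid range. A toy model of the cursor logic (not the kernel code; `old_found`/`new_found` are hypothetical names, and the behavior of the truncating conversion assumes a common LP64 target):

```c
#include <assert.h>
#include <stdint.h>

/* Old behavior: the user-supplied 64-bit offset was truncated into a
 * signed 32-bit cursor, so an offset like 2^32 wrapped to 0 and
 * "found" the first pending entry instead of reporting end-of-list. */
static int old_found(uint64_t user_off, int nr_pending)
{
    int32_t off = (int32_t)user_off;   /* truncating conversion */
    for (int i = 0; i < nr_pending; i++)
        if (!off--)
            return 1;                  /* entry "found" */
    return 0;
}

/* Fixed behavior: keep the full-width cursor plus an explicit found
 * flag, as the patch does (assumes 64-bit unsigned long). */
static int new_found(uint64_t user_off, int nr_pending)
{
    unsigned long off = (unsigned long)user_off;
    for (int i = 0; i < nr_pending; i++)
        if (!off--)
            return 1;
    return 0;                          /* beyond the end of the list */
}
```

With three pending entries, an offset of 2^32 is reported as "found entry 0" by the old logic and correctly as past-the-end by the new one.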
+3 -2
kernel/time/timekeeping.c
···
        struct timekeeper *tk = &tk_core.timekeeper;
        unsigned int seq;
        ktime_t base, *offset = offsets[offs];
+        u64 nsecs;

        WARN_ON(timekeeping_suspended);

        do {
                seq = read_seqcount_begin(&tk_core.seq);
                base = ktime_add(tk->tkr_mono.base, *offset);
+                nsecs = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;

        } while (read_seqcount_retry(&tk_core.seq, seq));

-        return base;
-
+        return base + nsecs;
 }
 EXPORT_SYMBOL_GPL(ktime_get_coarse_with_offset);
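The granularity repair above amounts to adding the accumulated sub-tick remainder (`xtime_nsec`, stored shifted up by `shift`) to the base instead of returning the base alone. A toy model of that arithmetic, with made-up field values (`struct toy_tk` and both helpers are hypothetical, only mirroring the kernel field names):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the timekeeper fields involved. */
struct toy_tk {
    int64_t  base;        /* ktime base, in ns */
    uint64_t xtime_nsec;  /* shifted-up nanosecond remainder */
    uint32_t shift;
};

/* Old: coarse time snapped to the last base update. */
static int64_t coarse_old(const struct toy_tk *tk)
{
    return tk->base;
}

/* Fixed: include the shifted-down remainder, as the patch does. */
static int64_t coarse_fixed(const struct toy_tk *tk)
{
    return tk->base + (int64_t)(tk->xtime_nsec >> tk->shift);
}
```

With a base of 1000 ns and 500 ns of accumulated remainder at shift 8, the old path reports 1000 ns while the fixed path reports 1500 ns.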
+16 -6
kernel/trace/ftrace.c
···
 #include <linux/hash.h>
 #include <linux/rcupdate.h>
 #include <linux/kprobes.h>
+#include <linux/memory.h>

 #include <trace/events/sched.h>
···
 {
        int ret;

+        mutex_lock(&text_mutex);
+
        ret = ftrace_arch_code_modify_prepare();
        FTRACE_WARN_ON(ret);
        if (ret)
-                return;
+                goto out_unlock;

        /*
         * By default we use stop_machine() to modify the code.
···
        ret = ftrace_arch_code_modify_post_process();
        FTRACE_WARN_ON(ret);
+
+out_unlock:
+        mutex_unlock(&text_mutex);
 }

 static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
···
                        p = &pg->records[i];
                        p->flags = rec_flags;

-#ifndef CC_USING_NOP_MCOUNT
                        /*
                         * Do the initial record conversion from mcount jump
                         * to the NOP instructions.
                         */
-                        if (!ftrace_code_disable(mod, p))
+                        if (!__is_defined(CC_USING_NOP_MCOUNT) &&
+                            !ftrace_code_disable(mod, p))
                                break;
-#endif

                        update_cnt++;
                }
···
        struct ftrace_func_entry *entry;
        struct ftrace_func_map *map;
        struct hlist_head *hhd;
-        int size = 1 << mapper->hash.size_bits;
-        int i;
+        int size, i;
+
+        if (!mapper)
+                return;

        if (free_func && mapper->hash.count) {
+                size = 1 << mapper->hash.size_bits;
                for (i = 0; i < size; i++) {
                        hhd = &mapper->hash.buckets[i];
                        hlist_for_each_entry(entry, hhd, hlist) {
···
        struct ftrace_page *pg;

        mutex_lock(&ftrace_lock);
+        mutex_lock(&text_mutex);

        if (ftrace_disabled)
                goto out_unlock;
···
        ftrace_arch_code_modify_post_process();

 out_unlock:
+        mutex_unlock(&text_mutex);
        mutex_unlock(&ftrace_lock);

        process_cached_mods(mod->name);
+2 -2
kernel/trace/trace.c
···
 static DEFINE_MUTEX(tracing_err_log_lock);

-struct tracing_log_err *get_tracing_log_err(struct trace_array *tr)
+static struct tracing_log_err *get_tracing_log_err(struct trace_array *tr)
 {
        struct tracing_log_err *err;
···
        .llseek         = default_llseek,
 };

-struct dentry *trace_instance_dir;
+static struct dentry *trace_instance_dir;

 static void
 init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer);
+1 -1
kernel/trace/trace_output.c
···
        trace_seq_puts(s, "<stack trace>\n");

-        for (p = field->caller; p && *p != ULONG_MAX && p < end; p++) {
+        for (p = field->caller; p && p < end && *p != ULONG_MAX; p++) {

                if (trace_seq_has_overflowed(s))
                        break;
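The one-line reorder above is the classic "bounds check before dereference" fix: `p < end` must be evaluated before `*p`, so a record that lacks the `ULONG_MAX` terminator can never be read past its end. The same short-circuit idiom in isolation (`count_entries` is a hypothetical helper, not the kernel function):

```c
#include <assert.h>
#include <stddef.h>

/* Count entries in a buffer that may or may not contain the
 * (unsigned long)-1 terminator. Because && short-circuits, the
 * bounds test runs first and an unterminated buffer is never
 * dereferenced out of range. */
static size_t count_entries(const unsigned long *p, const unsigned long *end)
{
    size_t n = 0;
    for (; p && p < end && *p != (unsigned long)-1; p++)
        n++;
    return n;
}
```

A terminated buffer stops at the sentinel; an unterminated one stops at `end`; a NULL pointer is never dereferenced.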
+10 -5
kernel/trace/trace_uprobe.c
···
 /*
  * Argument syntax:
  *  - Add uprobe: p|r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS]
- *
- *  - Remove uprobe: -:[GRP/]EVENT
  */
 static int trace_uprobe_create(int argc, const char **argv)
 {
···
        ret = 0;
        ref_ctr_offset = 0;

-        /* argc must be >= 1 */
-        if (argv[0][0] == 'r')
+        switch (argv[0][0]) {
+        case 'r':
                is_return = true;
-        else if (argv[0][0] != 'p' || argc < 2)
+                break;
+        case 'p':
+                break;
+        default:
+                return -ECANCELED;
+        }
+
+        if (argc < 2)
                return -ECANCELED;

        if (argv[0][1] == ':')
+25 -26
lib/genalloc.c
···
 EXPORT_SYMBOL(gen_pool_create);

 /**
- * gen_pool_add_virt - add a new chunk of special memory to the pool
+ * gen_pool_add_owner- add a new chunk of special memory to the pool
  * @pool: pool to add new memory chunk to
  * @virt: virtual starting address of memory chunk to add to pool
  * @phys: physical starting address of memory chunk to add to pool
  * @size: size in bytes of the memory chunk to add to pool
  * @nid: node id of the node the chunk structure and bitmap should be
  *       allocated on, or -1
+ * @owner: private data the publisher would like to recall at alloc time
  *
  * Add a new chunk of special memory to the specified pool.
  *
  * Returns 0 on success or a -ve errno on failure.
  */
-int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phys,
-                 size_t size, int nid)
+int gen_pool_add_owner(struct gen_pool *pool, unsigned long virt, phys_addr_t phys,
+                 size_t size, int nid, void *owner)
 {
        struct gen_pool_chunk *chunk;
        int nbits = size >> pool->min_alloc_order;
···
        chunk->phys_addr = phys;
        chunk->start_addr = virt;
        chunk->end_addr = virt + size - 1;
+        chunk->owner = owner;
        atomic_long_set(&chunk->avail, size);

        spin_lock(&pool->lock);
···
        return 0;
 }
-EXPORT_SYMBOL(gen_pool_add_virt);
+EXPORT_SYMBOL(gen_pool_add_owner);

 /**
  * gen_pool_virt_to_phys - return the physical address of memory
···
 EXPORT_SYMBOL(gen_pool_destroy);

 /**
- * gen_pool_alloc - allocate special memory from the pool
- * @pool: pool to allocate from
- * @size: number of bytes to allocate from the pool
- *
- * Allocate the requested number of bytes from the specified pool.
- * Uses the pool allocation function (with first-fit algorithm by default).
- * Can not be used in NMI handler on architectures without
- * NMI-safe cmpxchg implementation.
- */
-unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size)
-{
-        return gen_pool_alloc_algo(pool, size, pool->algo, pool->data);
-}
-EXPORT_SYMBOL(gen_pool_alloc);
-
-/**
- * gen_pool_alloc_algo - allocate special memory from the pool
+ * gen_pool_alloc_algo_owner - allocate special memory from the pool
  * @pool: pool to allocate from
  * @size: number of bytes to allocate from the pool
  * @algo: algorithm passed from caller
  * @data: data passed to algorithm
+ * @owner: optionally retrieve the chunk owner
  *
  * Allocate the requested number of bytes from the specified pool.
  * Uses the pool allocation function (with first-fit algorithm by default).
  * Can not be used in NMI handler on architectures without
  * NMI-safe cmpxchg implementation.
  */
-unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
-                genpool_algo_t algo, void *data)
+unsigned long gen_pool_alloc_algo_owner(struct gen_pool *pool, size_t size,
+                genpool_algo_t algo, void *data, void **owner)
 {
        struct gen_pool_chunk *chunk;
        unsigned long addr = 0;
···
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
        BUG_ON(in_nmi());
 #endif
+
+        if (owner)
+                *owner = NULL;

        if (size == 0)
                return 0;
···
                addr = chunk->start_addr + ((unsigned long)start_bit << order);
                size = nbits << order;
                atomic_long_sub(size, &chunk->avail);
+                if (owner)
+                        *owner = chunk->owner;
                break;
        }
        rcu_read_unlock();
        return addr;
 }
-EXPORT_SYMBOL(gen_pool_alloc_algo);
+EXPORT_SYMBOL(gen_pool_alloc_algo_owner);

 /**
  * gen_pool_dma_alloc - allocate special memory from the pool for DMA usage
···
  * @pool: pool to free to
  * @addr: starting address of memory to free back to pool
  * @size: size in bytes of memory to free
+ * @owner: private data stashed at gen_pool_add() time
  *
  * Free previously allocated special memory back to the specified
  * pool.  Can not be used in NMI handler on architectures without
  * NMI-safe cmpxchg implementation.
  */
-void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
+void gen_pool_free_owner(struct gen_pool *pool, unsigned long addr, size_t size,
+                void **owner)
 {
        struct gen_pool_chunk *chunk;
        int order = pool->min_alloc_order;
···
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
        BUG_ON(in_nmi());
 #endif
+
+        if (owner)
+                *owner = NULL;

        nbits = (size + (1UL << order) - 1) >> order;
        rcu_read_lock();
···
                BUG_ON(remain);
                size = nbits << order;
                atomic_long_add(size, &chunk->avail);
+                if (owner)
+                        *owner = chunk->owner;
                rcu_read_unlock();
                return;
        }
···
        rcu_read_unlock();
        BUG();
 }
-EXPORT_SYMBOL(gen_pool_free);
+EXPORT_SYMBOL(gen_pool_free_owner);

 /**
  * gen_pool_for_each_chunk - call func for every chunk of generic memory pool
+3 -11
mm/hmm.c
···
        complete(&devmem->completion);
 }

-static void hmm_devmem_ref_exit(void *data)
+static void hmm_devmem_ref_exit(struct percpu_ref *ref)
 {
-        struct percpu_ref *ref = data;
        struct hmm_devmem *devmem;

        devmem = container_of(ref, struct hmm_devmem, ref);
···
        if (ret)
                return ERR_PTR(ret);

-        ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, &devmem->ref);
-        if (ret)
-                return ERR_PTR(ret);
-
        size = ALIGN(size, PA_SECTION_SIZE);
        addr = min((unsigned long)iomem_resource.end,
                   (1UL << MAX_PHYSMEM_BITS) - 1);
···
        devmem->pagemap.ref = &devmem->ref;
        devmem->pagemap.data = devmem;
        devmem->pagemap.kill = hmm_devmem_ref_kill;
+        devmem->pagemap.cleanup = hmm_devmem_ref_exit;

        result = devm_memremap_pages(devmem->device, &devmem->pagemap);
        if (IS_ERR(result))
···
        if (ret)
                return ERR_PTR(ret);

-        ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit,
-                                       &devmem->ref);
-        if (ret)
-                return ERR_PTR(ret);
-
        devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
        devmem->pfn_last = devmem->pfn_first +
                           (resource_size(devmem->resource) >> PAGE_SHIFT);
···
        devmem->pagemap.ref = &devmem->ref;
        devmem->pagemap.data = devmem;
        devmem->pagemap.kill = hmm_devmem_ref_kill;
+        devmem->pagemap.cleanup = hmm_devmem_ref_exit;

        result = devm_memremap_pages(devmem->device, &devmem->pagemap);
        if (IS_ERR(result))
+3
mm/khugepaged.c
···
         * handled by the anon_vma lock + PG_lock.
         */
        down_write(&mm->mmap_sem);
+        result = SCAN_ANY_PROCESS;
+        if (!mmget_still_valid(mm))
+                goto out;
        result = hugepage_vma_revalidate(mm, address, &vma);
        if (result)
                goto out;
+1 -1
mm/list_lru.c
···
        }
        return 0;
 fail:
-        __memcg_destroy_list_lru_node(memcg_lrus, begin, i - 1);
+        __memcg_destroy_list_lru_node(memcg_lrus, begin, i);
        return -ENOMEM;
 }
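The one-character fix above is an off-by-one in cleanup: when allocation fails at index `i`, the slots `[begin, i)` have already been allocated, and the destroy helper takes an exclusive end, so passing `i - 1` leaked the element at `i - 1`. A toy model of the pattern (`leaked` and the slot array are hypothetical, purely illustrative):

```c
#include <assert.h>
#include <string.h>

#define SLOTS 8

/* Simulate: slots [0, fail_at) were allocated before a failure, then
 * a destroy pass with an *exclusive* end frees [0, destroy_end).
 * Returns how many slots are still marked live afterwards. */
static int leaked(int fail_at, int destroy_end)
{
    int live[SLOTS];
    memset(live, 0, sizeof(live));
    for (int i = 0; i < fail_at; i++)
        live[i] = 1;                    /* allocated */
    for (int i = 0; i < destroy_end; i++)
        live[i] = 0;                    /* destroyed */
    int n = 0;
    for (int i = 0; i < SLOTS; i++)
        n += live[i];
    return n;
}
```

With a failure at index 3, destroying `[0, 3 - 1)` leaks one slot; destroying `[0, 3)` leaks none, which is exactly what the patch changes.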
+28 -13
mm/memcontrol.c
···
        if (mem_cgroup_disabled())
                return;

+        __this_cpu_add(memcg->vmstats_local->stat[idx], val);
+
        x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);
        if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
                struct mem_cgroup *mi;

-                atomic_long_add(x, &memcg->vmstats_local[idx]);
                for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
                        atomic_long_add(x, &mi->vmstats[idx]);
                x = 0;
···
        __mod_memcg_state(memcg, idx, val);

        /* Update lruvec */
+        __this_cpu_add(pn->lruvec_stat_local->count[idx], val);
+
        x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
        if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
                struct mem_cgroup_per_node *pi;

-                atomic_long_add(x, &pn->lruvec_stat_local[idx]);
                for (pi = pn; pi; pi = parent_nodeinfo(pi, pgdat->node_id))
                        atomic_long_add(x, &pi->lruvec_stat[idx]);
                x = 0;
···
        if (mem_cgroup_disabled())
                return;

+        __this_cpu_add(memcg->vmstats_local->events[idx], count);
+
        x = count + __this_cpu_read(memcg->vmstats_percpu->events[idx]);
        if (unlikely(x > MEMCG_CHARGE_BATCH)) {
                struct mem_cgroup *mi;

-                atomic_long_add(x, &memcg->vmevents_local[idx]);
                for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
                        atomic_long_add(x, &mi->vmevents[idx]);
                x = 0;
···

 static unsigned long memcg_events_local(struct mem_cgroup *memcg, int event)
 {
-        return atomic_long_read(&memcg->vmevents_local[event]);
+        long x = 0;
+        int cpu;
+
+        for_each_possible_cpu(cpu)
+                x += per_cpu(memcg->vmstats_local->events[event], cpu);
+        return x;
 }

 static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
···
                        long x;

                        x = this_cpu_xchg(memcg->vmstats_percpu->stat[i], 0);
-                        if (x) {
-                                atomic_long_add(x, &memcg->vmstats_local[i]);
+                        if (x)
                                for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
                                        atomic_long_add(x, &memcg->vmstats[i]);
-                        }

                        if (i >= NR_VM_NODE_STAT_ITEMS)
                                continue;
···
                        pn = mem_cgroup_nodeinfo(memcg, nid);
                        x = this_cpu_xchg(pn->lruvec_stat_cpu->count[i], 0);
-                        if (x) {
-                                atomic_long_add(x, &pn->lruvec_stat_local[i]);
+                        if (x)
                                do {
                                        atomic_long_add(x, &pn->lruvec_stat[i]);
                                } while ((pn = parent_nodeinfo(pn, nid)));
-                        }
                }
        }
···
                        long x;

                        x = this_cpu_xchg(memcg->vmstats_percpu->events[i], 0);
-                        if (x) {
-                                atomic_long_add(x, &memcg->vmevents_local[i]);
+                        if (x)
                                for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
                                        atomic_long_add(x, &memcg->vmevents[i]);
-                        }
                }
        }
···
        if (!pn)
                return 1;

+        pn->lruvec_stat_local = alloc_percpu(struct lruvec_stat);
+        if (!pn->lruvec_stat_local) {
+                kfree(pn);
+                return 1;
+        }
+
        pn->lruvec_stat_cpu = alloc_percpu(struct lruvec_stat);
        if (!pn->lruvec_stat_cpu) {
+                free_percpu(pn->lruvec_stat_local);
                kfree(pn);
                return 1;
        }
···
                return;

        free_percpu(pn->lruvec_stat_cpu);
+        free_percpu(pn->lruvec_stat_local);
        kfree(pn);
 }
···
        for_each_node(node)
                free_mem_cgroup_per_node_info(memcg, node);
        free_percpu(memcg->vmstats_percpu);
+        free_percpu(memcg->vmstats_local);
        kfree(memcg);
 }
···
                                 1, MEM_CGROUP_ID_MAX,
                                 GFP_KERNEL);
        if (memcg->id.id < 0)
+                goto fail;
+
+        memcg->vmstats_local = alloc_percpu(struct memcg_vmstats_percpu);
+        if (!memcg->vmstats_local)
                goto fail;

        memcg->vmstats_percpu = alloc_percpu(struct memcg_vmstats_percpu);
+4 -3
mm/mlock.c
···
  * is also counted.
  * Return value: previously mlocked page counts
  */
-static int count_mm_mlocked_page_nr(struct mm_struct *mm,
+static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
                unsigned long start, size_t len)
 {
        struct vm_area_struct *vma;
-        int count = 0;
+        unsigned long count = 0;

        if (mm == NULL)
                mm = current->mm;
···
        unsigned long lock_limit;
        int ret;

-        if (!flags || (flags & ~(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT)))
+        if (!flags || (flags & ~(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT)) ||
+            flags == MCL_ONFAULT)
                return -EINVAL;

        if (!can_do_mlock())
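The `int` → `unsigned long` change matters because on 64-bit systems a task can have more than `INT_MAX` pages mlocked across its VMAs, overflowing a 32-bit signed counter. A sketch of the widened accumulator (`count_pages` is a hypothetical stand-in, and a 64-bit `unsigned long` is assumed):

```c
#include <assert.h>
#include <limits.h>

/* Sum per-VMA page counts in an unsigned long, as the fixed
 * count_mm_mlocked_page_nr() does; two 3-gigapage VMAs already
 * exceed what an int accumulator could represent. */
static unsigned long count_pages(const unsigned long *vma_pages, int n)
{
    unsigned long count = 0;
    for (int i = 0; i < n; i++)
        count += vma_pages[i];
    return count;
}
```

The total for two `3 << 30` page VMAs is about 6.4 billion, comfortably above `INT_MAX`, so the old signed 32-bit counter could not have held it.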
+19 -5
mm/mmu_gather.c
···
 {
        /*
         * If there are parallel threads are doing PTE changes on same range
-         * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
-         * flush by batching, a thread has stable TLB entry can fail to flush
-         * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
-         * forcefully if we detect parallel PTE batching threads.
+         * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB
+         * flush by batching, one thread may end up seeing inconsistent PTEs
+         * and result in having stale TLB entries.  So flush TLB forcefully
+         * if we detect parallel PTE batching threads.
+         *
+         * However, some syscalls, e.g. munmap(), may free page tables, this
+         * needs force flush everything in the given range. Otherwise this
+         * may result in having stale TLB entries for some architectures,
+         * e.g. aarch64, that could specify flush what level TLB.
         */
        if (mm_tlb_flush_nested(tlb->mm)) {
+                /*
+                 * The aarch64 yields better performance with fullmm by
+                 * avoiding multiple CPUs spamming TLBI messages at the
+                 * same time.
+                 *
+                 * On x86 non-fullmm doesn't yield significant difference
+                 * against fullmm.
+                 */
+                tlb->fullmm = 1;
                __tlb_reset_range(tlb);
-                __tlb_adjust_range(tlb, start, end - start);
+                tlb->freed_tables = 1;
        }

        tlb_flush_mmu(tlb);
+8 -6
mm/vmalloc.c
···
 /* Handle removing and resetting vm mappings related to the vm_struct. */
 static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 {
-        unsigned long addr = (unsigned long)area->addr;
        unsigned long start = ULONG_MAX, end = 0;
        int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
+        int flush_dmap = 0;
        int i;

        /*
···
         * execute permissions, without leaving a RW+X window.
         */
        if (flush_reset && !IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-                set_memory_nx(addr, area->nr_pages);
-                set_memory_rw(addr, area->nr_pages);
+                set_memory_nx((unsigned long)area->addr, area->nr_pages);
+                set_memory_rw((unsigned long)area->addr, area->nr_pages);
        }

        remove_vm_area(area->addr);
···
         * the vm_unmap_aliases() flush includes the direct map.
         */
        for (i = 0; i < area->nr_pages; i++) {
-                if (page_address(area->pages[i])) {
+                unsigned long addr = (unsigned long)page_address(area->pages[i]);
+                if (addr) {
                        start = min(addr, start);
-                        end = max(addr, end);
+                        end = max(addr + PAGE_SIZE, end);
+                        flush_dmap = 1;
                }
        }

···
         * reset the direct map permissions to the default.
         */
        set_area_direct_map(area, set_direct_map_invalid_noflush);
-        _vm_unmap_aliases(start, end, 1);
+        _vm_unmap_aliases(start, end, flush_dmap);
        set_area_direct_map(area, set_direct_map_default_noflush);
 }
+3 -3
mm/vmscan.c
···
        list_for_each_entry_safe(page, next, page_list, lru) {
                if (page_is_file_cache(page) && !PageDirty(page) &&
-                    !__PageMovable(page)) {
+                    !__PageMovable(page) && !PageUnevictable(page)) {
                        ClearPageActive(page);
                        list_move(&page->lru, &clean_pages);
                }
···
        if (global_reclaim(sc))
                __count_vm_events(item, nr_reclaimed);
        __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
-        reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
-        reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
+        reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
+        reclaim_stat->recent_rotated[1] += stat.nr_activate[1];

        move_pages_to_lru(lruvec, &page_list);
+1 -1
scripts/decode_stacktrace.sh
···
        if [[ "${cache[$module,$address]+isset}" == "isset" ]]; then
                local code=${cache[$module,$address]}
        else
-                local code=$(addr2line -i -e "$objfile" "$address")
+                local code=$(${CROSS_COMPILE}addr2line -i -e "$objfile" "$address")
                cache[$module,$address]=$code
        fi
+8 -2
security/selinux/avc.c
···
        rc = security_sid_to_context_inval(sad->state, sad->ssid, &scontext,
                                           &scontext_len);
        if (!rc && scontext) {
-                audit_log_format(ab, " srawcon=%s", scontext);
+                if (scontext_len && scontext[scontext_len - 1] == '\0')
+                        scontext_len--;
+                audit_log_format(ab, " srawcon=");
+                audit_log_n_untrustedstring(ab, scontext, scontext_len);
                kfree(scontext);
        }

        rc = security_sid_to_context_inval(sad->state, sad->tsid, &scontext,
                                           &scontext_len);
        if (!rc && scontext) {
-                audit_log_format(ab, " trawcon=%s", scontext);
+                if (scontext_len && scontext[scontext_len - 1] == '\0')
+                        scontext_len--;
+                audit_log_format(ab, " trawcon=");
+                audit_log_n_untrustedstring(ab, scontext, scontext_len);
                kfree(scontext);
        }
 }
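The length trim added above exists because the context length returned here includes the trailing NUL, and `audit_log_n_untrustedstring()` would otherwise escape that NUL into the log. The trim in isolation (`trim_trailing_nul` is a hypothetical helper mirroring the added check, not a kernel function):

```c
#include <assert.h>
#include <stddef.h>

/* Drop one trailing '\0' from a (buf, len) pair if present, so a
 * length-delimited logger does not emit an escaped NUL byte. */
static size_t trim_trailing_nul(const char *buf, size_t len)
{
    if (len && buf[len - 1] == '\0')
        len--;
    return len;
}
```

A length that already excludes the terminator is returned unchanged, and a zero length is left alone, so the check is safe to apply unconditionally.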
+28 -11
security/selinux/hooks.c
···
                if (token == Opt_error)
                        return -EINVAL;

-                if (token != Opt_seclabel)
-                        val = kmemdup_nul(val, len, GFP_KERNEL);
+                if (token != Opt_seclabel) {
+                        val = kmemdup_nul(val, len, GFP_KERNEL);
+                        if (!val) {
+                                rc = -ENOMEM;
+                                goto free_opt;
+                        }
+                }
                rc = selinux_add_opt(token, val, mnt_opts);
                if (unlikely(rc)) {
                        kfree(val);
-                        if (*mnt_opts) {
-                                selinux_free_mnt_opts(*mnt_opts);
-                                *mnt_opts = NULL;
-                        }
+                        goto free_opt;
+                }
+                return rc;
+
+free_opt:
+                if (*mnt_opts) {
+                        selinux_free_mnt_opts(*mnt_opts);
+                        *mnt_opts = NULL;
                }
                return rc;
 }
···
        char *from = options;
        char *to = options;
        bool first = true;
+        int rc;

        while (1) {
                int len = opt_len(from);
-                int token, rc;
+                int token;
                char *arg = NULL;

                token = match_opt_prefix(from, len, &arg);
···
                                *q++ = c;
                        }
                        arg = kmemdup_nul(arg, q - arg, GFP_KERNEL);
+                        if (!arg) {
+                                rc = -ENOMEM;
+                                goto free_opt;
+                        }
                }
                rc = selinux_add_opt(token, arg, mnt_opts);
                if (unlikely(rc)) {
                        kfree(arg);
-                        if (*mnt_opts) {
-                                selinux_free_mnt_opts(*mnt_opts);
-                                *mnt_opts = NULL;
-                        }
-                        return rc;
+                        goto free_opt;
                }
        } else {
                if (!first) {        // copy with preceding comma
···
        }
        *to = '\0';
        return 0;
+
+free_opt:
+        if (*mnt_opts) {
+                selinux_free_mnt_opts(*mnt_opts);
+                *mnt_opts = NULL;
+        }
+        return rc;
 }

 static int selinux_sb_remount(struct super_block *sb, void *mnt_opts)
+7 -5
security/smack/smack_lsm.c
···
        int len;
        int opt;
 } smk_mount_opts[] = {
+        {"smackfsdef", sizeof("smackfsdef") - 1, Opt_fsdefault},
        A(fsdefault), A(fsfloor), A(fshat), A(fsroot), A(fstransmute)
 };
 #undef A
···
 }

 static const struct fs_parameter_spec smack_param_specs[] = {
-        fsparam_string("fsdefault", Opt_fsdefault),
-        fsparam_string("fsfloor", Opt_fsfloor),
-        fsparam_string("fshat", Opt_fshat),
-        fsparam_string("fsroot", Opt_fsroot),
-        fsparam_string("fstransmute", Opt_fstransmute),
+        fsparam_string("smackfsdef", Opt_fsdefault),
+        fsparam_string("smackfsdefault", Opt_fsdefault),
+        fsparam_string("smackfsfloor", Opt_fsfloor),
+        fsparam_string("smackfshat", Opt_fshat),
+        fsparam_string("smackfsroot", Opt_fsroot),
+        fsparam_string("smackfstransmute", Opt_fstransmute),
        {}
 };
+1 -1
sound/firewire/motu/motu-stream.c
···
        }

        amdtp_stream_destroy(stream);
-        fw_iso_resources_free(resources);
+        fw_iso_resources_destroy(resources);
 }

 int snd_motu_stream_init_duplex(struct snd_motu *motu)
-3
sound/firewire/oxfw/oxfw.c
···
        oxfw->midi_input_ports = 0;
        oxfw->midi_output_ports = 0;

-        /* Output stream exists but no data channels are useful. */
-        oxfw->has_output = false;
-
        return snd_oxfw_scs1x_add(oxfw);
 }
-1
sound/hda/ext/hdac_ext_bus.c
··· 162 162 void snd_hdac_ext_bus_device_exit(struct hdac_device *hdev) 163 163 { 164 164 snd_hdac_device_exit(hdev); 165 - kfree(hdev); 166 165 } 167 166 EXPORT_SYMBOL_GPL(snd_hdac_ext_bus_device_exit); 168 167
+8 -1
sound/pci/hda/hda_codec.c
··· 826 826 if (codec->core.type == HDA_DEV_LEGACY) 827 827 snd_hdac_device_unregister(&codec->core); 828 828 codec_display_power(codec, false); 829 - put_device(hda_codec_dev(codec)); 829 + 830 + /* 831 + * In the case of ASoC HD-audio bus, the device refcount is released in 832 + * snd_hdac_ext_bus_device_remove() explicitly. 833 + */ 834 + if (codec->core.type == HDA_DEV_LEGACY) 835 + put_device(hda_codec_dev(codec)); 836 + 830 837 return 0; 831 838 } 832 839
+66 -27
sound/pci/hda/patch_realtek.c
··· 4120 4120 static void alc_headset_mode_unplugged(struct hda_codec *codec) 4121 4121 { 4122 4122 static struct coef_fw coef0255[] = { 4123 + WRITE_COEF(0x1b, 0x0c0b), /* LDO and MISC control */ 4123 4124 WRITE_COEF(0x45, 0xd089), /* UAJ function set to menual mode */ 4124 4125 UPDATE_COEFEX(0x57, 0x05, 1<<14, 0), /* Direct Drive HP Amp control(Set to verb control)*/ 4125 4126 WRITE_COEF(0x06, 0x6104), /* Set MIC2 Vref gate with HP */ 4126 4127 WRITE_COEFEX(0x57, 0x03, 0x8aa6), /* Direct Drive HP Amp control */ 4127 4128 {} 4128 4129 }; 4129 - static struct coef_fw coef0255_1[] = { 4130 - WRITE_COEF(0x1b, 0x0c0b), /* LDO and MISC control */ 4131 - {} 4132 - }; 4133 4130 static struct coef_fw coef0256[] = { 4134 4131 WRITE_COEF(0x1b, 0x0c4b), /* LDO and MISC control */ 4132 + WRITE_COEF(0x45, 0xd089), /* UAJ function set to menual mode */ 4133 + WRITE_COEF(0x06, 0x6104), /* Set MIC2 Vref gate with HP */ 4134 + WRITE_COEFEX(0x57, 0x03, 0x09a3), /* Direct Drive HP Amp control */ 4135 + UPDATE_COEFEX(0x57, 0x05, 1<<14, 0), /* Direct Drive HP Amp control(Set to verb control)*/ 4135 4136 {} 4136 4137 }; 4137 4138 static struct coef_fw coef0233[] = { ··· 4195 4194 4196 4195 switch (codec->core.vendor_id) { 4197 4196 case 0x10ec0255: 4198 - alc_process_coef_fw(codec, coef0255_1); 4199 4197 alc_process_coef_fw(codec, coef0255); 4200 4198 break; 4201 4199 case 0x10ec0236: 4202 4200 case 0x10ec0256: 4203 4201 alc_process_coef_fw(codec, coef0256); 4204 - alc_process_coef_fw(codec, coef0255); 4205 4202 break; 4206 4203 case 0x10ec0234: 4207 4204 case 0x10ec0274: ··· 4252 4253 WRITE_COEF(0x06, 0x6100), /* Set MIC2 Vref gate to normal */ 4253 4254 {} 4254 4255 }; 4256 + static struct coef_fw coef0256[] = { 4257 + UPDATE_COEFEX(0x57, 0x05, 1<<14, 1<<14), /* Direct Drive HP Amp control(Set to verb control)*/ 4258 + WRITE_COEFEX(0x57, 0x03, 0x09a3), 4259 + WRITE_COEF(0x06, 0x6100), /* Set MIC2 Vref gate to normal */ 4260 + {} 4261 + }; 4255 4262 static struct coef_fw coef0233[] = 
{ 4256 4263 UPDATE_COEF(0x35, 0, 1<<14), 4257 4264 WRITE_COEF(0x06, 0x2100), ··· 4305 4300 }; 4306 4301 4307 4302 switch (codec->core.vendor_id) { 4308 - case 0x10ec0236: 4309 4303 case 0x10ec0255: 4310 - case 0x10ec0256: 4311 4304 alc_write_coef_idx(codec, 0x45, 0xc489); 4312 4305 snd_hda_set_pin_ctl_cache(codec, hp_pin, 0); 4313 4306 alc_process_coef_fw(codec, coef0255); 4307 + snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50); 4308 + break; 4309 + case 0x10ec0236: 4310 + case 0x10ec0256: 4311 + alc_write_coef_idx(codec, 0x45, 0xc489); 4312 + snd_hda_set_pin_ctl_cache(codec, hp_pin, 0); 4313 + alc_process_coef_fw(codec, coef0256); 4314 4314 snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50); 4315 4315 break; 4316 4316 case 0x10ec0234: ··· 4399 4389 WRITE_COEF(0x49, 0x0049), 4400 4390 {} 4401 4391 }; 4392 + static struct coef_fw coef0256[] = { 4393 + WRITE_COEF(0x45, 0xc489), 4394 + WRITE_COEFEX(0x57, 0x03, 0x0da3), 4395 + WRITE_COEF(0x49, 0x0049), 4396 + UPDATE_COEFEX(0x57, 0x05, 1<<14, 0), /* Direct Drive HP Amp control(Set to verb control)*/ 4397 + WRITE_COEF(0x06, 0x6100), 4398 + {} 4399 + }; 4402 4400 static struct coef_fw coef0233[] = { 4403 4401 WRITE_COEF(0x06, 0x2100), 4404 4402 WRITE_COEF(0x32, 0x4ea3), ··· 4457 4439 alc_process_coef_fw(codec, alc225_pre_hsmode); 4458 4440 alc_process_coef_fw(codec, coef0225); 4459 4441 break; 4460 - case 0x10ec0236: 4461 4442 case 0x10ec0255: 4462 - case 0x10ec0256: 4463 4443 alc_process_coef_fw(codec, coef0255); 4444 + break; 4445 + case 0x10ec0236: 4446 + case 0x10ec0256: 4447 + alc_write_coef_idx(codec, 0x1b, 0x0e4b); 4448 + alc_write_coef_idx(codec, 0x45, 0xc089); 4449 + msleep(50); 4450 + alc_process_coef_fw(codec, coef0256); 4464 4451 break; 4465 4452 case 0x10ec0234: 4466 4453 case 0x10ec0274: ··· 4510 4487 }; 4511 4488 static struct coef_fw coef0256[] = { 4512 4489 WRITE_COEF(0x45, 0xd489), /* Set to CTIA type */ 4513 - WRITE_COEF(0x1b, 0x0c6b), 4514 - WRITE_COEFEX(0x57, 0x03, 0x8ea6), 4490 + 
WRITE_COEF(0x1b, 0x0e6b), 4515 4491 {} 4516 4492 }; 4517 4493 static struct coef_fw coef0233[] = { ··· 4628 4606 }; 4629 4607 static struct coef_fw coef0256[] = { 4630 4608 WRITE_COEF(0x45, 0xe489), /* Set to OMTP Type */ 4631 - WRITE_COEF(0x1b, 0x0c6b), 4632 - WRITE_COEFEX(0x57, 0x03, 0x8ea6), 4609 + WRITE_COEF(0x1b, 0x0e6b), 4633 4610 {} 4634 4611 }; 4635 4612 static struct coef_fw coef0233[] = { ··· 4760 4739 }; 4761 4740 4762 4741 switch (codec->core.vendor_id) { 4763 - case 0x10ec0236: 4764 4742 case 0x10ec0255: 4765 - case 0x10ec0256: 4766 4743 alc_process_coef_fw(codec, coef0255); 4767 4744 msleep(300); 4768 4745 val = alc_read_coef_idx(codec, 0x46); 4769 4746 is_ctia = (val & 0x0070) == 0x0070; 4747 + break; 4748 + case 0x10ec0236: 4749 + case 0x10ec0256: 4750 + alc_write_coef_idx(codec, 0x1b, 0x0e4b); 4751 + alc_write_coef_idx(codec, 0x06, 0x6104); 4752 + alc_write_coefex_idx(codec, 0x57, 0x3, 0x09a3); 4753 + 4754 + snd_hda_codec_write(codec, 0x21, 0, 4755 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 4756 + msleep(80); 4757 + snd_hda_codec_write(codec, 0x21, 0, 4758 + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); 4759 + 4760 + alc_process_coef_fw(codec, coef0255); 4761 + msleep(300); 4762 + val = alc_read_coef_idx(codec, 0x46); 4763 + is_ctia = (val & 0x0070) == 0x0070; 4764 + 4765 + alc_write_coefex_idx(codec, 0x57, 0x3, 0x0da3); 4766 + alc_update_coefex_idx(codec, 0x57, 0x5, 1<<14, 0); 4767 + 4768 + snd_hda_codec_write(codec, 0x21, 0, 4769 + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); 4770 + msleep(80); 4771 + snd_hda_codec_write(codec, 0x21, 0, 4772 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); 4770 4773 break; 4771 4774 case 0x10ec0234: 4772 4775 case 0x10ec0274: ··· 6255 6210 .chain_id = ALC269_FIXUP_THINKPAD_ACPI, 6256 6211 }, 6257 6212 [ALC255_FIXUP_ACER_MIC_NO_PRESENCE] = { 6258 - .type = HDA_FIXUP_VERBS, 6259 - .v.verbs = (const struct hda_verb[]) { 6260 - /* Enable the Mic */ 6261 - { 0x20, AC_VERB_SET_COEF_INDEX, 0x45 }, 6262 - { 0x20, 
AC_VERB_SET_PROC_COEF, 0x5089 }, 6263 - {} 6213 + .type = HDA_FIXUP_PINS, 6214 + .v.pins = (const struct hda_pintbl[]) { 6215 + { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 6216 + { } 6264 6217 }, 6265 6218 .chained = true, 6266 - .chain_id = ALC269_FIXUP_LIFEBOOK_EXTMIC 6219 + .chain_id = ALC255_FIXUP_HEADSET_MODE 6267 6220 }, 6268 6221 [ALC255_FIXUP_ASUS_MIC_NO_PRESENCE] = { 6269 6222 .type = HDA_FIXUP_PINS, ··· 7305 7262 {0x18, 0x02a11030}, 7306 7263 {0x19, 0x0181303F}, 7307 7264 {0x21, 0x0221102f}), 7308 - SND_HDA_PIN_QUIRK(0x10ec0255, 0x1025, "Acer", ALC255_FIXUP_ACER_MIC_NO_PRESENCE, 7309 - {0x12, 0x90a60140}, 7310 - {0x14, 0x90170120}, 7311 - {0x21, 0x02211030}), 7312 7265 SND_HDA_PIN_QUIRK(0x10ec0255, 0x1025, "Acer", ALC255_FIXUP_ACER_MIC_NO_PRESENCE, 7313 7266 {0x12, 0x90a601c0}, 7314 7267 {0x14, 0x90171120},
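Most of the patch_realtek.c churn reshuffles `coef_fw` tables: zero-terminated `{index, value}` sequences that `alc_process_coef_fw()` plays into the codec in order. A deliberately simplified stand-alone model of that table-driven pattern (plain writes only, no `UPDATE_COEF` masking, and a flat array stands in for the codec's coefficient registers):

```c
/* A terminated {index, value} sequence, like the coef0255/coef0256 tables. */
struct coef_fw { int idx; unsigned int val; };

void process_coef_fw(unsigned int *regs, const struct coef_fw *fw)
{
	for (; fw->idx || fw->val; fw++)	/* {0, 0} ends the table */
		regs[fw->idx] = fw->val;
}
```

Because the tables are applied in order, the fixes above mostly consist of moving entries between per-chip tables and changing the sequence in which tables run for the ALC236/ALC256 parts.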
+1 -1
sound/pci/ice1712/ews.c
··· 812 812 813 813 snd_i2c_lock(ice->i2c); 814 814 byte = reg; 815 - if (snd_i2c_sendbytes(spec->i2cdevs[EWS_I2C_6FIRE], &byte, 1)) { 815 + if (snd_i2c_sendbytes(spec->i2cdevs[EWS_I2C_6FIRE], &byte, 1) != 1) { 816 816 snd_i2c_unlock(ice->i2c); 817 817 dev_err(ice->card->dev, "cannot send pca\n"); 818 818 return -EIO;
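The ews.c fix works because `snd_i2c_sendbytes()` returns the number of bytes transferred, not zero on success, so `if (snd_i2c_sendbytes(...))` treated every successful one-byte send as an error. A sketch of the corrected check with a hypothetical transmit helper:

```c
#include <errno.h>

/* Hypothetical transmit helper: returns the number of bytes sent. */
int sendbytes(const unsigned char *buf, int count)
{
	(void)buf;
	return count;		/* pretend the transfer succeeds */
}

int write_reg(unsigned char reg)
{
	unsigned char byte = reg;

	/* Buggy form: `if (sendbytes(&byte, 1))` fires on success (== 1). */
	if (sendbytes(&byte, 1) != 1)
		return -EIO;
	return 0;
}
```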
+11 -7
sound/soc/codecs/ak4458.c
··· 304 304 AK4458_00_CONTROL1, 305 305 AK4458_RSTN_MASK, 306 306 0x0); 307 - return ret; 307 + if (ret < 0) 308 + return ret; 309 + 310 + return 0; 308 311 } 309 312 310 313 static int ak4458_hw_params(struct snd_pcm_substream *substream, ··· 539 536 } 540 537 } 541 538 542 - static void ak4458_init(struct snd_soc_component *component) 539 + static int ak4458_init(struct snd_soc_component *component) 543 540 { 544 541 struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component); 542 + int ret; 545 543 546 544 /* External Mute ON */ 547 545 if (ak4458->mute_gpiod) ··· 550 546 551 547 ak4458_power_on(ak4458); 552 548 553 - snd_soc_component_update_bits(component, AK4458_00_CONTROL1, 549 + ret = snd_soc_component_update_bits(component, AK4458_00_CONTROL1, 554 550 0x80, 0x80); /* ACKS bit = 1; 10000000 */ 551 + if (ret < 0) 552 + return ret; 555 553 556 - ak4458_rstn_control(component, 1); 554 + return ak4458_rstn_control(component, 1); 557 555 } 558 556 559 557 static int ak4458_probe(struct snd_soc_component *component) 560 558 { 561 559 struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component); 562 560 563 - ak4458_init(component); 564 - 565 561 ak4458->fs = 48000; 566 562 567 - return 0; 563 + return ak4458_init(component); 568 564 } 569 565 570 566 static void ak4458_remove(struct snd_soc_component *component)
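The ak4458 change threads error codes up the init chain: `ak4458_init()` now returns a value and `ak4458_probe()` returns it instead of a blanket 0. A minimal sketch of that propagation, with a test knob simulating regmap I/O failure (names are illustrative):

```c
#include <errno.h>

int io_ok = 1;		/* test knob: simulate regmap I/O failing */

int update_bits(int reg, int mask, int val)
{
	(void)reg; (void)mask; (void)val;
	return io_ok ? 0 : -EIO;
}

int codec_init(void)
{
	int ret = update_bits(0x00, 0x80, 0x80);	/* ACKS bit */

	if (ret < 0)
		return ret;	/* previously this result was discarded */
	return 0;
}

/* Probe now reports init failures instead of always succeeding. */
int codec_probe(void)
{
	return codec_init();
}
```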
+1 -1
sound/soc/codecs/cs4265.c
··· 60 60 static bool cs4265_readable_register(struct device *dev, unsigned int reg) 61 61 { 62 62 switch (reg) { 63 - case CS4265_CHIP_ID ... CS4265_SPDIF_CTL2: 63 + case CS4265_CHIP_ID ... CS4265_MAX_REGISTER: 64 64 return true; 65 65 default: 66 66 return false;
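The cs4265 one-liner widens the readable range from `CS4265_SPDIF_CTL2` to `CS4265_MAX_REGISTER`, so registers past the SPDIF control block become visible to regmap reads. The shape of the predicate, with illustrative register numbers (not the real cs4265.h values):

```c
/* Register numbers are illustrative, not the real cs4265.h values. */
enum {
	CS4265_CHIP_ID      = 0x01,
	CS4265_SPDIF_CTL2   = 0x12,
	CS4265_MAX_REGISTER = 0x1e,
};

int readable_register(unsigned int reg)
{
	/* was `reg <= CS4265_SPDIF_CTL2`, hiding the registers above it */
	return reg >= CS4265_CHIP_ID && reg <= CS4265_MAX_REGISTER;
}
```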
+1
sound/soc/codecs/cs42xx8.c
··· 558 558 msleep(5); 559 559 560 560 regcache_cache_only(cs42xx8->regmap, false); 561 + regcache_mark_dirty(cs42xx8->regmap); 561 562 562 563 ret = regcache_sync(cs42xx8->regmap); 563 564 if (ret) {
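The cs42xx8 fix matters because the device loses register state while powered down, but the regcache still considers those registers clean; `regcache_sync()` alone would skip them. `regcache_mark_dirty()` forces every cached value to be rewritten. A miniature cache model showing why the extra call is needed:

```c
#define NREGS 4

struct regcache {
	unsigned int cache[NREGS];
	unsigned int hw[NREGS];		/* simulated hardware registers */
	int dirty[NREGS];
};

void cache_write(struct regcache *rc, int reg, unsigned int val)
{
	rc->cache[reg] = val;
	rc->hw[reg] = val;
	rc->dirty[reg] = 0;		/* written through: clean */
}

void power_loss(struct regcache *rc)
{
	for (int i = 0; i < NREGS; i++)
		rc->hw[i] = 0;		/* hardware resets, cache keeps values */
}

void mark_dirty(struct regcache *rc)
{
	for (int i = 0; i < NREGS; i++)
		rc->dirty[i] = 1;
}

void cache_sync(struct regcache *rc)
{
	for (int i = 0; i < NREGS; i++)
		if (rc->dirty[i]) {
			rc->hw[i] = rc->cache[i];
			rc->dirty[i] = 0;
		}
}
```

This is only a sketch of the semantics; the real regmap cache tracks dirtiness per register against default values.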
+16
sound/soc/codecs/max98090.c
··· 1909 1909 return 0; 1910 1910 } 1911 1911 1912 + static int max98090_dai_startup(struct snd_pcm_substream *substream, 1913 + struct snd_soc_dai *dai) 1914 + { 1915 + struct snd_soc_component *component = dai->component; 1916 + struct max98090_priv *max98090 = snd_soc_component_get_drvdata(component); 1917 + unsigned int fmt = max98090->dai_fmt; 1918 + 1919 + /* Remove 24-bit format support if it is not in right justified mode. */ 1920 + if ((fmt & SND_SOC_DAIFMT_FORMAT_MASK) != SND_SOC_DAIFMT_RIGHT_J) { 1921 + substream->runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE; 1922 + snd_pcm_hw_constraint_msbits(substream->runtime, 0, 16, 16); 1923 + } 1924 + return 0; 1925 + } 1926 + 1912 1927 static int max98090_dai_hw_params(struct snd_pcm_substream *substream, 1913 1928 struct snd_pcm_hw_params *params, 1914 1929 struct snd_soc_dai *dai) ··· 2331 2316 #define MAX98090_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) 2332 2317 2333 2318 static const struct snd_soc_dai_ops max98090_dai_ops = { 2319 + .startup = max98090_dai_startup, 2334 2320 .set_sysclk = max98090_dai_set_sysclk, 2335 2321 .set_fmt = max98090_dai_set_fmt, 2336 2322 .set_tdm_slot = max98090_set_tdm_slot,
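The new max98090 `startup` hook narrows the advertised formats before `hw_params` runs: 24-bit is only kept when the DAI format is right-justified. A sketch of that runtime-constraint idea with hypothetical bit values (not the real ALSA format bits):

```c
/* Illustrative format bits and DAI formats (not the real ALSA values). */
#define FMT_S16_LE	(1u << 0)
#define FMT_S24_LE	(1u << 1)

enum dai_fmt { FMT_I2S, FMT_RIGHT_J };

struct runtime { unsigned int formats; };

void dai_startup(struct runtime *rt, enum dai_fmt fmt)
{
	/* 24-bit only works in right-justified mode on this codec */
	if (fmt != FMT_RIGHT_J)
		rt->formats = FMT_S16_LE;
}
```

The kernel version additionally calls `snd_pcm_hw_constraint_msbits()` so userspace sees the reduced sample width, not just the reduced format list.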
+2 -1
sound/soc/codecs/rt274.c
··· 405 405 { 406 406 struct rt274_priv *rt274 = snd_soc_component_get_drvdata(component); 407 407 408 + rt274->jack = jack; 409 + 408 410 if (jack == NULL) { 409 411 /* Disable jack detection */ 410 412 regmap_update_bits(rt274->regmap, RT274_EAPD_GPIO_IRQ_CTRL, ··· 414 412 415 413 return 0; 416 414 } 417 - rt274->jack = jack; 418 415 419 416 regmap_update_bits(rt274->regmap, RT274_EAPD_GPIO_IRQ_CTRL, 420 417 RT274_IRQ_EN, RT274_IRQ_EN);
+12
sound/soc/codecs/rt5670.c
··· 2882 2882 RT5670_DEV_GPIO | 2883 2883 RT5670_JD_MODE3), 2884 2884 }, 2885 + { 2886 + .callback = rt5670_quirk_cb, 2887 + .ident = "Aegex 10 tablet (RU2)", 2888 + .matches = { 2889 + DMI_MATCH(DMI_SYS_VENDOR, "AEGEX"), 2890 + DMI_MATCH(DMI_PRODUCT_VERSION, "RU2"), 2891 + }, 2892 + .driver_data = (unsigned long *)(RT5670_DMIC_EN | 2893 + RT5670_DMIC2_INR | 2894 + RT5670_DEV_GPIO | 2895 + RT5670_JD_MODE3), 2896 + }, 2885 2897 {} 2886 2898 }; 2887 2899
+3 -2
sound/soc/codecs/rt5677-spi.c
··· 101 101 u32 word_size = min_t(u32, dstlen, 8); 102 102 103 103 for (w = 0; w < dstlen; w += word_size) { 104 - for (i = 0; i < word_size; i++) { 104 + for (i = 0; i < word_size && i + w < dstlen; i++) { 105 105 si = w + word_size - i - 1; 106 106 dst[w + i] = si < srclen ? src[si] : 0; 107 107 } ··· 152 152 status |= spi_sync(g_spi, &m); 153 153 mutex_unlock(&spi_mutex); 154 154 155 + 155 156 /* Copy data back to caller buffer */ 156 - rt5677_spi_reverse(cb + offset, t[1].len, body, t[1].len); 157 + rt5677_spi_reverse(cb + offset, len - offset, body, t[1].len); 157 158 } 158 159 return status; 159 160 }
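The rt5677-spi fix adds the `i + w < dstlen` guard so the final partial word can never write past the end of `dst`, and the caller now passes the true remaining length (`len - offset`) instead of the padded transfer length. A userspace reimplementation of the fixed loop:

```c
#include <stddef.h>

/* Byte-swap src into dst in up-to-8-byte words, never exceeding dstlen. */
void spi_reverse(unsigned char *dst, size_t dstlen,
		 const unsigned char *src, size_t srclen)
{
	size_t word_size = dstlen < 8 ? dstlen : 8;
	size_t w, i, si;

	for (w = 0; w < dstlen; w += word_size) {
		/* the i + w < dstlen guard is the overflow fix */
		for (i = 0; i < word_size && i + w < dstlen; i++) {
			si = w + word_size - i - 1;
			dst[w + i] = si < srclen ? src[si] : 0;
		}
	}
}
```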
+2 -2
sound/soc/fsl/fsl_asrc.c
··· 282 282 return -EINVAL; 283 283 } 284 284 285 - if ((outrate > 8000 && outrate < 30000) && 286 - (outrate/inrate > 24 || inrate/outrate > 8)) { 285 + if ((outrate >= 8000 && outrate <= 30000) && 286 + (outrate > 24 * inrate || inrate > 8 * outrate)) { 287 287 pair_err("exceed supported ratio range [1/24, 8] for \ 288 288 inrate/outrate: %d/%d\n", inrate, outrate); 289 289 return -EINVAL;
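The fsl_asrc fix does two things: it makes the 8000 and 30000 Hz boundaries inclusive, and it replaces integer divisions with multiplications, because `outrate/inrate > 24` truncates — a ratio of 24.5 rounds down to 24 and slips through. A sketch of the corrected predicate:

```c
/* Reject inrate/outrate ratios outside [1/24, 8] when the output rate
 * falls in the 8-30 kHz window the hardware restricts. */
int ratio_ok(int inrate, int outrate)
{
	if (outrate >= 8000 && outrate <= 30000 &&
	    (outrate > 24 * inrate || inrate > 8 * outrate))
		return 0;
	return 1;
}
```

Comparing `outrate > 24 * inrate` is exact, whereas the old `outrate/inrate > 24` silently accepted any fractional excess below the next integer.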
+2 -2
sound/soc/intel/atom/sst/sst_pvt.c
··· 158 158 { 159 159 struct ipc_post *msg; 160 160 161 - msg = kzalloc(sizeof(*msg), GFP_KERNEL); 161 + msg = kzalloc(sizeof(*msg), GFP_ATOMIC); 162 162 if (!msg) 163 163 return -ENOMEM; 164 164 if (large) { 165 - msg->mailbox_data = kzalloc(SST_MAILBOX_SIZE, GFP_KERNEL); 165 + msg->mailbox_data = kzalloc(SST_MAILBOX_SIZE, GFP_ATOMIC); 166 166 if (!msg->mailbox_data) { 167 167 kfree(msg); 168 168 return -ENOMEM;
+1 -1
sound/soc/intel/boards/bytcht_es8316.c
··· 487 487 } 488 488 489 489 /* override plaform name, if required */ 490 + byt_cht_es8316_card.dev = dev; 490 491 platform_name = mach->mach_params.platform; 491 492 492 493 ret = snd_soc_fixup_dai_links_platform_name(&byt_cht_es8316_card, ··· 568 567 (quirk & BYT_CHT_ES8316_MONO_SPEAKER) ? "mono" : "stereo", 569 568 mic_name[BYT_CHT_ES8316_MAP(quirk)]); 570 569 byt_cht_es8316_card.long_name = long_name; 571 - byt_cht_es8316_card.dev = dev; 572 570 snd_soc_card_set_drvdata(&byt_cht_es8316_card, priv); 573 571 574 572 ret = devm_snd_soc_register_card(dev, &byt_cht_es8316_card);
+1 -1
sound/soc/intel/boards/cht_bsw_max98090_ti.c
··· 446 446 } 447 447 448 448 /* override plaform name, if required */ 449 + snd_soc_card_cht.dev = &pdev->dev; 449 450 mach = (&pdev->dev)->platform_data; 450 451 platform_name = mach->mach_params.platform; 451 452 ··· 456 455 return ret_val; 457 456 458 457 /* register the soc card */ 459 - snd_soc_card_cht.dev = &pdev->dev; 460 458 snd_soc_card_set_drvdata(&snd_soc_card_cht, drv); 461 459 462 460 if (drv->quirks & QUIRK_PMC_PLT_CLK_0)
+1 -1
sound/soc/intel/boards/cht_bsw_nau8824.c
··· 249 249 snd_soc_card_set_drvdata(&snd_soc_card_cht, drv); 250 250 251 251 /* override plaform name, if required */ 252 + snd_soc_card_cht.dev = &pdev->dev; 252 253 mach = (&pdev->dev)->platform_data; 253 254 platform_name = mach->mach_params.platform; 254 255 ··· 259 258 return ret_val; 260 259 261 260 /* register the soc card */ 262 - snd_soc_card_cht.dev = &pdev->dev; 263 261 ret_val = devm_snd_soc_register_card(&pdev->dev, &snd_soc_card_cht); 264 262 if (ret_val) { 265 263 dev_err(&pdev->dev,
+1 -1
sound/soc/intel/boards/cht_bsw_rt5672.c
··· 418 418 } 419 419 420 420 /* override plaform name, if required */ 421 + snd_soc_card_cht.dev = &pdev->dev; 421 422 platform_name = mach->mach_params.platform; 422 423 423 424 ret_val = snd_soc_fixup_dai_links_platform_name(&snd_soc_card_cht, ··· 436 435 snd_soc_card_set_drvdata(&snd_soc_card_cht, drv); 437 436 438 437 /* register the soc card */ 439 - snd_soc_card_cht.dev = &pdev->dev; 440 438 ret_val = devm_snd_soc_register_card(&pdev->dev, &snd_soc_card_cht); 441 439 if (ret_val) { 442 440 dev_err(&pdev->dev,
+6 -5
sound/soc/intel/boards/sof_rt5682.c
··· 29 29 #define SOF_RT5682_MCLK_EN BIT(3) 30 30 #define SOF_RT5682_MCLK_24MHZ BIT(4) 31 31 #define SOF_SPEAKER_AMP_PRESENT BIT(5) 32 - #define SOF_RT5682_SSP_AMP(quirk) ((quirk) & GENMASK(8, 6)) 33 - #define SOF_RT5682_SSP_AMP_MASK (GENMASK(8, 6)) 34 32 #define SOF_RT5682_SSP_AMP_SHIFT 6 33 + #define SOF_RT5682_SSP_AMP_MASK (GENMASK(8, 6)) 34 + #define SOF_RT5682_SSP_AMP(quirk) \ 35 + (((quirk) << SOF_RT5682_SSP_AMP_SHIFT) & SOF_RT5682_SSP_AMP_MASK) 35 36 36 37 /* Default: MCLK on, MCLK 19.2M, SSP0 */ 37 38 static unsigned long sof_rt5682_quirk = SOF_RT5682_MCLK_EN | ··· 145 144 jack = &ctx->sof_headset; 146 145 147 146 snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE); 148 - snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOLUMEUP); 149 - snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEDOWN); 150 - snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOICECOMMAND); 147 + snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOICECOMMAND); 148 + snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEUP); 149 + snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOLUMEDOWN); 151 150 ret = snd_soc_component_set_jack(component, jack, NULL); 152 151 153 152 if (ret) {
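The sof_rt5682 macro fix is that `SOF_RT5682_SSP_AMP(quirk)` previously only masked its argument without shifting it, so any small SSP number landed in the wrong bits. The corrected encode/decode pair, sketched with a local `GENMASK`:

```c
/* Pack a small value into bits 8..6 of a quirk word. */
#define GENMASK(h, l)	(((1UL << ((h) - (l) + 1)) - 1) << (l))

#define SSP_AMP_SHIFT	6
#define SSP_AMP_MASK	GENMASK(8, 6)
/* Fixed form: shift first, then mask (the old macro never shifted). */
#define SSP_AMP(q)	(((unsigned long)(q) << SSP_AMP_SHIFT) & SSP_AMP_MASK)

unsigned long ssp_amp_get(unsigned long quirk)
{
	return (quirk & SSP_AMP_MASK) >> SSP_AMP_SHIFT;
}
```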
+17
sound/soc/intel/common/soc-acpi-intel-byt-match.c
··· 13 13 14 14 #define BYT_THINKPAD_10 1 15 15 #define BYT_POV_P1006W 2 16 + #define BYT_AEGEX_10 3 16 17 17 18 static int byt_thinkpad10_quirk_cb(const struct dmi_system_id *id) 18 19 { ··· 24 23 static int byt_pov_p1006w_quirk_cb(const struct dmi_system_id *id) 25 24 { 26 25 byt_machine_id = BYT_POV_P1006W; 26 + return 1; 27 + } 28 + 29 + static int byt_aegex10_quirk_cb(const struct dmi_system_id *id) 30 + { 31 + byt_machine_id = BYT_AEGEX_10; 27 32 return 1; 28 33 } 29 34 ··· 73 66 DMI_EXACT_MATCH(DMI_BOARD_NAME, "0E57"), 74 67 }, 75 68 }, 69 + { 70 + /* Aegex 10 tablet (RU2) */ 71 + .callback = byt_aegex10_quirk_cb, 72 + .matches = { 73 + DMI_MATCH(DMI_SYS_VENDOR, "AEGEX"), 74 + DMI_MATCH(DMI_PRODUCT_VERSION, "RU2"), 75 + }, 76 + }, 76 77 { } 77 78 }; 78 79 80 + /* The Thinkapd 10 and Aegex 10 tablets have the same ID problem */ 79 81 static struct snd_soc_acpi_mach byt_thinkpad_10 = { 80 82 .id = "10EC5640", 81 83 .drv_name = "cht-bsw-rt5672", ··· 111 95 112 96 switch (byt_machine_id) { 113 97 case BYT_THINKPAD_10: 98 + case BYT_AEGEX_10: 114 99 return &byt_thinkpad_10; 115 100 case BYT_POV_P1006W: 116 101 return &byt_pov_p1006w;
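The byt-match hunk adds the Aegex 10 to the DMI quirk table because it reuses the Thinkpad 10's ACPI ID; a callback fires on the first entry whose fields all match and records which machine driver to use. A much-simplified model of that table walk (the kernel's `dmi_check_system()`/`DMI_MATCH` do substring matching, mirrored here with `strstr`; names are illustrative):

```c
#include <string.h>

/* Simplified DMI quirk table: first entry whose fields all match fires. */
struct dmi_id {
	int (*callback)(void);
	const char *vendor;
	const char *version;
};

int machine_id;

int aegex10_cb(void)    { machine_id = 3; return 1; }	/* BYT_AEGEX_10 */
int thinkpad10_cb(void) { machine_id = 1; return 1; }

const struct dmi_id quirk_table[] = {
	{ thinkpad10_cb, "LENOVO", "ThinkPad 10" },
	{ aegex10_cb,    "AEGEX",  "RU2" },
	{ 0, 0, 0 }
};

void dmi_check(const char *vendor, const char *version)
{
	for (const struct dmi_id *d = quirk_table; d->callback; d++)
		if (strstr(vendor, d->vendor) && strstr(version, d->version)) {
			d->callback();
			return;
		}
}
```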
+6 -6
sound/soc/intel/common/soc-acpi-intel-cnl-match.c
··· 29 29 .sof_tplg_filename = "sof-cnl-rt274.tplg", 30 30 }, 31 31 { 32 - .id = "10EC5682", 33 - .drv_name = "sof_rt5682", 34 - .sof_fw_filename = "sof-cnl.ri", 35 - .sof_tplg_filename = "sof-cml-rt5682.tplg", 36 - }, 37 - { 38 32 .id = "MX98357A", 39 33 .drv_name = "sof_rt5682", 40 34 .quirk_data = &cml_codecs, 41 35 .sof_fw_filename = "sof-cnl.ri", 42 36 .sof_tplg_filename = "sof-cml-rt5682-max98357a.tplg", 37 + }, 38 + { 39 + .id = "10EC5682", 40 + .drv_name = "sof_rt5682", 41 + .sof_fw_filename = "sof-cnl.ri", 42 + .sof_tplg_filename = "sof-cml-rt5682.tplg", 43 43 }, 44 44 45 45 {},
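The cnl-match reordering works because the ACPI machine table is scanned top to bottom and the first matching entry wins: the `MX98357A` entry (rt5682 plus speaker amp) must be tried before the bare `10EC5682` entry, or a system with both codecs would pick the amp-less configuration. A sketch of first-match-wins ordering (the real code also consults `quirk_data`; this only models the ordering):

```c
#include <string.h>

struct mach { const char *id; const char *tplg; };

/* More specific entries must precede generic ones: first match wins. */
const struct mach machines[] = {
	{ "MX98357A", "sof-cml-rt5682-max98357a.tplg" },
	{ "10EC5682", "sof-cml-rt5682.tplg" },
	{ 0, 0 }
};

const char *match_machine(const char *const *present_ids)
{
	for (const struct mach *m = machines; m->id; m++)
		for (const char *const *id = present_ids; *id; id++)
			if (strcmp(m->id, *id) == 0)
				return m->tplg;
	return 0;
}
```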
+1 -1
sound/soc/mediatek/Kconfig
··· 133 133 134 134 config SND_SOC_MT8183_DA7219_MAX98357A 135 135 tristate "ASoC Audio driver for MT8183 with DA7219 MAX98357A codec" 136 - depends on SND_SOC_MT8183 136 + depends on SND_SOC_MT8183 && I2C 137 137 select SND_SOC_MT6358 138 138 select SND_SOC_MAX98357A 139 139 select SND_SOC_DA7219
+18 -18
sound/soc/soc-core.c
··· 228 228 229 229 static void soc_cleanup_card_debugfs(struct snd_soc_card *card) 230 230 { 231 + if (!card->debugfs_card_root) 232 + return; 231 233 debugfs_remove_recursive(card->debugfs_card_root); 234 + card->debugfs_card_root = NULL; 232 235 } 233 236 234 237 static void snd_soc_debugfs_init(void) ··· 2040 2037 static int soc_cleanup_card_resources(struct snd_soc_card *card) 2041 2038 { 2042 2039 /* free the ALSA card at first; this syncs with pending operations */ 2043 - if (card->snd_card) 2040 + if (card->snd_card) { 2044 2041 snd_card_free(card->snd_card); 2042 + card->snd_card = NULL; 2043 + } 2045 2044 2046 2045 /* remove and free each DAI */ 2047 2046 soc_remove_dai_links(card); ··· 2070 2065 int ret, i, order; 2071 2066 2072 2067 mutex_lock(&client_mutex); 2068 + for_each_card_prelinks(card, i, dai_link) { 2069 + ret = soc_init_dai_link(card, dai_link); 2070 + if (ret) { 2071 + soc_cleanup_platform(card); 2072 + dev_err(card->dev, "ASoC: failed to init link %s: %d\n", 2073 + dai_link->name, ret); 2074 + mutex_unlock(&client_mutex); 2075 + return ret; 2076 + } 2077 + } 2073 2078 mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT); 2074 2079 2075 2080 card->dapm.bias_level = SND_SOC_BIAS_OFF; ··· 2804 2789 */ 2805 2790 int snd_soc_register_card(struct snd_soc_card *card) 2806 2791 { 2807 - int i, ret; 2808 - struct snd_soc_dai_link *link; 2809 - 2810 2792 if (!card->name || !card->dev) 2811 2793 return -EINVAL; 2812 - 2813 - mutex_lock(&client_mutex); 2814 - for_each_card_prelinks(card, i, link) { 2815 - 2816 - ret = soc_init_dai_link(card, link); 2817 - if (ret) { 2818 - soc_cleanup_platform(card); 2819 - dev_err(card->dev, "ASoC: failed to init link %s\n", 2820 - link->name); 2821 - mutex_unlock(&client_mutex); 2822 - return ret; 2823 - } 2824 - } 2825 - mutex_unlock(&client_mutex); 2826 2794 2827 2795 dev_set_drvdata(card->dev, card); 2828 2796 ··· 2837 2839 snd_soc_dapm_shutdown(card); 2838 2840 snd_soc_flush_all_delayed_work(card); 2839 2841 
2842 + mutex_lock(&client_mutex); 2840 2843 /* remove all components used by DAI links on this card */ 2841 2844 for_each_comp_order(order) { 2842 2845 for_each_card_rtds(card, rtd) { 2843 2846 soc_remove_link_components(card, rtd, order); 2844 2847 } 2845 2848 } 2849 + mutex_unlock(&client_mutex); 2846 2850 2847 2851 soc_cleanup_card_resources(card); 2848 2852 if (!unregister)
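A recurring shape in the soc-core (and soc-dapm) hunks is NULL-ing a pointer right after releasing it (`card->snd_card`, `debugfs_card_root`), so the cleanup path can safely run more than once — for example once on a failed probe and again on remove. A minimal sketch of that free-and-NULL idiom:

```c
#include <stdlib.h>

struct card { void *snd_card; };	/* stand-in for the ALSA card */

/* Safe to call any number of times: frees once, then becomes a no-op. */
void cleanup_card(struct card *card)
{
	if (card->snd_card) {
		free(card->snd_card);
		card->snd_card = NULL;
	}
}
```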
+5 -2
sound/soc/soc-dapm.c
··· 2193 2193 2194 2194 static void dapm_debugfs_cleanup(struct snd_soc_dapm_context *dapm) 2195 2195 { 2196 + if (!dapm->debugfs_dapm) 2197 + return; 2196 2198 debugfs_remove_recursive(dapm->debugfs_dapm); 2199 + dapm->debugfs_dapm = NULL; 2197 2200 } 2198 2201 2199 2202 #else ··· 3834 3831 ret); 3835 3832 goto out; 3836 3833 } 3837 - source->active++; 3838 3834 } 3835 + source->active++; 3839 3836 ret = soc_dai_hw_params(&substream, params, source); 3840 3837 if (ret < 0) 3841 3838 goto out; ··· 3856 3853 ret); 3857 3854 goto out; 3858 3855 } 3859 - sink->active++; 3860 3856 } 3857 + sink->active++; 3861 3858 ret = soc_dai_hw_params(&substream, params, sink); 3862 3859 if (ret < 0) 3863 3860 goto out;
+2 -1
sound/soc/soc-pcm.c
··· 2479 2479 2480 2480 if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_PARAMS) && 2481 2481 (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) && 2482 - (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND)) 2482 + (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND) && 2483 + (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) 2483 2484 continue; 2484 2485 2485 2486 dev_dbg(be->dev, "ASoC: prepare BE %s\n",
+6 -2
sound/soc/sof/Kconfig
··· 45 45 if SND_SOC_SOF_OPTIONS 46 46 47 47 config SND_SOC_SOF_NOCODEC 48 - tristate "SOF nocodec mode Support" 48 + tristate 49 + 50 + config SND_SOC_SOF_NOCODEC_SUPPORT 51 + bool "SOF nocodec mode support" 49 52 help 50 53 This adds support for a dummy/nocodec machine driver fallback 51 54 option if no known codec is detected. This is typically only ··· 84 81 85 82 config SND_SOC_SOF_FORCE_NOCODEC_MODE 86 83 bool "SOF force nocodec Mode" 87 - depends on SND_SOC_SOF_NOCODEC 84 + depends on SND_SOC_SOF_NOCODEC_SUPPORT 88 85 help 89 86 This forces SOF to use dummy/nocodec as machine driver, even 90 87 though there is a codec detected on the real platform. This is ··· 139 136 config SND_SOC_SOF 140 137 tristate 141 138 select SND_SOC_TOPOLOGY 139 + select SND_SOC_SOF_NOCODEC if SND_SOC_SOF_NOCODEC_SUPPORT 142 140 help 143 141 This option is not user-selectable but automagically handled by 144 142 'select' statements at a higher level
+5 -4
sound/soc/sof/control.c
··· 349 349 struct snd_sof_dev *sdev = scontrol->sdev; 350 350 struct sof_ipc_ctrl_data *cdata = scontrol->control_data; 351 351 struct sof_abi_hdr *data = cdata->data; 352 + size_t size = data->size + sizeof(*data); 352 353 int ret, err; 353 354 354 355 if (be->max > sizeof(ucontrol->value.bytes.data)) { ··· 359 358 return -EINVAL; 360 359 } 361 360 362 - if (data->size > be->max) { 361 + if (size > be->max) { 363 362 dev_err_ratelimited(sdev->dev, 364 - "error: size too big %d bytes max is %d\n", 365 - data->size, be->max); 363 + "error: size too big %zu bytes max is %d\n", 364 + size, be->max); 366 365 return -EINVAL; 367 366 } 368 367 ··· 376 375 } 377 376 378 377 /* copy from kcontrol */ 379 - memcpy(data, ucontrol->value.bytes.data, data->size); 378 + memcpy(data, ucontrol->value.bytes.data, size); 380 379 381 380 /* notify DSP of byte control updates */ 382 381 snd_sof_ipc_set_get_comp_data(sdev->ipc, scontrol,
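The SOF control fix is a classic header-plus-payload bounds bug: the byte control carries a `sof_abi_hdr` followed by `data->size` payload bytes, so both the limit check and the copy must use `data->size + sizeof(*data)`, not `data->size` alone. A simplified sketch of the corrected check:

```c
#include <errno.h>
#include <stddef.h>

struct abi_hdr {		/* simplified stand-in for sof_abi_hdr */
	unsigned int size;	/* payload bytes following this header */
	unsigned char data[];
};

int check_bytes_put(const struct abi_hdr *hdr, size_t max)
{
	size_t size = hdr->size + sizeof(*hdr);

	/* checking hdr->size alone lets header + payload overrun `max` */
	if (size > max)
		return -EINVAL;
	return 0;
}
```

The same total is then used as the `memcpy` length, so the copy and the check can never disagree.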
+25 -4
sound/soc/sof/core.c
··· 382 382 383 383 if (IS_ERR(plat_data->pdev_mach)) { 384 384 ret = PTR_ERR(plat_data->pdev_mach); 385 - goto comp_err; 385 + goto fw_run_err; 386 386 } 387 387 388 388 dev_dbg(sdev->dev, "created machine %s\n", ··· 393 393 394 394 return 0; 395 395 396 - comp_err: 397 - snd_soc_unregister_component(sdev->dev); 396 + #if !IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE) 398 397 fw_run_err: 399 398 snd_sof_fw_unload(sdev); 400 399 fw_load_err: ··· 402 403 snd_sof_free_debug(sdev); 403 404 dbg_err: 404 405 snd_sof_remove(sdev); 406 + #else 407 + 408 + /* 409 + * when the probe_continue is handled in a work queue, the 410 + * probe does not fail so we don't release resources here. 411 + * They will be released with an explicit call to 412 + * snd_sof_device_remove() when the PCI/ACPI device is removed 413 + */ 414 + 415 + fw_run_err: 416 + fw_load_err: 417 + ipc_err: 418 + dbg_err: 419 + 420 + #endif 405 421 406 422 return ret; 407 423 } ··· 498 484 snd_sof_ipc_free(sdev); 499 485 snd_sof_free_debug(sdev); 500 486 snd_sof_free_trace(sdev); 501 - snd_sof_remove(sdev); 502 487 503 488 /* 504 489 * Unregister machine driver. This will unbind the snd_card which ··· 506 493 */ 507 494 if (!IS_ERR_OR_NULL(pdata->pdev_mach)) 508 495 platform_device_unregister(pdata->pdev_mach); 496 + 497 + /* 498 + * Unregistering the machine driver results in unloading the topology. 499 + * Some widgets, ex: scheduler, attempt to power down the core they are 500 + * scheduled on, when they are unloaded. Therefore, the DSP must be 501 + * removed only after the topology has been unloaded. 502 + */ 503 + snd_sof_remove(sdev); 509 504 510 505 /* release firmware */ 511 506 release_firmware(pdata->fw);
+14 -12
sound/soc/sof/intel/bdw.c
··· 220 220 struct sof_ipc_panic_info *panic_info, 221 221 u32 *stack, size_t stack_words) 222 222 { 223 - /* first read regsisters */ 224 - sof_mailbox_read(sdev, sdev->dsp_oops_offset, xoops, sizeof(*xoops)); 223 + u32 offset = sdev->dsp_oops_offset; 224 + 225 + /* first read registers */ 226 + sof_mailbox_read(sdev, offset, xoops, sizeof(*xoops)); 227 + 228 + /* note: variable AR register array is not read */ 225 229 226 230 /* then get panic info */ 227 - sof_mailbox_read(sdev, sdev->dsp_oops_offset + sizeof(*xoops), 228 - panic_info, sizeof(*panic_info)); 231 + offset += xoops->arch_hdr.totalsize; 232 + sof_mailbox_read(sdev, offset, panic_info, sizeof(*panic_info)); 229 233 230 234 /* then get the stack */ 231 - sof_mailbox_read(sdev, sdev->dsp_oops_offset + sizeof(*xoops) + 232 - sizeof(*panic_info), stack, 233 - stack_words * sizeof(u32)); 235 + offset += sizeof(*panic_info); 236 + sof_mailbox_read(sdev, offset, stack, stack_words * sizeof(u32)); 234 237 } 235 238 236 239 static void bdw_dump(struct snd_sof_dev *sdev, u32 flags) ··· 286 283 SHIM_IMRX, SHIM_IMRX_DONE, 287 284 SHIM_IMRX_DONE); 288 285 286 + spin_lock_irq(&sdev->ipc_lock); 287 + 289 288 /* 290 289 * handle immediate reply from DSP core. 
If the msg is 291 290 * found, set done bit in cmd_done which is called at the ··· 299 294 snd_sof_ipc_reply(sdev, ipcx); 300 295 301 296 bdw_dsp_done(sdev); 297 + 298 + spin_unlock_irq(&sdev->ipc_lock); 302 299 } 303 300 304 301 ipcd = snd_sof_dsp_read(sdev, BDW_DSP_BAR, SHIM_IPCD); ··· 492 485 { 493 486 struct snd_sof_ipc_msg *msg = sdev->msg; 494 487 struct sof_ipc_reply reply; 495 - unsigned long flags; 496 488 int ret = 0; 497 489 498 490 /* ··· 506 500 507 501 /* get reply */ 508 502 sof_mailbox_read(sdev, sdev->host_box.offset, &reply, sizeof(reply)); 509 - 510 - spin_lock_irqsave(&sdev->ipc_lock, flags); 511 503 512 504 if (reply.error < 0) { 513 505 memcpy(msg->reply_data, &reply, sizeof(reply)); ··· 525 521 } 526 522 527 523 msg->reply_error = ret; 528 - 529 - spin_unlock_irqrestore(&sdev->ipc_lock, flags); 530 524 } 531 525 532 526 static void bdw_host_done(struct snd_sof_dev *sdev)
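The bdw.c (and matching byt.c) oops-readback fix walks the mailbox with a running offset, advancing by `xoops->arch_hdr.totalsize` — which covers the variable-size register data — instead of `sizeof(*xoops)`, which only covers the fixed header. A cursor-style sketch of parsing back-to-back regions (struct layouts simplified):

```c
#include <stddef.h>
#include <string.h>

struct arch_hdr   { unsigned int totalsize; };	/* simplified */
struct xoops      { struct arch_hdr arch_hdr; /* + variable registers */ };
struct panic_info { unsigned int code; };

/* Read regions laid out back to back, advancing a cursor as we go. */
void get_registers(const unsigned char *mailbox,
		   struct xoops *xoops, struct panic_info *panic,
		   unsigned int *stack, size_t stack_words)
{
	size_t offset = 0;

	memcpy(xoops, mailbox + offset, sizeof(*xoops));

	/* skip the whole variable-size oops record, not just its header */
	offset += xoops->arch_hdr.totalsize;
	memcpy(panic, mailbox + offset, sizeof(*panic));

	offset += sizeof(*panic);
	memcpy(stack, mailbox + offset, stack_words * sizeof(*stack));
}
```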
+14 -11
sound/soc/sof/intel/byt.c
··· 265 265 struct sof_ipc_panic_info *panic_info, 266 266 u32 *stack, size_t stack_words) 267 267 { 268 + u32 offset = sdev->dsp_oops_offset; 269 + 268 270 /* first read regsisters */ 269 - sof_mailbox_read(sdev, sdev->dsp_oops_offset, xoops, sizeof(*xoops)); 271 + sof_mailbox_read(sdev, offset, xoops, sizeof(*xoops)); 272 + 273 + /* note: variable AR register array is not read */ 270 274 271 275 /* then get panic info */ 272 - sof_mailbox_read(sdev, sdev->dsp_oops_offset + sizeof(*xoops), 273 - panic_info, sizeof(*panic_info)); 276 + offset += xoops->arch_hdr.totalsize; 277 + sof_mailbox_read(sdev, offset, panic_info, sizeof(*panic_info)); 274 278 275 279 /* then get the stack */ 276 - sof_mailbox_read(sdev, sdev->dsp_oops_offset + sizeof(*xoops) + 277 - sizeof(*panic_info), stack, 278 - stack_words * sizeof(u32)); 280 + offset += sizeof(*panic_info); 281 + sof_mailbox_read(sdev, offset, stack, stack_words * sizeof(u32)); 279 282 } 280 283 281 284 static void byt_dump(struct snd_sof_dev *sdev, u32 flags) ··· 332 329 SHIM_IMRX, 333 330 SHIM_IMRX_DONE, 334 331 SHIM_IMRX_DONE); 332 + 333 + spin_lock_irq(&sdev->ipc_lock); 334 + 335 335 /* 336 336 * handle immediate reply from DSP core. 
If the msg is 337 337 * found, set done bit in cmd_done which is called at the ··· 346 340 snd_sof_ipc_reply(sdev, ipcx); 347 341 348 342 byt_dsp_done(sdev); 343 + 344 + spin_unlock_irq(&sdev->ipc_lock); 349 345 } 350 346 351 347 /* new message from DSP */ ··· 391 383 { 392 384 struct snd_sof_ipc_msg *msg = sdev->msg; 393 385 struct sof_ipc_reply reply; 394 - unsigned long flags; 395 386 int ret = 0; 396 387 397 388 /* ··· 405 398 406 399 /* get reply */ 407 400 sof_mailbox_read(sdev, sdev->host_box.offset, &reply, sizeof(reply)); 408 - 409 - spin_lock_irqsave(&sdev->ipc_lock, flags); 410 401 411 402 if (reply.error < 0) { 412 403 memcpy(msg->reply_data, &reply, sizeof(reply)); ··· 424 419 } 425 420 426 421 msg->reply_error = ret; 427 - 428 - spin_unlock_irqrestore(&sdev->ipc_lock, flags); 429 422 } 430 423 431 424 static void byt_host_done(struct snd_sof_dev *sdev)
+4
sound/soc/sof/intel/cnl.c
··· 64 64 CNL_DSP_REG_HIPCCTL, 65 65 CNL_DSP_REG_HIPCCTL_DONE, 0); 66 66 67 + spin_lock_irq(&sdev->ipc_lock); 68 + 67 69 /* handle immediate reply from DSP core */ 68 70 hda_dsp_ipc_get_reply(sdev); 69 71 snd_sof_ipc_reply(sdev, msg); ··· 76 74 } 77 75 78 76 cnl_ipc_dsp_done(sdev); 77 + 78 + spin_unlock_irq(&sdev->ipc_lock); 79 79 80 80 ret = IRQ_HANDLED; 81 81 }
+93 -9
sound/soc/sof/intel/hda-ctrl.c
··· 161 161 return 0; 162 162 } 163 163 164 - #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 165 - /* 166 - * While performing reset, controller may not come back properly and causing 167 - * issues, so recommendation is to set CGCTL.MISCBDCGE to 0 then do reset 168 - * (init chip) and then again set CGCTL.MISCBDCGE to 1 169 - */ 170 164 int hda_dsp_ctrl_init_chip(struct snd_sof_dev *sdev, bool full_reset) 171 165 { 172 166 struct hdac_bus *bus = sof_to_bus(sdev); 173 - int ret; 167 + struct hdac_stream *stream; 168 + int sd_offset, ret = 0; 169 + 170 + if (bus->chip_init) 171 + return 0; 174 172 175 173 hda_dsp_ctrl_misc_clock_gating(sdev, false); 176 - ret = snd_hdac_bus_init_chip(bus, full_reset); 174 + 175 + if (full_reset) { 176 + /* clear WAKESTS */ 177 + snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, SOF_HDA_WAKESTS, 178 + SOF_HDA_WAKESTS_INT_MASK, 179 + SOF_HDA_WAKESTS_INT_MASK); 180 + 181 + /* reset HDA controller */ 182 + ret = hda_dsp_ctrl_link_reset(sdev, true); 183 + if (ret < 0) { 184 + dev_err(sdev->dev, "error: failed to reset HDA controller\n"); 185 + return ret; 186 + } 187 + 188 + usleep_range(500, 1000); 189 + 190 + /* exit HDA controller reset */ 191 + ret = hda_dsp_ctrl_link_reset(sdev, false); 192 + if (ret < 0) { 193 + dev_err(sdev->dev, "error: failed to exit HDA controller reset\n"); 194 + return ret; 195 + } 196 + 197 + usleep_range(1000, 1200); 198 + } 199 + 200 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 201 + /* check to see if controller is ready */ 202 + if (!snd_hdac_chip_readb(bus, GCTL)) { 203 + dev_dbg(bus->dev, "controller not ready!\n"); 204 + return -EBUSY; 205 + } 206 + 207 + /* Accept unsolicited responses */ 208 + snd_hdac_chip_updatel(bus, GCTL, AZX_GCTL_UNSOL, AZX_GCTL_UNSOL); 209 + 210 + /* detect codecs */ 211 + if (!bus->codec_mask) { 212 + bus->codec_mask = snd_hdac_chip_readw(bus, STATESTS); 213 + dev_dbg(bus->dev, "codec_mask = 0x%lx\n", bus->codec_mask); 214 + } 215 + #endif 216 + 217 + /* clear stream status */ 218 + list_for_each_entry(stream, &bus->stream_list, list) { 219 + sd_offset = SOF_STREAM_SD_OFFSET(stream); 220 + snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, 221 + sd_offset + 222 + SOF_HDA_ADSP_REG_CL_SD_STS, 223 + SOF_HDA_CL_DMA_SD_INT_MASK, 224 + SOF_HDA_CL_DMA_SD_INT_MASK); 225 + } 226 + 227 + /* clear WAKESTS */ 228 + snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, SOF_HDA_WAKESTS, 229 + SOF_HDA_WAKESTS_INT_MASK, 230 + SOF_HDA_WAKESTS_INT_MASK); 231 + 232 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 233 + /* clear rirb status */ 234 + snd_hdac_chip_writeb(bus, RIRBSTS, RIRB_INT_MASK); 235 + #endif 236 + 237 + /* clear interrupt status register */ 238 + snd_sof_dsp_write(sdev, HDA_DSP_HDA_BAR, SOF_HDA_INTSTS, 239 + SOF_HDA_INT_CTRL_EN | SOF_HDA_INT_ALL_STREAM); 240 + 241 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 242 + /* initialize the codec command I/O */ 243 + snd_hdac_bus_init_cmd_io(bus); 244 + #endif 245 + 246 + /* enable CIE and GIE interrupts */ 247 + snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, SOF_HDA_INTCTL, 248 + SOF_HDA_INT_CTRL_EN | SOF_HDA_INT_GLOBAL_EN, 249 + SOF_HDA_INT_CTRL_EN | SOF_HDA_INT_GLOBAL_EN); 250 + 251 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 252 + /* program the position buffer */ 253 + if (bus->use_posbuf && bus->posbuf.addr) { 254 + snd_hdac_chip_writel(bus, DPLBASE, (u32)bus->posbuf.addr); 255 + snd_hdac_chip_writel(bus, DPUBASE, 256 + upper_32_bits(bus->posbuf.addr)); 257 + } 258 + #endif 259 + 260 + bus->chip_init = true; 261 + 177 262 hda_dsp_ctrl_misc_clock_gating(sdev, true); 178 263 179 264 return ret; 180 265 } 181 - #endif
+14 -3
sound/soc/sof/intel/hda-ipc.c
··· 72 72 struct snd_sof_ipc_msg *msg = sdev->msg; 73 73 struct sof_ipc_reply reply; 74 74 struct sof_ipc_cmd_hdr *hdr; 75 - unsigned long flags; 76 75 int ret = 0; 77 76 78 77 /* ··· 83 84 dev_warn(sdev->dev, "unexpected ipc interrupt raised!\n"); 84 85 return; 85 86 } 86 - spin_lock_irqsave(&sdev->ipc_lock, flags); 87 87 88 88 hdr = msg->msg_data; 89 89 if (hdr->cmd == (SOF_IPC_GLB_PM_MSG | SOF_IPC_PM_CTX_SAVE)) { ··· 121 123 out: 122 124 msg->reply_error = ret; 123 125 124 - spin_unlock_irqrestore(&sdev->ipc_lock, flags); 125 126 } 126 127 127 128 static bool hda_dsp_ipc_is_sof(uint32_t msg) ··· 169 172 HDA_DSP_REG_HIPCCTL, 170 173 HDA_DSP_REG_HIPCCTL_DONE, 0); 171 174 175 + /* 176 + * Make sure the interrupt thread cannot be preempted between 177 + * waking up the sender and re-enabling the interrupt. Also 178 + * protect against a theoretical race with sof_ipc_tx_message(): 179 + * if the DSP is fast enough to receive an IPC message, reply to 180 + * it, and the host interrupt processing calls this function on 181 + * a different core from the one where the sending is taking 182 + * place, the message might not yet be marked as expecting a 183 + * reply. 184 + */ 185 + spin_lock_irq(&sdev->ipc_lock); 186 + 172 187 /* handle immediate reply from DSP core - ignore ROM messages */ 173 188 if (hda_dsp_ipc_is_sof(msg)) { 174 189 hda_dsp_ipc_get_reply(sdev); ··· 195 186 196 187 /* set the done bit */ 197 188 hda_dsp_ipc_dsp_done(sdev); 189 + 190 + spin_unlock_irq(&sdev->ipc_lock); 198 191 199 192 ret = IRQ_HANDLED; 200 193 }
+37 -94
sound/soc/sof/intel/hda.c
··· 108 108 struct sof_ipc_panic_info *panic_info, 109 109 u32 *stack, size_t stack_words) 110 110 { 111 + u32 offset = sdev->dsp_oops_offset; 112 + 111 113 /* first read registers */ 112 - sof_block_read(sdev, sdev->mmio_bar, sdev->dsp_oops_offset, xoops, 113 - sizeof(*xoops)); 114 + sof_mailbox_read(sdev, offset, xoops, sizeof(*xoops)); 115 + 116 + /* note: variable AR register array is not read */ 114 117 115 118 /* then get panic info */ 116 - sof_block_read(sdev, sdev->mmio_bar, sdev->dsp_oops_offset + 117 - sizeof(*xoops), panic_info, sizeof(*panic_info)); 119 + offset += xoops->arch_hdr.totalsize; 120 + sof_block_read(sdev, sdev->mmio_bar, offset, 121 + panic_info, sizeof(*panic_info)); 118 122 119 123 /* then get the stack */ 120 - sof_block_read(sdev, sdev->mmio_bar, sdev->dsp_oops_offset + 121 - sizeof(*xoops) + sizeof(*panic_info), stack, 124 + offset += sizeof(*panic_info); 125 + sof_block_read(sdev, sdev->mmio_bar, offset, stack, 122 126 stack_words * sizeof(u32)); 123 127 } 124 128 ··· 227 223 228 224 /* initialise hdac bus */ 229 225 bus->addr = pci_resource_start(pci, 0); 226 + #if IS_ENABLED(CONFIG_PCI) 230 227 bus->remap_addr = pci_ioremap_bar(pci, 0); 228 + #endif 231 229 if (!bus->remap_addr) { 232 230 dev_err(bus->dev, "error: ioremap error\n"); 233 231 return -ENXIO; ··· 270 264 return tplg_filename; 271 265 } 272 266 267 + #endif 268 + 273 269 static int hda_init_caps(struct snd_sof_dev *sdev) 274 270 { 275 271 struct hdac_bus *bus = sof_to_bus(sdev); 272 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 276 273 struct hdac_ext_link *hlink; 277 274 struct snd_soc_acpi_mach_params *mach_params; 278 275 struct snd_soc_acpi_mach *hda_mach; ··· 283 274 struct snd_soc_acpi_mach *mach; 284 275 const char *tplg_filename; 285 276 int codec_num = 0; 286 - int ret = 0; 287 277 int i; 278 + #endif 279 + int ret = 0; 288 280 289 281 device_disable_async_suspend(bus->dev); 290 282 ··· 293 283 if (bus->ppcap) 294 284 dev_dbg(sdev->dev, "PP capability, will probe DSP later.\n"); 295 285 286 + ret = hda_dsp_ctrl_init_chip(sdev, true); 287 + if (ret < 0) { 288 + dev_err(bus->dev, "error: init chip failed with ret: %d\n", 289 + ret); 290 + return ret; 291 + } 292 + 293 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 296 294 if (bus->mlcap) 297 295 snd_hdac_ext_bus_get_ml_capabilities(bus); 298 296 ··· 309 291 if (ret < 0) { 310 292 dev_err(sdev->dev, "error: no HDMI audio devices found\n"); 311 293 return ret; 312 - } 313 - 314 - ret = hda_dsp_ctrl_init_chip(sdev, true); 315 - if (ret < 0) { 316 - dev_err(bus->dev, "error: init chip failed with ret: %d\n", ret); 317 - goto out; 318 294 } 319 295 320 296 /* codec detection */ ··· 351 339 /* use local variable for readability */ 352 340 tplg_filename = pdata->tplg_filename; 353 341 tplg_filename = fixup_tplg_name(sdev, tplg_filename); 354 - if (!tplg_filename) 355 - goto out; 342 + if (!tplg_filename) { 343 + hda_codec_i915_exit(sdev); 344 + return ret; 345 + } 356 346 pdata->tplg_filename = tplg_filename; 357 347 } 358 348 } ··· 378 364 */ 379 365 list_for_each_entry(hlink, &bus->hlink_list, list) 380 366 snd_hdac_ext_bus_link_put(bus, hlink); 381 - 382 - return 0; 383 - 384 - out: 385 - hda_codec_i915_exit(sdev); 386 - return ret; 387 - } 388 - 389 - #else 390 - 391 - static int hda_init_caps(struct snd_sof_dev *sdev) 392 - { 393 - /* 394 - * set CGCTL.MISCBDCGE to 0 during reset and set back to 1 395 - * when reset finished. 396 - * TODO: maybe no need for init_caps? 397 - */ 398 - hda_dsp_ctrl_misc_clock_gating(sdev, 0); 399 - 400 - /* clear WAKESTS */ 401 - snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, SOF_HDA_WAKESTS, 402 - SOF_HDA_WAKESTS_INT_MASK, 403 - SOF_HDA_WAKESTS_INT_MASK); 404 - 405 - return 0; 406 - } 407 - 408 367 #endif 368 + return 0; 369 + } 409 370 410 371 static const struct sof_intel_dsp_desc 411 372 *get_chip_info(struct snd_sof_pdata *pdata) ··· 398 409 struct pci_dev *pci = to_pci_dev(sdev->dev); 399 410 struct sof_intel_hda_dev *hdev; 400 411 struct hdac_bus *bus; 401 - struct hdac_stream *stream; 402 412 const struct sof_intel_dsp_desc *chip; 403 - int sd_offset, ret = 0; 413 + int ret = 0; 404 414 405 415 /* 406 416 * detect DSP by checking class/subclass/prog-id information ··· 456 468 goto hdac_bus_unmap; 457 469 458 470 /* DSP base */ 471 + #if IS_ENABLED(CONFIG_PCI) 459 472 sdev->bar[HDA_DSP_BAR] = pci_ioremap_bar(pci, HDA_DSP_BAR); 473 + #endif 460 474 if (!sdev->bar[HDA_DSP_BAR]) { 461 475 dev_err(sdev->dev, "error: ioremap error\n"); 462 476 ret = -ENXIO; ··· 548 558 if (ret < 0) 549 559 goto free_ipc_irq; 550 560 551 - /* reset HDA controller */ 552 - ret = hda_dsp_ctrl_link_reset(sdev, true); 553 - if (ret < 0) { 554 - dev_err(sdev->dev, "error: failed to reset HDA controller\n"); 555 - goto free_ipc_irq; 556 - } 557 - 558 - /* exit HDA controller reset */ 559 - ret = hda_dsp_ctrl_link_reset(sdev, false); 560 - if (ret < 0) { 561 - dev_err(sdev->dev, "error: failed to exit HDA controller reset\n"); 562 - goto free_ipc_irq; 563 - } 564 - 565 - /* clear stream status */ 566 - list_for_each_entry(stream, &bus->stream_list, list) { 567 - sd_offset = SOF_STREAM_SD_OFFSET(stream); 568 - snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, 569 - sd_offset + 570 - SOF_HDA_ADSP_REG_CL_SD_STS, 571 - SOF_HDA_CL_DMA_SD_INT_MASK, 572 - SOF_HDA_CL_DMA_SD_INT_MASK); 573 - } 574 - 575 - /* clear WAKESTS */ 576 - snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, SOF_HDA_WAKESTS, 577 - SOF_HDA_WAKESTS_INT_MASK, 578 - SOF_HDA_WAKESTS_INT_MASK); 579 - 580 - /* clear interrupt status register */ 581 - snd_sof_dsp_write(sdev, HDA_DSP_HDA_BAR, SOF_HDA_INTSTS, 582 - SOF_HDA_INT_CTRL_EN | SOF_HDA_INT_ALL_STREAM); 583 - 584 - /* enable CIE and GIE interrupts */ 585 - snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, SOF_HDA_INTCTL, 586 - SOF_HDA_INT_CTRL_EN | SOF_HDA_INT_GLOBAL_EN, 587 - SOF_HDA_INT_CTRL_EN | SOF_HDA_INT_GLOBAL_EN); 588 - 589 - /* re-enable CGCTL.MISCBDCGE after reset */ 590 - hda_dsp_ctrl_misc_clock_gating(sdev, true); 591 - 592 - device_disable_async_suspend(&pci->dev); 593 - 594 - /* enable DSP features */ 595 - snd_sof_dsp_update_bits(sdev, HDA_DSP_PP_BAR, SOF_HDA_REG_PP_PPCTL, 596 - SOF_HDA_PPCTL_GPROCEN, SOF_HDA_PPCTL_GPROCEN); 597 - 598 - /* enable DSP IRQ */ 599 - snd_sof_dsp_update_bits(sdev, HDA_DSP_PP_BAR, SOF_HDA_REG_PP_PPCTL, 600 - SOF_HDA_PPCTL_PIE, SOF_HDA_PPCTL_PIE); 561 + /* enable ppcap interrupt */ 562 + hda_dsp_ctrl_ppcap_enable(sdev, true); 563 + hda_dsp_ctrl_ppcap_int_enable(sdev, true); 601 564 602 565 /* initialize waitq for code loading */ 603 566 init_waitqueue_head(&sdev->waitq);
+8 -18
sound/soc/sof/ipc.c
··· 115 115 } 116 116 break; 117 117 case SOF_IPC_GLB_COMP_MSG: 118 - str = "GLB_COMP_MSG: SET_VALUE"; 118 + str = "GLB_COMP_MSG"; 119 119 switch (type) { 120 120 case SOF_IPC_COMP_SET_VALUE: 121 121 str2 = "SET_VALUE"; break; ··· 308 308 int snd_sof_ipc_reply(struct snd_sof_dev *sdev, u32 msg_id) 309 309 { 310 310 struct snd_sof_ipc_msg *msg = &sdev->ipc->msg; 311 - unsigned long flags; 312 - 313 - /* 314 - * Protect against a theoretical race with sof_ipc_tx_message(): if the 315 - * DSP is fast enough to receive an IPC message, reply to it, and the 316 - * host interrupt processing calls this function on a different core 317 - * from the one, where the sending is taking place, the message might 318 - * not yet be marked as expecting a reply. 319 - */ 320 - spin_lock_irqsave(&sdev->ipc_lock, flags); 321 311 322 312 if (msg->ipc_complete) { 323 - spin_unlock_irqrestore(&sdev->ipc_lock, flags); 324 313 dev_err(sdev->dev, "error: no reply expected, received 0x%x", 325 314 msg_id); 326 315 return -EINVAL; ··· 318 329 /* wake up and return the error if we have waiters on this message ? */ 319 330 msg->ipc_complete = true; 320 331 wake_up(&msg->waitq); 321 - 322 - spin_unlock_irqrestore(&sdev->ipc_lock, flags); 323 332 324 333 return 0; 325 334 } ··· 763 776 } 764 777 } 765 778 766 - if (ready->debug.bits.build) { 779 + if (ready->flags & SOF_IPC_INFO_BUILD) { 767 780 dev_info(sdev->dev, 768 781 "Firmware debug build %d on %s-%s - options:\n" 769 782 " GDB: %s\n" 770 783 " lock debug: %s\n" 771 784 " lock vdebug: %s\n", 772 785 v->build, v->date, v->time, 773 - ready->debug.bits.gdb ? "enabled" : "disabled", 774 - ready->debug.bits.locks ? "enabled" : "disabled", 775 - ready->debug.bits.locks_verbose ? "enabled" : "disabled"); 786 + ready->flags & SOF_IPC_INFO_GDB ? 787 + "enabled" : "disabled", 788 + ready->flags & SOF_IPC_INFO_LOCKS ? 789 + "enabled" : "disabled", 790 + ready->flags & SOF_IPC_INFO_LOCKSV ? 791 + "enabled" : "disabled"); 776 792 } 777 793 778 794 /* copy the fw_version into debugfs at first boot */
+2
sound/soc/sof/loader.c
··· 372 372 msecs_to_jiffies(sdev->boot_timeout)); 373 373 if (ret == 0) { 374 374 dev_err(sdev->dev, "error: firmware boot failure\n"); 375 + /* after this point FW_READY msg should be ignored */ 376 + sdev->boot_complete = true; 375 377 snd_sof_dsp_dbg_dump(sdev, SOF_DBG_REGS | SOF_DBG_MBOX | 376 378 SOF_DBG_TEXT | SOF_DBG_PCI); 377 379 return -EIO;
+4 -4
sound/soc/sof/pcm.c
··· 211 211 /* save pcm hw_params */ 212 212 memcpy(&spcm->params[substream->stream], params, sizeof(*params)); 213 213 214 - INIT_WORK(&spcm->stream[substream->stream].period_elapsed_work, 215 - sof_pcm_period_elapsed_work); 214 + /* clear hw_params_upon_resume flag */ 215 + spcm->hw_params_upon_resume[substream->stream] = 0; 216 216 217 217 return ret; 218 218 } ··· 429 429 dev_dbg(sdev->dev, "pcm: open stream %d dir %d\n", spcm->pcm.pcm_id, 430 430 substream->stream); 431 431 432 - /* clear hw_params_upon_resume flag */ 433 - spcm->hw_params_upon_resume[substream->stream] = 0; 432 + INIT_WORK(&spcm->stream[substream->stream].period_elapsed_work, 433 + sof_pcm_period_elapsed_work); 434 434 435 435 caps = &spcm->pcm.caps[substream->stream]; 436 436
+1 -1
sound/soc/sof/xtensa/core.c
··· 110 110 u32 stack_words) 111 111 { 112 112 struct sof_ipc_dsp_oops_xtensa *xoops = oops; 113 - u32 stack_ptr = xoops->stack; 113 + u32 stack_ptr = xoops->plat_hdr.stackptr; 114 114 /* 4 * 8chars + 3 ws + 1 terminating NUL */ 115 115 unsigned char buf[4 * 8 + 3 + 1]; 116 116 int i;
+9
sound/soc/sunxi/sun4i-codec.c
··· 1320 1320 gpiod_set_value_cansleep(scodec->gpio_pa, 1321 1321 !!SND_SOC_DAPM_EVENT_ON(event)); 1322 1322 1323 + if (SND_SOC_DAPM_EVENT_ON(event)) { 1324 + /* 1325 + * Need a delay to wait for the DAC to push the data. 700ms 1326 + * seems to be the best compromise: long enough for the DAC, 1327 + * short enough not to be noticeable while playing a sound. 1328 + */ 1329 + msleep(700); 1330 + } 1331 + 1323 1332 return 0; 1324 1333 } 1325 1334
+5 -1
sound/soc/sunxi/sun4i-i2s.c
··· 106 106 107 107 #define SUN8I_I2S_TX_CHAN_MAP_REG 0x44 108 108 #define SUN8I_I2S_TX_CHAN_SEL_REG 0x34 109 - #define SUN8I_I2S_TX_CHAN_OFFSET_MASK GENMASK(13, 11) 109 + #define SUN8I_I2S_TX_CHAN_OFFSET_MASK GENMASK(13, 12) 110 110 #define SUN8I_I2S_TX_CHAN_OFFSET(offset) (offset << 12) 111 111 #define SUN8I_I2S_TX_CHAN_EN_MASK GENMASK(11, 4) 112 112 #define SUN8I_I2S_TX_CHAN_EN(num_chan) (((1 << num_chan) - 1) << 4) ··· 454 454 val++; 455 455 /* blck offset determines whether i2s or LJ */ 456 456 regmap_update_bits(i2s->regmap, SUN8I_I2S_TX_CHAN_SEL_REG, 457 + SUN8I_I2S_TX_CHAN_OFFSET_MASK, 458 + SUN8I_I2S_TX_CHAN_OFFSET(offset)); 459 + 460 + regmap_update_bits(i2s->regmap, SUN8I_I2S_RX_CHAN_SEL_REG, 457 461 SUN8I_I2S_TX_CHAN_OFFSET_MASK, 458 462 SUN8I_I2S_TX_CHAN_OFFSET(offset)); 459 463 }
+2
tools/testing/nvdimm/test/iomap.c
··· 100 100 { 101 101 struct dev_pagemap *pgmap = _pgmap; 102 102 103 + WARN_ON(!pgmap || !pgmap->ref || !pgmap->kill || !pgmap->cleanup); 103 104 pgmap->kill(pgmap->ref); 105 + pgmap->cleanup(pgmap->ref); 104 106 } 105 107 106 108 void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)