Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.5-rc6 into usb-next

We need the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3790 -2367
+5 -5
Documentation/dev-tools/kcov.rst
···
     .. code-block:: c
 
         struct kcov_remote_arg {
-            unsigned        trace_mode;
-            unsigned        area_size;
-            unsigned        num_handles;
-            uint64_t        common_handle;
-            uint64_t        handles[0];
+            __u32           trace_mode;
+            __u32           area_size;
+            __u32           num_handles;
+            __aligned_u64   common_handle;
+            __aligned_u64   handles[0];
         };
 
     #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
+5 -8
Documentation/dev-tools/kunit/start.rst
···
 For more information on this wrapper (also called kunit_tool) checkout the
 :doc:`kunit-tool` page.
 
-Creating a kunitconfig
-======================
+Creating a .kunitconfig
+=======================
 The Python script is a thin wrapper around Kbuild. As such, it needs to be
-configured with a ``kunitconfig`` file. This file essentially contains the
+configured with a ``.kunitconfig`` file. This file essentially contains the
 regular Kernel config, with the specific test targets as well.
 
 .. code-block:: bash
 
-	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
 	cd $PATH_TO_LINUX_REPO
-	ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
-
-You may want to add kunitconfig to your local gitignore.
+	cp arch/um/configs/kunit_defconfig .kunitconfig
 
 Verifying KUnit Works
 ---------------------
···
 
 obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
 
-Now add it to your ``kunitconfig``:
+Now add it to your ``.kunitconfig``:
 
 .. code-block:: none
+4 -2
Documentation/devicetree/bindings/i2c/i2c-at91.txt
···
 - dma-names: should contain "tx" and "rx".
 - atmel,fifo-size: maximum number of data the RX and TX FIFOs can store for FIFO
   capable I2C controllers.
-- i2c-sda-hold-time-ns: TWD hold time, only available for "atmel,sama5d4-i2c"
-  and "atmel,sama5d2-i2c".
+- i2c-sda-hold-time-ns: TWD hold time, only available for:
+	"atmel,sama5d4-i2c",
+	"atmel,sama5d2-i2c",
+	"microchip,sam9x60-i2c".
 - Child nodes conforming to i2c bus binding
 
 Examples :
+2 -2
Documentation/devicetree/bindings/spi/spi-controller.yaml
···
   spi-rx-bus-width:
     allOf:
       - $ref: /schemas/types.yaml#/definitions/uint32
-      - enum: [ 1, 2, 4 ]
+      - enum: [ 1, 2, 4, 8 ]
       - default: 1
     description:
       Bus width to the SPI bus used for MISO.
···
   spi-tx-bus-width:
     allOf:
       - $ref: /schemas/types.yaml#/definitions/uint32
-      - enum: [ 1, 2, 4 ]
+      - enum: [ 1, 2, 4, 8 ]
       - default: 1
     description:
       Bus width to the SPI bus used for MOSI.
+1 -1
Documentation/features/debug/gcov-profile-all/arch-support.txt
···
     |    openrisc: | TODO |
     |      parisc: | TODO |
     |     powerpc: |  ok  |
-    |       riscv: | TODO |
+    |       riscv: |  ok  |
     |        s390: |  ok  |
     |          sh: |  ok  |
     |       sparc: | TODO |
-6
Documentation/networking/dsa/sja1105.rst
···
 against this restriction and errors out when appropriate. Schedule analysis is
 needed to avoid this, which is outside the scope of the document.
 
-At the moment, the time-aware scheduler can only be triggered based on a
-standalone clock and not based on PTP time. This means the base-time argument
-from tc-taprio is ignored and the schedule starts right away. It also means it
-is more difficult to phase-align the scheduler with the other devices in the
-network.
-
 Device Tree bindings and board design
 =====================================
+1 -1
Documentation/networking/ip-sysctl.txt
···
 	with the current initial RTO of 1second. With this the final timeout
 	for a passive TCP connection will happen after 63seconds.
 
-tcp_syncookies - BOOLEAN
+tcp_syncookies - INTEGER
 	Only valid when the kernel was compiled with CONFIG_SYN_COOKIES
 	Send out syncookies when the syn backlog queue of a socket
 	overflows. This is to prevent against the common 'SYN flood attack'
+2 -2
Documentation/networking/netdev-FAQ.rst
···
 mainline tree from Linus, and ``net-next`` is where the new code goes
 for the future release. You can find the trees here:
 
-- https://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-- https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 
 Q: How often do changes from these trees make it to the mainline Linus tree?
 ----------------------------------------------------------------------------
+1
Documentation/process/index.rst
···
    volatile-considered-harmful
    botching-up-ioctls
    clang-format
+   ../riscv/patch-acceptance
 
 .. only:: subproject and html
+1
Documentation/riscv/index.rst
···
    boot-image-header
    pmu
+   patch-acceptance
 
 .. only:: subproject and html
+35
Documentation/riscv/patch-acceptance.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+
+arch/riscv maintenance guidelines for developers
+================================================
+
+Overview
+--------
+The RISC-V instruction set architecture is developed in the open:
+in-progress drafts are available for all to review and to experiment
+with implementations. New module or extension drafts can change
+during the development process - sometimes in ways that are
+incompatible with previous drafts. This flexibility can present a
+challenge for RISC-V Linux maintenance. Linux maintainers disapprove
+of churn, and the Linux development process prefers well-reviewed and
+tested code over experimental code. We wish to extend these same
+principles to the RISC-V-related code that will be accepted for
+inclusion in the kernel.
+
+Submit Checklist Addendum
+-------------------------
+We'll only accept patches for new modules or extensions if the
+specifications for those modules or extensions are listed as being
+"Frozen" or "Ratified" by the RISC-V Foundation. (Developers may, of
+course, maintain their own Linux kernel trees that contain code for
+any draft extensions that they wish.)
+
+Additionally, the RISC-V specification allows implementors to create
+their own custom extensions. These custom extensions aren't required
+to go through any review or ratification process by the RISC-V
+Foundation. To avoid the maintenance complexity and potential
+performance impact of adding kernel code for implementor-specific
+RISC-V extensions, we'll only accept patches for extensions that
+have been officially frozen or ratified by the RISC-V Foundation.
+(Implementors may, of course, maintain their own Linux kernel trees
+containing code for any custom extensions that they wish.)
+9 -8
MAINTAINERS
···
 
 AMAZON ETHERNET DRIVERS
 M:	Netanel Belgazal <netanel@amazon.com>
+M:	Arthur Kiyanovski <akiyano@amazon.com>
+R:	Guy Tzalik <gtzalik@amazon.com>
 R:	Saeed Bishara <saeedb@amazon.com>
 R:	Zorik Machulsky <zorik@amazon.com>
 L:	netdev@vger.kernel.org
···
 S:	Maintained
 F:	Documentation/firmware-guide/acpi/gpio-properties.rst
 F:	drivers/gpio/gpiolib-acpi.c
+F:	drivers/gpio/gpiolib-acpi.h
 
 GPIO IR Transmitter
 M:	Sean Young <sean@mess.org>
···
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 S:	Odd Fixes
 F:	Documentation/devicetree/bindings/net/
 F:	drivers/net/
···
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 B:	mailto:netdev@vger.kernel.org
 S:	Maintained
 F:	net/
···
 M:	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
 M:	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
 L:	netdev@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 S:	Maintained
 F:	net/ipv4/
 F:	net/ipv6/
···
 
 QUALCOMM ETHQOS ETHERNET DRIVER
 M:	Vinod Koul <vkoul@kernel.org>
-M:	Niklas Cassel <niklas.cassel@linaro.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
···
 M:	Palmer Dabbelt <palmer@dabbelt.com>
 M:	Albert Ou <aou@eecs.berkeley.edu>
 L:	linux-riscv@lists.infradead.org
+P:	Documentation/riscv/patch-acceptance.rst
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
 S:	Supported
 F:	arch/riscv/
···
 
 SAMSUNG SXGBE DRIVERS
 M:	Byungho An <bh74.an@samsung.com>
-M:	Girish K S <ks.giri@samsung.com>
-M:	Vipul Pandya <vipul.pandya@samsung.com>
 S:	Supported
 L:	netdev@vger.kernel.org
 F:	drivers/net/ethernet/samsung/sxgbe/
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 5
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc6
 NAME = Kleptomaniac Octopus
 
 # *DOCUMENTATION*
+4 -4
arch/arc/include/asm/entry-arcv2.h
···
 #endif
 
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
-	ST2	r58, r59, PT_sp + 12
+	ST2	r58, r59, PT_r58
 #endif
 
 .endm
···
 
 	LD2	gp, fp, PT_r26		; gp (r26), fp (r27)
 
-	ld	r12, [sp, PT_sp + 4]
-	ld	r30, [sp, PT_sp + 8]
+	ld	r12, [sp, PT_r12]
+	ld	r30, [sp, PT_r30]
 
 	; Restore SP (into AUX_USER_SP) only if returning to U mode
 	;  - for K mode, it will be implicitly restored as stack is unwound
···
 #endif
 
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
-	LD2	r58, r59, PT_sp + 12
+	LD2	r58, r59, PT_r58
 #endif
 .endm
-1
arch/arc/include/asm/hugepage.h
···
 #define _ASM_ARC_HUGEPAGE_H
 
 #include <linux/types.h>
-#define __ARCH_USE_5LEVEL_HACK
 #include <asm-generic/pgtable-nopmd.h>
 
 static inline pte_t pmd_pte(pmd_t pmd)
+9 -1
arch/arc/kernel/asm-offsets.c
···
 
 	DEFINE(SZ_CALLEE_REGS, sizeof(struct callee_regs));
 	DEFINE(SZ_PT_REGS, sizeof(struct pt_regs));
-	DEFINE(PT_user_r25, offsetof(struct pt_regs, user_r25));
+
+#ifdef CONFIG_ISA_ARCV2
+	OFFSET(PT_r12, pt_regs, r12);
+	OFFSET(PT_r30, pt_regs, r30);
+#endif
+#ifdef CONFIG_ARC_HAS_ACCL_REGS
+	OFFSET(PT_r58, pt_regs, r58);
+	OFFSET(PT_r59, pt_regs, r59);
+#endif
 
 	return 0;
 }
+1 -1
arch/arc/plat-eznps/Kconfig
···
 menuconfig ARC_PLAT_EZNPS
 	bool "\"EZchip\" ARC dev platform"
 	select CPU_BIG_ENDIAN
-	select CLKSRC_NPS
+	select CLKSRC_NPS if !PHYS_ADDR_T_64BIT
 	select EZNPS_GIC
 	select EZCHIP_NPS_MANAGEMENT_ENET if ETHERNET
 	help
+1
arch/arm/Kconfig
···
 	select HAVE_ARM_SMCCC if CPU_V7
 	select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32
 	select HAVE_CONTEXT_TRACKING
+	select HAVE_COPY_THREAD_TLS
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS if MMU
+3 -3
arch/arm/kernel/process.c
···
 asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
 
 int
-copy_thread(unsigned long clone_flags, unsigned long stack_start,
-	    unsigned long stk_sz, struct task_struct *p)
+copy_thread_tls(unsigned long clone_flags, unsigned long stack_start,
+	    unsigned long stk_sz, struct task_struct *p, unsigned long tls)
 {
 	struct thread_info *thread = task_thread_info(p);
 	struct pt_regs *childregs = task_pt_regs(p);
···
 	clear_ptrace_hw_breakpoint(p);
 
 	if (clone_flags & CLONE_SETTLS)
-		thread->tp_value[0] = childregs->ARM_r3;
+		thread->tp_value[0] = tls;
 	thread->tp_value[1] = get_tpuser();
 
 	thread_notify(THREAD_NOTIFY_COPY, thread);
+1
arch/arm64/Kconfig
···
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CONTEXT_TRACKING
+	select HAVE_COPY_THREAD_TLS
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS
+2 -3
arch/arm64/include/asm/pgtable-prot.h
···
 #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_READONLY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
-#define PAGE_EXECONLY		__pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
 
 #define __P000  PAGE_NONE
 #define __P001  PAGE_READONLY
 #define __P010  PAGE_READONLY
 #define __P011  PAGE_READONLY
-#define __P100  PAGE_EXECONLY
+#define __P100  PAGE_READONLY_EXEC
 #define __P101  PAGE_READONLY_EXEC
 #define __P110  PAGE_READONLY_EXEC
 #define __P111  PAGE_READONLY_EXEC
···
 #define __S001  PAGE_READONLY
 #define __S010  PAGE_SHARED
 #define __S011  PAGE_SHARED
-#define __S100  PAGE_EXECONLY
+#define __S100  PAGE_READONLY_EXEC
 #define __S101  PAGE_READONLY_EXEC
 #define __S110  PAGE_SHARED_EXEC
 #define __S111  PAGE_SHARED_EXEC
+3 -7
arch/arm64/include/asm/pgtable.h
···
 #define pte_dirty(pte)		(pte_sw_dirty(pte) || pte_hw_dirty(pte))
 
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
-/*
- * Execute-only user mappings do not have the PTE_USER bit set. All valid
- * kernel mappings have the PTE_UXN bit set.
- */
 #define pte_valid_not_user(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
+	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
 #define pte_valid_young(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
 #define pte_valid_user(pte) \
···
 
 /*
  * p??_access_permitted() is true for valid user mappings (subject to the
- * write permission check) other than user execute-only which do not have the
- * PTE_USER bit set. PROT_NONE mappings do not have the PTE_VALID bit set.
+ * write permission check). PROT_NONE mappings do not have the PTE_VALID bit
+ * set.
  */
 #define pte_access_permitted(pte, write) \
 	(pte_valid_user(pte) && (!(write) || pte_write(pte)))
-1
arch/arm64/include/asm/unistd.h
···
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
-#define __ARCH_WANT_SYS_CLONE3
 
 #ifndef __COMPAT_SYSCALL_NR
 #include <uapi/asm/unistd.h>
+1
arch/arm64/include/uapi/asm/unistd.h
···
 #define __ARCH_WANT_NEW_STAT
 #define __ARCH_WANT_SET_GET_RLIMIT
 #define __ARCH_WANT_TIME32_SYSCALLS
+#define __ARCH_WANT_SYS_CLONE3
 
 #include <asm-generic/unistd.h>
+5 -5
arch/arm64/kernel/process.c
···
 
 asmlinkage void ret_from_fork(void) asm("ret_from_fork");
 
-int copy_thread(unsigned long clone_flags, unsigned long stack_start,
-		unsigned long stk_sz, struct task_struct *p)
+int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start,
+		unsigned long stk_sz, struct task_struct *p, unsigned long tls)
 {
 	struct pt_regs *childregs = task_pt_regs(p);
 
···
 		}
 
 		/*
-		 * If a TLS pointer was passed to clone (4th argument), use it
-		 * for the new thread.
+		 * If a TLS pointer was passed to clone, use it for the new
+		 * thread.
 		 */
 		if (clone_flags & CLONE_SETTLS)
-			p->thread.uw.tp_value = childregs->regs[3];
+			p->thread.uw.tp_value = tls;
 	} else {
 		memset(childregs, 0, sizeof(struct pt_regs));
 		childregs->pstate = PSR_MODE_EL1h;
+1 -1
arch/arm64/mm/fault.c
···
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
 	vm_fault_t fault, major = 0;
-	unsigned long vm_flags = VM_READ | VM_WRITE;
+	unsigned long vm_flags = VM_READ | VM_WRITE | VM_EXEC;
 	unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
 	if (kprobe_page_fault(regs, esr))
+1 -3
arch/arm64/mm/mmu.c
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
 	/*
 	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
···
 	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
 	 * unlocked yet.
 	 */
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
+4 -4
arch/hexagon/include/asm/atomic.h
···
 	"1:	%0 = memw_locked(%1);\n"			\
 	"	%0 = "#op "(%0,%2);\n"				\
 	"	memw_locked(%1,P3)=%0;\n"			\
-	"	if !P3 jump 1b;\n"				\
+	"	if (!P3) jump 1b;\n"				\
 	: "=&r" (output)					\
 	: "r" (&v->counter), "r" (i)				\
 	: "memory", "p3"					\
···
 	"1:	%0 = memw_locked(%1);\n"			\
 	"	%0 = "#op "(%0,%2);\n"				\
 	"	memw_locked(%1,P3)=%0;\n"			\
-	"	if !P3 jump 1b;\n"				\
+	"	if (!P3) jump 1b;\n"				\
 	: "=&r" (output)					\
 	: "r" (&v->counter), "r" (i)				\
 	: "memory", "p3"					\
···
 	"1:	%0 = memw_locked(%2);\n"			\
 	"	%1 = "#op "(%0,%3);\n"				\
 	"	memw_locked(%2,P3)=%1;\n"			\
-	"	if !P3 jump 1b;\n"				\
+	"	if (!P3) jump 1b;\n"				\
 	: "=&r" (output), "=&r" (val)				\
 	: "r" (&v->counter), "r" (i)				\
 	: "memory", "p3"					\
···
 	"	}"
 	"	memw_locked(%2, p3) = %1;"
 	"	{"
-	"		if !p3 jump 1b;"
+	"		if (!p3) jump 1b;"
 	"	}"
 	"2:"
 	: "=&r" (__oldval), "=&r" (tmp)
+4 -4
arch/hexagon/include/asm/bitops.h
···
 	"1:	R12 = memw_locked(R10);\n"
 	"	{ P0 = tstbit(R12,R11); R12 = clrbit(R12,R11); }\n"
 	"	memw_locked(R10,P1) = R12;\n"
-	"	{if !P1 jump 1b; %0 = mux(P0,#1,#0);}\n"
+	"	{if (!P1) jump 1b; %0 = mux(P0,#1,#0);}\n"
 	: "=&r" (oldval)
 	: "r" (addr), "r" (nr)
 	: "r10", "r11", "r12", "p0", "p1", "memory"
···
 	"1:	R12 = memw_locked(R10);\n"
 	"	{ P0 = tstbit(R12,R11); R12 = setbit(R12,R11); }\n"
 	"	memw_locked(R10,P1) = R12;\n"
-	"	{if !P1 jump 1b; %0 = mux(P0,#1,#0);}\n"
+	"	{if (!P1) jump 1b; %0 = mux(P0,#1,#0);}\n"
 	: "=&r" (oldval)
 	: "r" (addr), "r" (nr)
 	: "r10", "r11", "r12", "p0", "p1", "memory"
···
 	"1:	R12 = memw_locked(R10);\n"
 	"	{ P0 = tstbit(R12,R11); R12 = togglebit(R12,R11); }\n"
 	"	memw_locked(R10,P1) = R12;\n"
-	"	{if !P1 jump 1b; %0 = mux(P0,#1,#0);}\n"
+	"	{if (!P1) jump 1b; %0 = mux(P0,#1,#0);}\n"
 	: "=&r" (oldval)
 	: "r" (addr), "r" (nr)
 	: "r10", "r11", "r12", "p0", "p1", "memory"
···
 	int r;
 
 	asm("{ P0 = cmp.eq(%1,#0); %0 = ct0(%1);}\n"
-	    "{ if P0 %0 = #0; if !P0 %0 = add(%0,#1);}\n"
+	    "{ if (P0) %0 = #0; if (!P0) %0 = add(%0,#1);}\n"
 	    : "=&r" (r)
 	    : "r" (x)
 	    : "p0");
+1 -1
arch/hexagon/include/asm/cmpxchg.h
···
 	__asm__ __volatile__ (
 	"1:	%0 = memw_locked(%1);\n"    /*  load into retval */
 	"	memw_locked(%1,P0) = %2;\n" /*  store into memory */
-	"	if !P0 jump 1b;\n"
+	"	if (!P0) jump 1b;\n"
 	: "=&r" (retval)
 	: "r" (ptr), "r" (x)
 	: "memory", "p0"
+3 -3
arch/hexagon/include/asm/futex.h
···
 	/* For example: %1 = %4 */ \
 	insn \
 	"2: memw_locked(%3,p2) = %1;\n" \
-	"   if !p2 jump 1b;\n" \
+	"   if (!p2) jump 1b;\n" \
 	"   %1 = #0;\n" \
 	"3:\n" \
 	".section .fixup,\"ax\"\n" \
···
 	"1: %1 = memw_locked(%3)\n"
 	"   {\n"
 	"   p2 = cmp.eq(%1,%4)\n"
-	"   if !p2.new jump:NT 3f\n"
+	"   if (!p2.new) jump:NT 3f\n"
 	"   }\n"
 	"2: memw_locked(%3,p2) = %5\n"
-	"   if !p2 jump 1b\n"
+	"   if (!p2) jump 1b\n"
 	"3:\n"
 	".section .fixup,\"ax\"\n"
 	"4: %0 = #%6\n"
+1
arch/hexagon/include/asm/io.h
···
 
 void __iomem *ioremap(unsigned long phys_addr, unsigned long size);
 #define ioremap_nocache ioremap
+#define ioremap_uc(X, Y) ioremap((X), (Y))
 
 
 #define __raw_writel writel
+10 -10
arch/hexagon/include/asm/spinlock.h
···
 	__asm__ __volatile__(
 	"1:	R6 = memw_locked(%0);\n"
 	"	{ P3 = cmp.ge(R6,#0); R6 = add(R6,#1);}\n"
-	"	{ if !P3 jump 1b; }\n"
+	"	{ if (!P3) jump 1b; }\n"
 	"	memw_locked(%0,P3) = R6;\n"
-	"	{ if !P3 jump 1b; }\n"
+	"	{ if (!P3) jump 1b; }\n"
 	:
 	: "r" (&lock->lock)
 	: "memory", "r6", "p3"
···
 	"1:	R6 = memw_locked(%0);\n"
 	"	R6 = add(R6,#-1);\n"
 	"	memw_locked(%0,P3) = R6\n"
-	"	if !P3 jump 1b;\n"
+	"	if (!P3) jump 1b;\n"
 	:
 	: "r" (&lock->lock)
 	: "memory", "r6", "p3"
···
 	__asm__ __volatile__(
 	"	R6 = memw_locked(%1);\n"
 	"	{ %0 = #0; P3 = cmp.ge(R6,#0); R6 = add(R6,#1);}\n"
-	"	{ if !P3 jump 1f; }\n"
+	"	{ if (!P3) jump 1f; }\n"
 	"	memw_locked(%1,P3) = R6;\n"
 	"	{ %0 = P3 }\n"
 	"1:\n"
···
 	__asm__ __volatile__(
 	"1:	R6 = memw_locked(%0)\n"
 	"	{ P3 = cmp.eq(R6,#0); R6 = #-1;}\n"
-	"	{ if !P3 jump 1b; }\n"
+	"	{ if (!P3) jump 1b; }\n"
 	"	memw_locked(%0,P3) = R6;\n"
-	"	{ if !P3 jump 1b; }\n"
+	"	{ if (!P3) jump 1b; }\n"
 	:
 	: "r" (&lock->lock)
 	: "memory", "r6", "p3"
···
 	__asm__ __volatile__(
 	"	R6 = memw_locked(%1)\n"
 	"	{ %0 = #0; P3 = cmp.eq(R6,#0); R6 = #-1;}\n"
-	"	{ if !P3 jump 1f; }\n"
+	"	{ if (!P3) jump 1f; }\n"
 	"	memw_locked(%1,P3) = R6;\n"
 	"	%0 = P3;\n"
 	"1:\n"
···
 	__asm__ __volatile__(
 	"1:	R6 = memw_locked(%0);\n"
 	"	P3 = cmp.eq(R6,#0);\n"
-	"	{ if !P3 jump 1b; R6 = #1; }\n"
+	"	{ if (!P3) jump 1b; R6 = #1; }\n"
 	"	memw_locked(%0,P3) = R6;\n"
-	"	{ if !P3 jump 1b; }\n"
+	"	{ if (!P3) jump 1b; }\n"
 	:
 	: "r" (&lock->lock)
 	: "memory", "r6", "p3"
···
 	__asm__ __volatile__(
 	"	R6 = memw_locked(%1);\n"
 	"	P3 = cmp.eq(R6,#0);\n"
-	"	{ if !P3 jump 1f; R6 = #1; %0 = #0; }\n"
+	"	{ if (!P3) jump 1f; R6 = #1; %0 = #0; }\n"
 	"	memw_locked(%1,P3) = R6;\n"
 	"	%0 = P3;\n"
 	"1:\n"
+1 -3
arch/hexagon/kernel/stacktrace.c
···
 #include <linux/thread_info.h>
 #include <linux/module.h>
 
-register unsigned long current_frame_pointer asm("r30");
-
 struct stackframe {
 	unsigned long fp;
 	unsigned long rets;
···
 
 	low = (unsigned long)task_stack_page(current);
 	high = low + THREAD_SIZE;
-	fp = current_frame_pointer;
+	fp = (unsigned long)__builtin_frame_address(0);
 
 	while (fp >= low && fp <= (high - sizeof(*frame))) {
 		frame = (struct stackframe *)fp;
+1 -1
arch/hexagon/kernel/vm_entry.S
···
 		R26.L = #LO(do_work_pending);
 		R0 = #VM_INT_DISABLE;
 	}
-	if P0 jump check_work_pending
+	if (P0) jump check_work_pending
 	{
 		R0 = R25;
 		callr R24
+1 -3
arch/ia64/mm/init.c
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
+1 -1
arch/mips/Kconfig
···
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_EBPF_JIT if (!CPU_MICROMIPS)
+	select HAVE_EBPF_JIT if 64BIT && !CPU_MICROMIPS && TARGET_ISA_REV >= 2
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_COPY_THREAD_TLS
 	select HAVE_C_RECORDMCOUNT
+3
arch/mips/boot/compressed/Makefile
···
 	-DBOOT_HEAP_SIZE=$(BOOT_HEAP_SIZE) \
 	-DKERNEL_ENTRY=$(VMLINUX_ENTRY_ADDRESS)
 
+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT := n
+
 # decompressor objects (linked with vmlinuz)
 vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o
+2 -1
arch/mips/include/asm/cpu-type.h
···
 static inline int __pure __get_cpu_type(const int cpu_type)
 {
 	switch (cpu_type) {
-#if defined(CONFIG_SYS_HAS_CPU_LOONGSON2EF)
+#if defined(CONFIG_SYS_HAS_CPU_LOONGSON2E) || \
+    defined(CONFIG_SYS_HAS_CPU_LOONGSON2F)
 	case CPU_LOONGSON2EF:
 #endif
+19 -1
arch/mips/include/asm/thread_info.h
···
 	.addr_limit	= KERNEL_DS,		\
 }
 
-/* How to get the thread information struct from C.  */
+/*
+ * A pointer to the struct thread_info for the currently executing thread is
+ * held in register $28/$gp.
+ *
+ * We declare __current_thread_info as a global register variable rather than a
+ * local register variable within current_thread_info() because clang doesn't
+ * support explicit local register variables.
+ *
+ * When building the VDSO we take care not to declare the global register
+ * variable because this causes GCC to not preserve the value of $28/$gp in
+ * functions that change its value (which is common in the PIC VDSO when
+ * accessing the GOT). Since the VDSO shouldn't be accessing
+ * __current_thread_info anyway we declare it extern in order to cause a link
+ * failure if it's referenced.
+ */
+#ifdef __VDSO__
+extern struct thread_info *__current_thread_info;
+#else
 register struct thread_info *__current_thread_info __asm__("$28");
+#endif
 
 static inline struct thread_info *current_thread_info(void)
 {
-13
arch/mips/include/asm/vdso/gettimeofday.h
···
 
 #define __VDSO_USE_SYSCALL		ULLONG_MAX
 
-#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
-
 static __always_inline long gettimeofday_fallback(
 	struct __kernel_old_timeval *_tv,
 	struct timezone *_tz)
···
 
 	return error ? -ret : ret;
 }
-
-#else
-
-static __always_inline long gettimeofday_fallback(
-	struct __kernel_old_timeval *_tv,
-	struct timezone *_tz)
-{
-	return -1;
-}
-
-#endif
 
 static __always_inline long clock_gettime_fallback(
 	clockid_t _clkid,
+26 -1
arch/mips/kernel/cacheinfo.c
···
 	return 0;
 }
 
+static void fill_cpumask_siblings(int cpu, cpumask_t *cpu_map)
+{
+	int cpu1;
+
+	for_each_possible_cpu(cpu1)
+		if (cpus_are_siblings(cpu, cpu1))
+			cpumask_set_cpu(cpu1, cpu_map);
+}
+
+static void fill_cpumask_cluster(int cpu, cpumask_t *cpu_map)
+{
+	int cpu1;
+	int cluster = cpu_cluster(&cpu_data[cpu]);
+
+	for_each_possible_cpu(cpu1)
+		if (cpu_cluster(&cpu_data[cpu1]) == cluster)
+			cpumask_set_cpu(cpu1, cpu_map);
+}
+
 static int __populate_cache_leaves(unsigned int cpu)
 {
 	struct cpuinfo_mips *c = &current_cpu_data;
···
 	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
 
 	if (c->icache.waysize) {
+		/* L1 caches are per core */
+		fill_cpumask_siblings(cpu, &this_leaf->shared_cpu_map);
 		populate_cache(dcache, this_leaf, 1, CACHE_TYPE_DATA);
+		fill_cpumask_siblings(cpu, &this_leaf->shared_cpu_map);
 		populate_cache(icache, this_leaf, 1, CACHE_TYPE_INST);
 	} else {
 		populate_cache(dcache, this_leaf, 1, CACHE_TYPE_UNIFIED);
 	}
 
-	if (c->scache.waysize)
+	if (c->scache.waysize) {
+		/* L2 cache is per cluster */
+		fill_cpumask_cluster(cpu, &this_leaf->shared_cpu_map);
 		populate_cache(scache, this_leaf, 2, CACHE_TYPE_UNIFIED);
+	}
 
 	if (c->tcache.waysize)
 		populate_cache(tcache, this_leaf, 3, CACHE_TYPE_UNIFIED);
+1 -1
arch/mips/net/ebpf_jit.c
···
 	unsigned int image_size;
 	u8 *image_ptr;
 
-	if (!prog->jit_requested || MIPS_ISA_REV < 2)
+	if (!prog->jit_requested)
 		return prog;
 
 	tmp = bpf_jit_blind_constants(prog);
+20
arch/mips/vdso/vgettimeofday.c
···
 	return __cvdso_clock_gettime32(clock, ts);
 }
 
+#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
+
+/*
+ * This is behind the ifdef so that we don't provide the symbol when there's no
+ * possibility of there being a usable clocksource, because there's nothing we
+ * can do without it. When libc fails the symbol lookup it should fall back on
+ * the standard syscall path.
+ */
 int __vdso_gettimeofday(struct __kernel_old_timeval *tv,
 			struct timezone *tz)
 {
 	return __cvdso_gettimeofday(tv, tz);
 }
+
+#endif /* CONFIG_MIPS_CLOCK_VSYSCALL */
 
 int __vdso_clock_getres(clockid_t clock_id,
 			struct old_timespec32 *res)
···
 	return __cvdso_clock_gettime(clock, ts);
 }
 
+#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
+
+/*
+ * This is behind the ifdef so that we don't provide the symbol when there's no
+ * possibility of there being a usable clocksource, because there's nothing we
+ * can do without it. When libc fails the symbol lookup it should fall back on
+ * the standard syscall path.
+ */
 int __vdso_gettimeofday(struct __kernel_old_timeval *tv,
 			struct timezone *tz)
 {
 	return __cvdso_gettimeofday(tv, tz);
 }
+
+#endif /* CONFIG_MIPS_CLOCK_VSYSCALL */
 
 int __vdso_clock_getres(clockid_t clock_id,
 			struct __kernel_timespec *res)
+1
arch/parisc/Kconfig
···
 	select HAVE_FTRACE_MCOUNT_RECORD if HAVE_DYNAMIC_FTRACE
 	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_COPY_THREAD_TLS
 
 	help
 	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
+4 -4
arch/parisc/kernel/process.c
···
  * Copy architecture-specific thread state
  */
 int
-copy_thread(unsigned long clone_flags, unsigned long usp,
-	    unsigned long kthread_arg, struct task_struct *p)
+copy_thread_tls(unsigned long clone_flags, unsigned long usp,
+	    unsigned long kthread_arg, struct task_struct *p, unsigned long tls)
 {
 	struct pt_regs *cregs = &(p->thread.regs);
 	void *stack = task_stack_page(p);
···
 		cregs->ksp = (unsigned long)stack + THREAD_SZ_ALGN + FRAME_SIZE;
 		cregs->kpc = (unsigned long) &child_return;
 
-		/* Setup thread TLS area from the 4th parameter in clone */
+		/* Setup thread TLS area */
 		if (clone_flags & CLONE_SETTLS)
-			cregs->cr27 = cregs->gr[23];
+			cregs->cr27 = tls;
 	}
 
 	return 0;
+1
arch/powerpc/include/asm/spinlock.h
··· 15 15 * 16 16 * (the type definitions are in asm/spinlock_types.h) 17 17 */ 18 + #include <linux/jump_label.h> 18 19 #include <linux/irqflags.h> 19 20 #ifdef CONFIG_PPC64 20 21 #include <asm/paca.h>
+1 -2
arch/powerpc/mm/mem.c
··· 151 151 { 152 152 unsigned long start_pfn = start >> PAGE_SHIFT; 153 153 unsigned long nr_pages = size >> PAGE_SHIFT; 154 - struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap); 155 154 int ret; 156 155 157 - __remove_pages(page_zone(page), start_pfn, nr_pages, altmap); 156 + __remove_pages(start_pfn, nr_pages, altmap); 158 157 159 158 /* Remove htab bolted mappings for this section of memory */ 160 159 start = (unsigned long)__va(start);
+2 -2
arch/powerpc/mm/slice.c
··· 50 50 51 51 #endif 52 52 53 - static inline bool slice_addr_is_low(unsigned long addr) 53 + static inline notrace bool slice_addr_is_low(unsigned long addr) 54 54 { 55 55 u64 tmp = (u64)addr; 56 56 ··· 659 659 mm_ctx_user_psize(&current->mm->context), 1); 660 660 } 661 661 662 - unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr) 662 + unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr) 663 663 { 664 664 unsigned char *psizes; 665 665 int index, mask_index;
+2
arch/riscv/Kconfig
··· 64 64 select SPARSEMEM_STATIC if 32BIT 65 65 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU 66 66 select HAVE_ARCH_MMAP_RND_BITS if MMU 67 + select ARCH_HAS_GCOV_PROFILE_ALL 68 + select HAVE_COPY_THREAD_TLS 67 69 68 70 config ARCH_MMAP_RND_BITS_MIN 69 71 default 18 if 64BIT
+15
arch/riscv/boot/dts/sifive/fu540-c000.dtsi
··· 54 54 reg = <1>; 55 55 riscv,isa = "rv64imafdc"; 56 56 tlb-split; 57 + next-level-cache = <&l2cache>; 57 58 cpu1_intc: interrupt-controller { 58 59 #interrupt-cells = <1>; 59 60 compatible = "riscv,cpu-intc"; ··· 78 77 reg = <2>; 79 78 riscv,isa = "rv64imafdc"; 80 79 tlb-split; 80 + next-level-cache = <&l2cache>; 81 81 cpu2_intc: interrupt-controller { 82 82 #interrupt-cells = <1>; 83 83 compatible = "riscv,cpu-intc"; ··· 102 100 reg = <3>; 103 101 riscv,isa = "rv64imafdc"; 104 102 tlb-split; 103 + next-level-cache = <&l2cache>; 105 104 cpu3_intc: interrupt-controller { 106 105 #interrupt-cells = <1>; 107 106 compatible = "riscv,cpu-intc"; ··· 126 123 reg = <4>; 127 124 riscv,isa = "rv64imafdc"; 128 125 tlb-split; 126 + next-level-cache = <&l2cache>; 129 127 cpu4_intc: interrupt-controller { 130 128 #interrupt-cells = <1>; 131 129 compatible = "riscv,cpu-intc"; ··· 256 252 clocks = <&prci PRCI_CLK_TLCLK>; 257 253 #pwm-cells = <3>; 258 254 status = "disabled"; 255 + }; 256 + l2cache: cache-controller@2010000 { 257 + compatible = "sifive,fu540-c000-ccache", "cache"; 258 + cache-block-size = <64>; 259 + cache-level = <2>; 260 + cache-sets = <1024>; 261 + cache-size = <2097152>; 262 + cache-unified; 263 + interrupt-parent = <&plic0>; 264 + interrupts = <1 2 3>; 265 + reg = <0x0 0x2010000 0x0 0x1000>; 259 266 }; 260 267 261 268 };
+9 -9
arch/riscv/include/asm/csr.h
··· 116 116 # define SR_PIE SR_MPIE 117 117 # define SR_PP SR_MPP 118 118 119 - # define IRQ_SOFT IRQ_M_SOFT 120 - # define IRQ_TIMER IRQ_M_TIMER 121 - # define IRQ_EXT IRQ_M_EXT 119 + # define RV_IRQ_SOFT IRQ_M_SOFT 120 + # define RV_IRQ_TIMER IRQ_M_TIMER 121 + # define RV_IRQ_EXT IRQ_M_EXT 122 122 #else /* CONFIG_RISCV_M_MODE */ 123 123 # define CSR_STATUS CSR_SSTATUS 124 124 # define CSR_IE CSR_SIE ··· 133 133 # define SR_PIE SR_SPIE 134 134 # define SR_PP SR_SPP 135 135 136 - # define IRQ_SOFT IRQ_S_SOFT 137 - # define IRQ_TIMER IRQ_S_TIMER 138 - # define IRQ_EXT IRQ_S_EXT 136 + # define RV_IRQ_SOFT IRQ_S_SOFT 137 + # define RV_IRQ_TIMER IRQ_S_TIMER 138 + # define RV_IRQ_EXT IRQ_S_EXT 139 139 #endif /* CONFIG_RISCV_M_MODE */ 140 140 141 141 /* IE/IP (Supervisor/Machine Interrupt Enable/Pending) flags */ 142 - #define IE_SIE (_AC(0x1, UL) << IRQ_SOFT) 143 - #define IE_TIE (_AC(0x1, UL) << IRQ_TIMER) 144 - #define IE_EIE (_AC(0x1, UL) << IRQ_EXT) 142 + #define IE_SIE (_AC(0x1, UL) << RV_IRQ_SOFT) 143 + #define IE_TIE (_AC(0x1, UL) << RV_IRQ_TIMER) 144 + #define IE_EIE (_AC(0x1, UL) << RV_IRQ_EXT) 145 145 146 146 #ifndef __ASSEMBLY__ 147 147
+3 -3
arch/riscv/include/asm/sifive_l2_cache.h include/soc/sifive/sifive_l2_cache.h
··· 4 4 * 5 5 */ 6 6 7 - #ifndef _ASM_RISCV_SIFIVE_L2_CACHE_H 8 - #define _ASM_RISCV_SIFIVE_L2_CACHE_H 7 + #ifndef __SOC_SIFIVE_L2_CACHE_H 8 + #define __SOC_SIFIVE_L2_CACHE_H 9 9 10 10 extern int register_sifive_l2_error_notifier(struct notifier_block *nb); 11 11 extern int unregister_sifive_l2_error_notifier(struct notifier_block *nb); ··· 13 13 #define SIFIVE_L2_ERR_TYPE_CE 0 14 14 #define SIFIVE_L2_ERR_TYPE_UE 1 15 15 16 - #endif /* _ASM_RISCV_SIFIVE_L2_CACHE_H */ 16 + #endif /* __SOC_SIFIVE_L2_CACHE_H */
+1
arch/riscv/kernel/entry.S
··· 246 246 */ 247 247 li t1, -1 248 248 beq a7, t1, ret_from_syscall_rejected 249 + blt a7, t1, 1f 249 250 /* Call syscall */ 250 251 la s0, sys_call_table 251 252 slli t0, a7, RISCV_LGPTR
+1 -1
arch/riscv/kernel/ftrace.c
··· 142 142 */ 143 143 old = *parent; 144 144 145 - if (function_graph_enter(old, self_addr, frame_pointer, parent)) 145 + if (!function_graph_enter(old, self_addr, frame_pointer, parent)) 146 146 *parent = return_hooker; 147 147 } 148 148
+1 -1
arch/riscv/kernel/head.S
··· 251 251 #ifdef CONFIG_FPU 252 252 csrr t0, CSR_MISA 253 253 andi t0, t0, (COMPAT_HWCAP_ISA_F | COMPAT_HWCAP_ISA_D) 254 - bnez t0, .Lreset_regs_done 254 + beqz t0, .Lreset_regs_done 255 255 256 256 li t1, SR_FS 257 257 csrs CSR_STATUS, t1
+3 -3
arch/riscv/kernel/irq.c
··· 23 23 24 24 irq_enter(); 25 25 switch (regs->cause & ~CAUSE_IRQ_FLAG) { 26 - case IRQ_TIMER: 26 + case RV_IRQ_TIMER: 27 27 riscv_timer_interrupt(); 28 28 break; 29 29 #ifdef CONFIG_SMP 30 - case IRQ_SOFT: 30 + case RV_IRQ_SOFT: 31 31 /* 32 32 * We only use software interrupts to pass IPIs, so if a non-SMP 33 33 * system gets one, then we don't know what to do. ··· 35 35 riscv_software_interrupt(); 36 36 break; 37 37 #endif 38 - case IRQ_EXT: 38 + case RV_IRQ_EXT: 39 39 handle_arch_irq(regs); 40 40 break; 41 41 default:
+3 -3
arch/riscv/kernel/process.c
··· 99 99 return 0; 100 100 } 101 101 102 - int copy_thread(unsigned long clone_flags, unsigned long usp, 103 - unsigned long arg, struct task_struct *p) 102 + int copy_thread_tls(unsigned long clone_flags, unsigned long usp, 103 + unsigned long arg, struct task_struct *p, unsigned long tls) 104 104 { 105 105 struct pt_regs *childregs = task_pt_regs(p); 106 106 ··· 121 121 if (usp) /* User fork */ 122 122 childregs->sp = usp; 123 123 if (clone_flags & CLONE_SETTLS) 124 - childregs->tp = childregs->a5; 124 + childregs->tp = tls; 125 125 childregs->a0 = 0; /* Return value of fork() */ 126 126 p->thread.ra = (unsigned long)ret_from_fork; 127 127 }
-3
arch/riscv/kernel/riscv_ksyms.c
··· 9 9 /* 10 10 * Assembly functions that may be used (directly or indirectly) by modules 11 11 */ 12 - EXPORT_SYMBOL(__clear_user); 13 - EXPORT_SYMBOL(__asm_copy_to_user); 14 - EXPORT_SYMBOL(__asm_copy_from_user); 15 12 EXPORT_SYMBOL(memset); 16 13 EXPORT_SYMBOL(memcpy);
+4
arch/riscv/lib/uaccess.S
··· 1 1 #include <linux/linkage.h> 2 + #include <asm-generic/export.h> 2 3 #include <asm/asm.h> 3 4 #include <asm/csr.h> 4 5 ··· 67 66 j 3b 68 67 ENDPROC(__asm_copy_to_user) 69 68 ENDPROC(__asm_copy_from_user) 69 + EXPORT_SYMBOL(__asm_copy_to_user) 70 + EXPORT_SYMBOL(__asm_copy_from_user) 70 71 71 72 72 73 ENTRY(__clear_user) ··· 111 108 bltu a0, a3, 5b 112 109 j 3b 113 110 ENDPROC(__clear_user) 111 + EXPORT_SYMBOL(__clear_user) 114 112 115 113 .section .fixup,"ax" 116 114 .balign 4
+1
arch/riscv/mm/cacheflush.c
··· 22 22 else 23 23 on_each_cpu(ipi_remote_fence_i, NULL, 1); 24 24 } 25 + EXPORT_SYMBOL(flush_icache_all); 25 26 26 27 /* 27 28 * Performs an icache flush for the given MM context. RISC-V has no direct
+6 -6
arch/riscv/mm/init.c
··· 99 99 pr_info("initrd not found or empty"); 100 100 goto disable; 101 101 } 102 - if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) { 102 + if (__pa_symbol(initrd_end) > PFN_PHYS(max_low_pfn)) { 103 103 pr_err("initrd extends beyond end of memory"); 104 104 goto disable; 105 105 } 106 106 107 107 size = initrd_end - initrd_start; 108 - memblock_reserve(__pa(initrd_start), size); 108 + memblock_reserve(__pa_symbol(initrd_start), size); 109 109 initrd_below_start_ok = 1; 110 110 111 111 pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n", ··· 124 124 { 125 125 struct memblock_region *reg; 126 126 phys_addr_t mem_size = 0; 127 - phys_addr_t vmlinux_end = __pa(&_end); 128 - phys_addr_t vmlinux_start = __pa(&_start); 127 + phys_addr_t vmlinux_end = __pa_symbol(&_end); 128 + phys_addr_t vmlinux_start = __pa_symbol(&_start); 129 129 130 130 /* Find the memory region containing the kernel */ 131 131 for_each_memblock(memory, reg) { ··· 445 445 446 446 /* Setup swapper PGD for fixmap */ 447 447 create_pgd_mapping(swapper_pg_dir, FIXADDR_START, 448 - __pa(fixmap_pgd_next), 448 + __pa_symbol(fixmap_pgd_next), 449 449 PGDIR_SIZE, PAGE_TABLE); 450 450 451 451 /* Map all memory banks */ ··· 474 474 clear_fixmap(FIX_PMD); 475 475 476 476 /* Move to swapper page table */ 477 - csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | SATP_MODE); 477 + csr_write(CSR_SATP, PFN_DOWN(__pa_symbol(swapper_pg_dir)) | SATP_MODE); 478 478 local_flush_tlb_all(); 479 479 } 480 480 #else
+1 -3
arch/s390/mm/init.c
··· 292 292 { 293 293 unsigned long start_pfn = start >> PAGE_SHIFT; 294 294 unsigned long nr_pages = size >> PAGE_SHIFT; 295 - struct zone *zone; 296 295 297 - zone = page_zone(pfn_to_page(start_pfn)); 298 - __remove_pages(zone, start_pfn, nr_pages, altmap); 296 + __remove_pages(start_pfn, nr_pages, altmap); 299 297 vmem_remove_mapping(start, size); 300 298 } 301 299 #endif /* CONFIG_MEMORY_HOTPLUG */
+1 -3
arch/sh/mm/init.c
··· 434 434 { 435 435 unsigned long start_pfn = PFN_DOWN(start); 436 436 unsigned long nr_pages = size >> PAGE_SHIFT; 437 - struct zone *zone; 438 437 439 - zone = page_zone(pfn_to_page(start_pfn)); 440 - __remove_pages(zone, start_pfn, nr_pages, altmap); 438 + __remove_pages(start_pfn, nr_pages, altmap); 441 439 } 442 440 #endif /* CONFIG_MEMORY_HOTPLUG */
+1
arch/um/Kconfig
··· 14 14 select HAVE_FUTEX_CMPXCHG if FUTEX 15 15 select HAVE_DEBUG_KMEMLEAK 16 16 select HAVE_DEBUG_BUGVERBOSE 17 + select HAVE_COPY_THREAD_TLS 17 18 select GENERIC_IRQ_SHOW 18 19 select GENERIC_CPU_DEVICES 19 20 select GENERIC_CLOCKEVENTS
+1 -1
arch/um/include/asm/ptrace-generic.h
··· 36 36 extern unsigned long getreg(struct task_struct *child, int regno); 37 37 extern int putreg(struct task_struct *child, int regno, unsigned long value); 38 38 39 - extern int arch_copy_tls(struct task_struct *new); 39 + extern int arch_set_tls(struct task_struct *new, unsigned long tls); 40 40 extern void clear_flushed_tls(struct task_struct *task); 41 41 extern int syscall_trace_enter(struct pt_regs *regs); 42 42 extern void syscall_trace_leave(struct pt_regs *regs);
+3 -3
arch/um/kernel/process.c
··· 153 153 userspace(&current->thread.regs.regs, current_thread_info()->aux_fp_regs); 154 154 } 155 155 156 - int copy_thread(unsigned long clone_flags, unsigned long sp, 157 - unsigned long arg, struct task_struct * p) 156 + int copy_thread_tls(unsigned long clone_flags, unsigned long sp, 157 + unsigned long arg, struct task_struct * p, unsigned long tls) 158 158 { 159 159 void (*handler)(void); 160 160 int kthread = current->flags & PF_KTHREAD; ··· 188 188 * Set a new TLS for the child thread? 189 189 */ 190 190 if (clone_flags & CLONE_SETTLS) 191 - ret = arch_copy_tls(p); 191 + ret = arch_set_tls(p, tls); 192 192 } 193 193 194 194 return ret;
+1 -3
arch/x86/mm/init_32.c
··· 865 865 { 866 866 unsigned long start_pfn = start >> PAGE_SHIFT; 867 867 unsigned long nr_pages = size >> PAGE_SHIFT; 868 - struct zone *zone; 869 868 870 - zone = page_zone(pfn_to_page(start_pfn)); 871 - __remove_pages(zone, start_pfn, nr_pages, altmap); 869 + __remove_pages(start_pfn, nr_pages, altmap); 872 870 } 873 871 #endif 874 872
+1 -3
arch/x86/mm/init_64.c
··· 1212 1212 { 1213 1213 unsigned long start_pfn = start >> PAGE_SHIFT; 1214 1214 unsigned long nr_pages = size >> PAGE_SHIFT; 1215 - struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap); 1216 - struct zone *zone = page_zone(page); 1217 1215 1218 - __remove_pages(zone, start_pfn, nr_pages, altmap); 1216 + __remove_pages(start_pfn, nr_pages, altmap); 1219 1217 kernel_physical_mapping_remove(start, start + size); 1220 1218 } 1221 1219 #endif /* CONFIG_MEMORY_HOTPLUG */
+2 -4
arch/x86/um/tls_32.c
··· 215 215 return 0; 216 216 } 217 217 218 - int arch_copy_tls(struct task_struct *new) 218 + int arch_set_tls(struct task_struct *new, unsigned long tls) 219 219 { 220 220 struct user_desc info; 221 221 int idx, ret = -EFAULT; 222 222 223 - if (copy_from_user(&info, 224 - (void __user *) UPT_SI(&new->thread.regs.regs), 225 - sizeof(info))) 223 + if (copy_from_user(&info, (void __user *) tls, sizeof(info))) 226 224 goto out; 227 225 228 226 ret = -EINVAL;
+3 -4
arch/x86/um/tls_64.c
··· 6 6 { 7 7 } 8 8 9 - int arch_copy_tls(struct task_struct *t) 9 + int arch_set_tls(struct task_struct *t, unsigned long tls) 10 10 { 11 11 /* 12 12 * If CLONE_SETTLS is set, we need to save the thread id 13 - * (which is argument 5, child_tid, of clone) so it can be set 14 - * during context switches. 13 + * so it can be set during context switches. 15 14 */ 16 - t->thread.arch.fs = t->thread.regs.regs.gp[R8 / sizeof(long)]; 15 + t->thread.arch.fs = tls; 17 16 18 17 return 0; 19 18 }
+1
arch/xtensa/Kconfig
··· 24 24 select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL 25 25 select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL 26 26 select HAVE_ARCH_TRACEHOOK 27 + select HAVE_COPY_THREAD_TLS 27 28 select HAVE_DEBUG_KMEMLEAK 28 29 select HAVE_DMA_CONTIGUOUS 29 30 select HAVE_EXIT_THREAD
+4 -4
arch/xtensa/kernel/process.c
··· 202 202 * involved. Much simpler to just not copy those live frames across. 203 203 */ 204 204 205 - int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn, 206 - unsigned long thread_fn_arg, struct task_struct *p) 205 + int copy_thread_tls(unsigned long clone_flags, unsigned long usp_thread_fn, 206 + unsigned long thread_fn_arg, struct task_struct *p, 207 + unsigned long tls) 207 208 { 208 209 struct pt_regs *childregs = task_pt_regs(p); 209 210 ··· 267 266 268 267 childregs->syscall = regs->syscall; 269 268 270 - /* The thread pointer is passed in the '4th argument' (= a5) */ 271 269 if (clone_flags & CLONE_SETTLS) 272 - childregs->threadptr = childregs->areg[5]; 270 + childregs->threadptr = tls; 273 271 } else { 274 272 p->thread.ra = MAKE_RA_FOR_CALL( 275 273 (unsigned long)ret_from_kernel_thread, 1);
+49
block/bio.c
··· 539 539 EXPORT_SYMBOL(zero_fill_bio_iter); 540 540 541 541 /** 542 + * bio_truncate - truncate the bio to small size of @new_size 543 + * @bio: the bio to be truncated 544 + * @new_size: new size for truncating the bio 545 + * 546 + * Description: 547 + * Truncate the bio to new size of @new_size. If bio_op(bio) is 548 + * REQ_OP_READ, zero the truncated part. This function should only 549 + * be used for handling corner cases, such as bio eod. 550 + */ 551 + void bio_truncate(struct bio *bio, unsigned new_size) 552 + { 553 + struct bio_vec bv; 554 + struct bvec_iter iter; 555 + unsigned int done = 0; 556 + bool truncated = false; 557 + 558 + if (new_size >= bio->bi_iter.bi_size) 559 + return; 560 + 561 + if (bio_op(bio) != REQ_OP_READ) 562 + goto exit; 563 + 564 + bio_for_each_segment(bv, bio, iter) { 565 + if (done + bv.bv_len > new_size) { 566 + unsigned offset; 567 + 568 + if (!truncated) 569 + offset = new_size - done; 570 + else 571 + offset = 0; 572 + zero_user(bv.bv_page, offset, bv.bv_len - offset); 573 + truncated = true; 574 + } 575 + done += bv.bv_len; 576 + } 577 + 578 + exit: 579 + /* 580 + * Don't touch bvec table here and make it really immutable, since 581 + * fs bio user has to retrieve all pages via bio_for_each_segment_all 582 + * in its .end_bio() callback. 583 + * 584 + * It is enough to truncate bio by updating .bi_size since we can make 585 + * correct bvec with the updated .bi_size for drivers. 586 + */ 587 + bio->bi_iter.bi_size = new_size; 588 + } 589 + 590 + /** 542 591 * bio_put - release a reference to a bio 543 592 * @bio: bio to release reference to 544 593 *
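Editor's note: the `bio_truncate()` hunk above zeroes only the tail of the vector that straddles `new_size` and everything after it. A minimal userspace sketch of that bookkeeping (hypothetical `struct seg` stands in for `bio_vec`, `memset` for `zero_user`):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for a bio_vec: a buffer and its length. */
struct seg {
	unsigned char *buf;
	unsigned int len;
};

/* Mirror bio_truncate()'s loop: walk the segments, and once the
 * running total passes new_size, zero from that offset onward in the
 * straddling segment and zero later segments entirely. */
static void truncate_segs(struct seg *segs, int nsegs, unsigned int new_size)
{
	unsigned int done = 0;
	int truncated = 0;
	int i;

	for (i = 0; i < nsegs; i++) {
		if (done + segs[i].len > new_size) {
			unsigned int offset = truncated ? 0 : new_size - done;

			memset(segs[i].buf + offset, 0,
			       segs[i].len - offset);
			truncated = 1;
		}
		done += segs[i].len;
	}
}
```

As in the patch, the segment table itself is left untouched; only the data past the new size is cleared.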
+9 -9
block/blk-merge.c
··· 157 157 return sectors & (lbs - 1); 158 158 } 159 159 160 - static unsigned get_max_segment_size(const struct request_queue *q, 161 - unsigned offset) 160 + static inline unsigned get_max_segment_size(const struct request_queue *q, 161 + struct page *start_page, 162 + unsigned long offset) 162 163 { 163 164 unsigned long mask = queue_segment_boundary(q); 164 165 165 - /* default segment boundary mask means no boundary limit */ 166 - if (mask == BLK_SEG_BOUNDARY_MASK) 167 - return queue_max_segment_size(q); 168 - 169 - return min_t(unsigned long, mask - (mask & offset) + 1, 166 + offset = mask & (page_to_phys(start_page) + offset); 167 + return min_t(unsigned long, mask - offset + 1, 170 168 queue_max_segment_size(q)); 171 169 } 172 170 ··· 199 201 unsigned seg_size = 0; 200 202 201 203 while (len && *nsegs < max_segs) { 202 - seg_size = get_max_segment_size(q, bv->bv_offset + total_len); 204 + seg_size = get_max_segment_size(q, bv->bv_page, 205 + bv->bv_offset + total_len); 203 206 seg_size = min(seg_size, len); 204 207 205 208 (*nsegs)++; ··· 418 419 419 420 while (nbytes > 0) { 420 421 unsigned offset = bvec->bv_offset + total; 421 - unsigned len = min(get_max_segment_size(q, offset), nbytes); 422 + unsigned len = min(get_max_segment_size(q, bvec->bv_page, 423 + offset), nbytes); 422 424 struct page *page = bvec->bv_page; 423 425 424 426 /*
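Editor's note: the `get_max_segment_size()` change above folds the page's physical address into the boundary mask before computing the remaining room, so the limit is relative to the segment boundary in physical address space rather than to the in-page offset. The arithmetic can be sketched on its own (illustrative helper, not the kernel function):

```c
#include <assert.h>

/* Given a segment boundary mask (e.g. 0xfff for a 4 KiB boundary), an
 * absolute physical offset, and the queue's max segment size, return
 * the largest segment that does not cross a boundary: the distance to
 * the next boundary, capped by max_seg. */
static unsigned long max_seg_size(unsigned long mask, unsigned long phys,
				  unsigned long max_seg)
{
	unsigned long off = phys & mask;
	unsigned long room = mask - off + 1;

	return room < max_seg ? room : max_seg;
}
```

With `phys = page_to_phys(start_page) + offset`, this matches the patched expression `min_t(unsigned long, mask - offset + 1, queue_max_segment_size(q))`.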
+16
block/compat_ioctl.c
··· 6 6 #include <linux/compat.h> 7 7 #include <linux/elevator.h> 8 8 #include <linux/hdreg.h> 9 + #include <linux/pr.h> 9 10 #include <linux/slab.h> 10 11 #include <linux/syscalls.h> 11 12 #include <linux/types.h> ··· 355 354 * but we call blkdev_ioctl, which gets the lock for us 356 355 */ 357 356 case BLKRRPART: 357 + case BLKREPORTZONE: 358 + case BLKRESETZONE: 359 + case BLKOPENZONE: 360 + case BLKCLOSEZONE: 361 + case BLKFINISHZONE: 362 + case BLKGETZONESZ: 363 + case BLKGETNRZONES: 358 364 return blkdev_ioctl(bdev, mode, cmd, 359 365 (unsigned long)compat_ptr(arg)); 360 366 case BLKBSZSET_32: ··· 409 401 case BLKTRACETEARDOWN: /* compatible */ 410 402 ret = blk_trace_ioctl(bdev, cmd, compat_ptr(arg)); 411 403 return ret; 404 + case IOC_PR_REGISTER: 405 + case IOC_PR_RESERVE: 406 + case IOC_PR_RELEASE: 407 + case IOC_PR_PREEMPT: 408 + case IOC_PR_PREEMPT_ABORT: 409 + case IOC_PR_CLEAR: 410 + return blkdev_ioctl(bdev, mode, cmd, 411 + (unsigned long)compat_ptr(arg)); 412 412 default: 413 413 if (disk->fops->compat_ioctl) 414 414 ret = disk->fops->compat_ioctl(bdev, mode, cmd, arg);
+95 -40
drivers/ata/ahci_brcm.c
··· 76 76 }; 77 77 78 78 enum brcm_ahci_quirks { 79 - BRCM_AHCI_QUIRK_NO_NCQ = BIT(0), 80 - BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE = BIT(1), 79 + BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE = BIT(0), 81 80 }; 82 81 83 82 struct brcm_ahci_priv { ··· 212 213 brcm_sata_phy_disable(priv, i); 213 214 } 214 215 215 - static u32 brcm_ahci_get_portmask(struct platform_device *pdev, 216 + static u32 brcm_ahci_get_portmask(struct ahci_host_priv *hpriv, 216 217 struct brcm_ahci_priv *priv) 217 218 { 218 - void __iomem *ahci; 219 - struct resource *res; 220 219 u32 impl; 221 220 222 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ahci"); 223 - ahci = devm_ioremap_resource(&pdev->dev, res); 224 - if (IS_ERR(ahci)) 225 - return 0; 226 - 227 - impl = readl(ahci + HOST_PORTS_IMPL); 221 + impl = readl(hpriv->mmio + HOST_PORTS_IMPL); 228 222 229 223 if (fls(impl) > SATA_TOP_MAX_PHYS) 230 224 dev_warn(priv->dev, "warning: more ports than PHYs (%#x)\n", 231 225 impl); 232 226 else if (!impl) 233 227 dev_info(priv->dev, "no ports found\n"); 234 - 235 - devm_iounmap(&pdev->dev, ahci); 236 - devm_release_mem_region(&pdev->dev, res->start, resource_size(res)); 237 228 238 229 return impl; 239 230 } ··· 273 284 274 285 /* Perform the SATA PHY reset sequence */ 275 286 brcm_sata_phy_disable(priv, ap->port_no); 287 + 288 + /* Reset the SATA clock */ 289 + ahci_platform_disable_clks(hpriv); 290 + msleep(10); 291 + 292 + ahci_platform_enable_clks(hpriv); 293 + msleep(10); 276 294 277 295 /* Bring the PHY back on */ 278 296 brcm_sata_phy_enable(priv, ap->port_no); ··· 343 347 struct ata_host *host = dev_get_drvdata(dev); 344 348 struct ahci_host_priv *hpriv = host->private_data; 345 349 struct brcm_ahci_priv *priv = hpriv->plat_data; 346 - int ret; 347 350 348 - ret = ahci_platform_suspend(dev); 349 351 brcm_sata_phys_disable(priv); 350 - return ret; 352 + 353 + return ahci_platform_suspend(dev); 351 354 } 352 355 353 356 static int brcm_ahci_resume(struct device *dev) ··· 354 359 struct ata_host *host = dev_get_drvdata(dev); 355 360 struct ahci_host_priv *hpriv = host->private_data; 356 361 struct brcm_ahci_priv *priv = hpriv->plat_data; 362 + int ret; 363 + 364 + /* Make sure clocks are turned on before re-configuration */ 365 + ret = ahci_platform_enable_clks(hpriv); 366 + if (ret) 367 + return ret; 357 368 358 369 brcm_sata_init(priv); 359 370 brcm_sata_phys_enable(priv); 360 371 brcm_sata_alpm_init(hpriv); 361 - return ahci_platform_resume(dev); 372 + 373 + /* Since we had to enable clocks earlier on, we cannot use 374 + * ahci_platform_resume() as-is since a second call to 375 + * ahci_platform_enable_resources() would bump up the resources 376 + * (regulators, clocks, PHYs) count artificially so we copy the part 377 + * after ahci_platform_enable_resources(). 378 + */ 379 + ret = ahci_platform_enable_phys(hpriv); 380 + if (ret) 381 + goto out_disable_phys; 382 + 383 + ret = ahci_platform_resume_host(dev); 384 + if (ret) 385 + goto out_disable_platform_phys; 386 + 387 + /* We resumed so update PM runtime state */ 388 + pm_runtime_disable(dev); 389 + pm_runtime_set_active(dev); 390 + pm_runtime_enable(dev); 391 + 392 + return 0; 393 + 394 + out_disable_platform_phys: 395 + ahci_platform_disable_phys(hpriv); 396 + out_disable_phys: 397 + brcm_sata_phys_disable(priv); 398 + ahci_platform_disable_clks(hpriv); 399 + return ret; 362 400 } 363 401 #endif 364 402 ··· 438 410 if (!IS_ERR_OR_NULL(priv->rcdev)) 439 411 reset_control_deassert(priv->rcdev); 440 412 441 - if ((priv->version == BRCM_SATA_BCM7425) || 442 - (priv->version == BRCM_SATA_NSP)) { 443 - priv->quirks |= BRCM_AHCI_QUIRK_NO_NCQ; 444 - priv->quirks |= BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE; 413 + hpriv = ahci_platform_get_resources(pdev, 0); 414 + if (IS_ERR(hpriv)) { 415 + ret = PTR_ERR(hpriv); 416 + goto out_reset; 445 417 } 446 418 419 + hpriv->plat_data = priv; 420 + hpriv->flags = AHCI_HFLAG_WAKE_BEFORE_STOP | AHCI_HFLAG_NO_WRITE_TO_RO; 421 + 422 + switch (priv->version) { 423 + case BRCM_SATA_BCM7425: 424 + hpriv->flags |= AHCI_HFLAG_DELAY_ENGINE; 425 + /* fall through */ 426 + case BRCM_SATA_NSP: 427 + hpriv->flags |= AHCI_HFLAG_NO_NCQ; 428 + priv->quirks |= BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE; 429 + break; 430 + default: 431 + break; 432 + } 433 + 434 + ret = ahci_platform_enable_clks(hpriv); 435 + if (ret) 436 + goto out_reset; 437 + 438 + /* Must be first so as to configure endianness including that 439 + * of the standard AHCI register space. 440 + */ 447 441 brcm_sata_init(priv); 448 442 449 - priv->port_mask = brcm_ahci_get_portmask(pdev, priv); 450 - if (!priv->port_mask) 451 - return -ENODEV; 443 + /* Initializes priv->port_mask which is used below */ 444 + priv->port_mask = brcm_ahci_get_portmask(hpriv, priv); 445 + if (!priv->port_mask) { 446 + ret = -ENODEV; 447 + goto out_disable_clks; 448 + } 452 449 450 + /* Must be done before ahci_platform_enable_phys() */ 453 451 brcm_sata_phys_enable(priv); 454 - 455 - hpriv = ahci_platform_get_resources(pdev, 0); 456 - if (IS_ERR(hpriv)) 457 - return PTR_ERR(hpriv); 458 - hpriv->plat_data = priv; 459 - hpriv->flags = AHCI_HFLAG_WAKE_BEFORE_STOP; 460 452 461 453 brcm_sata_alpm_init(hpriv); 462 454 463 - ret = ahci_platform_enable_resources(hpriv); 455 + ret = ahci_platform_enable_phys(hpriv); 464 456 if (ret) 465 - return ret; 466 - 467 - if (priv->quirks & BRCM_AHCI_QUIRK_NO_NCQ) 468 - hpriv->flags |= AHCI_HFLAG_NO_NCQ; 469 - hpriv->flags |= AHCI_HFLAG_NO_WRITE_TO_RO; 457 + goto out_disable_phys; 470 458 471 459 ret = ahci_platform_init_host(pdev, hpriv, &ahci_brcm_port_info, 472 460 &ahci_platform_sht); 473 461 if (ret) 474 - return ret; 462 + goto out_disable_platform_phys; 475 463 476 464 dev_info(dev, "Broadcom AHCI SATA3 registered\n"); 477 465 478 466 return 0; 467 + 468 + out_disable_platform_phys: 469 + ahci_platform_disable_phys(hpriv); 470 + out_disable_phys: 471 + brcm_sata_phys_disable(priv); 472 + out_disable_clks: 473 + ahci_platform_disable_clks(hpriv); 474 + out_reset: 475 + if (!IS_ERR_OR_NULL(priv->rcdev)) 476 + reset_control_assert(priv->rcdev); 477 + return ret; 479 478 } 480 479 481 480 static int brcm_ahci_remove(struct platform_device *pdev) ··· 512 457 struct brcm_ahci_priv *priv = hpriv->plat_data; 513 458 int ret; 514 459 460 + brcm_sata_phys_disable(priv); 461 + 515 462 ret = ata_platform_remove_one(pdev); 516 463 if (ret) 517 464 return ret; 518 - 519 - brcm_sata_phys_disable(priv); 520 465 521 466 return 0; 522 467 }
+4 -2
drivers/ata/libahci_platform.c
··· 43 43 * RETURNS: 44 44 * 0 on success otherwise a negative error code 45 45 */ 46 - static int ahci_platform_enable_phys(struct ahci_host_priv *hpriv) 46 + int ahci_platform_enable_phys(struct ahci_host_priv *hpriv) 47 47 { 48 48 int rc, i; 49 49 ··· 74 74 } 75 75 return rc; 76 76 } 77 + EXPORT_SYMBOL_GPL(ahci_platform_enable_phys); 77 78 78 79 /** 79 80 * ahci_platform_disable_phys - Disable PHYs ··· 82 81 * 83 82 * This function disables all PHYs found in hpriv->phys. 84 83 */ 85 - static void ahci_platform_disable_phys(struct ahci_host_priv *hpriv) 84 + void ahci_platform_disable_phys(struct ahci_host_priv *hpriv) 86 85 { 87 86 int i; 88 87 ··· 91 90 phy_exit(hpriv->phys[i]); 92 91 } 93 92 } 93 + EXPORT_SYMBOL_GPL(ahci_platform_disable_phys); 94 94 95 95 /** 96 96 * ahci_platform_enable_clks - Enable platform clocks
+24
drivers/ata/libata-core.c
··· 5329 5329 } 5330 5330 5331 5331 /** 5332 + * ata_qc_get_active - get bitmask of active qcs 5333 + * @ap: port in question 5334 + * 5335 + * LOCKING: 5336 + * spin_lock_irqsave(host lock) 5337 + * 5338 + * RETURNS: 5339 + * Bitmask of active qcs 5340 + */ 5341 + u64 ata_qc_get_active(struct ata_port *ap) 5342 + { 5343 + u64 qc_active = ap->qc_active; 5344 + 5345 + /* ATA_TAG_INTERNAL is sent to hw as tag 0 */ 5346 + if (qc_active & (1ULL << ATA_TAG_INTERNAL)) { 5347 + qc_active |= (1 << 0); 5348 + qc_active &= ~(1ULL << ATA_TAG_INTERNAL); 5349 + } 5350 + 5351 + return qc_active; 5352 + } 5353 + EXPORT_SYMBOL_GPL(ata_qc_get_active); 5354 + 5355 + /** 5332 5356 * ata_qc_complete_multiple - Complete multiple qcs successfully 5333 5357 * @ap: port in question 5334 5358 * @qc_active: new qc_active mask
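Editor's note: the new `ata_qc_get_active()` above exists because the internal command tag is issued to hardware as tag 0, so drivers comparing `qc_active` against a hardware done-mask need the bit folded down first. A self-contained sketch of that remap (assuming the kernel's `ATA_TAG_INTERNAL` value of 32):

```c
#include <assert.h>
#include <stdint.h>

#define ATA_TAG_INTERNAL 32	/* assumed value, matching ATA_MAX_QUEUE */

/* If the internal-command bit is set, move it to bit 0 (the tag the
 * hardware actually sees) and clear the original bit. */
static uint64_t qc_get_active(uint64_t qc_active)
{
	if (qc_active & (1ULL << ATA_TAG_INTERNAL)) {
		qc_active |= 1ULL << 0;
		qc_active &= ~(1ULL << ATA_TAG_INTERNAL);
	}
	return qc_active;
}
```

This is why the sata_fsl/sata_mv/sata_nv hunks below switch from `ap->qc_active ^ done_mask` to `ata_qc_get_active(ap) ^ done_mask`.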
+1 -1
drivers/ata/sata_fsl.c
··· 1280 1280 i, ioread32(hcr_base + CC), 1281 1281 ioread32(hcr_base + CA)); 1282 1282 } 1283 - ata_qc_complete_multiple(ap, ap->qc_active ^ done_mask); 1283 + ata_qc_complete_multiple(ap, ata_qc_get_active(ap) ^ done_mask); 1284 1284 return; 1285 1285 1286 1286 } else if ((ap->qc_active & (1ULL << ATA_TAG_INTERNAL))) {
+1 -1
drivers/ata/sata_mv.c
··· 2829 2829 } 2830 2830 2831 2831 if (work_done) { 2832 - ata_qc_complete_multiple(ap, ap->qc_active ^ done_mask); 2832 + ata_qc_complete_multiple(ap, ata_qc_get_active(ap) ^ done_mask); 2833 2833 2834 2834 /* Update the software queue position index in hardware */ 2835 2835 writelfl((pp->crpb_dma & EDMA_RSP_Q_BASE_LO_MASK) |
+1 -1
drivers/ata/sata_nv.c
··· 984 984 check_commands = 0; 985 985 check_commands &= ~(1 << pos); 986 986 } 987 - ata_qc_complete_multiple(ap, ap->qc_active ^ done_mask); 987 + ata_qc_complete_multiple(ap, ata_qc_get_active(ap) ^ done_mask); 988 988 } 989 989 } 990 990
+2 -2
drivers/atm/eni.c
··· 374 374 here = (eni_vcc->descr+skip) & (eni_vcc->words-1); 375 375 dma[j++] = (here << MID_DMA_COUNT_SHIFT) | (vcc->vci 376 376 << MID_DMA_VCI_SHIFT) | MID_DT_JK; 377 - j++; 377 + dma[j++] = 0; 378 378 } 379 379 here = (eni_vcc->descr+size+skip) & (eni_vcc->words-1); 380 380 if (!eff) size += skip; ··· 447 447 if (size != eff) { 448 448 dma[j++] = (here << MID_DMA_COUNT_SHIFT) | 449 449 (vcc->vci << MID_DMA_VCI_SHIFT) | MID_DT_JK; 450 - j++; 450 + dma[j++] = 0; 451 451 } 452 452 if (!j || j > 2*RX_DMA_BUF) { 453 453 printk(KERN_CRIT DEV_LABEL "!j or j too big!!!\n");
+4 -1
drivers/block/null_blk_zoned.c
··· 186 186 if (zone->cond == BLK_ZONE_COND_FULL) 187 187 return BLK_STS_IOERR; 188 188 189 - zone->cond = BLK_ZONE_COND_CLOSED; 189 + if (zone->wp == zone->start) 190 + zone->cond = BLK_ZONE_COND_EMPTY; 191 + else 192 + zone->cond = BLK_ZONE_COND_CLOSED; 190 193 break; 191 194 case REQ_OP_ZONE_FINISH: 192 195 if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
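Editor's note: the null_blk change above makes `REQ_OP_ZONE_CLOSE` on a never-written zone leave it EMPTY rather than CLOSED, per the zoned-device state machine. The condition reduces to a one-liner (simplified model, not the driver code):

```c
#include <assert.h>

/* Simplified zone conditions for illustration. */
enum zone_cond { COND_EMPTY, COND_CLOSED, COND_FULL };

/* Closing a zone whose write pointer is still at the zone start means
 * nothing was written: the zone transitions to EMPTY, not CLOSED. */
static enum zone_cond close_zone(unsigned long wp, unsigned long start)
{
	return wp == start ? COND_EMPTY : COND_CLOSED;
}
```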
+1 -1
drivers/block/pktcdvd.c
··· 2707 2707 .release = pkt_close, 2708 2708 .ioctl = pkt_ioctl, 2709 2709 #ifdef CONFIG_COMPAT 2710 - .ioctl = pkt_compat_ioctl, 2710 + .compat_ioctl = pkt_compat_ioctl, 2711 2711 #endif 2712 2712 .check_events = pkt_check_events, 2713 2713 };
+1 -8
drivers/char/agp/isoch.c
··· 84 84 unsigned int cdev = 0; 85 85 u32 mnistat, tnistat, tstatus, mcmd; 86 86 u16 tnicmd, mnicmd; 87 - u8 mcapndx; 88 87 u32 tot_bw = 0, tot_n = 0, tot_rq = 0, y_max, rq_isoch, rq_async; 89 88 u32 step, rem, rem_isoch, rem_async; 90 89 int ret = 0; ··· 136 137 list_for_each(pos, head) { 137 138 cur = list_entry(pos, struct agp_3_5_dev, list); 138 139 dev = cur->dev; 139 - 140 - mcapndx = cur->capndx; 141 140 142 141 pci_read_config_dword(dev, cur->capndx+AGPNISTAT, &mnistat); 143 142 ··· 248 251 cur = master[cdev].dev; 249 252 dev = cur->dev; 250 253 251 - mcapndx = cur->capndx; 252 - 253 254 master[cdev].rq += (cdev == ndevs - 1) 254 255 ? (rem_async + rem_isoch) : step; 255 256 ··· 314 319 { 315 320 struct pci_dev *td = bridge->dev, *dev = NULL; 316 321 u8 mcapndx; 317 - u32 isoch, arqsz; 322 + u32 isoch; 318 323 u32 tstatus, mstatus, ncapid; 319 324 u32 mmajor; 320 325 u16 mpstat; ··· 328 333 isoch = (tstatus >> 17) & 0x1; 329 334 if (isoch == 0) /* isoch xfers not available, bail out. */ 330 335 return -ENODEV; 331 - 332 - arqsz = (tstatus >> 13) & 0x7; 333 336 334 337 /* 335 338 * Allocate a head for our AGP 3.5 device list
+1 -1
drivers/char/tpm/tpm-dev-common.c
··· 130 130 priv->response_read = true; 131 131 132 132 ret_size = min_t(ssize_t, size, priv->response_length); 133 - if (!ret_size) { 133 + if (ret_size <= 0) { 134 134 priv->response_length = 0; 135 135 goto out; 136 136 }
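Editor's note: the two TPM hunks above go together: `response_length` becomes `ssize_t` and the read path checks `ret_size <= 0` instead of `!ret_size`, because the field can cache a negative errno and an unsigned `min` would turn that into a huge bogus length. A sketch of the fixed clamp (hypothetical helper for illustration):

```c
#include <assert.h>
#include <sys/types.h>

/* With signed types, a cached negative errno in response_length
 * survives the min() and is caught by the <= 0 check; with size_t it
 * would wrap to a huge positive value and be copied out. */
static ssize_t clamp_read_size(ssize_t want, ssize_t response_length)
{
	ssize_t ret = want < response_length ? want : response_length;

	return ret <= 0 ? 0 : ret;
}
```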
+1 -1
drivers/char/tpm/tpm-dev.h
··· 14 14 struct work_struct timeout_work; 15 15 struct work_struct async_work; 16 16 wait_queue_head_t async_wait; 17 - size_t response_length; 17 + ssize_t response_length; 18 18 bool response_read; 19 19 bool command_enqueued; 20 20
+15 -19
drivers/char/tpm/tpm_tis_core.c
··· 978 978 979 979 if (wait_startup(chip, 0) != 0) { 980 980 rc = -ENODEV; 981 - goto err_start; 981 + goto out_err; 982 982 } 983 983 984 984 /* Take control of the TPM's interrupt hardware and shut it off */ 985 985 rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask); 986 986 if (rc < 0) 987 - goto err_start; 987 + goto out_err; 988 988 989 989 intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT | 990 990 TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT; ··· 993 993 994 994 rc = tpm_chip_start(chip); 995 995 if (rc) 996 - goto err_start; 997 - 996 + goto out_err; 998 997 rc = tpm2_probe(chip); 998 + tpm_chip_stop(chip); 999 999 if (rc) 1000 - goto err_probe; 1000 + goto out_err; 1001 1001 1002 1002 rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor); 1003 1003 if (rc < 0) 1004 - goto err_probe; 1004 + goto out_err; 1005 1005 1006 1006 priv->manufacturer_id = vendor; 1007 1007 1008 1008 rc = tpm_tis_read8(priv, TPM_RID(0), &rid); 1009 1009 if (rc < 0) 1010 - goto err_probe; 1010 + goto out_err; 1011 1011 1012 1012 dev_info(dev, "%s TPM (device-id 0x%X, rev-id %d)\n", 1013 1013 (chip->flags & TPM_CHIP_FLAG_TPM2) ? "2.0" : "1.2", ··· 1016 1016 probe = probe_itpm(chip); 1017 1017 if (probe < 0) { 1018 1018 rc = -ENODEV; 1019 - goto err_probe; 1019 + goto out_err; 1020 1020 } 1021 1021 1022 1022 /* Figure out the capabilities */ 1023 1023 rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps); 1024 1024 if (rc < 0) 1025 - goto err_probe; 1025 + goto out_err; 1026 1026 1027 1027 dev_dbg(dev, "TPM interface capabilities (0x%x):\n", 1028 1028 intfcaps); ··· 1056 1056 if (tpm_get_timeouts(chip)) { 1057 1057 dev_err(dev, "Could not get TPM timeouts and durations\n"); 1058 1058 rc = -ENODEV; 1059 - goto err_probe; 1059 + goto out_err; 1060 1060 } 1061 1061 1062 - chip->flags |= TPM_CHIP_FLAG_IRQ; 1063 1062 if (irq) { 1064 1063 tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED, 1065 1064 irq); ··· 1070 1071 } 1071 1072 } 1072 1073 1073 - tpm_chip_stop(chip); 1074 - 1075 1074 rc = tpm_chip_register(chip); 1076 1075 if (rc) 1077 - goto err_start; 1076 + goto out_err; 1077 + 1078 + if (chip->ops->clk_enable != NULL) 1079 + chip->ops->clk_enable(chip, false); 1078 1080 1079 1081 return 0; 1080 - 1081 - err_probe: 1082 - tpm_chip_stop(chip); 1083 - 1084 - err_start: 1082 + out_err: 1085 1083 if ((chip->ops != NULL) && (chip->ops->clk_enable != NULL)) 1086 1084 chip->ops->clk_enable(chip, false); 1087 1085
+1 -1
drivers/clocksource/timer-riscv.c
··· 56 56 return get_cycles64(); 57 57 } 58 58 59 - static u64 riscv_sched_clock(void) 59 + static u64 notrace riscv_sched_clock(void) 60 60 { 61 61 return get_cycles64(); 62 62 }
+2
drivers/cpufreq/cpufreq-dt-platdev.c
··· 121 121 { .compatible = "mediatek,mt8176", }, 122 122 { .compatible = "mediatek,mt8183", }, 123 123 124 + { .compatible = "nvidia,tegra20", }, 125 + { .compatible = "nvidia,tegra30", }, 124 126 { .compatible = "nvidia,tegra124", }, 125 127 { .compatible = "nvidia,tegra210", }, 126 128
+1 -4
drivers/devfreq/Kconfig
··· 83 83 select DEVFREQ_GOV_PASSIVE 84 84 select DEVFREQ_EVENT_EXYNOS_PPMU 85 85 select PM_DEVFREQ_EVENT 86 - select PM_OPP 87 86 help 88 87 This adds the common DEVFREQ driver for Exynos Memory bus. Exynos 89 88 Memory bus has one more group of memory bus (e.g, MIF and INT block). ··· 97 98 ARCH_TEGRA_132_SOC || ARCH_TEGRA_124_SOC || \ 98 99 ARCH_TEGRA_210_SOC || \ 99 100 COMPILE_TEST 100 - select PM_OPP 101 + depends on COMMON_CLK 101 102 help 102 103 This adds the DEVFREQ driver for the Tegra family of SoCs. 103 104 It reads ACTMON counters of memory controllers and adjusts the ··· 108 109 depends on (TEGRA_MC && TEGRA20_EMC) || COMPILE_TEST 109 110 depends on COMMON_CLK 110 111 select DEVFREQ_GOV_SIMPLE_ONDEMAND 111 - select PM_OPP 112 112 help 113 113 This adds the DEVFREQ driver for the Tegra20 family of SoCs. 114 114 It reads Memory Controller counters and adjusts the operating ··· 119 121 select DEVFREQ_EVENT_ROCKCHIP_DFI 120 122 select DEVFREQ_GOV_SIMPLE_ONDEMAND 121 123 select PM_DEVFREQ_EVENT 122 - select PM_OPP 123 124 help 124 125 This adds the DEVFREQ driver for the RK3399 DMC(Dynamic Memory Controller). 125 126 It sets the frequency for the memory controller and reads the usage counts
+2 -1
drivers/dma/dma-jz4780.c
··· 999 999 static const struct jz4780_dma_soc_data jz4725b_dma_soc_data = { 1000 1000 .nb_channels = 6, 1001 1001 .transfer_ord_max = 5, 1002 - .flags = JZ_SOC_DATA_PER_CHAN_PM | JZ_SOC_DATA_NO_DCKES_DCKEC, 1002 + .flags = JZ_SOC_DATA_PER_CHAN_PM | JZ_SOC_DATA_NO_DCKES_DCKEC | 1003 + JZ_SOC_DATA_BREAK_LINKS, 1003 1004 }; 1004 1005 1005 1006 static const struct jz4780_dma_soc_data jz4770_dma_soc_data = {
+2 -1
drivers/dma/ioat/dma.c
··· 377 377 378 378 descs->virt = dma_alloc_coherent(to_dev(ioat_chan), 379 379 SZ_2M, &descs->hw, flags); 380 - if (!descs->virt && (i > 0)) { 380 + if (!descs->virt) { 381 381 int idx; 382 382 383 383 for (idx = 0; idx < i; idx++) { 384 + descs = &ioat_chan->descs[idx]; 384 385 dma_free_coherent(to_dev(ioat_chan), SZ_2M, 385 386 descs->virt, descs->hw); 386 387 descs->virt = NULL;
+9 -3
drivers/dma/k3dma.c
··· 229 229 c = p->vchan; 230 230 if (c && (tc1 & BIT(i))) { 231 231 spin_lock_irqsave(&c->vc.lock, flags); 232 - vchan_cookie_complete(&p->ds_run->vd); 233 - p->ds_done = p->ds_run; 234 - p->ds_run = NULL; 232 + if (p->ds_run != NULL) { 233 + vchan_cookie_complete(&p->ds_run->vd); 234 + p->ds_done = p->ds_run; 235 + p->ds_run = NULL; 236 + } 235 237 spin_unlock_irqrestore(&c->vc.lock, flags); 236 238 } 237 239 if (c && (tc2 & BIT(i))) { ··· 271 269 return -EAGAIN; 272 270 273 271 if (BIT(c->phy->idx) & k3_dma_get_chan_stat(d)) 272 + return -EAGAIN; 273 + 274 + /* Avoid losing track of ds_run if a transaction is in flight */ 275 + if (c->phy->ds_run) 274 276 return -EAGAIN; 275 277 276 278 if (vd) {
+1 -2
drivers/dma/virt-dma.c
··· 104 104 dmaengine_desc_get_callback(&vd->tx, &cb); 105 105 106 106 list_del(&vd->node); 107 - vchan_vdesc_fini(vd); 108 - 109 107 dmaengine_desc_callback_invoke(&cb, &vd->tx_result); 108 + vchan_vdesc_fini(vd); 110 109 } 111 110 } 112 111
+1 -1
drivers/edac/sifive_edac.c
··· 10 10 #include <linux/edac.h> 11 11 #include <linux/platform_device.h> 12 12 #include "edac_module.h" 13 - #include <asm/sifive_l2_cache.h> 13 + #include <soc/sifive/sifive_l2_cache.h> 14 14 15 15 #define DRVNAME "sifive_edac" 16 16
-1
drivers/firmware/broadcom/tee_bnxt_fw.c
··· 215 215 fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ, 216 216 TEE_SHM_MAPPED | TEE_SHM_DMA_BUF); 217 217 if (IS_ERR(fw_shm_pool)) { 218 - tee_client_close_context(pvt_data.ctx); 219 218 dev_err(pvt_data.dev, "tee_shm_alloc failed\n"); 220 219 err = PTR_ERR(fw_shm_pool); 221 220 goto out_sess;
+3 -2
drivers/gpio/Kconfig
··· 553 553 554 554 config GPIO_TEGRA186 555 555 tristate "NVIDIA Tegra186 GPIO support" 556 - default ARCH_TEGRA_186_SOC 557 - depends on ARCH_TEGRA_186_SOC || COMPILE_TEST 556 + default ARCH_TEGRA_186_SOC || ARCH_TEGRA_194_SOC 557 + depends on ARCH_TEGRA_186_SOC || ARCH_TEGRA_194_SOC || COMPILE_TEST 558 558 depends on OF_GPIO 559 559 select GPIOLIB_IRQCHIP 560 560 select IRQ_DOMAIN_HIERARCHY ··· 1148 1148 config GPIO_MAX77620 1149 1149 tristate "GPIO support for PMIC MAX77620 and MAX20024" 1150 1150 depends on MFD_MAX77620 1151 + select GPIOLIB_IRQCHIP 1151 1152 help 1152 1153 GPIO driver for MAX77620 and MAX20024 PMIC from Maxim Semiconductor. 1153 1154 MAX77620 PMIC has 8 pins that can be configured as GPIOs. The
+1 -1
drivers/gpio/gpio-aspeed-sgpio.c
··· 107 107 return gpio->base + bank->irq_regs + GPIO_IRQ_STATUS; 108 108 default: 109 109 /* acturally if code runs to here, it's an error case */ 110 - BUG_ON(1); 110 + BUG(); 111 111 } 112 112 } 113 113
+7 -4
drivers/gpio/gpio-mockup.c
··· 156 156 mutex_lock(&chip->lock); 157 157 158 158 if (test_bit(FLAG_REQUESTED, &desc->flags) && 159 - !test_bit(FLAG_IS_OUT, &desc->flags)) { 159 + !test_bit(FLAG_IS_OUT, &desc->flags)) { 160 160 curr = __gpio_mockup_get(chip, offset); 161 161 if (curr == value) 162 162 goto out; ··· 165 165 irq_type = irq_get_trigger_type(irq); 166 166 167 167 if ((value == 1 && (irq_type & IRQ_TYPE_EDGE_RISING)) || 168 - (value == 0 && (irq_type & IRQ_TYPE_EDGE_FALLING))) 168 + (value == 0 && (irq_type & IRQ_TYPE_EDGE_FALLING))) 169 169 irq_sim_fire(sim, offset); 170 170 } 171 171 ··· 226 226 int direction; 227 227 228 228 mutex_lock(&chip->lock); 229 - direction = !chip->lines[offset].dir; 229 + direction = chip->lines[offset].dir; 230 230 mutex_unlock(&chip->lock); 231 231 232 232 return direction; ··· 395 395 struct gpio_chip *gc; 396 396 struct device *dev; 397 397 const char *name; 398 - int rv, base; 398 + int rv, base, i; 399 399 u16 ngpio; 400 400 401 401 dev = &pdev->dev; ··· 446 446 sizeof(*chip->lines), GFP_KERNEL); 447 447 if (!chip->lines) 448 448 return -ENOMEM; 449 + 450 + for (i = 0; i < gc->ngpio; i++) 451 + chip->lines[i].dir = GPIO_LINE_DIRECTION_IN; 449 452 450 453 if (device_property_read_bool(dev, "named-gpio-lines")) { 451 454 rv = gpio_mockup_name_lines(dev, chip);
+1
drivers/gpio/gpio-mpc8xxx.c
··· 346 346 return -ENOMEM; 347 347 348 348 gc = &mpc8xxx_gc->gc; 349 + gc->parent = &pdev->dev; 349 350 350 351 if (of_property_read_bool(np, "little-endian")) { 351 352 ret = bgpio_init(gc, &pdev->dev, 4,
+10 -16
drivers/gpio/gpio-pca953x.c
··· 568 568 { 569 569 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 570 570 struct pca953x_chip *chip = gpiochip_get_data(gc); 571 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 571 572 572 - chip->irq_mask[d->hwirq / BANK_SZ] &= ~BIT(d->hwirq % BANK_SZ); 573 + clear_bit(hwirq, chip->irq_mask); 573 574 } 574 575 575 576 static void pca953x_irq_unmask(struct irq_data *d) 576 577 { 577 578 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 578 579 struct pca953x_chip *chip = gpiochip_get_data(gc); 580 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 579 581 580 - chip->irq_mask[d->hwirq / BANK_SZ] |= BIT(d->hwirq % BANK_SZ); 582 + set_bit(hwirq, chip->irq_mask); 581 583 } 582 584 583 585 static int pca953x_irq_set_wake(struct irq_data *d, unsigned int on) ··· 637 635 { 638 636 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 639 637 struct pca953x_chip *chip = gpiochip_get_data(gc); 640 - int bank_nb = d->hwirq / BANK_SZ; 641 - u8 mask = BIT(d->hwirq % BANK_SZ); 638 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 642 639 643 640 if (!(type & IRQ_TYPE_EDGE_BOTH)) { 644 641 dev_err(&chip->client->dev, "irq %d: unsupported type %d\n", ··· 645 644 return -EINVAL; 646 645 } 647 646 648 - if (type & IRQ_TYPE_EDGE_FALLING) 649 - chip->irq_trig_fall[bank_nb] |= mask; 650 - else 651 - chip->irq_trig_fall[bank_nb] &= ~mask; 652 - 653 - if (type & IRQ_TYPE_EDGE_RISING) 654 - chip->irq_trig_raise[bank_nb] |= mask; 655 - else 656 - chip->irq_trig_raise[bank_nb] &= ~mask; 647 + assign_bit(hwirq, chip->irq_trig_fall, type & IRQ_TYPE_EDGE_FALLING); 648 + assign_bit(hwirq, chip->irq_trig_raise, type & IRQ_TYPE_EDGE_RISING); 657 649 658 650 return 0; 659 651 } ··· 655 661 { 656 662 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 657 663 struct pca953x_chip *chip = gpiochip_get_data(gc); 658 - u8 mask = BIT(d->hwirq % BANK_SZ); 664 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 659 665 660 - chip->irq_trig_raise[d->hwirq / BANK_SZ] &= ~mask; 661 - chip->irq_trig_fall[d->hwirq / BANK_SZ] &= ~mask; 666 + clear_bit(hwirq, chip->irq_trig_raise); 667 + clear_bit(hwirq, chip->irq_trig_fall); 662 668 } 663 669 664 670 static bool pca953x_irq_pending(struct pca953x_chip *chip, unsigned long *pending)
+1 -1
drivers/gpio/gpio-xgs-iproc.c
··· 280 280 return 0; 281 281 } 282 282 283 - static int __exit iproc_gpio_remove(struct platform_device *pdev) 283 + static int iproc_gpio_remove(struct platform_device *pdev) 284 284 { 285 285 struct iproc_gpio_chip *chip; 286 286
+3 -4
drivers/gpio/gpio-xtensa.c
··· 44 44 unsigned long flags; 45 45 46 46 local_irq_save(flags); 47 - RSR_CPENABLE(*cpenable); 48 - WSR_CPENABLE(*cpenable | BIT(XCHAL_CP_ID_XTIOP)); 49 - 47 + *cpenable = xtensa_get_sr(cpenable); 48 + xtensa_set_sr(*cpenable | BIT(XCHAL_CP_ID_XTIOP), cpenable); 50 49 return flags; 51 50 } 52 51 53 52 static inline void disable_cp(unsigned long flags, unsigned long cpenable) 54 53 { 55 - WSR_CPENABLE(cpenable); 54 + xtensa_set_sr(cpenable, cpenable); 56 55 local_irq_restore(flags); 57 56 } 58 57
+5 -3
drivers/gpio/gpio-zynq.c
··· 684 684 unsigned int bank_num; 685 685 686 686 for (bank_num = 0; bank_num < gpio->p_data->max_bank; bank_num++) { 687 + writel_relaxed(ZYNQ_GPIO_IXR_DISABLE_ALL, gpio->base_addr + 688 + ZYNQ_GPIO_INTDIS_OFFSET(bank_num)); 687 689 writel_relaxed(gpio->context.datalsw[bank_num], 688 690 gpio->base_addr + 689 691 ZYNQ_GPIO_DATA_LSW_OFFSET(bank_num)); ··· 695 693 writel_relaxed(gpio->context.dirm[bank_num], 696 694 gpio->base_addr + 697 695 ZYNQ_GPIO_DIRM_OFFSET(bank_num)); 698 - writel_relaxed(gpio->context.int_en[bank_num], 699 - gpio->base_addr + 700 - ZYNQ_GPIO_INTEN_OFFSET(bank_num)); 701 696 writel_relaxed(gpio->context.int_type[bank_num], 702 697 gpio->base_addr + 703 698 ZYNQ_GPIO_INTTYPE_OFFSET(bank_num)); ··· 704 705 writel_relaxed(gpio->context.int_any[bank_num], 705 706 gpio->base_addr + 706 707 ZYNQ_GPIO_INTANY_OFFSET(bank_num)); 708 + writel_relaxed(~(gpio->context.int_en[bank_num]), 709 + gpio->base_addr + 710 + ZYNQ_GPIO_INTEN_OFFSET(bank_num)); 707 711 } 708 712 } 709 713
+46 -5
drivers/gpio/gpiolib-acpi.c
··· 21 21 #include "gpiolib.h" 22 22 #include "gpiolib-acpi.h" 23 23 24 + #define QUIRK_NO_EDGE_EVENTS_ON_BOOT 0x01l 25 + #define QUIRK_NO_WAKEUP 0x02l 26 + 24 27 static int run_edge_events_on_boot = -1; 25 28 module_param(run_edge_events_on_boot, int, 0444); 26 29 MODULE_PARM_DESC(run_edge_events_on_boot, 27 30 "Run edge _AEI event-handlers at boot: 0=no, 1=yes, -1=auto"); 31 + 32 + static int honor_wakeup = -1; 33 + module_param(honor_wakeup, int, 0444); 34 + MODULE_PARM_DESC(honor_wakeup, 35 + "Honor the ACPI wake-capable flag: 0=no, 1=yes, -1=auto"); 28 36 29 37 /** 30 38 * struct acpi_gpio_event - ACPI GPIO event handler data ··· 289 281 event->handle = evt_handle; 290 282 event->handler = handler; 291 283 event->irq = irq; 292 - event->irq_is_wake = agpio->wake_capable == ACPI_WAKE_CAPABLE; 284 + event->irq_is_wake = honor_wakeup && agpio->wake_capable == ACPI_WAKE_CAPABLE; 293 285 event->pin = pin; 294 286 event->desc = desc; 295 287 ··· 1317 1309 /* We must use _sync so that this runs after the first deferred_probe run */ 1318 1310 late_initcall_sync(acpi_gpio_handle_deferred_request_irqs); 1319 1311 1320 - static const struct dmi_system_id run_edge_events_on_boot_blacklist[] = { 1312 + static const struct dmi_system_id gpiolib_acpi_quirks[] = { 1321 1313 { 1322 1314 /* 1323 1315 * The Minix Neo Z83-4 has a micro-USB-B id-pin handler for ··· 1327 1319 .matches = { 1328 1320 DMI_MATCH(DMI_SYS_VENDOR, "MINIX"), 1329 1321 DMI_MATCH(DMI_PRODUCT_NAME, "Z83-4"), 1330 - } 1322 + }, 1323 + .driver_data = (void *)QUIRK_NO_EDGE_EVENTS_ON_BOOT, 1331 1324 }, 1332 1325 { 1333 1326 /* ··· 1340 1331 .matches = { 1341 1332 DMI_MATCH(DMI_SYS_VENDOR, "Wortmann_AG"), 1342 1333 DMI_MATCH(DMI_PRODUCT_NAME, "TERRA_PAD_1061"), 1343 - } 1334 + }, 1335 + .driver_data = (void *)QUIRK_NO_EDGE_EVENTS_ON_BOOT, 1336 + }, 1337 + { 1338 + /* 1339 + * Various HP X2 10 Cherry Trail models use an external 1340 + * embedded-controller connected via I2C + an ACPI GPIO 1341 + * event handler. The embedded controller generates various 1342 + * spurious wakeup events when suspended. So disable wakeup 1343 + * for its handler (it uses the only ACPI GPIO event handler). 1344 + * This breaks wakeup when opening the lid, the user needs 1345 + * to press the power-button to wakeup the system. The 1346 + * alternative is suspend simply not working, which is worse. 1347 + */ 1348 + .matches = { 1349 + DMI_MATCH(DMI_SYS_VENDOR, "HP"), 1350 + DMI_MATCH(DMI_PRODUCT_NAME, "HP x2 Detachable 10-p0XX"), 1351 + }, 1352 + .driver_data = (void *)QUIRK_NO_WAKEUP, 1344 1353 }, 1345 1354 {} /* Terminating entry */ 1346 1355 }; 1347 1356 1348 1357 static int acpi_gpio_setup_params(void) 1349 1358 { 1359 + const struct dmi_system_id *id; 1360 + long quirks = 0; 1361 + 1362 + id = dmi_first_match(gpiolib_acpi_quirks); 1363 + if (id) 1364 + quirks = (long)id->driver_data; 1365 + 1350 1366 if (run_edge_events_on_boot < 0) { 1351 - if (dmi_check_system(run_edge_events_on_boot_blacklist)) 1367 + if (quirks & QUIRK_NO_EDGE_EVENTS_ON_BOOT) 1352 1368 run_edge_events_on_boot = 0; 1353 1369 else 1354 1370 run_edge_events_on_boot = 1; 1371 + } 1372 + 1373 + if (honor_wakeup < 0) { 1374 + if (quirks & QUIRK_NO_WAKEUP) 1375 + honor_wakeup = 0; 1376 + else 1377 + honor_wakeup = 1; 1355 1378 } 1356 1379 1357 1380 return 0;
+11 -2
drivers/gpio/gpiolib.c
··· 220 220 chip = gpiod_to_chip(desc); 221 221 offset = gpio_chip_hwgpio(desc); 222 222 223 + /* 224 + * Open drain emulation using input mode may incorrectly report 225 + * input here, fix that up. 226 + */ 227 + if (test_bit(FLAG_OPEN_DRAIN, &desc->flags) && 228 + test_bit(FLAG_IS_OUT, &desc->flags)) 229 + return 0; 230 + 223 231 if (!chip->get_direction) 224 232 return -ENOTSUPP; 225 233 ··· 4480 4472 4481 4473 if (chip->ngpio <= p->chip_hwnum) { 4482 4474 dev_err(dev, 4483 - "requested GPIO %d is out of range [0..%d] for chip %s\n", 4484 - idx, chip->ngpio, chip->label); 4475 + "requested GPIO %u (%u) is out of range [0..%u] for chip %s\n", 4476 + idx, p->chip_hwnum, chip->ngpio - 1, 4477 + chip->label); 4485 4478 return ERR_PTR(-EINVAL); 4486 4479 } 4487 4480
+11 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
··· 613 613 bool d3_supported = false; 614 614 struct pci_dev *parent_pdev; 615 615 616 - while ((pdev = pci_get_class(PCI_BASE_CLASS_DISPLAY << 16, pdev)) != NULL) { 616 + while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 617 + vga_count++; 618 + 619 + has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true); 620 + 621 + parent_pdev = pci_upstream_bridge(pdev); 622 + d3_supported |= parent_pdev && parent_pdev->bridge_d3; 623 + amdgpu_atpx_get_quirks(pdev); 624 + } 625 + 626 + while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { 617 627 vga_count++; 618 628 619 629 has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
+4 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 142 142 int amdgpu_mcbp = 0; 143 143 int amdgpu_discovery = -1; 144 144 int amdgpu_mes = 0; 145 - int amdgpu_noretry = 1; 145 + int amdgpu_noretry; 146 146 int amdgpu_force_asic_type = -1; 147 147 148 148 struct amdgpu_mgpu_info mgpu_info = { ··· 588 588 module_param_named(mes, amdgpu_mes, int, 0444); 589 589 590 590 MODULE_PARM_DESC(noretry, 591 - "Disable retry faults (0 = retry enabled, 1 = retry disabled (default))"); 591 + "Disable retry faults (0 = retry enabled (default), 1 = retry disabled)"); 592 592 module_param_named(noretry, amdgpu_noretry, int, 0644); 593 593 594 594 /** ··· 1359 1359 .driver_features = 1360 1360 DRIVER_USE_AGP | DRIVER_ATOMIC | 1361 1361 DRIVER_GEM | 1362 - DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ, 1362 + DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ | 1363 + DRIVER_SYNCOBJ_TIMELINE, 1363 1364 .load = amdgpu_driver_load_kms, 1364 1365 .open = amdgpu_driver_open_kms, 1365 1366 .postclose = amdgpu_driver_postclose_kms,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 1488 1488 1489 1489 /* Start rlc autoload after psp recieved all the gfx firmware */ 1490 1490 if (psp->autoload_supported && ucode->ucode_id == (amdgpu_sriov_vf(adev) ? 1491 - AMDGPU_UCODE_ID_CP_MEC2 : AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM)) { 1491 + AMDGPU_UCODE_ID_CP_MEC2 : AMDGPU_UCODE_ID_RLC_G)) { 1492 1492 ret = psp_rlc_autoload(psp); 1493 1493 if (ret) { 1494 1494 DRM_ERROR("Failed to start rlc autoload\n");
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
··· 292 292 AMDGPU_UCODE_ID_CP_MEC2_JT, 293 293 AMDGPU_UCODE_ID_CP_MES, 294 294 AMDGPU_UCODE_ID_CP_MES_DATA, 295 - AMDGPU_UCODE_ID_RLC_G, 296 295 AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL, 297 296 AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM, 298 297 AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM, 298 + AMDGPU_UCODE_ID_RLC_G, 299 299 AMDGPU_UCODE_ID_STORAGE, 300 300 AMDGPU_UCODE_ID_SMC, 301 301 AMDGPU_UCODE_ID_UVD,
+4 -11
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 1052 1052 case CHIP_VEGA20: 1053 1053 break; 1054 1054 case CHIP_RAVEN: 1055 - /* Disable GFXOFF on original raven. There are combinations 1056 - * of sbios and platforms that are not stable. 1057 - */ 1058 - if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8)) 1059 - adev->pm.pp_feature &= ~PP_GFXOFF_MASK; 1060 - else if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8) 1061 - &&((adev->gfx.rlc_fw_version != 106 && 1062 - adev->gfx.rlc_fw_version < 531) || 1063 - (adev->gfx.rlc_fw_version == 53815) || 1064 - (adev->gfx.rlc_feature_version < 1) || 1065 - !adev->gfx.rlc.is_rlc_v2_1)) 1055 + if (!(adev->rev_id >= 0x8 || 1056 + adev->pdev->device == 0x15d8) && 1057 + (adev->pm.fw_version < 0x41e2b || /* not raven1 fresh */ 1058 + !adev->gfx.rlc.is_rlc_v2_1)) /* without rlc save restore ucodes */ 1066 1059 adev->pm.pp_feature &= ~PP_GFXOFF_MASK; 1067 1060 1068 1061 if (adev->pm.pp_feature & PP_GFXOFF_MASK)
+23 -22
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3356 3356 return color_space; 3357 3357 } 3358 3358 3359 - static void reduce_mode_colour_depth(struct dc_crtc_timing *timing_out) 3359 + static bool adjust_colour_depth_from_display_info( 3360 + struct dc_crtc_timing *timing_out, 3361 + const struct drm_display_info *info) 3360 3362 { 3361 - if (timing_out->display_color_depth <= COLOR_DEPTH_888) 3362 - return; 3363 - 3364 - timing_out->display_color_depth--; 3365 - } 3366 - 3367 - static void adjust_colour_depth_from_display_info(struct dc_crtc_timing *timing_out, 3368 - const struct drm_display_info *info) 3369 - { 3363 + enum dc_color_depth depth = timing_out->display_color_depth; 3370 3364 int normalized_clk; 3371 - if (timing_out->display_color_depth <= COLOR_DEPTH_888) 3372 - return; 3373 3365 do { 3374 3366 normalized_clk = timing_out->pix_clk_100hz / 10; 3375 3367 /* YCbCr 4:2:0 requires additional adjustment of 1/2 */ 3376 3368 if (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420) 3377 3369 normalized_clk /= 2; 3378 3370 /* Adjusting pix clock following on HDMI spec based on colour depth */ 3379 - switch (timing_out->display_color_depth) { 3371 + switch (depth) { 3372 + case COLOR_DEPTH_888: 3373 + break; 3380 3374 case COLOR_DEPTH_101010: 3381 3375 normalized_clk = (normalized_clk * 30) / 24; 3382 3376 break; ··· 3381 3387 normalized_clk = (normalized_clk * 48) / 24; 3382 3388 break; 3383 3389 default: 3384 - return; 3390 + /* The above depths are the only ones valid for HDMI. */ 3391 + return false; 3385 3392 } 3386 - if (normalized_clk <= info->max_tmds_clock) 3387 - return; 3388 - reduce_mode_colour_depth(timing_out); 3389 - 3390 - } while (timing_out->display_color_depth > COLOR_DEPTH_888); 3391 - 3393 + if (normalized_clk <= info->max_tmds_clock) { 3394 + timing_out->display_color_depth = depth; 3395 + return true; 3396 + } 3397 + } while (--depth > COLOR_DEPTH_666); 3398 + return false; 3392 3399 } 3393 3400 3394 3401 static void fill_stream_properties_from_drm_display_mode( ··· 3469 3474 3470 3475 stream->out_transfer_func->type = TF_TYPE_PREDEFINED; 3471 3476 stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB; 3472 - if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) 3473 - adjust_colour_depth_from_display_info(timing_out, info); 3477 + if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) { 3478 + if (!adjust_colour_depth_from_display_info(timing_out, info) && 3479 + drm_mode_is_420_also(info, mode_in) && 3480 + timing_out->pixel_encoding != PIXEL_ENCODING_YCBCR420) { 3481 + timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420; 3482 + adjust_colour_depth_from_display_info(timing_out, info); 3483 + } 3484 + } 3474 3485 } 3475 3486 3476 3487 static void fill_audio_info(struct audio_info *audio_info,
+1
drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
··· 866 866 smu->smu_baco.platform_support = false; 867 867 868 868 mutex_init(&smu->sensor_lock); 869 + mutex_init(&smu->metrics_lock); 869 870 870 871 smu->watermarks_bitmap = 0; 871 872 smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+3
drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
··· 862 862 struct smu_table_context *smu_table= &smu->smu_table; 863 863 int ret = 0; 864 864 865 + mutex_lock(&smu->metrics_lock); 865 866 if (!smu_table->metrics_time || 866 867 time_after(jiffies, smu_table->metrics_time + HZ / 1000)) { 867 868 ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0, 868 869 (void *)smu_table->metrics_table, false); 869 870 if (ret) { 870 871 pr_info("Failed to export SMU metrics table!\n"); 872 + mutex_unlock(&smu->metrics_lock); 871 873 return ret; 872 874 } 873 875 smu_table->metrics_time = jiffies; 874 876 } 875 877 876 878 memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t)); 879 + mutex_unlock(&smu->metrics_lock); 877 880 878 881 return ret; 879 882 }
+1
drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
··· 349 349 const struct pptable_funcs *ppt_funcs; 350 350 struct mutex mutex; 351 351 struct mutex sensor_lock; 352 + struct mutex metrics_lock; 352 353 uint64_t pool_size; 353 354 354 355 struct smu_table_context smu_table;
+3
drivers/gpu/drm/amd/powerplay/navi10_ppt.c
··· 562 562 struct smu_table_context *smu_table= &smu->smu_table; 563 563 int ret = 0; 564 564 565 + mutex_lock(&smu->metrics_lock); 565 566 if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + msecs_to_jiffies(100))) { 566 567 ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0, 567 568 (void *)smu_table->metrics_table, false); 568 569 if (ret) { 569 570 pr_info("Failed to export SMU metrics table!\n"); 571 + mutex_unlock(&smu->metrics_lock); 570 572 return ret; 571 573 } 572 574 smu_table->metrics_time = jiffies; 573 575 } 574 576 575 577 memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t)); 578 + mutex_unlock(&smu->metrics_lock); 576 579 577 580 return ret; 578 581 }
+3
drivers/gpu/drm/amd/powerplay/vega20_ppt.c
··· 1678 1678 struct smu_table_context *smu_table= &smu->smu_table; 1679 1679 int ret = 0; 1680 1680 1681 + mutex_lock(&smu->metrics_lock); 1681 1682 if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + HZ / 1000)) { 1682 1683 ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0, 1683 1684 (void *)smu_table->metrics_table, false); 1684 1685 if (ret) { 1685 1686 pr_info("Failed to export SMU metrics table!\n"); 1687 + mutex_unlock(&smu->metrics_lock); 1686 1688 return ret; 1687 1689 } 1688 1690 smu_table->metrics_time = jiffies; 1689 1691 } 1690 1692 1691 1693 memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t)); 1694 + mutex_unlock(&smu->metrics_lock); 1692 1695 1693 1696 return ret; 1694 1697 }
+1 -1
drivers/gpu/drm/arm/malidp_mw.c
··· 56 56 return MODE_OK; 57 57 } 58 58 59 - const struct drm_connector_helper_funcs malidp_mw_connector_helper_funcs = { 59 + static const struct drm_connector_helper_funcs malidp_mw_connector_helper_funcs = { 60 60 .get_modes = malidp_mw_connector_get_modes, 61 61 .mode_valid = malidp_mw_connector_mode_valid, 62 62 };
+1 -1
drivers/gpu/drm/drm_dp_mst_topology.c
··· 393 393 memcpy(&buf[idx], req->u.i2c_read.transactions[i].bytes, req->u.i2c_read.transactions[i].num_bytes); 394 394 idx += req->u.i2c_read.transactions[i].num_bytes; 395 395 396 - buf[idx] = (req->u.i2c_read.transactions[i].no_stop_bit & 0x1) << 5; 396 + buf[idx] = (req->u.i2c_read.transactions[i].no_stop_bit & 0x1) << 4; 397 397 buf[idx] |= (req->u.i2c_read.transactions[i].i2c_transaction_delay & 0xf); 398 398 idx++; 399 399 }
+6 -1
drivers/gpu/drm/drm_fb_helper.c
··· 1283 1283 * Changes struct fb_var_screeninfo are currently not pushed back 1284 1284 * to KMS, hence fail if different settings are requested. 1285 1285 */ 1286 - if (var->bits_per_pixel != fb->format->cpp[0] * 8 || 1286 + if (var->bits_per_pixel > fb->format->cpp[0] * 8 || 1287 1287 var->xres > fb->width || var->yres > fb->height || 1288 1288 var->xres_virtual > fb->width || var->yres_virtual > fb->height) { 1289 1289 DRM_DEBUG("fb requested width/height/bpp can't fit in current fb " ··· 1307 1307 !var->blue.msb_right && !var->transp.msb_right) { 1308 1308 drm_fb_helper_fill_pixel_fmt(var, fb->format->depth); 1309 1309 } 1310 + 1311 + /* 1312 + * Likewise, bits_per_pixel should be rounded up to a supported value. 1313 + */ 1314 + var->bits_per_pixel = fb->format->cpp[0] * 8; 1310 1315 1311 1316 /* 1312 1317 * drm fbdev emulation doesn't support changing the pixel format at all,
+2 -2
drivers/gpu/drm/i915/display/intel_audio.c
··· 856 856 } 857 857 858 858 /* Force CDCLK to 2*BCLK as long as we need audio powered. */ 859 - if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) 859 + if (IS_GEMINILAKE(dev_priv)) 860 860 glk_force_audio_cdclk(dev_priv, true); 861 861 862 862 if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) ··· 875 875 876 876 /* Stop forcing CDCLK to 2*BCLK if no need for audio to be powered. */ 877 877 if (--dev_priv->audio_power_refcount == 0) 878 - if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) 878 + if (IS_GEMINILAKE(dev_priv)) 879 879 glk_force_audio_cdclk(dev_priv, false); 880 880 881 881 intel_display_power_put(dev_priv, POWER_DOMAIN_AUDIO, cookie);
+2 -7
drivers/gpu/drm/i915/display/intel_display.c
··· 4515 4515 { 4516 4516 struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc); 4517 4517 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 4518 - i915_reg_t reg; 4519 - u32 trans_ddi_func_ctl2_val; 4520 4518 4521 4519 if (old_crtc_state->master_transcoder == INVALID_TRANSCODER) 4522 4520 return; ··· 4522 4524 DRM_DEBUG_KMS("Disabling Transcoder Port Sync on Slave Transcoder %s\n", 4523 4525 transcoder_name(old_crtc_state->cpu_transcoder)); 4524 4526 4525 - reg = TRANS_DDI_FUNC_CTL2(old_crtc_state->cpu_transcoder); 4526 - trans_ddi_func_ctl2_val = ~(PORT_SYNC_MODE_ENABLE | 4527 - PORT_SYNC_MODE_MASTER_SELECT_MASK); 4528 - I915_WRITE(reg, trans_ddi_func_ctl2_val); 4527 + I915_WRITE(TRANS_DDI_FUNC_CTL2(old_crtc_state->cpu_transcoder), 0); 4529 4528 } 4530 4529 4531 4530 static void intel_fdi_normal_train(struct intel_crtc *crtc) ··· 15107 15112 return ret; 15108 15113 15109 15114 fb_obj_bump_render_priority(obj); 15110 - intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_DIRTYFB); 15115 + i915_gem_object_flush_frontbuffer(obj, ORIGIN_DIRTYFB); 15111 15116 15112 15117 if (!new_plane_state->base.fence) { /* implicit fencing */ 15113 15118 struct dma_fence *fence;
+6 -10
drivers/gpu/drm/i915/display/intel_frontbuffer.c
··· 229 229 vma->display_alignment = I915_GTT_MIN_ALIGNMENT; 230 230 spin_unlock(&obj->vma.lock); 231 231 232 - obj->frontbuffer = NULL; 232 + RCU_INIT_POINTER(obj->frontbuffer, NULL); 233 233 spin_unlock(&to_i915(obj->base.dev)->fb_tracking.lock); 234 234 235 235 i915_gem_object_put(obj); 236 - kfree(front); 236 + kfree_rcu(front, rcu); 237 237 } 238 238 239 239 struct intel_frontbuffer * ··· 242 242 struct drm_i915_private *i915 = to_i915(obj->base.dev); 243 243 struct intel_frontbuffer *front; 244 244 245 - spin_lock(&i915->fb_tracking.lock); 246 - front = obj->frontbuffer; 247 - if (front) 248 - kref_get(&front->ref); 249 - spin_unlock(&i915->fb_tracking.lock); 245 + front = __intel_frontbuffer_get(obj); 250 246 if (front) 251 247 return front; 252 248 ··· 258 262 i915_active_may_sleep(frontbuffer_retire)); 259 263 260 264 spin_lock(&i915->fb_tracking.lock); 261 - if (obj->frontbuffer) { 265 + if (rcu_access_pointer(obj->frontbuffer)) { 262 266 kfree(front); 263 - front = obj->frontbuffer; 267 + front = rcu_dereference_protected(obj->frontbuffer, true); 264 268 kref_get(&front->ref); 265 269 } else { 266 270 i915_gem_object_get(obj); 267 - obj->frontbuffer = front; 271 + rcu_assign_pointer(obj->frontbuffer, front); 268 272 } 269 273 spin_unlock(&i915->fb_tracking.lock); 270 274
+31 -3
drivers/gpu/drm/i915/display/intel_frontbuffer.h
··· 27 27 #include <linux/atomic.h> 28 28 #include <linux/kref.h> 29 29 30 + #include "gem/i915_gem_object_types.h" 30 31 #include "i915_active.h" 31 32 32 33 struct drm_i915_private; 33 - struct drm_i915_gem_object; 34 34 35 35 enum fb_op_origin { 36 36 ORIGIN_GTT, ··· 45 45 atomic_t bits; 46 46 struct i915_active write; 47 47 struct drm_i915_gem_object *obj; 48 + struct rcu_head rcu; 48 49 }; 49 50 50 51 void intel_frontbuffer_flip_prepare(struct drm_i915_private *i915, ··· 54 53 unsigned frontbuffer_bits); 55 54 void intel_frontbuffer_flip(struct drm_i915_private *i915, 56 55 unsigned frontbuffer_bits); 56 + 57 + void intel_frontbuffer_put(struct intel_frontbuffer *front); 58 + 59 + static inline struct intel_frontbuffer * 60 + __intel_frontbuffer_get(const struct drm_i915_gem_object *obj) 61 + { 62 + struct intel_frontbuffer *front; 63 + 64 + if (likely(!rcu_access_pointer(obj->frontbuffer))) 65 + return NULL; 66 + 67 + rcu_read_lock(); 68 + do { 69 + front = rcu_dereference(obj->frontbuffer); 70 + if (!front) 71 + break; 72 + 73 + if (unlikely(!kref_get_unless_zero(&front->ref))) 74 + continue; 75 + 76 + if (likely(front == rcu_access_pointer(obj->frontbuffer))) 77 + break; 78 + 79 + intel_frontbuffer_put(front); 80 + } while (1); 81 + rcu_read_unlock(); 82 + 83 + return front; 84 + } 57 85 58 86 struct intel_frontbuffer * 59 87 intel_frontbuffer_get(struct drm_i915_gem_object *obj); ··· 148 118 void intel_frontbuffer_track(struct intel_frontbuffer *old, 149 119 struct intel_frontbuffer *new, 150 120 unsigned int frontbuffer_bits); 151 - 152 - void intel_frontbuffer_put(struct intel_frontbuffer *front); 153 121 154 122 #endif /* __INTEL_FRONTBUFFER_H__ */
+13 -4
drivers/gpu/drm/i915/display/intel_overlay.c
··· 279 279 struct i915_vma *vma) 280 280 { 281 281 enum pipe pipe = overlay->crtc->pipe; 282 + struct intel_frontbuffer *from = NULL, *to = NULL; 282 283 283 284 WARN_ON(overlay->old_vma); 284 285 285 - intel_frontbuffer_track(overlay->vma ? overlay->vma->obj->frontbuffer : NULL, 286 - vma ? vma->obj->frontbuffer : NULL, 287 - INTEL_FRONTBUFFER_OVERLAY(pipe)); 286 + if (overlay->vma) 287 + from = intel_frontbuffer_get(overlay->vma->obj); 288 + if (vma) 289 + to = intel_frontbuffer_get(vma->obj); 290 + 291 + intel_frontbuffer_track(from, to, INTEL_FRONTBUFFER_OVERLAY(pipe)); 292 + 293 + if (to) 294 + intel_frontbuffer_put(to); 295 + if (from) 296 + intel_frontbuffer_put(from); 288 297 289 298 intel_frontbuffer_flip_prepare(overlay->i915, 290 299 INTEL_FRONTBUFFER_OVERLAY(pipe)); ··· 775 766 ret = PTR_ERR(vma); 776 767 goto out_pin_section; 777 768 } 778 - intel_frontbuffer_flush(new_bo->frontbuffer, ORIGIN_DIRTYFB); 769 + i915_gem_object_flush_frontbuffer(new_bo, ORIGIN_DIRTYFB); 779 770 780 771 if (!overlay->active) { 781 772 u32 oconfig;
+2 -1
drivers/gpu/drm/i915/gem/i915_gem_clflush.c
··· 20 20 { 21 21 GEM_BUG_ON(!i915_gem_object_has_pages(obj)); 22 22 drm_clflush_sg(obj->mm.pages); 23 - intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); 23 + 24 + i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); 24 25 } 25 26 26 27 static int clflush_work(struct dma_fence_work *base)
+2 -2
drivers/gpu/drm/i915/gem/i915_gem_domain.c
··· 664 664 i915_gem_object_unlock(obj); 665 665 666 666 if (write_domain) 667 - intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); 667 + i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); 668 668 669 669 out_unpin: 670 670 i915_gem_object_unpin_pages(obj); ··· 784 784 } 785 785 786 786 out: 787 - intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); 787 + i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); 788 788 obj->mm.dirty = true; 789 789 /* return with the pages pinned */ 790 790 return 0;
+25 -1
drivers/gpu/drm/i915/gem/i915_gem_object.c
··· 280 280 for_each_ggtt_vma(vma, obj) 281 281 intel_gt_flush_ggtt_writes(vma->vm->gt); 282 282 283 - intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); 283 + i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); 284 284 285 285 for_each_ggtt_vma(vma, obj) { 286 286 if (vma->iomap) ··· 306 306 } 307 307 308 308 obj->write_domain = 0; 309 + } 310 + 311 + void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, 312 + enum fb_op_origin origin) 313 + { 314 + struct intel_frontbuffer *front; 315 + 316 + front = __intel_frontbuffer_get(obj); 317 + if (front) { 318 + intel_frontbuffer_flush(front, origin); 319 + intel_frontbuffer_put(front); 320 + } 321 + } 322 + 323 + void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, 324 + enum fb_op_origin origin) 325 + { 326 + struct intel_frontbuffer *front; 327 + 328 + front = __intel_frontbuffer_get(obj); 329 + if (front) { 330 + intel_frontbuffer_invalidate(front, origin); 331 + intel_frontbuffer_put(front); 332 + } 309 333 } 310 334 311 335 void i915_gem_init__objects(struct drm_i915_private *i915)
+22 -1
drivers/gpu/drm/i915/gem/i915_gem_object.h
··· 13 13 14 14 #include <drm/i915_drm.h> 15 15 16 + #include "display/intel_frontbuffer.h" 16 17 #include "i915_gem_object_types.h" 17 - 18 18 #include "i915_gem_gtt.h" 19 19 20 20 void i915_gem_init__objects(struct drm_i915_private *i915); ··· 462 462 int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj, 463 463 unsigned int flags, 464 464 const struct i915_sched_attr *attr); 465 + 466 + void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, 467 + enum fb_op_origin origin); 468 + void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, 469 + enum fb_op_origin origin); 470 + 471 + static inline void 472 + i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, 473 + enum fb_op_origin origin) 474 + { 475 + if (unlikely(rcu_access_pointer(obj->frontbuffer))) 476 + __i915_gem_object_flush_frontbuffer(obj, origin); 477 + } 478 + 479 + static inline void 480 + i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, 481 + enum fb_op_origin origin) 482 + { 483 + if (unlikely(rcu_access_pointer(obj->frontbuffer))) 484 + __i915_gem_object_invalidate_frontbuffer(obj, origin); 485 + } 465 486 466 487 #endif
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_object_types.h
··· 150 150 */ 151 151 u16 write_domain; 152 152 153 - struct intel_frontbuffer *frontbuffer; 153 + struct intel_frontbuffer __rcu *frontbuffer; 154 154 155 155 /** Current tiling stride for the object, if it's tiled. */ 156 156 unsigned int tiling_and_stride;
+2 -1
drivers/gpu/drm/i915/gt/intel_gt_pm.c
··· 94 94 intel_uncore_forcewake_put(&i915->uncore, FORCEWAKE_ALL); 95 95 } 96 96 97 + /* Defer dropping the display power well for 100ms, it's slow! */ 97 98 GEM_BUG_ON(!wakeref); 98 - intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ, wakeref); 99 + intel_display_power_put_async(i915, POWER_DOMAIN_GT_IRQ, wakeref); 99 100 100 101 i915_globals_park(); 101 102
+2
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 4416 4416 ve->base.gt = siblings[0]->gt; 4417 4417 ve->base.uncore = siblings[0]->uncore; 4418 4418 ve->base.id = -1; 4419 + 4419 4420 ve->base.class = OTHER_CLASS; 4420 4421 ve->base.uabi_class = I915_ENGINE_CLASS_INVALID; 4421 4422 ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL; 4423 + ve->base.uabi_instance = I915_ENGINE_CLASS_INVALID_VIRTUAL; 4422 4424 4423 4425 /* 4424 4426 * The decision on whether to submit a request using semaphores
+11 -20
drivers/gpu/drm/i915/gt/intel_ring_submission.c
··· 1413 1413 int len; 1414 1414 u32 *cs; 1415 1415 1416 - flags |= MI_MM_SPACE_GTT; 1417 - if (IS_HASWELL(i915)) 1418 - /* These flags are for resource streamer on HSW+ */ 1419 - flags |= HSW_MI_RS_SAVE_STATE_EN | HSW_MI_RS_RESTORE_STATE_EN; 1420 - else 1421 - /* We need to save the extended state for powersaving modes */ 1422 - flags |= MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN; 1423 - 1424 1416 len = 4; 1425 1417 if (IS_GEN(i915, 7)) 1426 1418 len += 2 + (num_engines ? 4 * num_engines + 6 : 0); ··· 1581 1589 } 1582 1590 1583 1591 if (ce->state) { 1584 - u32 hw_flags; 1592 + u32 flags; 1585 1593 1586 1594 GEM_BUG_ON(rq->engine->id != RCS0); 1587 1595 1588 - /* 1589 - * The kernel context(s) is treated as pure scratch and is not 1590 - * expected to retain any state (as we sacrifice it during 1591 - * suspend and on resume it may be corrupted). This is ok, 1592 - * as nothing actually executes using the kernel context; it 1593 - * is purely used for flushing user contexts. 1594 - */ 1595 - hw_flags = 0; 1596 - if (i915_gem_context_is_kernel(rq->gem_context)) 1597 - hw_flags = MI_RESTORE_INHIBIT; 1596 + /* For resource streamer on HSW+ and power context elsewhere */ 1597 + BUILD_BUG_ON(HSW_MI_RS_SAVE_STATE_EN != MI_SAVE_EXT_STATE_EN); 1598 + BUILD_BUG_ON(HSW_MI_RS_RESTORE_STATE_EN != MI_RESTORE_EXT_STATE_EN); 1598 1599 1599 - ret = mi_set_context(rq, hw_flags); 1600 + flags = MI_SAVE_EXT_STATE_EN | MI_MM_SPACE_GTT; 1601 + if (!i915_gem_context_is_kernel(rq->gem_context)) 1602 + flags |= MI_RESTORE_EXT_STATE_EN; 1603 + else 1604 + flags |= MI_RESTORE_INHIBIT; 1605 + 1606 + ret = mi_set_context(rq, flags); 1600 1607 if (ret) 1601 1608 return ret; 1602 1609 }
+4 -2
drivers/gpu/drm/i915/i915_drv.h
··· 1660 1660 (IS_BROADWELL(dev_priv) || IS_GEN(dev_priv, 9)) 1661 1661 1662 1662 /* WaRsDisableCoarsePowerGating:skl,cnl */ 1663 - #define NEEDS_WaRsDisableCoarsePowerGating(dev_priv) \ 1664 - (IS_CANNONLAKE(dev_priv) || IS_GEN(dev_priv, 9)) 1663 + #define NEEDS_WaRsDisableCoarsePowerGating(dev_priv) \ 1664 + (IS_CANNONLAKE(dev_priv) || \ 1665 + IS_SKL_GT3(dev_priv) || \ 1666 + IS_SKL_GT4(dev_priv)) 1665 1667 1666 1668 #define HAS_GMBUS_IRQ(dev_priv) (INTEL_GEN(dev_priv) >= 4) 1667 1669 #define HAS_GMBUS_BURST_READ(dev_priv) (INTEL_GEN(dev_priv) >= 10 || \
+5 -5
drivers/gpu/drm/i915/i915_gem.c
··· 161 161 * We manually control the domain here and pretend that it 162 162 * remains coherent i.e. in the GTT domain, like shmem_pwrite. 163 163 */ 164 - intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); 164 + i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); 165 165 166 166 if (copy_from_user(vaddr, user_data, args->size)) 167 167 return -EFAULT; ··· 169 169 drm_clflush_virt_range(vaddr, args->size); 170 170 intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt); 171 171 172 - intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); 172 + i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); 173 173 return 0; 174 174 } 175 175 ··· 589 589 goto out_unpin; 590 590 } 591 591 592 - intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); 592 + i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); 593 593 594 594 user_data = u64_to_user_ptr(args->data_ptr); 595 595 offset = args->offset; ··· 631 631 user_data += page_length; 632 632 offset += page_length; 633 633 } 634 - intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); 634 + i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); 635 635 636 636 i915_gem_object_unlock_fence(obj, fence); 637 637 out_unpin: ··· 721 721 offset = 0; 722 722 } 723 723 724 - intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); 724 + i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); 725 725 i915_gem_object_unlock_fence(obj, fence); 726 726 727 727 return ret;
+20 -53
drivers/gpu/drm/i915/i915_pmu.c
··· 144 144 return ktime_to_ns(ktime_sub(ktime_get(), kt)); 145 145 } 146 146 147 - static u64 __pmu_estimate_rc6(struct i915_pmu *pmu) 148 - { 149 - u64 val; 150 - 151 - /* 152 - * We think we are runtime suspended. 153 - * 154 - * Report the delta from when the device was suspended to now, 155 - * on top of the last known real value, as the approximated RC6 156 - * counter value. 157 - */ 158 - val = ktime_since(pmu->sleep_last); 159 - val += pmu->sample[__I915_SAMPLE_RC6].cur; 160 - 161 - pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur = val; 162 - 163 - return val; 164 - } 165 - 166 - static u64 __pmu_update_rc6(struct i915_pmu *pmu, u64 val) 167 - { 168 - /* 169 - * If we are coming back from being runtime suspended we must 170 - * be careful not to report a larger value than returned 171 - * previously. 172 - */ 173 - if (val >= pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur) { 174 - pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur = 0; 175 - pmu->sample[__I915_SAMPLE_RC6].cur = val; 176 - } else { 177 - val = pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur; 178 - } 179 - 180 - return val; 181 - } 182 - 183 147 static u64 get_rc6(struct intel_gt *gt) 184 148 { 185 149 struct drm_i915_private *i915 = gt->i915; 186 150 struct i915_pmu *pmu = &i915->pmu; 187 151 unsigned long flags; 152 + bool awake = false; 188 153 u64 val; 189 154 190 - val = 0; 191 155 if (intel_gt_pm_get_if_awake(gt)) { 192 156 val = __get_rc6(gt); 193 157 intel_gt_pm_put_async(gt); 158 + awake = true; 194 159 } 195 160 196 161 spin_lock_irqsave(&pmu->lock, flags); 197 162 198 - if (val) 199 - val = __pmu_update_rc6(pmu, val); 163 + if (awake) { 164 + pmu->sample[__I915_SAMPLE_RC6].cur = val; 165 + } else { 166 + /* 167 + * We think we are runtime suspended. 168 + * 169 + * Report the delta from when the device was suspended to now, 170 + * on top of the last known real value, as the approximated RC6 171 + * counter value. 
172 + */ 173 + val = ktime_since(pmu->sleep_last); 174 + val += pmu->sample[__I915_SAMPLE_RC6].cur; 175 + } 176 + 177 + if (val < pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur) 178 + val = pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur; 200 179 else 201 - val = __pmu_estimate_rc6(pmu); 180 + pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = val; 202 181 203 182 spin_unlock_irqrestore(&pmu->lock, flags); 204 183 ··· 189 210 struct i915_pmu *pmu = &i915->pmu; 190 211 191 212 if (pmu->enable & config_enabled_mask(I915_PMU_RC6_RESIDENCY)) 192 - __pmu_update_rc6(pmu, __get_rc6(&i915->gt)); 213 + pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt); 193 214 194 215 pmu->sleep_last = ktime_get(); 195 - } 196 - 197 - static void unpark_rc6(struct drm_i915_private *i915) 198 - { 199 - struct i915_pmu *pmu = &i915->pmu; 200 - 201 - /* Estimate how long we slept and accumulate that into rc6 counters */ 202 - if (pmu->enable & config_enabled_mask(I915_PMU_RC6_RESIDENCY)) 203 - __pmu_estimate_rc6(pmu); 204 216 } 205 217 206 218 #else ··· 202 232 } 203 233 204 234 static void park_rc6(struct drm_i915_private *i915) {} 205 - static void unpark_rc6(struct drm_i915_private *i915) {} 206 235 207 236 #endif 208 237 ··· 249 280 * Re-enable sampling timer when GPU goes active. 250 281 */ 251 282 __i915_pmu_maybe_start_timer(pmu); 252 - 253 - unpark_rc6(i915); 254 283 255 284 spin_unlock_irq(&pmu->lock); 256 285 }
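The reworked `get_rc6()` above folds the two old helpers into an inline sleep-time estimate plus a single monotonicity clamp on `__I915_SAMPLE_RC6_LAST_REPORTED`. The clamp is easy to check in isolation; this is a standalone copy of just that logic, with an illustrative stand-in struct rather than the real `struct i915_pmu`:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the relevant i915_pmu sample slot. */
struct rc6_state {
	uint64_t last_reported;	/* __I915_SAMPLE_RC6_LAST_REPORTED */
};

/* Whether `val` came from a real register read or from a sleep-time
 * estimate, the value handed to userspace must never go backwards. */
static uint64_t rc6_clamp_report(struct rc6_state *s, uint64_t val)
{
	if (val < s->last_reported)
		val = s->last_reported;	/* an earlier estimate overshot */
	else
		s->last_reported = val;	/* new high-water mark */
	return val;
}
```

The high-water mark absorbs the case where a sleep-time estimate ran ahead of the real counter: subsequent real reads are reported as flat rather than decreasing until they catch up.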
+1 -1
drivers/gpu/drm/i915/i915_pmu.h
··· 18 18 __I915_SAMPLE_FREQ_ACT = 0, 19 19 __I915_SAMPLE_FREQ_REQ, 20 20 __I915_SAMPLE_RC6, 21 - __I915_SAMPLE_RC6_ESTIMATED, 21 + __I915_SAMPLE_RC6_LAST_REPORTED, 22 22 __I915_NUM_PMU_SAMPLERS 23 23 }; 24 24
+7 -1
drivers/gpu/drm/i915/i915_reg.h
··· 4177 4177 #define CPSSUNIT_CLKGATE_DIS REG_BIT(9) 4178 4178 4179 4179 #define UNSLICE_UNIT_LEVEL_CLKGATE _MMIO(0x9434) 4180 - #define VFUNIT_CLKGATE_DIS (1 << 20) 4180 + #define VFUNIT_CLKGATE_DIS REG_BIT(20) 4181 + #define HSUNIT_CLKGATE_DIS REG_BIT(8) 4182 + #define VSUNIT_CLKGATE_DIS REG_BIT(3) 4183 + 4184 + #define UNSLICE_UNIT_LEVEL_CLKGATE2 _MMIO(0x94e4) 4185 + #define VSUNIT_CLKGATE_DIS_TGL REG_BIT(19) 4186 + #define PSDUNIT_CLKGATE_DIS REG_BIT(5) 4181 4187 4182 4188 #define INF_UNIT_LEVEL_CLKGATE _MMIO(0x9560) 4183 4189 #define CGPSF_CLKGATE_DIS (1 << 3)
+8 -2
drivers/gpu/drm/i915/i915_vma.c
··· 1104 1104 return err; 1105 1105 1106 1106 if (flags & EXEC_OBJECT_WRITE) { 1107 - if (intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CS)) 1108 - i915_active_add_request(&obj->frontbuffer->write, rq); 1107 + struct intel_frontbuffer *front; 1108 + 1109 + front = __intel_frontbuffer_get(obj); 1110 + if (unlikely(front)) { 1111 + if (intel_frontbuffer_invalidate(front, ORIGIN_CS)) 1112 + i915_active_add_request(&front->write, rq); 1113 + intel_frontbuffer_put(front); 1114 + } 1109 1115 1110 1116 dma_resv_add_excl_fence(vma->resv, &rq->fence); 1111 1117 obj->write_domain = I915_GEM_DOMAIN_RENDER;
+11
drivers/gpu/drm/i915/intel_pm.c
··· 6565 6565 /* WaEnable32PlaneMode:icl */ 6566 6566 I915_WRITE(GEN9_CSFE_CHICKEN1_RCS, 6567 6567 _MASKED_BIT_ENABLE(GEN11_ENABLE_32_PLANE_MODE)); 6568 + 6569 + /* 6570 + * Wa_1408615072:icl,ehl (vsunit) 6571 + * Wa_1407596294:icl,ehl (hsunit) 6572 + */ 6573 + intel_uncore_rmw(&dev_priv->uncore, UNSLICE_UNIT_LEVEL_CLKGATE, 6574 + 0, VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS); 6575 + 6576 + /* Wa_1407352427:icl,ehl */ 6577 + intel_uncore_rmw(&dev_priv->uncore, UNSLICE_UNIT_LEVEL_CLKGATE2, 6578 + 0, PSDUNIT_CLKGATE_DIS); 6568 6579 } 6569 6580 6570 6581 static void tgl_init_clock_gating(struct drm_i915_private *dev_priv)
+12 -6
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 215 215 struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); 216 216 struct mtk_ddp_comp *comp; 217 217 int i, count = 0; 218 + unsigned int local_index = plane - mtk_crtc->planes; 218 219 219 220 for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) { 220 221 comp = mtk_crtc->ddp_comp[i]; 221 - if (plane->index < (count + mtk_ddp_comp_layer_nr(comp))) { 222 - *local_layer = plane->index - count; 222 + if (local_index < (count + mtk_ddp_comp_layer_nr(comp))) { 223 + *local_layer = local_index - count; 223 224 return comp; 224 225 } 225 226 count += mtk_ddp_comp_layer_nr(comp); ··· 311 310 312 311 plane_state = to_mtk_plane_state(plane->state); 313 312 comp = mtk_drm_ddp_comp_for_plane(crtc, plane, &local_layer); 314 - mtk_ddp_comp_layer_config(comp, local_layer, plane_state); 313 + if (comp) 314 + mtk_ddp_comp_layer_config(comp, local_layer, 315 + plane_state); 315 316 } 316 317 317 318 return 0; ··· 389 386 comp = mtk_drm_ddp_comp_for_plane(crtc, plane, 390 387 &local_layer); 391 388 392 - mtk_ddp_comp_layer_config(comp, local_layer, 393 - plane_state); 389 + if (comp) 390 + mtk_ddp_comp_layer_config(comp, local_layer, 391 + plane_state); 394 392 plane_state->pending.config = false; 395 393 } 396 394 mtk_crtc->pending_planes = false; ··· 405 401 struct mtk_ddp_comp *comp; 406 402 407 403 comp = mtk_drm_ddp_comp_for_plane(crtc, plane, &local_layer); 408 - return mtk_ddp_comp_layer_check(comp, local_layer, state); 404 + if (comp) 405 + return mtk_ddp_comp_layer_check(comp, local_layer, state); 406 + return 0; 409 407 } 410 408 411 409 static void mtk_drm_crtc_atomic_enable(struct drm_crtc *crtc,
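The fix above derives the plane index from the CRTC's own plane array (`plane - mtk_crtc->planes`) before walking the per-component layer counts, and the callers now tolerate a NULL component. The index-to-(component, local layer) walk itself reduces to the following sketch (illustrative names and flat arrays, not the driver's types):

```c
#include <assert.h>

/* Map a CRTC-local plane index onto (component, layer-within-component),
 * given how many layers each DDP component exposes.  Returns the
 * component index, or -1 when the index is out of range -- the case the
 * patched callers now have to check for. */
static int comp_for_plane(const unsigned int *layers, int ncomp,
			  unsigned int local_index, unsigned int *local_layer)
{
	unsigned int count = 0;
	int i;

	for (i = 0; i < ncomp; i++) {
		if (local_index < count + layers[i]) {
			*local_layer = local_index - count;
			return i;
		}
		count += layers[i];
	}
	return -1;
}
```

Using the CRTC-relative index matters because `plane->index` is global across all CRTCs, so on a multi-pipe setup it can exceed the layer total of the components bound to one CRTC.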
+38 -29
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 230 230 static void mtk_dsi_phy_timconfig(struct mtk_dsi *dsi) 231 231 { 232 232 u32 timcon0, timcon1, timcon2, timcon3; 233 - u32 ui, cycle_time; 233 + u32 data_rate_mhz = DIV_ROUND_UP(dsi->data_rate, 1000000); 234 234 struct mtk_phy_timing *timing = &dsi->phy_timing; 235 235 236 - ui = DIV_ROUND_UP(1000000000, dsi->data_rate); 237 - cycle_time = div_u64(8000000000ULL, dsi->data_rate); 236 + timing->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1; 237 + timing->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000; 238 + timing->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 - 239 + timing->da_hs_prepare; 240 + timing->da_hs_trail = timing->da_hs_prepare + 1; 238 241 239 - timing->lpx = NS_TO_CYCLE(60, cycle_time); 240 - timing->da_hs_prepare = NS_TO_CYCLE(50 + 5 * ui, cycle_time); 241 - timing->da_hs_zero = NS_TO_CYCLE(110 + 6 * ui, cycle_time); 242 - timing->da_hs_trail = NS_TO_CYCLE(77 + 4 * ui, cycle_time); 242 + timing->ta_go = 4 * timing->lpx - 2; 243 + timing->ta_sure = timing->lpx + 2; 244 + timing->ta_get = 4 * timing->lpx; 245 + timing->da_hs_exit = 2 * timing->lpx + 1; 243 246 244 - timing->ta_go = 4 * timing->lpx; 245 - timing->ta_sure = 3 * timing->lpx / 2; 246 - timing->ta_get = 5 * timing->lpx; 247 - timing->da_hs_exit = 2 * timing->lpx; 248 - 249 - timing->clk_hs_zero = NS_TO_CYCLE(336, cycle_time); 250 - timing->clk_hs_trail = NS_TO_CYCLE(100, cycle_time) + 10; 251 - 252 - timing->clk_hs_prepare = NS_TO_CYCLE(64, cycle_time); 253 - timing->clk_hs_post = NS_TO_CYCLE(80 + 52 * ui, cycle_time); 254 - timing->clk_hs_exit = 2 * timing->lpx; 247 + timing->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000); 248 + timing->clk_hs_post = timing->clk_hs_prepare + 8; 249 + timing->clk_hs_trail = timing->clk_hs_prepare; 250 + timing->clk_hs_zero = timing->clk_hs_trail * 4; 251 + timing->clk_hs_exit = 2 * timing->clk_hs_trail; 255 252 256 253 timcon0 = timing->lpx | timing->da_hs_prepare << 8 | 257 254 timing->da_hs_zero << 16 | timing->da_hs_trail << 
24; ··· 479 482 dsi_tmp_buf_bpp - 10); 480 483 481 484 data_phy_cycles = timing->lpx + timing->da_hs_prepare + 482 - timing->da_hs_zero + timing->da_hs_exit + 2; 485 + timing->da_hs_zero + timing->da_hs_exit + 3; 483 486 484 487 if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) { 485 - if (vm->hfront_porch * dsi_tmp_buf_bpp > 488 + if ((vm->hfront_porch + vm->hback_porch) * dsi_tmp_buf_bpp > 486 489 data_phy_cycles * dsi->lanes + 18) { 487 - horizontal_frontporch_byte = vm->hfront_porch * 488 - dsi_tmp_buf_bpp - 489 - data_phy_cycles * 490 - dsi->lanes - 18; 490 + horizontal_frontporch_byte = 491 + vm->hfront_porch * dsi_tmp_buf_bpp - 492 + (data_phy_cycles * dsi->lanes + 18) * 493 + vm->hfront_porch / 494 + (vm->hfront_porch + vm->hback_porch); 495 + 496 + horizontal_backporch_byte = 497 + horizontal_backporch_byte - 498 + (data_phy_cycles * dsi->lanes + 18) * 499 + vm->hback_porch / 500 + (vm->hfront_porch + vm->hback_porch); 491 501 } else { 492 502 DRM_WARN("HFP less than d-phy, FPS will under 60Hz\n"); 493 503 horizontal_frontporch_byte = vm->hfront_porch * 494 504 dsi_tmp_buf_bpp; 495 505 } 496 506 } else { 497 - if (vm->hfront_porch * dsi_tmp_buf_bpp > 507 + if ((vm->hfront_porch + vm->hback_porch) * dsi_tmp_buf_bpp > 498 508 data_phy_cycles * dsi->lanes + 12) { 499 - horizontal_frontporch_byte = vm->hfront_porch * 500 - dsi_tmp_buf_bpp - 501 - data_phy_cycles * 502 - dsi->lanes - 12; 509 + horizontal_frontporch_byte = 510 + vm->hfront_porch * dsi_tmp_buf_bpp - 511 + (data_phy_cycles * dsi->lanes + 12) * 512 + vm->hfront_porch / 513 + (vm->hfront_porch + vm->hback_porch); 514 + horizontal_backporch_byte = horizontal_backporch_byte - 515 + (data_phy_cycles * dsi->lanes + 12) * 516 + vm->hback_porch / 517 + (vm->hfront_porch + vm->hback_porch); 503 518 } else { 504 519 DRM_WARN("HFP less than d-phy, FPS will under 60Hz\n"); 505 520 horizontal_frontporch_byte = vm->hfront_porch *
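The rewritten `mtk_dsi_phy_timconfig()` above computes every D-PHY timing directly from the link rate in MHz using integer math. Copying those formulas into a standalone helper makes the arithmetic easy to spot-check outside the driver (the struct and function names here are illustrative, only the formulas mirror the patch):

```c
#include <assert.h>
#include <stdint.h>

struct dphy_timing {
	uint32_t lpx, da_hs_prepare, da_hs_zero, da_hs_trail;
	uint32_t ta_go, ta_sure, ta_get, da_hs_exit;
	uint32_t clk_hs_prepare, clk_hs_post, clk_hs_trail;
	uint32_t clk_hs_zero, clk_hs_exit;
};

/* Same integer-only derivation as the patched driver: all values are in
 * byte-clock cycles, computed from the per-lane data rate in MHz. */
static void dphy_timing_calc(struct dphy_timing *t, uint32_t data_rate_mhz)
{
	t->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1;
	t->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000;
	t->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 -
			t->da_hs_prepare;
	t->da_hs_trail = t->da_hs_prepare + 1;

	t->ta_go = 4 * t->lpx - 2;
	t->ta_sure = t->lpx + 2;
	t->ta_get = 4 * t->lpx;
	t->da_hs_exit = 2 * t->lpx + 1;

	t->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000);
	t->clk_hs_post = t->clk_hs_prepare + 8;
	t->clk_hs_trail = t->clk_hs_prepare;
	t->clk_hs_zero = t->clk_hs_trail * 4;
	t->clk_hs_exit = 2 * t->clk_hs_trail;
}
```

For a 500 MHz link this yields, for example, `lpx = 4`, `da_hs_prepare = 5`, `da_hs_zero = 7`, `da_hs_trail = 6` byte cycles; note the integer divisions truncate, which the `+ 1` terms in the patch compensate for.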
-2
drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
··· 685 685 struct sun4i_hdmi *hdmi = dev_get_drvdata(dev); 686 686 687 687 cec_unregister_adapter(hdmi->cec_adap); 688 - drm_connector_cleanup(&hdmi->connector); 689 - drm_encoder_cleanup(&hdmi->encoder); 690 688 i2c_del_adapter(hdmi->i2c); 691 689 i2c_put_adapter(hdmi->ddc_i2c); 692 690 clk_disable_unprepare(hdmi->mod_clk);
+12 -3
drivers/gpu/drm/sun4i/sun4i_tcon.c
··· 489 489 490 490 WARN_ON(!tcon->quirks->has_channel_0); 491 491 492 - tcon->dclk_min_div = 1; 492 + tcon->dclk_min_div = tcon->quirks->dclk_min_div; 493 493 tcon->dclk_max_div = 127; 494 494 sun4i_tcon0_mode_set_common(tcon, mode); 495 495 ··· 1426 1426 static const struct sun4i_tcon_quirks sun4i_a10_quirks = { 1427 1427 .has_channel_0 = true, 1428 1428 .has_channel_1 = true, 1429 + .dclk_min_div = 4, 1429 1430 .set_mux = sun4i_a10_tcon_set_mux, 1430 1431 }; 1431 1432 1432 1433 static const struct sun4i_tcon_quirks sun5i_a13_quirks = { 1433 1434 .has_channel_0 = true, 1434 1435 .has_channel_1 = true, 1436 + .dclk_min_div = 4, 1435 1437 .set_mux = sun5i_a13_tcon_set_mux, 1436 1438 }; 1437 1439 ··· 1442 1440 .has_channel_1 = true, 1443 1441 .has_lvds_alt = true, 1444 1442 .needs_de_be_mux = true, 1443 + .dclk_min_div = 1, 1445 1444 .set_mux = sun6i_tcon_set_mux, 1446 1445 }; 1447 1446 ··· 1450 1447 .has_channel_0 = true, 1451 1448 .has_channel_1 = true, 1452 1449 .needs_de_be_mux = true, 1450 + .dclk_min_div = 1, 1453 1451 }; 1454 1452 1455 1453 static const struct sun4i_tcon_quirks sun7i_a20_quirks = { 1456 1454 .has_channel_0 = true, 1457 1455 .has_channel_1 = true, 1456 + .dclk_min_div = 4, 1458 1457 /* Same display pipeline structure as A10 */ 1459 1458 .set_mux = sun4i_a10_tcon_set_mux, 1460 1459 }; ··· 1464 1459 static const struct sun4i_tcon_quirks sun8i_a33_quirks = { 1465 1460 .has_channel_0 = true, 1466 1461 .has_lvds_alt = true, 1462 + .dclk_min_div = 1, 1467 1463 }; 1468 1464 1469 1465 static const struct sun4i_tcon_quirks sun8i_a83t_lcd_quirks = { 1470 1466 .supports_lvds = true, 1471 1467 .has_channel_0 = true, 1468 + .dclk_min_div = 1, 1472 1469 }; 1473 1470 1474 1471 static const struct sun4i_tcon_quirks sun8i_a83t_tv_quirks = { ··· 1484 1477 1485 1478 static const struct sun4i_tcon_quirks sun8i_v3s_quirks = { 1486 1479 .has_channel_0 = true, 1480 + .dclk_min_div = 1, 1487 1481 }; 1488 1482 1489 1483 static const struct sun4i_tcon_quirks 
sun9i_a80_tcon_lcd_quirks = { 1490 - .has_channel_0 = true, 1491 - .needs_edp_reset = true, 1484 + .has_channel_0 = true, 1485 + .needs_edp_reset = true, 1486 + .dclk_min_div = 1, 1492 1487 }; 1493 1488 1494 1489 static const struct sun4i_tcon_quirks sun9i_a80_tcon_tv_quirks = {
+1
drivers/gpu/drm/sun4i/sun4i_tcon.h
··· 224 224 bool needs_de_be_mux; /* sun6i needs mux to select backend */ 225 225 bool needs_edp_reset; /* a80 edp reset needed for tcon0 access */ 226 226 bool supports_lvds; /* Does the TCON support an LVDS output? */ 227 + u8 dclk_min_div; /* minimum divider for TCON0 DCLK */ 227 228 228 229 /* callback to handle tcon muxing options */ 229 230 int (*set_mux)(struct sun4i_tcon *, const struct drm_encoder *);
+2 -1
drivers/hid/hid-asus.c
··· 261 261 struct hid_usage *usage, __s32 value) 262 262 { 263 263 if ((usage->hid & HID_USAGE_PAGE) == 0xff310000 && 264 - (usage->hid & HID_USAGE) != 0x00 && !usage->type) { 264 + (usage->hid & HID_USAGE) != 0x00 && 265 + (usage->hid & HID_USAGE) != 0xff && !usage->type) { 265 266 hid_warn(hdev, "Unmapped Asus vendor usagepage code 0x%02x\n", 266 267 usage->hid & HID_USAGE); 267 268 }
+6
drivers/hid/hid-core.c
··· 288 288 offset = report->size; 289 289 report->size += parser->global.report_size * parser->global.report_count; 290 290 291 + /* Total size check: Allow for possible report index byte */ 292 + if (report->size > (HID_MAX_BUFFER_SIZE - 1) << 3) { 293 + hid_err(parser->device, "report is too long\n"); 294 + return -1; 295 + } 296 + 291 297 if (!parser->local.usage_index) /* Ignore padding fields */ 292 298 return 0; 293 299
+3
drivers/hid/hid-ids.h
··· 631 631 #define USB_VENDOR_ID_ITE 0x048d 632 632 #define USB_DEVICE_ID_ITE_LENOVO_YOGA 0x8386 633 633 #define USB_DEVICE_ID_ITE_LENOVO_YOGA2 0x8350 634 + #define I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720 0x837a 634 635 #define USB_DEVICE_ID_ITE_LENOVO_YOGA900 0x8396 635 636 #define USB_DEVICE_ID_ITE8595 0x8595 636 637 ··· 731 730 #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064 732 731 #define USB_DEVICE_ID_LG_MELFAS_MT 0x6007 733 732 #define I2C_DEVICE_ID_LG_8001 0x8001 733 + #define I2C_DEVICE_ID_LG_7010 0x7010 734 734 735 735 #define USB_VENDOR_ID_LOGITECH 0x046d 736 736 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e ··· 1104 1102 #define USB_DEVICE_ID_SYNAPTICS_LTS2 0x1d10 1105 1103 #define USB_DEVICE_ID_SYNAPTICS_HD 0x0ac3 1106 1104 #define USB_DEVICE_ID_SYNAPTICS_QUAD_HD 0x1ac3 1105 + #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012 0x2968 1107 1106 #define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710 1108 1107 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7 1109 1108
+12 -4
drivers/hid/hid-input.c
··· 1132 1132 } 1133 1133 1134 1134 mapped: 1135 - if (device->driver->input_mapped && device->driver->input_mapped(device, 1136 - hidinput, field, usage, &bit, &max) < 0) 1137 - goto ignore; 1135 + if (device->driver->input_mapped && 1136 + device->driver->input_mapped(device, hidinput, field, usage, 1137 + &bit, &max) < 0) { 1138 + /* 1139 + * The driver indicated that no further generic handling 1140 + * of the usage is desired. 1141 + */ 1142 + return; 1143 + } 1138 1144 1139 1145 set_bit(usage->type, input->evbit); 1140 1146 ··· 1221 1215 set_bit(MSC_SCAN, input->mscbit); 1222 1216 } 1223 1217 1224 - ignore: 1225 1218 return; 1226 1219 1220 + ignore: 1221 + usage->type = 0; 1222 + usage->code = 0; 1227 1223 } 1228 1224 1229 1225 static void hidinput_handle_scroll(struct hid_usage *usage,
+3
drivers/hid/hid-ite.c
··· 40 40 static const struct hid_device_id ite_devices[] = { 41 41 { HID_USB_DEVICE(USB_VENDOR_ID_ITE, USB_DEVICE_ID_ITE8595) }, 42 42 { HID_USB_DEVICE(USB_VENDOR_ID_258A, USB_DEVICE_ID_258A_6A88) }, 43 + /* ITE8595 USB kbd ctlr, with Synaptics touchpad connected to it. */ 44 + { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, 45 + USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) }, 43 46 { } 44 47 }; 45 48 MODULE_DEVICE_TABLE(hid, ite_devices);
+4 -1
drivers/hid/hid-multitouch.c
··· 1019 1019 tool = MT_TOOL_DIAL; 1020 1020 else if (unlikely(!confidence_state)) { 1021 1021 tool = MT_TOOL_PALM; 1022 - if (!active && 1022 + if (!active && mt && 1023 1023 input_mt_is_active(&mt->slots[slotnum])) { 1024 1024 /* 1025 1025 * The non-confidence was reported for ··· 1985 1985 { .driver_data = MT_CLS_LG, 1986 1986 HID_USB_DEVICE(USB_VENDOR_ID_LG, 1987 1987 USB_DEVICE_ID_LG_MELFAS_MT) }, 1988 + { .driver_data = MT_CLS_LG, 1989 + HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC, 1990 + USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_7010) }, 1988 1991 1989 1992 /* MosArt panels */ 1990 1993 { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+1
drivers/hid/hid-quirks.c
··· 174 174 { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SIRIUS_BATTERY_FREE_TABLET), HID_QUIRK_MULTI_INPUT }, 175 175 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP_LTD2, USB_DEVICE_ID_SMARTJOY_DUAL_PLUS), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT }, 176 176 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_QUAD_USB_JOYPAD), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT }, 177 + { HID_USB_DEVICE(USB_VENDOR_ID_XIN_MO, USB_DEVICE_ID_XIN_MO_DUAL_ARCADE), HID_QUIRK_MULTI_INPUT }, 177 178 178 179 { 0 } 179 180 };
+4
drivers/hid/hid-steam.c
··· 768 768 769 769 if (steam->quirks & STEAM_QUIRK_WIRELESS) { 770 770 hid_info(hdev, "Steam wireless receiver connected"); 771 + /* If using a wireless adaptor ask for connection status */ 772 + steam->connected = false; 771 773 steam_request_conn_status(steam); 772 774 } else { 775 + /* A wired connection is always present */ 776 + steam->connected = true; 773 777 ret = steam_register(steam); 774 778 if (ret) { 775 779 hid_err(hdev,
+4 -3
drivers/hid/hidraw.c
··· 249 249 static __poll_t hidraw_poll(struct file *file, poll_table *wait) 250 250 { 251 251 struct hidraw_list *list = file->private_data; 252 + __poll_t mask = EPOLLOUT | EPOLLWRNORM; /* hidraw is always writable */ 252 253 253 254 poll_wait(file, &list->hidraw->wait, wait); 254 255 if (list->head != list->tail) 255 - return EPOLLIN | EPOLLRDNORM | EPOLLOUT; 256 + mask |= EPOLLIN | EPOLLRDNORM; 256 257 if (!list->hidraw->exist) 257 - return EPOLLERR | EPOLLHUP; 258 - return 0; 258 + mask |= EPOLLERR | EPOLLHUP; 259 + return mask; 259 260 } 260 261 261 262 static int hidraw_open(struct inode *inode, struct file *file)
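`hidraw_poll()` previously returned early from each branch, so a node that was both readable and writable reported only one of the two states. The fixed version composes a mask instead; the same shape appears in the `uhid_char_poll()` fix further down. A minimal userspace rendering of that mask logic:

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/epoll.h>

/* Compose the poll mask the way the fixed handlers do: the node is
 * always writable, and readable/hangup bits are OR'ed on top instead of
 * replacing the whole mask with an early return. */
static unsigned int poll_mask(bool data_queued, bool device_exists)
{
	unsigned int mask = EPOLLOUT | EPOLLWRNORM;	/* always writable */

	if (data_queued)
		mask |= EPOLLIN | EPOLLRDNORM;
	if (!device_exists)
		mask |= EPOLLERR | EPOLLHUP;
	return mask;
}
```

With the old early returns, a `poll()` caller waiting on `POLLOUT` alone could block forever once data was queued, since the readable path dropped the writable bits.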
+13 -3
drivers/hid/i2c-hid/i2c-hid-core.c
··· 49 49 #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET BIT(1) 50 50 #define I2C_HID_QUIRK_BOGUS_IRQ BIT(4) 51 51 #define I2C_HID_QUIRK_RESET_ON_RESUME BIT(5) 52 + #define I2C_HID_QUIRK_BAD_INPUT_SIZE BIT(6) 53 + 52 54 53 55 /* flags */ 54 56 #define I2C_HID_STARTED 0 ··· 177 175 I2C_HID_QUIRK_BOGUS_IRQ }, 178 176 { USB_VENDOR_ID_ALPS_JP, HID_ANY_ID, 179 177 I2C_HID_QUIRK_RESET_ON_RESUME }, 178 + { USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720, 179 + I2C_HID_QUIRK_BAD_INPUT_SIZE }, 180 180 { 0, 0 } 181 181 }; 182 182 ··· 500 496 } 501 497 502 498 if ((ret_size > size) || (ret_size < 2)) { 503 - dev_err(&ihid->client->dev, "%s: incomplete report (%d/%d)\n", 504 - __func__, size, ret_size); 505 - return; 499 + if (ihid->quirks & I2C_HID_QUIRK_BAD_INPUT_SIZE) { 500 + ihid->inbuf[0] = size & 0xff; 501 + ihid->inbuf[1] = size >> 8; 502 + ret_size = size; 503 + } else { 504 + dev_err(&ihid->client->dev, "%s: incomplete report (%d/%d)\n", 505 + __func__, size, ret_size); 506 + return; 507 + } 506 508 } 507 509 508 510 i2c_hid_dbg(ihid, "input: %*ph\n", ret_size, ihid->inbuf);
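The `I2C_HID_QUIRK_BAD_INPUT_SIZE` path above rewrites the report's 16-bit little-endian length prefix with the size the host expected, instead of discarding the whole report. The two-byte patch-up is just this (helper name illustrative):

```c
#include <assert.h>

/* Overwrite the little-endian length prefix of an i2c-hid input report
 * buffer, as the new quirk path does for the Lenovo Legion Y720's ITE
 * controller, which reports a bogus size. */
static void fixup_report_size(unsigned char *inbuf, unsigned int size)
{
	inbuf[0] = size & 0xff;		/* low byte */
	inbuf[1] = size >> 8;		/* high byte */
}
```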
+2
drivers/hid/intel-ish-hid/ipc/hw-ish.h
··· 24 24 #define ICL_MOBILE_DEVICE_ID 0x34FC 25 25 #define SPT_H_DEVICE_ID 0xA135 26 26 #define CML_LP_DEVICE_ID 0x02FC 27 + #define CMP_H_DEVICE_ID 0x06FC 27 28 #define EHL_Ax_DEVICE_ID 0x4BB3 29 + #define TGL_LP_DEVICE_ID 0xA0FC 28 30 29 31 #define REVISION_ID_CHT_A0 0x6 30 32 #define REVISION_ID_CHT_Ax_SI 0x0
+2
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 34 34 {PCI_DEVICE(PCI_VENDOR_ID_INTEL, ICL_MOBILE_DEVICE_ID)}, 35 35 {PCI_DEVICE(PCI_VENDOR_ID_INTEL, SPT_H_DEVICE_ID)}, 36 36 {PCI_DEVICE(PCI_VENDOR_ID_INTEL, CML_LP_DEVICE_ID)}, 37 + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, CMP_H_DEVICE_ID)}, 37 38 {PCI_DEVICE(PCI_VENDOR_ID_INTEL, EHL_Ax_DEVICE_ID)}, 39 + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, TGL_LP_DEVICE_ID)}, 38 40 {0, } 39 41 }; 40 42 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+3 -2
drivers/hid/uhid.c
··· 766 766 static __poll_t uhid_char_poll(struct file *file, poll_table *wait) 767 767 { 768 768 struct uhid_device *uhid = file->private_data; 769 + __poll_t mask = EPOLLOUT | EPOLLWRNORM; /* uhid is always writable */ 769 770 770 771 poll_wait(file, &uhid->waitq, wait); 771 772 772 773 if (uhid->head != uhid->tail) 773 - return EPOLLIN | EPOLLRDNORM; 774 + mask |= EPOLLIN | EPOLLRDNORM; 774 775 775 - return 0; 776 + return mask; 776 777 } 777 778 778 779 static const struct file_operations uhid_fops = {
+42 -55
drivers/hid/usbhid/hiddev.c
··· 241 241 return 0; 242 242 } 243 243 244 + static int __hiddev_open(struct hiddev *hiddev, struct file *file) 245 + { 246 + struct hiddev_list *list; 247 + int error; 248 + 249 + lockdep_assert_held(&hiddev->existancelock); 250 + 251 + list = vzalloc(sizeof(*list)); 252 + if (!list) 253 + return -ENOMEM; 254 + 255 + mutex_init(&list->thread_lock); 256 + list->hiddev = hiddev; 257 + 258 + if (!hiddev->open++) { 259 + error = hid_hw_power(hiddev->hid, PM_HINT_FULLON); 260 + if (error < 0) 261 + goto err_drop_count; 262 + 263 + error = hid_hw_open(hiddev->hid); 264 + if (error < 0) 265 + goto err_normal_power; 266 + } 267 + 268 + spin_lock_irq(&hiddev->list_lock); 269 + list_add_tail(&list->node, &hiddev->list); 270 + spin_unlock_irq(&hiddev->list_lock); 271 + 272 + file->private_data = list; 273 + 274 + return 0; 275 + 276 + err_normal_power: 277 + hid_hw_power(hiddev->hid, PM_HINT_NORMAL); 278 + err_drop_count: 279 + hiddev->open--; 280 + vfree(list); 281 + return error; 282 + } 283 + 244 284 /* 245 285 * open file op 246 286 */ 247 287 static int hiddev_open(struct inode *inode, struct file *file) 248 288 { 249 - struct hiddev_list *list; 250 289 struct usb_interface *intf; 251 290 struct hid_device *hid; 252 291 struct hiddev *hiddev; ··· 294 255 intf = usbhid_find_interface(iminor(inode)); 295 256 if (!intf) 296 257 return -ENODEV; 258 + 297 259 hid = usb_get_intfdata(intf); 298 260 hiddev = hid->hiddev; 299 261 300 - if (!(list = vzalloc(sizeof(struct hiddev_list)))) 301 - return -ENOMEM; 302 - mutex_init(&list->thread_lock); 303 - list->hiddev = hiddev; 304 - file->private_data = list; 305 - 306 - /* 307 - * no need for locking because the USB major number 308 - * is shared which usbcore guards against disconnect 309 - */ 310 - if (list->hiddev->exist) { 311 - if (!list->hiddev->open++) { 312 - res = hid_hw_open(hiddev->hid); 313 - if (res < 0) 314 - goto bail; 315 - } 316 - } else { 317 - res = -ENODEV; 318 - goto bail; 319 - } 320 - 321 - spin_lock_irq(&list->hiddev->list_lock); 322 - list_add_tail(&list->node, &hiddev->list); 323 - spin_unlock_irq(&list->hiddev->list_lock); 324 - 325 262 mutex_lock(&hiddev->existancelock); 326 - /* 327 - * recheck exist with existance lock held to 328 - * avoid opening a disconnected device 329 - */ 330 - if (!list->hiddev->exist) { 331 - res = -ENODEV; 332 - goto bail_unlock; 333 - } 334 - if (!list->hiddev->open++) 335 - if (list->hiddev->exist) { 336 - struct hid_device *hid = hiddev->hid; 337 - res = hid_hw_power(hid, PM_HINT_FULLON); 338 - if (res < 0) 339 - goto bail_unlock; 340 - res = hid_hw_open(hid); 341 - if (res < 0) 342 - goto bail_normal_power; 343 - } 344 - mutex_unlock(&hiddev->existancelock); 345 - return 0; 346 - bail_normal_power: 347 - hid_hw_power(hid, PM_HINT_NORMAL); 348 - bail_unlock: 263 + res = hiddev->exist ? __hiddev_open(hiddev, file) : -ENODEV; 349 264 mutex_unlock(&hiddev->existancelock); 350 265 351 - spin_lock_irq(&list->hiddev->list_lock); 352 - list_del(&list->node); 353 - spin_unlock_irq(&list->hiddev->list_lock); 354 - bail: 355 - file->private_data = NULL; 356 - vfree(list); 357 266 return res; 358 267 } 359 268
+4 -2
drivers/hid/wacom_wac.c
··· 2096 2096 (hdev->product == 0x34d || hdev->product == 0x34e || /* MobileStudio Pro */ 2097 2097 hdev->product == 0x357 || hdev->product == 0x358 || /* Intuos Pro 2 */ 2098 2098 hdev->product == 0x392 || /* Intuos Pro 2 */ 2099 - hdev->product == 0x398 || hdev->product == 0x399)) { /* MobileStudio Pro */ 2099 + hdev->product == 0x398 || hdev->product == 0x399 || /* MobileStudio Pro */ 2100 + hdev->product == 0x3AA)) { /* MobileStudio Pro */ 2100 2101 value = (field->logical_maximum - value); 2101 2102 2102 2103 if (hdev->product == 0x357 || hdev->product == 0x358 || 2103 2104 hdev->product == 0x392) 2104 2105 value = wacom_offset_rotation(input, usage, value, 3, 16); 2105 2106 else if (hdev->product == 0x34d || hdev->product == 0x34e || 2106 - hdev->product == 0x398 || hdev->product == 0x399) 2107 + hdev->product == 0x398 || hdev->product == 0x399 || 2108 + hdev->product == 0x3AA) 2107 2109 value = wacom_offset_rotation(input, usage, value, 1, 2); 2108 2110 } 2109 2111 else {
+1 -1
drivers/i2c/busses/i2c-at91-core.c
··· 174 174 175 175 static struct at91_twi_pdata sam9x60_config = { 176 176 .clk_max_div = 7, 177 - .clk_offset = 4, 177 + .clk_offset = 3, 178 178 .has_unre_flag = true, 179 179 .has_alt_cmd = true, 180 180 .has_hold_field = true,
+8 -9
drivers/i2c/busses/i2c-bcm2835.c
··· 58 58 struct i2c_adapter adapter; 59 59 struct completion completion; 60 60 struct i2c_msg *curr_msg; 61 + struct clk *bus_clk; 61 62 int num_msgs; 62 63 u32 msg_err; 63 64 u8 *msg_buf; ··· 405 404 struct resource *mem, *irq; 406 405 int ret; 407 406 struct i2c_adapter *adap; 408 - struct clk *bus_clk; 409 407 struct clk *mclk; 410 408 u32 bus_clk_rate; 411 409 ··· 427 427 return PTR_ERR(mclk); 428 428 } 429 429 430 - bus_clk = bcm2835_i2c_register_div(&pdev->dev, mclk, i2c_dev); 430 + i2c_dev->bus_clk = bcm2835_i2c_register_div(&pdev->dev, mclk, i2c_dev); 431 431 432 - if (IS_ERR(bus_clk)) { 432 + if (IS_ERR(i2c_dev->bus_clk)) { 433 433 dev_err(&pdev->dev, "Could not register clock\n"); 434 - return PTR_ERR(bus_clk); 434 + return PTR_ERR(i2c_dev->bus_clk); 435 435 } 436 436 437 437 ret = of_property_read_u32(pdev->dev.of_node, "clock-frequency", ··· 442 442 bus_clk_rate = 100000; 443 443 } 444 444 445 - ret = clk_set_rate_exclusive(bus_clk, bus_clk_rate); 445 + ret = clk_set_rate_exclusive(i2c_dev->bus_clk, bus_clk_rate); 446 446 if (ret < 0) { 447 447 dev_err(&pdev->dev, "Could not set clock frequency\n"); 448 448 return ret; 449 449 } 450 450 451 - ret = clk_prepare_enable(bus_clk); 451 + ret = clk_prepare_enable(i2c_dev->bus_clk); 452 452 if (ret) { 453 453 dev_err(&pdev->dev, "Couldn't prepare clock"); 454 454 return ret; ··· 491 491 static int bcm2835_i2c_remove(struct platform_device *pdev) 492 492 { 493 493 struct bcm2835_i2c_dev *i2c_dev = platform_get_drvdata(pdev); 494 - struct clk *bus_clk = devm_clk_get(i2c_dev->dev, "div"); 495 494 496 - clk_rate_exclusive_put(bus_clk); 497 - clk_disable_unprepare(bus_clk); 495 + clk_rate_exclusive_put(i2c_dev->bus_clk); 496 + clk_disable_unprepare(i2c_dev->bus_clk); 498 497 499 498 free_irq(i2c_dev->irq, i2c_dev); 500 499 i2c_del_adapter(&i2c_dev->adapter);
+10 -3
drivers/i2c/i2c-core-base.c
··· 186 186 * If we can set SDA, we will always create a STOP to ensure additional 187 187 * pulses will do no harm. This is achieved by letting SDA follow SCL 188 188 * half a cycle later. Check the 'incomplete_write_byte' fault injector 189 - * for details. 189 + * for details. Note that we must honour tsu:sto, 4us, but lets use 5us 190 + * here for simplicity. 190 191 */ 191 192 bri->set_scl(adap, scl); 192 - ndelay(RECOVERY_NDELAY / 2); 193 + ndelay(RECOVERY_NDELAY); 193 194 if (bri->set_sda) 194 195 bri->set_sda(adap, scl); 195 196 ndelay(RECOVERY_NDELAY / 2); ··· 212 211 scl = !scl; 213 212 bri->set_scl(adap, scl); 214 213 /* Creating STOP again, see above */ 215 - ndelay(RECOVERY_NDELAY / 2); 214 + if (scl) { 215 + /* Honour minimum tsu:sto */ 216 + ndelay(RECOVERY_NDELAY); 217 + } else { 218 + /* Honour minimum tf and thd:dat */ 219 + ndelay(RECOVERY_NDELAY / 2); 220 + } 216 221 if (bri->set_sda) 217 222 bri->set_sda(adap, scl); 218 223 ndelay(RECOVERY_NDELAY / 2);
+3 -1
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 3305 3305 int rc; 3306 3306 3307 3307 rc = bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr); 3308 - if (rc) 3308 + if (rc) { 3309 3309 dev_err(rdev_to_dev(rdev), "Dereg MR failed: %#x\n", rc); 3310 + return rc; 3311 + } 3310 3312 3311 3313 if (mr->pages) { 3312 3314 rc = bnxt_qplib_free_fast_reg_page_list(&rdev->qplib_res,
+6 -6
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 2283 2283 /* Add qp to flush list of the CQ */ 2284 2284 bnxt_qplib_add_flush_qp(qp); 2285 2285 } else { 2286 + /* Before we complete, do WA 9060 */ 2287 + if (do_wa9060(qp, cq, cq_cons, sw_sq_cons, 2288 + cqe_sq_cons)) { 2289 + *lib_qp = qp; 2290 + goto out; 2291 + } 2286 2292 if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) { 2287 - /* Before we complete, do WA 9060 */ 2288 - if (do_wa9060(qp, cq, cq_cons, sw_sq_cons, 2289 - cqe_sq_cons)) { 2290 - *lib_qp = qp; 2291 - goto out; 2292 - } 2293 2293 cqe->status = CQ_REQ_STATUS_OK; 2294 2294 cqe++; 2295 2295 (*budget)--;
+3 -1
drivers/infiniband/hw/hfi1/iowait.c
··· 81 81 void iowait_cancel_work(struct iowait *w) 82 82 { 83 83 cancel_work_sync(&iowait_get_ib_work(w)->iowork); 84 - cancel_work_sync(&iowait_get_tid_work(w)->iowork); 84 + /* Make sure that the iowork for TID RDMA is used */ 85 + if (iowait_get_tid_work(w)->iowork.func) 86 + cancel_work_sync(&iowait_get_tid_work(w)->iowork); 85 87 } 86 88 87 89 /**
+9
drivers/infiniband/hw/hfi1/tid_rdma.c
··· 4633 4633 */ 4634 4634 fpsn = full_flow_psn(flow, flow->flow_state.spsn); 4635 4635 req->r_ack_psn = psn; 4636 + /* 4637 + * If resync_psn points to the last flow PSN for a 4638 + * segment and the new segment (likely from a new 4639 + * request) starts with a new generation number, we 4640 + * need to adjust resync_psn accordingly. 4641 + */ 4642 + if (flow->flow_state.generation != 4643 + (resync_psn >> HFI1_KDETH_BTH_SEQ_SHIFT)) 4644 + resync_psn = mask_psn(fpsn - 1); 4636 4645 flow->resync_npkts += 4637 4646 delta_psn(mask_psn(resync_psn + 1), fpsn); 4638 4647 /*
+6 -8
drivers/infiniband/hw/i40iw/i40iw_verbs.c
··· 169 169 static int i40iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma) 170 170 { 171 171 struct i40iw_ucontext *ucontext; 172 - u64 db_addr_offset; 173 - u64 push_offset; 172 + u64 db_addr_offset, push_offset, pfn; 174 173 175 174 ucontext = to_ucontext(context); 176 175 if (ucontext->iwdev->sc_dev.is_pf) { ··· 188 189 189 190 if (vma->vm_pgoff == (db_addr_offset >> PAGE_SHIFT)) { 190 191 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 191 - vma->vm_private_data = ucontext; 192 192 } else { 193 193 if ((vma->vm_pgoff - (push_offset >> PAGE_SHIFT)) % 2) 194 194 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); ··· 195 197 vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); 196 198 } 197 199 198 - if (io_remap_pfn_range(vma, vma->vm_start, 199 - vma->vm_pgoff + (pci_resource_start(ucontext->iwdev->ldev->pcidev, 0) >> PAGE_SHIFT), 200 - PAGE_SIZE, vma->vm_page_prot)) 201 - return -EAGAIN; 200 + pfn = vma->vm_pgoff + 201 + (pci_resource_start(ucontext->iwdev->ldev->pcidev, 0) >> 202 + PAGE_SHIFT); 202 203 203 - return 0; 204 + return rdma_user_mmap_io(context, vma, pfn, PAGE_SIZE, 205 + vma->vm_page_prot, NULL); 204 206 } 205 207 206 208 /**
+7 -7
drivers/input/evdev.c
··· 224 224 */ 225 225 client->tail = (client->head - 2) & (client->bufsize - 1); 226 226 227 - client->buffer[client->tail].input_event_sec = 228 - event->input_event_sec; 229 - client->buffer[client->tail].input_event_usec = 230 - event->input_event_usec; 231 - client->buffer[client->tail].type = EV_SYN; 232 - client->buffer[client->tail].code = SYN_DROPPED; 233 - client->buffer[client->tail].value = 0; 227 + client->buffer[client->tail] = (struct input_event) { 228 + .input_event_sec = event->input_event_sec, 229 + .input_event_usec = event->input_event_usec, 230 + .type = EV_SYN, 231 + .code = SYN_DROPPED, 232 + .value = 0, 233 + }; 234 234 235 235 client->packet_head = client->tail; 236 236 }
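The evdev hunk above replaces five field-by-field stores into `client->buffer[client->tail]` with a single compound-literal assignment. A minimal standalone sketch of that C99 idiom, using a hypothetical `struct event` in place of the kernel's `struct input_event`:

```c
#include <assert.h>

/* Hypothetical stand-in for struct input_event. */
struct event {
	long sec;
	long usec;
	unsigned short type;
	unsigned short code;
	int value;
};

/*
 * Overwrite one ring-buffer slot in a single assignment. Fields not
 * named in the compound literal are zero-initialized, so no stale data
 * from the slot's previous occupant survives -- the property the evdev
 * change relies on when writing its drop marker.
 */
void mark_dropped(struct event *buf, unsigned int tail, long sec, long usec)
{
	buf[tail] = (struct event) {
		.sec = sec,
		.usec = usec,
		.type = 3,	/* stand-in for EV_SYN */
		.code = 3,	/* stand-in for SYN_DROPPED */
		/* .value omitted: implicitly 0 */
	};
}
```

Contrast with the old field-by-field form, where any field left unassigned would keep whatever the slot contained before.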
+16 -10
drivers/input/input.c
··· 878 878 } 879 879 } 880 880 881 - __clear_bit(*old_keycode, dev->keybit); 882 - __set_bit(ke->keycode, dev->keybit); 883 - 884 - for (i = 0; i < dev->keycodemax; i++) { 885 - if (input_fetch_keycode(dev, i) == *old_keycode) { 886 - __set_bit(*old_keycode, dev->keybit); 887 - break; /* Setting the bit twice is useless, so break */ 881 + if (*old_keycode <= KEY_MAX) { 882 + __clear_bit(*old_keycode, dev->keybit); 883 + for (i = 0; i < dev->keycodemax; i++) { 884 + if (input_fetch_keycode(dev, i) == *old_keycode) { 885 + __set_bit(*old_keycode, dev->keybit); 886 + /* Setting the bit twice is useless, so break */ 887 + break; 888 + } 888 889 } 889 890 } 890 891 892 + __set_bit(ke->keycode, dev->keybit); 891 893 return 0; 892 894 } 893 895 ··· 945 943 * Simulate keyup event if keycode is not present 946 944 * in the keymap anymore 947 945 */ 948 - if (test_bit(EV_KEY, dev->evbit) && 949 - !is_event_supported(old_keycode, dev->keybit, KEY_MAX) && 950 - __test_and_clear_bit(old_keycode, dev->key)) { 946 + if (old_keycode > KEY_MAX) { 947 + dev_warn(dev->dev.parent ?: &dev->dev, 948 + "%s: got too big old keycode %#x\n", 949 + __func__, old_keycode); 950 + } else if (test_bit(EV_KEY, dev->evbit) && 951 + !is_event_supported(old_keycode, dev->keybit, KEY_MAX) && 952 + __test_and_clear_bit(old_keycode, dev->key)) { 951 953 struct input_value vals[] = { 952 954 { EV_KEY, old_keycode, 0 }, 953 955 input_value_sync
+7 -1
drivers/input/keyboard/imx_sc_key.c
··· 78 78 return; 79 79 } 80 80 81 - state = (bool)msg.state; 81 + /* 82 + * The response data from SCU firmware is 4 bytes, 83 + * but ONLY the first byte is the key state, other 84 + * 3 bytes could be some dirty data, so we should 85 + * ONLY take the first byte as key state. 86 + */ 87 + state = (bool)(msg.state & 0xff); 82 88 83 89 if (state ^ priv->keystate) { 84 90 priv->keystate = state;
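The imx_sc_key fix above keeps only the low byte of the 4-byte firmware response before converting it to `bool`. The masking in isolation (the `0xdead` upper bytes below model the "dirty data" the new comment describes; the function name is hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Extract the key state from a 4-byte response word whose upper three
 * bytes may contain garbage: mask to the low byte first, then collapse
 * to a bool. Without the mask, stale high bits would make the cast
 * report "pressed" even when the low byte is zero.
 */
static bool key_state_from_response(uint32_t state_word)
{
	return (bool)(state_word & 0xff);
}
```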
+12 -7
drivers/input/misc/uinput.c
··· 74 74 struct uinput_device *udev = input_get_drvdata(dev); 75 75 struct timespec64 ts; 76 76 77 - udev->buff[udev->head].type = type; 78 - udev->buff[udev->head].code = code; 79 - udev->buff[udev->head].value = value; 80 77 ktime_get_ts64(&ts); 81 - udev->buff[udev->head].input_event_sec = ts.tv_sec; 82 - udev->buff[udev->head].input_event_usec = ts.tv_nsec / NSEC_PER_USEC; 78 + 79 + udev->buff[udev->head] = (struct input_event) { 80 + .input_event_sec = ts.tv_sec, 81 + .input_event_usec = ts.tv_nsec / NSEC_PER_USEC, 82 + .type = type, 83 + .code = code, 84 + .value = value, 85 + }; 86 + 83 87 udev->head = (udev->head + 1) % UINPUT_BUFFER_SIZE; 84 88 85 89 wake_up_interruptible(&udev->waitq); ··· 693 689 static __poll_t uinput_poll(struct file *file, poll_table *wait) 694 690 { 695 691 struct uinput_device *udev = file->private_data; 692 + __poll_t mask = EPOLLOUT | EPOLLWRNORM; /* uinput is always writable */ 696 693 697 694 poll_wait(file, &udev->waitq, wait); 698 695 699 696 if (udev->head != udev->tail) 700 - return EPOLLIN | EPOLLRDNORM; 697 + mask |= EPOLLIN | EPOLLRDNORM; 701 698 702 - return EPOLLOUT | EPOLLWRNORM; 699 + return mask; 703 700 } 704 701 705 702 static int uinput_release(struct inode *inode, struct file *file)
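The uinput_poll() fix above turns an either/or return into a composed mask: the device is always writable, and readable is OR-ed in when the ring buffer is non-empty. A small sketch of that pattern, with hypothetical mask bits standing in for the EPOLL flags:

```c
#include <assert.h>

#define SKETCH_POLLOUT	0x004	/* stand-in for EPOLLOUT | EPOLLWRNORM */
#define SKETCH_POLLIN	0x001	/* stand-in for EPOLLIN | EPOLLRDNORM */

/*
 * Build the poll mask the way the corrected uinput_poll() does: start
 * from "always writable" and OR in "readable" when head != tail,
 * instead of returning one condition to the exclusion of the other.
 */
static unsigned int sketch_poll(unsigned int head, unsigned int tail)
{
	unsigned int mask = SKETCH_POLLOUT;	/* always writable */

	if (head != tail)
		mask |= SKETCH_POLLIN;		/* events queued for reading */

	return mask;
}
```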
-3
drivers/iommu/dma-iommu.c
··· 1203 1203 { 1204 1204 struct device *dev = msi_desc_to_dev(desc); 1205 1205 struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 1206 - struct iommu_dma_cookie *cookie; 1207 1206 struct iommu_dma_msi_page *msi_page; 1208 1207 static DEFINE_MUTEX(msi_prepare_lock); /* see below */ 1209 1208 ··· 1210 1211 desc->iommu_cookie = NULL; 1211 1212 return 0; 1212 1213 } 1213 - 1214 - cookie = domain->iova_cookie; 1215 1214 1216 1215 /* 1217 1216 * In fact the whole prepare operation should already be serialised by
+18 -4
drivers/iommu/intel-iommu.c
··· 5624 5624 5625 5625 group = iommu_group_get_for_dev(dev); 5626 5626 5627 - if (IS_ERR(group)) 5628 - return PTR_ERR(group); 5627 + if (IS_ERR(group)) { 5628 + ret = PTR_ERR(group); 5629 + goto unlink; 5630 + } 5629 5631 5630 5632 iommu_group_put(group); 5631 5633 ··· 5653 5651 if (!get_private_domain_for_dev(dev)) { 5654 5652 dev_warn(dev, 5655 5653 "Failed to get a private domain.\n"); 5656 - return -ENOMEM; 5654 + ret = -ENOMEM; 5655 + goto unlink; 5657 5656 } 5658 5657 5659 5658 dev_info(dev, ··· 5669 5666 } 5670 5667 5671 5668 return 0; 5669 + 5670 + unlink: 5671 + iommu_device_unlink(&iommu->iommu, dev); 5672 + return ret; 5672 5673 } 5673 5674 5674 5675 static void intel_iommu_remove_device(struct device *dev) ··· 5822 5815 end = IOVA_PFN(region->start + region->length - 1); 5823 5816 5824 5817 WARN_ON_ONCE(!reserve_iova(&dmar_domain->iovad, start, end)); 5818 + } 5819 + 5820 + static struct iommu_group *intel_iommu_device_group(struct device *dev) 5821 + { 5822 + if (dev_is_pci(dev)) 5823 + return pci_device_group(dev); 5824 + return generic_device_group(dev); 5825 5825 } 5826 5826 5827 5827 #ifdef CONFIG_INTEL_IOMMU_SVM ··· 6003 5989 .get_resv_regions = intel_iommu_get_resv_regions, 6004 5990 .put_resv_regions = intel_iommu_put_resv_regions, 6005 5991 .apply_resv_region = intel_iommu_apply_resv_region, 6006 - .device_group = pci_device_group, 5992 + .device_group = intel_iommu_device_group, 6007 5993 .dev_has_feat = intel_iommu_dev_has_feat, 6008 5994 .dev_feat_enabled = intel_iommu_dev_feat_enabled, 6009 5995 .dev_enable_feat = intel_iommu_dev_enable_feat,
+1
drivers/iommu/iommu.c
··· 751 751 mutex_unlock(&group->mutex); 752 752 dev->iommu_group = NULL; 753 753 kobject_put(group->devices_kobj); 754 + sysfs_remove_link(group->devices_kobj, device->name); 754 755 err_free_name: 755 756 kfree(device->name); 756 757 err_remove_link:
+1 -1
drivers/irqchip/irq-sifive-plic.c
··· 256 256 * Skip contexts other than external interrupts for our 257 257 * privilege level. 258 258 */ 259 - if (parent.args[0] != IRQ_EXT) 259 + if (parent.args[0] != RV_IRQ_EXT) 260 260 continue; 261 261 262 262 hartid = plic_find_hart_id(parent.np);
+27 -13
drivers/media/cec/cec-adap.c
··· 380 380 } else { 381 381 list_del_init(&data->list); 382 382 if (!(data->msg.tx_status & CEC_TX_STATUS_OK)) 383 - data->adap->transmit_queue_sz--; 383 + if (!WARN_ON(!data->adap->transmit_queue_sz)) 384 + data->adap->transmit_queue_sz--; 384 385 } 385 386 386 387 if (data->msg.tx_status & CEC_TX_STATUS_OK) { ··· 433 432 * need to do anything special in that case. 434 433 */ 435 434 } 435 + /* 436 + * If something went wrong and this counter isn't what it should 437 + * be, then this will reset it back to 0. Warn if it is not 0, 438 + * since it indicates a bug, either in this framework or in a 439 + * CEC driver. 440 + */ 441 + if (WARN_ON(adap->transmit_queue_sz)) 442 + adap->transmit_queue_sz = 0; 436 443 } 437 444 438 445 /* ··· 465 456 bool timeout = false; 466 457 u8 attempts; 467 458 468 - if (adap->transmitting) { 459 + if (adap->transmit_in_progress) { 469 460 int err; 470 461 471 462 /* ··· 500 491 goto unlock; 501 492 } 502 493 503 - if (adap->transmitting && timeout) { 494 + if (adap->transmit_in_progress && timeout) { 504 495 /* 505 496 * If we timeout, then log that. Normally this does 506 497 * not happen and it is an indication of a faulty CEC ··· 509 500 * so much traffic on the bus that the adapter was 510 501 * unable to transmit for CEC_XFER_TIMEOUT_MS (2.1s). 511 502 */ 512 - pr_warn("cec-%s: message %*ph timed out\n", adap->name, 513 - adap->transmitting->msg.len, 514 - adap->transmitting->msg.msg); 503 + if (adap->transmitting) { 504 + pr_warn("cec-%s: message %*ph timed out\n", adap->name, 505 + adap->transmitting->msg.len, 506 + adap->transmitting->msg.msg); 507 + /* Just give up on this. */ 508 + cec_data_cancel(adap->transmitting, 509 + CEC_TX_STATUS_TIMEOUT); 510 + } else { 511 + pr_warn("cec-%s: transmit timed out\n", adap->name); 512 + } 515 513 adap->transmit_in_progress = false; 516 514 adap->tx_timeouts++; 517 - /* Just give up on this. */ 518 - cec_data_cancel(adap->transmitting, 519 - CEC_TX_STATUS_TIMEOUT); 520 515 goto unlock; 521 516 } 522 517 ··· 535 522 data = list_first_entry(&adap->transmit_queue, 536 523 struct cec_data, list); 537 524 list_del_init(&data->list); 538 - adap->transmit_queue_sz--; 525 + if (!WARN_ON(!data->adap->transmit_queue_sz)) 526 + adap->transmit_queue_sz--; 539 527 540 528 /* Make this the current transmitting message */ 541 529 adap->transmitting = data; ··· 1099 1085 valid_la = false; 1100 1086 else if (!cec_msg_is_broadcast(msg) && !(dir_fl & DIRECTED)) 1101 1087 valid_la = false; 1102 - else if (cec_msg_is_broadcast(msg) && !(dir_fl & BCAST1_4)) 1088 + else if (cec_msg_is_broadcast(msg) && !(dir_fl & BCAST)) 1103 1089 valid_la = false; 1104 1090 else if (cec_msg_is_broadcast(msg) && 1105 - adap->log_addrs.cec_version >= CEC_OP_CEC_VERSION_2_0 && 1106 - !(dir_fl & BCAST2_0)) 1091 + adap->log_addrs.cec_version < CEC_OP_CEC_VERSION_2_0 && 1092 + !(dir_fl & BCAST1_4)) 1107 1093 valid_la = false; 1108 1094 } 1109 1095 if (valid_la && min_len) {
+13 -4
drivers/media/usb/pulse8-cec/pulse8-cec.c
··· 116 116 unsigned int vers; 117 117 struct completion cmd_done; 118 118 struct work_struct work; 119 + u8 work_result; 119 120 struct delayed_work ping_eeprom_work; 120 121 struct cec_msg rx_msg; 121 122 u8 data[DATA_SIZE]; ··· 138 137 { 139 138 struct pulse8 *pulse8 = 140 139 container_of(work, struct pulse8, work); 140 + u8 result = pulse8->work_result; 141 141 142 - switch (pulse8->data[0] & 0x3f) { 142 + pulse8->work_result = 0; 143 + switch (result & 0x3f) { 143 144 case MSGCODE_FRAME_DATA: 144 145 cec_received_msg(pulse8->adap, &pulse8->rx_msg); 145 146 break; ··· 175 172 pulse8->escape = false; 176 173 } else if (data == MSGEND) { 177 174 struct cec_msg *msg = &pulse8->rx_msg; 175 + u8 msgcode = pulse8->buf[0]; 178 176 179 177 if (debug) 180 178 dev_info(pulse8->dev, "received: %*ph\n", 181 179 pulse8->idx, pulse8->buf); 182 - pulse8->data[0] = pulse8->buf[0]; 183 - switch (pulse8->buf[0] & 0x3f) { 180 + switch (msgcode & 0x3f) { 184 181 case MSGCODE_FRAME_START: 185 182 msg->len = 1; 186 183 msg->msg[0] = pulse8->buf[1]; ··· 189 186 if (msg->len == CEC_MAX_MSG_SIZE) 190 187 break; 191 188 msg->msg[msg->len++] = pulse8->buf[1]; 192 - if (pulse8->buf[0] & MSGCODE_FRAME_EOM) 189 + if (msgcode & MSGCODE_FRAME_EOM) { 190 + WARN_ON(pulse8->work_result); 191 + pulse8->work_result = msgcode; 193 192 schedule_work(&pulse8->work); 193 + break; 194 + } 194 195 break; 195 196 case MSGCODE_TRANSMIT_SUCCEEDED: 196 197 case MSGCODE_TRANSMIT_FAILED_LINE: 197 198 case MSGCODE_TRANSMIT_FAILED_ACK: 198 199 case MSGCODE_TRANSMIT_FAILED_TIMEOUT_DATA: 199 200 case MSGCODE_TRANSMIT_FAILED_TIMEOUT_LINE: 201 + WARN_ON(pulse8->work_result); 202 + pulse8->work_result = msgcode; 200 203 schedule_work(&pulse8->work); 201 204 break; 202 205 case MSGCODE_HIGH_ERROR:
+8 -6
drivers/mtd/nand/onenand/omap2.c
··· 148 148 unsigned long timeout; 149 149 u32 syscfg; 150 150 151 - if (state == FL_RESETING || state == FL_PREPARING_ERASE || 151 + if (state == FL_RESETTING || state == FL_PREPARING_ERASE || 152 152 state == FL_VERIFYING_ERASE) { 153 153 int i = 21; 154 154 unsigned int intr_flags = ONENAND_INT_MASTER; 155 155 156 156 switch (state) { 157 - case FL_RESETING: 157 + case FL_RESETTING: 158 158 intr_flags |= ONENAND_INT_RESET; 159 159 break; 160 160 case FL_PREPARING_ERASE: ··· 328 328 struct dma_async_tx_descriptor *tx; 329 329 dma_cookie_t cookie; 330 330 331 - tx = dmaengine_prep_dma_memcpy(c->dma_chan, dst, src, count, 0); 331 + tx = dmaengine_prep_dma_memcpy(c->dma_chan, dst, src, count, 332 + DMA_CTRL_ACK | DMA_PREP_INTERRUPT); 332 333 if (!tx) { 333 334 dev_err(&c->pdev->dev, "Failed to prepare DMA memcpy\n"); 334 335 return -EIO; ··· 376 375 * context fallback to PIO mode. 377 376 */ 378 377 if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 || 379 - count < 384 || in_interrupt() || oops_in_progress ) 378 + count < 384 || in_interrupt() || oops_in_progress) 380 379 goto out_copy; 381 380 382 381 xtra = count & 3; ··· 423 422 * context fallback to PIO mode. 424 423 */ 425 424 if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 || 426 - count < 384 || in_interrupt() || oops_in_progress ) 425 + count < 384 || in_interrupt() || oops_in_progress) 427 426 goto out_copy; 428 427 429 428 dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE); ··· 529 528 c->gpmc_cs, c->phys_base, c->onenand.base, 530 529 c->dma_chan ? "DMA" : "PIO"); 531 530 532 - if ((r = onenand_scan(&c->mtd, 1)) < 0) 531 + r = onenand_scan(&c->mtd, 1); 532 + if (r < 0) 533 533 goto err_release_dma; 534 534 535 535 freq = omap2_onenand_get_freq(c->onenand.version_id);
+7 -7
drivers/mtd/nand/onenand/onenand_base.c
··· 2853 2853 2854 2854 /* Exit OTP access mode */ 2855 2855 this->command(mtd, ONENAND_CMD_RESET, 0, 0); 2856 - this->wait(mtd, FL_RESETING); 2856 + this->wait(mtd, FL_RESETTING); 2857 2857 2858 2858 status = this->read_word(this->base + ONENAND_REG_CTRL_STATUS); 2859 2859 status &= 0x60; ··· 2924 2924 2925 2925 /* Exit OTP access mode */ 2926 2926 this->command(mtd, ONENAND_CMD_RESET, 0, 0); 2927 - this->wait(mtd, FL_RESETING); 2927 + this->wait(mtd, FL_RESETTING); 2928 2928 2929 2929 return ret; 2930 2930 } ··· 2968 2968 2969 2969 /* Exit OTP access mode */ 2970 2970 this->command(mtd, ONENAND_CMD_RESET, 0, 0); 2971 - this->wait(mtd, FL_RESETING); 2971 + this->wait(mtd, FL_RESETTING); 2972 2972 2973 2973 return ret; 2974 2974 } ··· 3008 3008 3009 3009 /* Exit OTP access mode */ 3010 3010 this->command(mtd, ONENAND_CMD_RESET, 0, 0); 3011 - this->wait(mtd, FL_RESETING); 3011 + this->wait(mtd, FL_RESETTING); 3012 3012 } else { 3013 3013 ops.mode = MTD_OPS_PLACE_OOB; 3014 3014 ops.ooblen = len; ··· 3413 3413 this->boundary[die] = bdry & FLEXONENAND_PI_MASK; 3414 3414 3415 3415 this->command(mtd, ONENAND_CMD_RESET, 0, 0); 3416 - this->wait(mtd, FL_RESETING); 3416 + this->wait(mtd, FL_RESETTING); 3417 3417 3418 3418 printk(KERN_INFO "Die %d boundary: %d%s\n", die, 3419 3419 this->boundary[die], locked ? "(Locked)" : "(Unlocked)"); ··· 3635 3635 ret = this->wait(mtd, FL_WRITING); 3636 3636 out: 3637 3637 this->write_word(ONENAND_CMD_RESET, this->base + ONENAND_REG_COMMAND); 3638 - this->wait(mtd, FL_RESETING); 3638 + this->wait(mtd, FL_RESETTING); 3639 3639 if (!ret) 3640 3640 /* Recalculate device size on boundary change*/ 3641 3641 flexonenand_get_size(mtd); ··· 3671 3671 /* Reset OneNAND to read default register values */ 3672 3672 this->write_word(ONENAND_CMD_RESET, this->base + ONENAND_BOOTRAM); 3673 3673 /* Wait reset */ 3674 - this->wait(mtd, FL_RESETING); 3674 + this->wait(mtd, FL_RESETTING); 3675 3675 3676 3676 /* Restore system configuration 1 */ 3677 3677 this->write_word(syscfg, this->base + ONENAND_REG_SYS_CFG1);
+4 -4
drivers/mtd/nand/onenand/samsung_mtd.c
··· 675 675 normal: 676 676 if (count != mtd->writesize) { 677 677 /* Copy the bufferram to memory to prevent unaligned access */ 678 - memcpy(this->page_buf, p, mtd->writesize); 679 - p = this->page_buf + offset; 678 + memcpy_fromio(this->page_buf, p, mtd->writesize); 679 + memcpy(buffer, this->page_buf + offset, count); 680 + } else { 681 + memcpy_fromio(buffer, p, count); 680 682 } 681 - 682 - memcpy(buffer, p, count); 683 683 684 684 return 0; 685 685 }
+6 -7
drivers/mtd/nand/raw/cadence-nand-controller.c
··· 914 914 /* Prepare CDMA descriptor. */ 915 915 static void 916 916 cadence_nand_cdma_desc_prepare(struct cdns_nand_ctrl *cdns_ctrl, 917 - char nf_mem, u32 flash_ptr, char *mem_ptr, 918 - char *ctrl_data_ptr, u16 ctype) 917 + char nf_mem, u32 flash_ptr, dma_addr_t mem_ptr, 918 + dma_addr_t ctrl_data_ptr, u16 ctype) 919 919 { 920 920 struct cadence_nand_cdma_desc *cdma_desc = cdns_ctrl->cdma_desc; 921 921 ··· 931 931 cdma_desc->command_flags |= CDMA_CF_DMA_MASTER; 932 932 cdma_desc->command_flags |= CDMA_CF_INT; 933 933 934 - cdma_desc->memory_pointer = (uintptr_t)mem_ptr; 934 + cdma_desc->memory_pointer = mem_ptr; 935 935 cdma_desc->status = 0; 936 936 cdma_desc->sync_flag_pointer = 0; 937 937 cdma_desc->sync_arguments = 0; 938 938 939 939 cdma_desc->command_type = ctype; 940 - cdma_desc->ctrl_data_ptr = (uintptr_t)ctrl_data_ptr; 940 + cdma_desc->ctrl_data_ptr = ctrl_data_ptr; 941 941 } 942 942 943 943 static u8 cadence_nand_check_desc_error(struct cdns_nand_ctrl *cdns_ctrl, ··· 1280 1280 } 1281 1281 1282 1282 cadence_nand_cdma_desc_prepare(cdns_ctrl, chip_nr, page, 1283 - (void *)dma_buf, (void *)dma_ctrl_dat, 1284 - ctype); 1283 + dma_buf, dma_ctrl_dat, ctype); 1285 1284 1286 1285 status = cadence_nand_cdma_send_and_wait(cdns_ctrl, thread_nr); 1287 1286 ··· 1359 1360 1360 1361 cadence_nand_cdma_desc_prepare(cdns_ctrl, 1361 1362 cdns_chip->cs[chip->cur_cs], 1362 - page, NULL, NULL, 1363 + page, 0, 0, 1363 1364 CDMA_CT_ERASE); 1364 1365 status = cadence_nand_cdma_send_and_wait(cdns_ctrl, thread_nr); 1365 1366 if (status) {
+36 -2
drivers/mtd/nand/raw/stm32_fmc2_nand.c
··· 37 37 /* Max ECC buffer length */ 38 38 #define FMC2_MAX_ECC_BUF_LEN (FMC2_BCHDSRS_LEN * FMC2_MAX_SG) 39 39 40 + #define FMC2_TIMEOUT_US 1000 40 41 #define FMC2_TIMEOUT_MS 1000 41 42 42 43 /* Timings */ ··· 54 53 #define FMC2_PMEM 0x88 55 54 #define FMC2_PATT 0x8c 56 55 #define FMC2_HECCR 0x94 56 + #define FMC2_ISR 0x184 57 + #define FMC2_ICR 0x188 57 58 #define FMC2_CSQCR 0x200 58 59 #define FMC2_CSQCFGR1 0x204 59 60 #define FMC2_CSQCFGR2 0x208 ··· 120 117 #define FMC2_PATT_ATTHOLD(x) (((x) & 0xff) << 16) 121 118 #define FMC2_PATT_ATTHIZ(x) (((x) & 0xff) << 24) 122 119 #define FMC2_PATT_DEFAULT 0x0a0a0a0a 120 + 121 + /* Register: FMC2_ISR */ 122 + #define FMC2_ISR_IHLF BIT(1) 123 + 124 + /* Register: FMC2_ICR */ 125 + #define FMC2_ICR_CIHLF BIT(1) 123 126 124 127 /* Register: FMC2_CSQCR */ 125 128 #define FMC2_CSQCR_CSQSTART BIT(0) ··· 1331 1322 stm32_fmc2_set_buswidth_16(fmc2, true); 1332 1323 } 1325 + static int stm32_fmc2_waitrdy(struct nand_chip *chip, unsigned long timeout_ms) 1326 + { 1327 + struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1328 + const struct nand_sdr_timings *timings; 1329 + u32 isr, sr; 1330 + 1331 + /* Check if there is no pending requests to the NAND flash */ 1332 + if (readl_relaxed_poll_timeout_atomic(fmc2->io_base + FMC2_SR, sr, 1333 + sr & FMC2_SR_NWRF, 1, 1334 + FMC2_TIMEOUT_US)) 1335 + dev_warn(fmc2->dev, "Waitrdy timeout\n"); 1336 + 1337 + /* Wait tWB before R/B# signal is low */ 1338 + timings = nand_get_sdr_timings(&chip->data_interface); 1339 + ndelay(PSEC_TO_NSEC(timings->tWB_max)); 1340 + 1341 + /* R/B# signal is low, clear high level flag */ 1342 + writel_relaxed(FMC2_ICR_CIHLF, fmc2->io_base + FMC2_ICR); 1343 + 1344 + /* Wait R/B# signal is high */ 1345 + return readl_relaxed_poll_timeout_atomic(fmc2->io_base + FMC2_ISR, 1346 + isr, isr & FMC2_ISR_IHLF, 1347 + 5, 1000 * timeout_ms); 1348 + } 1349 + 1334 1350 static int stm32_fmc2_exec_op(struct nand_chip *chip, 1335 1351 const struct nand_operation *op, 1336 1352 bool check_only) ··· 1400 1366 break; 1401 1367 1402 1368 case NAND_OP_WAITRDY_INSTR: 1403 - ret = nand_soft_waitrdy(chip, 1404 - instr->ctx.waitrdy.timeout_ms); 1369 + ret = stm32_fmc2_waitrdy(chip, 1370 + instr->ctx.waitrdy.timeout_ms); 1405 1371 break; 1406 1372 } 1407 1373 }
+2 -1
drivers/mtd/sm_ftl.c
··· 247 247 248 248 /* FTL can contain -1 entries that are by default filled with bits */ 249 249 if (block == -1) { 250 - memset(buffer, 0xFF, SM_SECTOR_SIZE); 250 + if (buffer) 251 + memset(buffer, 0xFF, SM_SECTOR_SIZE); 251 252 return 0; 252 253 } 253 254
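The sm_ftl change above guards the memset() so a NULL `buffer` (a caller that only wants the return code) no longer faults on an unmapped entry. The same guard pattern in isolation, with hypothetical names in place of the driver's:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SKETCH_SECTOR_SIZE 512

/*
 * Fill a sector buffer with 0xFF for an unmapped (-1) FTL entry, but
 * tolerate buffer == NULL: check the pointer before touching it,
 * keeping the same success return either way.
 */
static int read_unmapped_sector(unsigned char *buffer, int block)
{
	if (block == -1) {
		if (buffer)
			memset(buffer, 0xFF, SKETCH_SECTOR_SIZE);
		return 0;
	}
	return -1;	/* mapped blocks handled elsewhere in this sketch */
}
```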
+1
drivers/mtd/spi-nor/spi-nor.c
··· 4596 4596 static void st_micron_set_default_init(struct spi_nor *nor) 4597 4597 { 4598 4598 nor->flags |= SNOR_F_HAS_LOCK; 4599 + nor->flags &= ~SNOR_F_HAS_16BIT_SR; 4599 4600 nor->params.quad_enable = NULL; 4600 4601 nor->params.set_4byte = st_micron_set_4byte; 4601 4602 }
+53 -10
drivers/net/can/m_can/tcan4x5x.c
··· 102 102 #define TCAN4X5X_MODE_NORMAL BIT(7) 103 103 104 104 #define TCAN4X5X_DISABLE_WAKE_MSK (BIT(31) | BIT(30)) 105 + #define TCAN4X5X_DISABLE_INH_MSK BIT(9) 105 106 106 107 #define TCAN4X5X_SW_RESET BIT(2) 107 108 ··· 165 164 usleep_range(5, 50); 166 165 gpiod_set_value(priv->device_wake_gpio, 1); 167 166 } 167 + } 168 + 169 + static int tcan4x5x_reset(struct tcan4x5x_priv *priv) 170 + { 171 + int ret = 0; 172 + 173 + if (priv->reset_gpio) { 174 + gpiod_set_value(priv->reset_gpio, 1); 175 + 176 + /* tpulse_width minimum 30us */ 177 + usleep_range(30, 100); 178 + gpiod_set_value(priv->reset_gpio, 0); 179 + } else { 180 + ret = regmap_write(priv->regmap, TCAN4X5X_CONFIG, 181 + TCAN4X5X_SW_RESET); 182 + if (ret) 183 + return ret; 184 + } 185 + 186 + usleep_range(700, 1000); 187 + 188 + return ret; 168 189 } 169 190 170 191 static int regmap_spi_gather_write(void *context, const void *reg, ··· 371 348 TCAN4X5X_DISABLE_WAKE_MSK, 0x00); 372 349 } 373 350 351 + static int tcan4x5x_disable_state(struct m_can_classdev *cdev) 352 + { 353 + struct tcan4x5x_priv *tcan4x5x = cdev->device_data; 354 + 355 + return regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG, 356 + TCAN4X5X_DISABLE_INH_MSK, 0x01); 357 + } 358 + 374 359 static int tcan4x5x_parse_config(struct m_can_classdev *cdev) 375 360 { 376 361 struct tcan4x5x_priv *tcan4x5x = cdev->device_data; 362 + int ret; 377 363 378 364 tcan4x5x->device_wake_gpio = devm_gpiod_get(cdev->dev, "device-wake", 379 365 GPIOD_OUT_HIGH); 380 366 if (IS_ERR(tcan4x5x->device_wake_gpio)) { 381 - if (PTR_ERR(tcan4x5x->power) == -EPROBE_DEFER) 367 + if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER) 382 368 return -EPROBE_DEFER; 383 369 384 370 tcan4x5x_disable_wake(cdev); ··· 398 366 if (IS_ERR(tcan4x5x->reset_gpio)) 399 367 tcan4x5x->reset_gpio = NULL; 400 368 401 - usleep_range(700, 1000); 369 + ret = tcan4x5x_reset(tcan4x5x); 370 + if (ret) 371 + return ret; 402 372 403 373 tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev, 404 374 "device-state", 405 375 GPIOD_IN); 406 - if (IS_ERR(tcan4x5x->device_state_gpio)) 376 + if (IS_ERR(tcan4x5x->device_state_gpio)) { 407 377 tcan4x5x->device_state_gpio = NULL; 408 - 409 - tcan4x5x->power = devm_regulator_get_optional(cdev->dev, 410 - "vsup"); 411 - if (PTR_ERR(tcan4x5x->power) == -EPROBE_DEFER) 412 - return -EPROBE_DEFER; 378 + tcan4x5x_disable_state(cdev); 379 + } 413 380 414 381 return 0; 415 382 } ··· 442 411 priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL); 443 412 if (!priv) 444 413 return -ENOMEM; 414 + 415 + priv->power = devm_regulator_get_optional(&spi->dev, "vsup"); 416 + if (PTR_ERR(priv->power) == -EPROBE_DEFER) 417 + return -EPROBE_DEFER; 418 + else 419 + priv->power = NULL; 445 420 446 421 mcan_class->device_data = priv; 447 422 ··· 488 451 priv->regmap = devm_regmap_init(&spi->dev, &tcan4x5x_bus, 489 452 &spi->dev, &tcan4x5x_regmap); 490 453 491 - ret = tcan4x5x_parse_config(mcan_class); 454 + ret = tcan4x5x_power_enable(priv->power, 1); 492 455 if (ret) 493 456 goto out_clk; 494 457 495 - tcan4x5x_power_enable(priv->power, 1); 458 + ret = tcan4x5x_parse_config(mcan_class); 459 + if (ret) 460 + goto out_power; 461 + 462 + ret = tcan4x5x_init(mcan_class); 463 + if (ret) 464 + goto out_power; 496 465 497 466 ret = m_can_class_register(mcan_class); 498 467 if (ret)
+10 -11
drivers/net/can/mscan/mscan.c
··· 381 381 struct net_device *dev = napi->dev; 382 382 struct mscan_regs __iomem *regs = priv->reg_base; 383 383 struct net_device_stats *stats = &dev->stats; 384 - int npackets = 0; 385 - int ret = 1; 384 + int work_done = 0; 386 385 struct sk_buff *skb; 387 386 struct can_frame *frame; 388 387 u8 canrflg; 389 388 390 - while (npackets < quota) { 389 + while (work_done < quota) { 391 390 canrflg = in_8(&regs->canrflg); 392 391 if (!(canrflg & (MSCAN_RXF | MSCAN_ERR_IF))) 393 392 break; ··· 407 408 408 409 stats->rx_packets++; 409 410 stats->rx_bytes += frame->can_dlc; 410 - npackets++; 411 + work_done++; 411 412 netif_receive_skb(skb); 412 413 } 413 414 414 - if (!(in_8(&regs->canrflg) & (MSCAN_RXF | MSCAN_ERR_IF))) { 415 - napi_complete(&priv->napi); 416 - clear_bit(F_RX_PROGRESS, &priv->flags); 417 - if (priv->can.state < CAN_STATE_BUS_OFF) 418 - out_8(&regs->canrier, priv->shadow_canrier); 419 - ret = 0; 415 + if (work_done < quota) { 416 + if (likely(napi_complete_done(&priv->napi, work_done))) { 417 + clear_bit(F_RX_PROGRESS, &priv->flags); 418 + if (priv->can.state < CAN_STATE_BUS_OFF) 419 + out_8(&regs->canrier, priv->shadow_canrier); 420 + } 420 421 } 421 - return ret; 422 + return work_done; 422 423 } 423 424 424 425 static irqreturn_t mscan_isr(int irq, void *dev_id)
+2 -2
drivers/net/can/usb/gs_usb.c
··· 918 918 GS_USB_BREQ_HOST_FORMAT, 919 919 USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_INTERFACE, 920 920 1, 921 - intf->altsetting[0].desc.bInterfaceNumber, 921 + intf->cur_altsetting->desc.bInterfaceNumber, 922 922 hconf, 923 923 sizeof(*hconf), 924 924 1000); ··· 941 941 GS_USB_BREQ_DEVICE_CONFIG, 942 942 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE, 943 943 1, 944 - intf->altsetting[0].desc.bInterfaceNumber, 944 + intf->cur_altsetting->desc.bInterfaceNumber, 945 945 dconf, 946 946 sizeof(*dconf), 947 947 1000);
+1 -1
drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
··· 1590 1590 struct usb_endpoint_descriptor *ep; 1591 1591 int i; 1592 1592 1593 - iface_desc = &dev->intf->altsetting[0]; 1593 + iface_desc = dev->intf->cur_altsetting; 1594 1594 1595 1595 for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { 1596 1596 ep = &iface_desc->endpoint[i].desc;
+1 -1
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
··· 1310 1310 struct usb_endpoint_descriptor *endpoint; 1311 1311 int i; 1312 1312 1313 - iface_desc = &dev->intf->altsetting[0]; 1313 + iface_desc = dev->intf->cur_altsetting; 1314 1314 1315 1315 for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { 1316 1316 endpoint = &iface_desc->endpoint[i].desc;
+3 -3
drivers/net/dsa/bcm_sf2_cfp.c
··· 358 358 return -EINVAL; 359 359 } 360 360 361 - ip_frag = be32_to_cpu(fs->m_ext.data[0]); 361 + ip_frag = !!(be32_to_cpu(fs->h_ext.data[0]) & 1); 362 362 363 363 /* Locate the first rule available */ 364 364 if (fs->location == RX_CLS_LOC_ANY) ··· 569 569 570 570 if (rule->fs.flow_type != fs->flow_type || 571 571 rule->fs.ring_cookie != fs->ring_cookie || 572 - rule->fs.m_ext.data[0] != fs->m_ext.data[0]) 572 + rule->fs.h_ext.data[0] != fs->h_ext.data[0]) 573 573 continue; 574 574 575 575 switch (fs->flow_type & ~FLOW_EXT) { ··· 621 621 return -EINVAL; 622 622 } 623 623 624 - ip_frag = be32_to_cpu(fs->m_ext.data[0]); 624 + ip_frag = !!(be32_to_cpu(fs->h_ext.data[0]) & 1); 625 625 626 626 layout = &udf_tcpip6_layout; 627 627 slice_num = bcm_sf2_get_slice_number(layout, 0);
+5
drivers/net/dsa/mv88e6xxx/global1.c
··· 360 360 { 361 361 u16 ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST; 362 362 363 + /* Use the default high priority for management frames sent to 364 + * the CPU. 365 + */ 366 + port |= MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI; 367 + 363 368 return mv88e6390_g1_monitor_write(chip, ptr, port); 364 369 } 365 370
+1
drivers/net/dsa/mv88e6xxx/global1.h
··· 211 211 #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_INGRESS_DEST 0x2000 212 212 #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_EGRESS_DEST 0x2100 213 213 #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST 0x3000 214 + #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI 0x00e0 214 215 #define MV88E6390_G1_MONITOR_MGMT_CTL_DATA_MASK 0x00ff 215 216 216 217 /* Offset 0x1C: Global Control 2 */
+6 -6
drivers/net/dsa/mv88e6xxx/port.c
··· 393 393 } 394 394 395 395 static int mv88e6xxx_port_set_cmode(struct mv88e6xxx_chip *chip, int port, 396 - phy_interface_t mode) 396 + phy_interface_t mode, bool force) 397 397 { 398 398 u8 lane; 399 399 u16 cmode; ··· 427 427 cmode = 0; 428 428 } 429 429 430 - /* cmode doesn't change, nothing to do for us */ 431 - if (cmode == chip->ports[port].cmode) 430 + /* cmode doesn't change, nothing to do for us unless forced */ 431 + if (cmode == chip->ports[port].cmode && !force) 432 432 return 0; 433 433 434 434 lane = mv88e6xxx_serdes_get_lane(chip, port); ··· 484 484 if (port != 9 && port != 10) 485 485 return -EOPNOTSUPP; 486 486 487 - return mv88e6xxx_port_set_cmode(chip, port, mode); 487 + return mv88e6xxx_port_set_cmode(chip, port, mode, false); 488 488 } 489 489 490 490 int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port, ··· 504 504 break; 505 505 } 506 506 507 - return mv88e6xxx_port_set_cmode(chip, port, mode); 507 + return mv88e6xxx_port_set_cmode(chip, port, mode, false); 508 508 } 509 509 510 510 static int mv88e6341_port_set_cmode_writable(struct mv88e6xxx_chip *chip, ··· 555 555 if (err) 556 556 return err; 557 557 558 - return mv88e6xxx_port_set_cmode(chip, port, mode); 558 + return mv88e6xxx_port_set_cmode(chip, port, mode, true); 559 559 } 560 560 561 561 int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode)
+5 -5
drivers/net/dsa/sja1105/sja1105_main.c
··· 1569 1569 1570 1570 if (enabled) { 1571 1571 /* Enable VLAN filtering. */ 1572 - tpid = ETH_P_8021AD; 1573 - tpid2 = ETH_P_8021Q; 1572 + tpid = ETH_P_8021Q; 1573 + tpid2 = ETH_P_8021AD; 1574 1574 } else { 1575 1575 /* Disable VLAN filtering. */ 1576 1576 tpid = ETH_P_SJA1105; ··· 1579 1579 1580 1580 table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS]; 1581 1581 general_params = table->entries; 1582 - /* EtherType used to identify outer tagged (S-tag) VLAN traffic */ 1583 - general_params->tpid = tpid; 1584 1582 /* EtherType used to identify inner tagged (C-tag) VLAN traffic */ 1583 + general_params->tpid = tpid; 1584 + /* EtherType used to identify outer tagged (S-tag) VLAN traffic */ 1585 1585 general_params->tpid2 = tpid2; 1586 1586 /* When VLAN filtering is on, we need to at least be able to 1587 1587 * decode management traffic through the "backup plan". ··· 1855 1855 if (!clone) 1856 1856 goto out; 1857 1857 1858 - sja1105_ptp_txtstamp_skb(ds, slot, clone); 1858 + sja1105_ptp_txtstamp_skb(ds, port, clone); 1859 1859 1860 1860 out: 1861 1861 mutex_unlock(&priv->mgmt_lock);
+3 -3
drivers/net/dsa/sja1105/sja1105_ptp.c
··· 234 234 if (rw == SPI_WRITE) 235 235 priv->info->ptp_cmd_packing(buf, cmd, PACK); 236 236 237 - rc = sja1105_xfer_buf(priv, SPI_WRITE, regs->ptp_control, buf, 237 + rc = sja1105_xfer_buf(priv, rw, regs->ptp_control, buf, 238 238 SJA1105_SIZE_PTP_CMD); 239 239 240 240 if (rw == SPI_READ) ··· 659 659 ptp_data->clock = NULL; 660 660 } 661 661 662 - void sja1105_ptp_txtstamp_skb(struct dsa_switch *ds, int slot, 662 + void sja1105_ptp_txtstamp_skb(struct dsa_switch *ds, int port, 663 663 struct sk_buff *skb) 664 664 { 665 665 struct sja1105_private *priv = ds->priv; ··· 679 679 goto out; 680 680 } 681 681 682 - rc = sja1105_ptpegr_ts_poll(ds, slot, &ts); 682 + rc = sja1105_ptpegr_ts_poll(ds, port, &ts); 683 683 if (rc < 0) { 684 684 dev_err(ds->dev, "timed out polling for tstamp\n"); 685 685 kfree_skb(skb);
+5 -2
drivers/net/dsa/sja1105/sja1105_static_config.c
··· 142 142 return size; 143 143 } 144 144 145 + /* TPID and TPID2 are intentionally reversed so that semantic 146 + * compatibility with E/T is kept. 147 + */ 145 148 static size_t 146 149 sja1105pqrs_general_params_entry_packing(void *buf, void *entry_ptr, 147 150 enum packing_op op) ··· 169 166 sja1105_packing(buf, &entry->mirr_port, 141, 139, size, op); 170 167 sja1105_packing(buf, &entry->vlmarker, 138, 107, size, op); 171 168 sja1105_packing(buf, &entry->vlmask, 106, 75, size, op); 172 - sja1105_packing(buf, &entry->tpid, 74, 59, size, op); 169 + sja1105_packing(buf, &entry->tpid2, 74, 59, size, op); 173 170 sja1105_packing(buf, &entry->ignore2stf, 58, 58, size, op); 174 - sja1105_packing(buf, &entry->tpid2, 57, 42, size, op); 171 + sja1105_packing(buf, &entry->tpid, 57, 42, size, op); 175 172 sja1105_packing(buf, &entry->queue_ts, 41, 41, size, op); 176 173 sja1105_packing(buf, &entry->egrmirrvid, 40, 29, size, op); 177 174 sja1105_packing(buf, &entry->egrmirrpcp, 28, 26, size, op);
-5
drivers/net/dsa/sja1105/sja1105_tas.c
··· 477 477 if (admin->cycle_time_extension) 478 478 return -ENOTSUPP; 479 479 480 - if (!ns_to_sja1105_delta(admin->base_time)) { 481 - dev_err(ds->dev, "A base time of zero is not hardware-allowed\n"); 482 - return -ERANGE; 483 - } 484 - 485 480 for (i = 0; i < admin->num_entries; i++) { 486 481 s64 delta_ns = admin->entries[i].interval; 487 482 s64 delta_cycles = ns_to_sja1105_delta(delta_ns);
+2 -2
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 403 403 if (err < 0) 404 404 goto err_exit; 405 405 406 + aq_nic_set_loopback(self); 407 + 406 408 err = self->aq_hw_ops->hw_start(self->aq_hw); 407 409 if (err < 0) 408 410 goto err_exit; ··· 414 412 goto err_exit; 415 413 416 414 INIT_WORK(&self->service_task, aq_nic_service_task); 417 - 418 - aq_nic_set_loopback(self); 419 415 420 416 timer_setup(&self->service_timer, aq_nic_service_timer_cb, 0); 421 417 aq_nic_service_timer_cb(&self->service_timer);
-3
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 1525 1525 .rx_extract_ts = hw_atl_b0_rx_extract_ts, 1526 1526 .extract_hwts = hw_atl_b0_extract_hwts, 1527 1527 .hw_set_offload = hw_atl_b0_hw_offload_set, 1528 - .hw_get_hw_stats = hw_atl_utils_get_hw_stats, 1529 - .hw_get_fw_version = hw_atl_utils_get_fw_version, 1530 - .hw_set_offload = hw_atl_b0_hw_offload_set, 1531 1528 .hw_set_loopback = hw_atl_b0_set_loopback, 1532 1529 .hw_set_fc = hw_atl_b0_set_fc, 1533 1530 };
+1 -3
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
··· 667 667 u32 speed; 668 668 669 669 mpi_state = hw_atl_utils_mpi_get_state(self); 670 - speed = mpi_state & (FW2X_RATE_100M | FW2X_RATE_1G | 671 - FW2X_RATE_2G5 | FW2X_RATE_5G | 672 - FW2X_RATE_10G); 670 + speed = mpi_state >> HW_ATL_MPI_SPEED_SHIFT; 673 671 674 672 if (!speed) { 675 673 link_status->mbps = 0U;
+6 -3
drivers/net/ethernet/broadcom/b44.c
··· 1516 1516 int ethaddr_bytes = ETH_ALEN; 1517 1517 1518 1518 memset(ppattern + offset, 0xff, magicsync); 1519 - for (j = 0; j < magicsync; j++) 1520 - set_bit(len++, (unsigned long *) pmask); 1519 + for (j = 0; j < magicsync; j++) { 1520 + pmask[len >> 3] |= BIT(len & 7); 1521 + len++; 1522 + } 1521 1523 1522 1524 for (j = 0; j < B44_MAX_PATTERNS; j++) { 1523 1525 if ((B44_PATTERN_SIZE - len) >= ETH_ALEN) ··· 1531 1529 for (k = 0; k< ethaddr_bytes; k++) { 1532 1530 ppattern[offset + magicsync + 1533 1531 (j * ETH_ALEN) + k] = macaddr[k]; 1534 - set_bit(len++, (unsigned long *) pmask); 1532 + pmask[len >> 3] |= BIT(len & 7); 1533 + len++; 1535 1534 } 1536 1535 } 1537 1536 return len - 1;
+4 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
··· 1536 1536 ((MAX_MAC_CREDIT_E2 - GET_NUM_VFS_PER_PATH(bp) * VF_MAC_CREDIT_CNT) / \ 1537 1537 func_num + GET_NUM_VFS_PER_PF(bp) * VF_MAC_CREDIT_CNT) 1538 1538 1539 + #define BNX2X_VFS_VLAN_CREDIT(bp) \ 1540 + (GET_NUM_VFS_PER_PATH(bp) * VF_VLAN_CREDIT_CNT) 1541 + 1539 1542 #define PF_VLAN_CREDIT_E2(bp, func_num) \ 1540 - ((MAX_MAC_CREDIT_E2 - GET_NUM_VFS_PER_PATH(bp) * VF_VLAN_CREDIT_CNT) / \ 1543 + ((MAX_VLAN_CREDIT_E2 - 1 - BNX2X_VFS_VLAN_CREDIT(bp)) / \ 1541 1544 func_num + GET_NUM_VFS_PER_PF(bp) * VF_VLAN_CREDIT_CNT) 1542 1545 1543 1546 #endif /* BNX2X_SP_VERBS */
+1 -3
drivers/net/ethernet/cadence/macb_main.c
··· 4088 4088 mgmt->rate = 0; 4089 4089 mgmt->hw.init = &init; 4090 4090 4091 - *tx_clk = clk_register(NULL, &mgmt->hw); 4091 + *tx_clk = devm_clk_register(&pdev->dev, &mgmt->hw); 4092 4092 if (IS_ERR(*tx_clk)) 4093 4093 return PTR_ERR(*tx_clk); 4094 4094 ··· 4416 4416 4417 4417 err_disable_clocks: 4418 4418 clk_disable_unprepare(tx_clk); 4419 - clk_unregister(tx_clk); 4420 4419 clk_disable_unprepare(hclk); 4421 4420 clk_disable_unprepare(pclk); 4422 4421 clk_disable_unprepare(rx_clk); ··· 4445 4446 pm_runtime_dont_use_autosuspend(&pdev->dev); 4446 4447 if (!pm_runtime_suspended(&pdev->dev)) { 4447 4448 clk_disable_unprepare(bp->tx_clk); 4448 - clk_unregister(bp->tx_clk); 4449 4449 clk_disable_unprepare(bp->hclk); 4450 4450 clk_disable_unprepare(bp->pclk); 4451 4451 clk_disable_unprepare(bp->rx_clk);
+1
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 504 504 505 505 enum cc_pause requested_fc; /* flow control user has requested */ 506 506 enum cc_pause fc; /* actual link flow control */ 507 + enum cc_pause advertised_fc; /* actual advertised flow control */ 507 508 508 509 enum cc_fec requested_fec; /* Forward Error Correction: */ 509 510 enum cc_fec fec; /* requested and actual in use */
+2 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
··· 807 807 struct port_info *p = netdev_priv(dev); 808 808 809 809 epause->autoneg = (p->link_cfg.requested_fc & PAUSE_AUTONEG) != 0; 810 - epause->rx_pause = (p->link_cfg.fc & PAUSE_RX) != 0; 811 - epause->tx_pause = (p->link_cfg.fc & PAUSE_TX) != 0; 810 + epause->rx_pause = (p->link_cfg.advertised_fc & PAUSE_RX) != 0; 811 + epause->tx_pause = (p->link_cfg.advertised_fc & PAUSE_TX) != 0; 812 812 } 813 813 814 814 static int set_pauseparam(struct net_device *dev,
+14 -9
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 4089 4089 if (cc_pause & PAUSE_TX) 4090 4090 fw_pause |= FW_PORT_CAP32_802_3_PAUSE; 4091 4091 else 4092 - fw_pause |= FW_PORT_CAP32_802_3_ASM_DIR; 4092 + fw_pause |= FW_PORT_CAP32_802_3_ASM_DIR | 4093 + FW_PORT_CAP32_802_3_PAUSE; 4093 4094 } else if (cc_pause & PAUSE_TX) { 4094 4095 fw_pause |= FW_PORT_CAP32_802_3_ASM_DIR; 4095 4096 } ··· 8564 8563 void t4_handle_get_port_info(struct port_info *pi, const __be64 *rpl) 8565 8564 { 8566 8565 const struct fw_port_cmd *cmd = (const void *)rpl; 8567 - int action = FW_PORT_CMD_ACTION_G(be32_to_cpu(cmd->action_to_len16)); 8568 - struct adapter *adapter = pi->adapter; 8569 - struct link_config *lc = &pi->link_cfg; 8570 - int link_ok, linkdnrc; 8571 - enum fw_port_type port_type; 8572 - enum fw_port_module_type mod_type; 8573 - unsigned int speed, fc, fec; 8574 8566 fw_port_cap32_t pcaps, acaps, lpacaps, linkattr; 8567 + struct link_config *lc = &pi->link_cfg; 8568 + struct adapter *adapter = pi->adapter; 8569 + unsigned int speed, fc, fec, adv_fc; 8570 + enum fw_port_module_type mod_type; 8571 + int action, link_ok, linkdnrc; 8572 + enum fw_port_type port_type; 8575 8573 8576 8574 /* Extract the various fields from the Port Information message. 
8577 8575 */ 8576 + action = FW_PORT_CMD_ACTION_G(be32_to_cpu(cmd->action_to_len16)); 8578 8577 switch (action) { 8579 8578 case FW_PORT_ACTION_GET_PORT_INFO: { 8580 8579 u32 lstatus = be32_to_cpu(cmd->u.info.lstatus_to_modtype); ··· 8612 8611 } 8613 8612 8614 8613 fec = fwcap_to_cc_fec(acaps); 8614 + adv_fc = fwcap_to_cc_pause(acaps); 8615 8615 fc = fwcap_to_cc_pause(linkattr); 8616 8616 speed = fwcap_to_speed(linkattr); 8617 8617 ··· 8669 8667 } 8670 8668 8671 8669 if (link_ok != lc->link_ok || speed != lc->speed || 8672 - fc != lc->fc || fec != lc->fec) { /* something changed */ 8670 + fc != lc->fc || adv_fc != lc->advertised_fc || 8671 + fec != lc->fec) { 8672 + /* something changed */ 8673 8673 if (!link_ok && lc->link_ok) { 8674 8674 lc->link_down_rc = linkdnrc; 8675 8675 dev_warn_ratelimited(adapter->pdev_dev, ··· 8681 8677 } 8682 8678 lc->link_ok = link_ok; 8683 8679 lc->speed = speed; 8680 + lc->advertised_fc = adv_fc; 8684 8681 lc->fc = fc; 8685 8682 lc->fec = fec; 8686 8683
+2 -2
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
··· 1690 1690 struct port_info *pi = netdev_priv(dev); 1691 1691 1692 1692 pauseparam->autoneg = (pi->link_cfg.requested_fc & PAUSE_AUTONEG) != 0; 1693 - pauseparam->rx_pause = (pi->link_cfg.fc & PAUSE_RX) != 0; 1694 - pauseparam->tx_pause = (pi->link_cfg.fc & PAUSE_TX) != 0; 1693 + pauseparam->rx_pause = (pi->link_cfg.advertised_fc & PAUSE_RX) != 0; 1694 + pauseparam->tx_pause = (pi->link_cfg.advertised_fc & PAUSE_TX) != 0; 1695 1695 } 1696 1696 1697 1697 /*
+1
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
··· 135 135 136 136 enum cc_pause requested_fc; /* flow control user has requested */ 137 137 enum cc_pause fc; /* actual link flow control */ 138 + enum cc_pause advertised_fc; /* actual advertised flow control */ 138 139 139 140 enum cc_fec auto_fec; /* Forward Error Correction: */ 140 141 enum cc_fec requested_fec; /* "automatic" (IEEE 802.3), */
+12 -8
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 1913 1913 static void t4vf_handle_get_port_info(struct port_info *pi, 1914 1914 const struct fw_port_cmd *cmd) 1915 1915 { 1916 - int action = FW_PORT_CMD_ACTION_G(be32_to_cpu(cmd->action_to_len16)); 1917 - struct adapter *adapter = pi->adapter; 1918 - struct link_config *lc = &pi->link_cfg; 1919 - int link_ok, linkdnrc; 1920 - enum fw_port_type port_type; 1921 - enum fw_port_module_type mod_type; 1922 - unsigned int speed, fc, fec; 1923 1916 fw_port_cap32_t pcaps, acaps, lpacaps, linkattr; 1917 + struct link_config *lc = &pi->link_cfg; 1918 + struct adapter *adapter = pi->adapter; 1919 + unsigned int speed, fc, fec, adv_fc; 1920 + enum fw_port_module_type mod_type; 1921 + int action, link_ok, linkdnrc; 1922 + enum fw_port_type port_type; 1924 1923 1925 1924 /* Extract the various fields from the Port Information message. */ 1925 + action = FW_PORT_CMD_ACTION_G(be32_to_cpu(cmd->action_to_len16)); 1926 1926 switch (action) { 1927 1927 case FW_PORT_ACTION_GET_PORT_INFO: { 1928 1928 u32 lstatus = be32_to_cpu(cmd->u.info.lstatus_to_modtype); ··· 1982 1982 } 1983 1983 1984 1984 fec = fwcap_to_cc_fec(acaps); 1985 + adv_fc = fwcap_to_cc_pause(acaps); 1985 1986 fc = fwcap_to_cc_pause(linkattr); 1986 1987 speed = fwcap_to_speed(linkattr); 1987 1988 ··· 2013 2012 } 2014 2013 2015 2014 if (link_ok != lc->link_ok || speed != lc->speed || 2016 - fc != lc->fc || fec != lc->fec) { /* something changed */ 2015 + fc != lc->fc || adv_fc != lc->advertised_fc || 2016 + fec != lc->fec) { 2017 + /* something changed */ 2017 2018 if (!link_ok && lc->link_ok) { 2018 2019 lc->link_down_rc = linkdnrc; 2019 2020 dev_warn_ratelimited(adapter->pdev_dev, ··· 2025 2022 } 2026 2023 lc->link_ok = link_ok; 2027 2024 lc->speed = speed; 2025 + lc->advertised_fc = adv_fc; 2028 2026 lc->fc = fc; 2029 2027 lc->fec = fec; 2030 2028
+20 -19
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 1719 1719 int page_offset; 1720 1720 unsigned int sz; 1721 1721 int *count_ptr; 1722 - int i; 1722 + int i, j; 1723 1723 1724 1724 vaddr = phys_to_virt(addr); 1725 1725 WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES)); ··· 1736 1736 WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr, 1737 1737 SMP_CACHE_BYTES)); 1738 1738 1739 + dma_unmap_page(priv->rx_dma_dev, sg_addr, 1740 + DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE); 1741 + 1739 1742 /* We may use multiple Rx pools */ 1740 1743 dpaa_bp = dpaa_bpid2pool(sgt[i].bpid); 1741 1744 if (!dpaa_bp) 1742 1745 goto free_buffers; 1743 1746 1744 - count_ptr = this_cpu_ptr(dpaa_bp->percpu_count); 1745 - dma_unmap_page(priv->rx_dma_dev, sg_addr, 1746 - DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE); 1747 1747 if (!skb) { 1748 1748 sz = dpaa_bp->size + 1749 1749 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); ··· 1786 1786 skb_add_rx_frag(skb, i - 1, head_page, frag_off, 1787 1787 frag_len, dpaa_bp->size); 1788 1788 } 1789 + 1789 1790 /* Update the pool count for the current {cpu x bpool} */ 1791 + count_ptr = this_cpu_ptr(dpaa_bp->percpu_count); 1790 1792 (*count_ptr)--; 1791 1793 1792 1794 if (qm_sg_entry_is_final(&sgt[i])) ··· 1802 1800 return skb; 1803 1801 1804 1802 free_buffers: 1805 - /* compensate sw bpool counter changes */ 1806 - for (i--; i >= 0; i--) { 1807 - dpaa_bp = dpaa_bpid2pool(sgt[i].bpid); 1808 - if (dpaa_bp) { 1809 - count_ptr = this_cpu_ptr(dpaa_bp->percpu_count); 1810 - (*count_ptr)++; 1811 - } 1812 - } 1813 1803 /* free all the SG entries */ 1814 - for (i = 0; i < DPAA_SGT_MAX_ENTRIES ; i++) { 1815 - sg_addr = qm_sg_addr(&sgt[i]); 1804 + for (j = 0; j < DPAA_SGT_MAX_ENTRIES ; j++) { 1805 + sg_addr = qm_sg_addr(&sgt[j]); 1816 1806 sg_vaddr = phys_to_virt(sg_addr); 1807 + /* all pages 0..i were unmaped */ 1808 + if (j > i) 1809 + dma_unmap_page(priv->rx_dma_dev, qm_sg_addr(&sgt[j]), 1810 + DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE); 1817 1811 free_pages((unsigned long)sg_vaddr, 0); 1818 - dpaa_bp = 
dpaa_bpid2pool(sgt[i].bpid); 1819 - if (dpaa_bp) { 1820 - count_ptr = this_cpu_ptr(dpaa_bp->percpu_count); 1821 - (*count_ptr)--; 1812 + /* counters 0..i-1 were decremented */ 1813 + if (j >= i) { 1814 + dpaa_bp = dpaa_bpid2pool(sgt[j].bpid); 1815 + if (dpaa_bp) { 1816 + count_ptr = this_cpu_ptr(dpaa_bp->percpu_count); 1817 + (*count_ptr)--; 1818 + } 1822 1819 } 1823 1820 1824 - if (qm_sg_entry_is_final(&sgt[i])) 1821 + if (qm_sg_entry_is_final(&sgt[j])) 1825 1822 break; 1826 1823 } 1827 1824 /* free the SGT fragment */
+9
drivers/net/ethernet/freescale/fec_main.c
··· 2199 2199 { 2200 2200 struct fec_enet_private *fep = netdev_priv(ndev); 2201 2201 u32 __iomem *theregs = (u32 __iomem *)fep->hwp; 2202 + struct device *dev = &fep->pdev->dev; 2202 2203 u32 *buf = (u32 *)regbuf; 2203 2204 u32 i, off; 2205 + int ret; 2206 + 2207 + ret = pm_runtime_get_sync(dev); 2208 + if (ret < 0) 2209 + return; 2204 2210 2205 2211 regs->version = fec_enet_register_version; 2206 2212 ··· 2222 2216 off >>= 2; 2223 2217 buf[off] = readl(&theregs[off]); 2224 2218 } 2219 + 2220 + pm_runtime_mark_last_busy(dev); 2221 + pm_runtime_put_autosuspend(dev); 2225 2222 } 2226 2223 2227 2224 static int fec_enet_get_ts_info(struct net_device *ndev,
-2
drivers/net/ethernet/google/gve/gve_rx.c
··· 418 418 rx->cnt = cnt; 419 419 rx->fill_cnt += work_done; 420 420 421 - /* restock desc ring slots */ 422 - dma_wmb(); /* Ensure descs are visible before ringing doorbell */ 423 421 gve_rx_write_doorbell(priv, rx); 424 422 return gve_rx_work_pending(rx); 425 423 }
-6
drivers/net/ethernet/google/gve/gve_tx.c
··· 487 487 * may have added descriptors without ringing the doorbell. 488 488 */ 489 489 490 - /* Ensure tx descs from a prior gve_tx are visible before 491 - * ringing doorbell. 492 - */ 493 - dma_wmb(); 494 490 gve_tx_put_doorbell(priv, tx->q_resources, tx->req); 495 491 return NETDEV_TX_BUSY; 496 492 } ··· 501 505 if (!netif_xmit_stopped(tx->netdev_txq) && netdev_xmit_more()) 502 506 return NETDEV_TX_OK; 503 507 504 - /* Ensure tx descs are visible before ringing doorbell */ 505 - dma_wmb(); 506 508 gve_tx_put_doorbell(priv, tx->q_resources, tx->req); 507 509 return NETDEV_TX_OK; 508 510 }
+16
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
··· 122 122 #endif 123 123 }; 124 124 125 + #define MLX5E_TTC_NUM_GROUPS 3 126 + #define MLX5E_TTC_GROUP1_SIZE (BIT(3) + MLX5E_NUM_TUNNEL_TT) 127 + #define MLX5E_TTC_GROUP2_SIZE BIT(1) 128 + #define MLX5E_TTC_GROUP3_SIZE BIT(0) 129 + #define MLX5E_TTC_TABLE_SIZE (MLX5E_TTC_GROUP1_SIZE +\ 130 + MLX5E_TTC_GROUP2_SIZE +\ 131 + MLX5E_TTC_GROUP3_SIZE) 132 + 133 + #define MLX5E_INNER_TTC_NUM_GROUPS 3 134 + #define MLX5E_INNER_TTC_GROUP1_SIZE BIT(3) 135 + #define MLX5E_INNER_TTC_GROUP2_SIZE BIT(1) 136 + #define MLX5E_INNER_TTC_GROUP3_SIZE BIT(0) 137 + #define MLX5E_INNER_TTC_TABLE_SIZE (MLX5E_INNER_TTC_GROUP1_SIZE +\ 138 + MLX5E_INNER_TTC_GROUP2_SIZE +\ 139 + MLX5E_INNER_TTC_GROUP3_SIZE) 140 + 125 141 #ifdef CONFIG_MLX5_EN_RXNFC 126 142 127 143 struct mlx5e_ethtool_table {
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/en/health.c
··· 197 197 struct devlink_health_reporter *reporter, char *err_str, 198 198 struct mlx5e_err_ctx *err_ctx) 199 199 { 200 - if (!reporter) { 201 - netdev_err(priv->netdev, err_str); 200 + netdev_err(priv->netdev, err_str); 201 + 202 + if (!reporter) 202 203 return err_ctx->recover(&err_ctx->ctx); 203 - } 204 + 204 205 return devlink_health_report(reporter, err_str, err_ctx); 205 206 }
-16
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
··· 904 904 return err; 905 905 } 906 906 907 - #define MLX5E_TTC_NUM_GROUPS 3 908 - #define MLX5E_TTC_GROUP1_SIZE (BIT(3) + MLX5E_NUM_TUNNEL_TT) 909 - #define MLX5E_TTC_GROUP2_SIZE BIT(1) 910 - #define MLX5E_TTC_GROUP3_SIZE BIT(0) 911 - #define MLX5E_TTC_TABLE_SIZE (MLX5E_TTC_GROUP1_SIZE +\ 912 - MLX5E_TTC_GROUP2_SIZE +\ 913 - MLX5E_TTC_GROUP3_SIZE) 914 - 915 - #define MLX5E_INNER_TTC_NUM_GROUPS 3 916 - #define MLX5E_INNER_TTC_GROUP1_SIZE BIT(3) 917 - #define MLX5E_INNER_TTC_GROUP2_SIZE BIT(1) 918 - #define MLX5E_INNER_TTC_GROUP3_SIZE BIT(0) 919 - #define MLX5E_INNER_TTC_TABLE_SIZE (MLX5E_INNER_TTC_GROUP1_SIZE +\ 920 - MLX5E_INNER_TTC_GROUP2_SIZE +\ 921 - MLX5E_INNER_TTC_GROUP3_SIZE) 922 - 923 907 static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc, 924 908 bool use_ipv) 925 909 {
+58 -2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 592 592 for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) 593 593 ttc_params->indir_tirn[tt] = hp->indir_tirn[tt]; 594 594 595 - ft_attr->max_fte = MLX5E_NUM_TT; 595 + ft_attr->max_fte = MLX5E_TTC_TABLE_SIZE; 596 596 ft_attr->level = MLX5E_TC_TTC_FT_LEVEL; 597 597 ft_attr->prio = MLX5E_TC_PRIO; 598 598 } ··· 2999 2999 return kmemdup(tun_info, tun_size, GFP_KERNEL); 3000 3000 } 3001 3001 3002 + static bool is_duplicated_encap_entry(struct mlx5e_priv *priv, 3003 + struct mlx5e_tc_flow *flow, 3004 + int out_index, 3005 + struct mlx5e_encap_entry *e, 3006 + struct netlink_ext_ack *extack) 3007 + { 3008 + int i; 3009 + 3010 + for (i = 0; i < out_index; i++) { 3011 + if (flow->encaps[i].e != e) 3012 + continue; 3013 + NL_SET_ERR_MSG_MOD(extack, "can't duplicate encap action"); 3014 + netdev_err(priv->netdev, "can't duplicate encap action\n"); 3015 + return true; 3016 + } 3017 + 3018 + return false; 3019 + } 3020 + 3002 3021 static int mlx5e_attach_encap(struct mlx5e_priv *priv, 3003 3022 struct mlx5e_tc_flow *flow, 3004 3023 struct net_device *mirred_dev, ··· 3053 3034 3054 3035 /* must verify if encap is valid or not */ 3055 3036 if (e) { 3037 + /* Check that entry was not already attached to this flow */ 3038 + if (is_duplicated_encap_entry(priv, flow, out_index, e, extack)) { 3039 + err = -EOPNOTSUPP; 3040 + goto out_err; 3041 + } 3042 + 3056 3043 mutex_unlock(&esw->offloads.encap_tbl_lock); 3057 3044 wait_for_completion(&e->res_ready); 3058 3045 ··· 3245 3220 same_hw_devs(priv, netdev_priv(out_dev)); 3246 3221 } 3247 3222 3223 + static bool is_duplicated_output_device(struct net_device *dev, 3224 + struct net_device *out_dev, 3225 + int *ifindexes, int if_count, 3226 + struct netlink_ext_ack *extack) 3227 + { 3228 + int i; 3229 + 3230 + for (i = 0; i < if_count; i++) { 3231 + if (ifindexes[i] == out_dev->ifindex) { 3232 + NL_SET_ERR_MSG_MOD(extack, 3233 + "can't duplicate output to same device"); 3234 + netdev_err(dev, "can't duplicate output to same device: %s\n", 
3235 + out_dev->name); 3236 + return true; 3237 + } 3238 + } 3239 + 3240 + return false; 3241 + } 3242 + 3248 3243 static int parse_tc_fdb_actions(struct mlx5e_priv *priv, 3249 3244 struct flow_action *flow_action, 3250 3245 struct mlx5e_tc_flow *flow, ··· 3276 3231 struct mlx5e_tc_flow_parse_attr *parse_attr = attr->parse_attr; 3277 3232 struct mlx5e_rep_priv *rpriv = priv->ppriv; 3278 3233 const struct ip_tunnel_info *info = NULL; 3234 + int ifindexes[MLX5_MAX_FLOW_FWD_VPORTS]; 3279 3235 bool ft_flow = mlx5e_is_ft_flow(flow); 3280 3236 const struct flow_action_entry *act; 3237 + int err, i, if_count = 0; 3281 3238 bool encap = false; 3282 3239 u32 action = 0; 3283 - int err, i; 3284 3240 3285 3241 if (!flow_action_has_entries(flow_action)) 3286 3242 return -EINVAL; ··· 3357 3311 struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 3358 3312 struct net_device *uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH); 3359 3313 struct net_device *uplink_upper; 3314 + 3315 + if (is_duplicated_output_device(priv->netdev, 3316 + out_dev, 3317 + ifindexes, 3318 + if_count, 3319 + extack)) 3320 + return -EOPNOTSUPP; 3321 + 3322 + ifindexes[if_count] = out_dev->ifindex; 3323 + if_count++; 3360 3324 3361 3325 rcu_read_lock(); 3362 3326 uplink_upper =
+27 -67
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 531 531 } 532 532 } 533 533 534 - static void del_sw_fte_rcu(struct rcu_head *head) 535 - { 536 - struct fs_fte *fte = container_of(head, struct fs_fte, rcu); 537 - struct mlx5_flow_steering *steering = get_steering(&fte->node); 538 - 539 - kmem_cache_free(steering->ftes_cache, fte); 540 - } 541 - 542 534 static void del_sw_fte(struct fs_node *node) 543 535 { 536 + struct mlx5_flow_steering *steering = get_steering(node); 544 537 struct mlx5_flow_group *fg; 545 538 struct fs_fte *fte; 546 539 int err; ··· 546 553 rhash_fte); 547 554 WARN_ON(err); 548 555 ida_simple_remove(&fg->fte_allocator, fte->index - fg->start_index); 549 - 550 - call_rcu(&fte->rcu, del_sw_fte_rcu); 556 + kmem_cache_free(steering->ftes_cache, fte); 551 557 } 552 558 553 559 static void del_hw_flow_group(struct fs_node *node) ··· 1625 1633 } 1626 1634 1627 1635 static struct fs_fte * 1628 - lookup_fte_for_write_locked(struct mlx5_flow_group *g, const u32 *match_value) 1636 + lookup_fte_locked(struct mlx5_flow_group *g, 1637 + const u32 *match_value, 1638 + bool take_write) 1629 1639 { 1630 1640 struct fs_fte *fte_tmp; 1631 1641 1632 - nested_down_write_ref_node(&g->node, FS_LOCK_PARENT); 1633 - 1634 - fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value, rhash_fte); 1635 - if (!fte_tmp || !tree_get_node(&fte_tmp->node)) { 1636 - fte_tmp = NULL; 1637 - goto out; 1638 - } 1639 - 1640 - if (!fte_tmp->node.active) { 1641 - tree_put_node(&fte_tmp->node, false); 1642 - fte_tmp = NULL; 1643 - goto out; 1644 - } 1645 - nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD); 1646 - 1647 - out: 1648 - up_write_ref_node(&g->node, false); 1649 - return fte_tmp; 1650 - } 1651 - 1652 - static struct fs_fte * 1653 - lookup_fte_for_read_locked(struct mlx5_flow_group *g, const u32 *match_value) 1654 - { 1655 - struct fs_fte *fte_tmp; 1656 - 1657 - if (!tree_get_node(&g->node)) 1658 - return NULL; 1659 - 1660 - rcu_read_lock(); 1661 - fte_tmp = rhashtable_lookup(&g->ftes_hash, match_value, 
rhash_fte); 1662 - if (!fte_tmp || !tree_get_node(&fte_tmp->node)) { 1663 - rcu_read_unlock(); 1664 - fte_tmp = NULL; 1665 - goto out; 1666 - } 1667 - rcu_read_unlock(); 1668 - 1669 - if (!fte_tmp->node.active) { 1670 - tree_put_node(&fte_tmp->node, false); 1671 - fte_tmp = NULL; 1672 - goto out; 1673 - } 1674 - 1675 - nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD); 1676 - 1677 - out: 1678 - tree_put_node(&g->node, false); 1679 - return fte_tmp; 1680 - } 1681 - 1682 - static struct fs_fte * 1683 - lookup_fte_locked(struct mlx5_flow_group *g, const u32 *match_value, bool write) 1684 - { 1685 - if (write) 1686 - return lookup_fte_for_write_locked(g, match_value); 1642 + if (take_write) 1643 + nested_down_write_ref_node(&g->node, FS_LOCK_PARENT); 1687 1644 else 1688 - return lookup_fte_for_read_locked(g, match_value); 1645 + nested_down_read_ref_node(&g->node, FS_LOCK_PARENT); 1646 + fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value, 1647 + rhash_fte); 1648 + if (!fte_tmp || !tree_get_node(&fte_tmp->node)) { 1649 + fte_tmp = NULL; 1650 + goto out; 1651 + } 1652 + if (!fte_tmp->node.active) { 1653 + tree_put_node(&fte_tmp->node, false); 1654 + fte_tmp = NULL; 1655 + goto out; 1656 + } 1657 + 1658 + nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD); 1659 + out: 1660 + if (take_write) 1661 + up_write_ref_node(&g->node, false); 1662 + else 1663 + up_read_ref_node(&g->node); 1664 + return fte_tmp; 1689 1665 } 1690 1666 1691 1667 static struct mlx5_flow_handle *
-1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
··· 203 203 enum fs_fte_status status; 204 204 struct mlx5_fc *counter; 205 205 struct rhash_head hash; 206 - struct rcu_head rcu; 207 206 int modify_mask; 208 207 }; 209 208
+9 -7
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1193 1193 if (err) 1194 1194 goto err_load; 1195 1195 1196 + if (boot) { 1197 + err = mlx5_devlink_register(priv_to_devlink(dev), dev->device); 1198 + if (err) 1199 + goto err_devlink_reg; 1200 + } 1201 + 1196 1202 if (mlx5_device_registered(dev)) { 1197 1203 mlx5_attach_device(dev); 1198 1204 } else { ··· 1216 1210 return err; 1217 1211 1218 1212 err_reg_dev: 1213 + if (boot) 1214 + mlx5_devlink_unregister(priv_to_devlink(dev)); 1215 + err_devlink_reg: 1219 1216 mlx5_unload(dev); 1220 1217 err_load: 1221 1218 if (boot) ··· 1356 1347 1357 1348 request_module_nowait(MLX5_IB_MOD); 1358 1349 1359 - err = mlx5_devlink_register(devlink, &pdev->dev); 1360 - if (err) 1361 - goto clean_load; 1362 - 1363 1350 err = mlx5_crdump_enable(dev); 1364 1351 if (err) 1365 1352 dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err); 1366 1353 1367 1354 pci_save_state(pdev); 1368 1355 return 0; 1369 - 1370 - clean_load: 1371 - mlx5_unload_one(dev, true); 1372 1356 1373 1357 err_load_one: 1374 1358 mlx5_pci_close(dev);
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
··· 209 209 /* We need to copy the refcount since this ste 210 210 * may have been traversed several times 211 211 */ 212 - refcount_set(&new_ste->refcount, refcount_read(&cur_ste->refcount)); 212 + new_ste->refcount = cur_ste->refcount; 213 213 214 214 /* Link old STEs rule_mem list to the new ste */ 215 215 mlx5dr_rule_update_rule_member(cur_ste, new_ste); ··· 637 637 rule_mem = kvzalloc(sizeof(*rule_mem), GFP_KERNEL); 638 638 if (!rule_mem) 639 639 return -ENOMEM; 640 + 641 + INIT_LIST_HEAD(&rule_mem->list); 642 + INIT_LIST_HEAD(&rule_mem->use_ste_list); 640 643 641 644 rule_mem->ste = ste; 642 645 list_add_tail(&rule_mem->list, &nic_rule->rule_members_list);
+5 -5
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
··· 348 348 if (dst->next_htbl) 349 349 dst->next_htbl->pointing_ste = dst; 350 350 351 - refcount_set(&dst->refcount, refcount_read(&src->refcount)); 351 + dst->refcount = src->refcount; 352 352 353 353 INIT_LIST_HEAD(&dst->rule_list); 354 354 list_splice_tail_init(&src->rule_list, &dst->rule_list); ··· 565 565 566 566 bool mlx5dr_ste_not_used_ste(struct mlx5dr_ste *ste) 567 567 { 568 - return !refcount_read(&ste->refcount); 568 + return !ste->refcount; 569 569 } 570 570 571 571 /* Init one ste as a pattern for ste data array */ ··· 689 689 htbl->ste_arr = chunk->ste_arr; 690 690 htbl->hw_ste_arr = chunk->hw_ste_arr; 691 691 htbl->miss_list = chunk->miss_list; 692 - refcount_set(&htbl->refcount, 0); 692 + htbl->refcount = 0; 693 693 694 694 for (i = 0; i < chunk->num_of_entries; i++) { 695 695 struct mlx5dr_ste *ste = &htbl->ste_arr[i]; 696 696 697 697 ste->hw_ste = htbl->hw_ste_arr + i * DR_STE_SIZE_REDUCED; 698 698 ste->htbl = htbl; 699 - refcount_set(&ste->refcount, 0); 699 + ste->refcount = 0; 700 700 INIT_LIST_HEAD(&ste->miss_list_node); 701 701 INIT_LIST_HEAD(&htbl->miss_list[i]); 702 702 INIT_LIST_HEAD(&ste->rule_list); ··· 713 713 714 714 int mlx5dr_ste_htbl_free(struct mlx5dr_ste_htbl *htbl) 715 715 { 716 - if (refcount_read(&htbl->refcount)) 716 + if (htbl->refcount) 717 717 return -EBUSY; 718 718 719 719 mlx5dr_icm_free_chunk(htbl->chunk);
+8 -6
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
··· 123 123 struct mlx5dr_ste { 124 124 u8 *hw_ste; 125 125 /* refcount: indicates the num of rules that using this ste */ 126 - refcount_t refcount; 126 + u32 refcount; 127 127 128 128 /* attached to the miss_list head at each htbl entry */ 129 129 struct list_head miss_list_node; ··· 155 155 struct mlx5dr_ste_htbl { 156 156 u8 lu_type; 157 157 u16 byte_mask; 158 - refcount_t refcount; 158 + u32 refcount; 159 159 struct mlx5dr_icm_chunk *chunk; 160 160 struct mlx5dr_ste *ste_arr; 161 161 u8 *hw_ste_arr; ··· 206 206 207 207 static inline void mlx5dr_htbl_put(struct mlx5dr_ste_htbl *htbl) 208 208 { 209 - if (refcount_dec_and_test(&htbl->refcount)) 209 + htbl->refcount--; 210 + if (!htbl->refcount) 210 211 mlx5dr_ste_htbl_free(htbl); 211 212 } 212 213 213 214 static inline void mlx5dr_htbl_get(struct mlx5dr_ste_htbl *htbl) 214 215 { 215 - refcount_inc(&htbl->refcount); 216 + htbl->refcount++; 216 217 } 217 218 218 219 /* STE utils */ ··· 255 254 struct mlx5dr_matcher *matcher, 256 255 struct mlx5dr_matcher_rx_tx *nic_matcher) 257 256 { 258 - if (refcount_dec_and_test(&ste->refcount)) 257 + ste->refcount--; 258 + if (!ste->refcount) 259 259 mlx5dr_ste_free(ste, matcher, nic_matcher); 260 260 } 261 261 262 262 /* initial as 0, increased only when ste appears in a new rule */ 263 263 static inline void mlx5dr_ste_get(struct mlx5dr_ste *ste) 264 264 { 265 - refcount_inc(&ste->refcount); 265 + ste->refcount++; 266 266 } 267 267 268 268 void mlx5dr_ste_set_hit_addr_by_next_htbl(u8 *hw_ste,
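The dr_types.h hunk above converts `refcount_t` to a plain `u32` because every get/put of these STE objects already runs under an outer lock, making atomic refcounting pure overhead. A minimal sketch of that pattern, with illustrative names (the `freed_flag` member is only a test hook, not part of the kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Non-atomic refcounting sketch: correct only if every ste_get()/
 * ste_put() is serialized by an external lock (here, the steering
 * domain mutex). Names are illustrative stand-ins. */
struct ste {
	unsigned int refcount;
	bool *freed_flag;	/* test hook: set when the object is freed */
};

/* Caller is assumed to hold the external domain lock. */
static void ste_get(struct ste *ste)
{
	ste->refcount++;
}

static void ste_put(struct ste *ste)
{
	ste->refcount--;
	if (!ste->refcount) {
		*ste->freed_flag = true;
		free(ste);
	}
}
```

The trade-off is the usual one: a plain integer drops the atomic-op cost but silently breaks if any unlocked path ever touches the count.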
+4 -3
drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2.c
··· 6 6 #include <linux/kernel.h> 7 7 #include <linux/module.h> 8 8 #include <linux/netlink.h> 9 + #include <linux/vmalloc.h> 9 10 #include <linux/xz.h> 10 11 #include "mlxfw_mfa2.h" 11 12 #include "mlxfw_mfa2_file.h" ··· 549 548 comp_size = be32_to_cpu(comp->size); 550 549 comp_buf_size = comp_size + mlxfw_mfa2_comp_magic_len; 551 550 552 - comp_data = kmalloc(sizeof(*comp_data) + comp_buf_size, GFP_KERNEL); 551 + comp_data = vzalloc(sizeof(*comp_data) + comp_buf_size); 553 552 if (!comp_data) 554 553 return ERR_PTR(-ENOMEM); 555 554 comp_data->comp.data_size = comp_size; ··· 571 570 comp_data->comp.data = comp_data->buff + mlxfw_mfa2_comp_magic_len; 572 571 return &comp_data->comp; 573 572 err_out: 574 - kfree(comp_data); 573 + vfree(comp_data); 575 574 return ERR_PTR(err); 576 575 } 577 576 ··· 580 579 const struct mlxfw_mfa2_comp_data *comp_data; 581 580 582 581 comp_data = container_of(comp, struct mlxfw_mfa2_comp_data, comp); 583 - kfree(comp_data); 582 + vfree(comp_data); 584 583 } 585 584 586 585 void mlxfw_mfa2_file_fini(struct mlxfw_mfa2_file *mfa2_file)
+1
drivers/net/ethernet/mellanox/mlxsw/reg.h
··· 5472 5472 MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR, 5473 5473 MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0, 5474 5474 MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1, 5475 + MLXSW_REG_HTGT_TRAP_GROUP_SP_VRRP, 5475 5476 5476 5477 __MLXSW_REG_HTGT_TRAP_GROUP_MAX, 5477 5478 MLXSW_REG_HTGT_TRAP_GROUP_MAX = __MLXSW_REG_HTGT_TRAP_GROUP_MAX - 1
+7 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 4542 4542 MLXSW_SP_RXL_MARK(ROUTER_ALERT_IPV6, TRAP_TO_CPU, ROUTER_EXP, false), 4543 4543 MLXSW_SP_RXL_MARK(IPIP_DECAP_ERROR, TRAP_TO_CPU, ROUTER_EXP, false), 4544 4544 MLXSW_SP_RXL_MARK(DECAP_ECN0, TRAP_TO_CPU, ROUTER_EXP, false), 4545 - MLXSW_SP_RXL_MARK(IPV4_VRRP, TRAP_TO_CPU, ROUTER_EXP, false), 4546 - MLXSW_SP_RXL_MARK(IPV6_VRRP, TRAP_TO_CPU, ROUTER_EXP, false), 4545 + MLXSW_SP_RXL_MARK(IPV4_VRRP, TRAP_TO_CPU, VRRP, false), 4546 + MLXSW_SP_RXL_MARK(IPV6_VRRP, TRAP_TO_CPU, VRRP, false), 4547 4547 /* PKT Sample trap */ 4548 4548 MLXSW_RXL(mlxsw_sp_rx_listener_sample_func, PKT_SAMPLE, MIRROR_TO_CPU, 4549 4549 false, SP_IP2ME, DISCARD), ··· 4626 4626 rate = 19 * 1024; 4627 4627 burst_size = 12; 4628 4628 break; 4629 + case MLXSW_REG_HTGT_TRAP_GROUP_SP_VRRP: 4630 + rate = 360; 4631 + burst_size = 7; 4632 + break; 4629 4633 default: 4630 4634 continue; 4631 4635 } ··· 4669 4665 case MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF: 4670 4666 case MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM: 4671 4667 case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0: 4668 + case MLXSW_REG_HTGT_TRAP_GROUP_SP_VRRP: 4672 4669 priority = 5; 4673 4670 tc = 5; 4674 4671 break;
+7
drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
··· 651 651 mlxsw_sp_port->tclass_qdiscs[tclass_num].handle == p->child_handle) 652 652 return 0; 653 653 654 + if (!p->child_handle) { 655 + /* This is an invisible FIFO replacing the original Qdisc. 656 + * Ignore it--the original Qdisc's destroy will follow. 657 + */ 658 + return 0; 659 + } 660 + 654 661 /* See if the grafted qdisc is already offloaded on any tclass. If so, 655 662 * unoffload it. 656 663 */
+3
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 7079 7079 7080 7080 for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); i++) { 7081 7081 rif = mlxsw_sp->router->rifs[i]; 7082 + if (rif && rif->ops && 7083 + rif->ops->type == MLXSW_SP_RIF_TYPE_IPIP_LB) 7084 + continue; 7082 7085 if (rif && rif->dev && rif->dev != dev && 7083 7086 !ether_addr_equal_masked(rif->dev->dev_addr, dev_addr, 7084 7087 mlxsw_sp->mac_mask)) {
+1 -1
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
··· 2296 2296 2297 2297 2298 2298 2299 - MODULE_DESCRIPTION("SAMSUNG 10G/2.5G/1G Ethernet PLATFORM driver"); 2299 + MODULE_DESCRIPTION("Samsung 10G/2.5G/1G Ethernet PLATFORM driver"); 2300 2300 2301 2301 MODULE_PARM_DESC(debug, "Message Level (-1: default, 0: no output, 16: all)"); 2302 2302 MODULE_PARM_DESC(eee_timer, "EEE-LPI Default LS timer value");
+11 -3
drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
··· 112 112 struct device *dev = dwmac->dev; 113 113 const char *parent_name, *mux_parent_names[MUX_CLK_NUM_PARENTS]; 114 114 struct meson8b_dwmac_clk_configs *clk_configs; 115 + static const struct clk_div_table div_table[] = { 116 + { .div = 2, .val = 2, }, 117 + { .div = 3, .val = 3, }, 118 + { .div = 4, .val = 4, }, 119 + { .div = 5, .val = 5, }, 120 + { .div = 6, .val = 6, }, 121 + { .div = 7, .val = 7, }, 122 + }; 115 123 116 124 clk_configs = devm_kzalloc(dev, sizeof(*clk_configs), GFP_KERNEL); 117 125 if (!clk_configs) ··· 154 146 clk_configs->m250_div.reg = dwmac->regs + PRG_ETH0; 155 147 clk_configs->m250_div.shift = PRG_ETH0_CLK_M250_DIV_SHIFT; 156 148 clk_configs->m250_div.width = PRG_ETH0_CLK_M250_DIV_WIDTH; 157 - clk_configs->m250_div.flags = CLK_DIVIDER_ONE_BASED | 158 - CLK_DIVIDER_ALLOW_ZERO | 159 - CLK_DIVIDER_ROUND_CLOSEST; 149 + clk_configs->m250_div.table = div_table; 150 + clk_configs->m250_div.flags = CLK_DIVIDER_ALLOW_ZERO | 151 + CLK_DIVIDER_ROUND_CLOSEST; 160 152 clk = meson8b_dwmac_register_clk(dwmac, "m250_div", &parent_name, 1, 161 153 &clk_divider_ops, 162 154 &clk_configs->m250_div.hw);
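The dwmac-meson8b hunk above replaces a one-based divider with an explicit `clk_div_table` restricting the m250 divisor to 2..7. A sketch of what such a table buys under round-closest selection: only listed divisors are candidates, and the entry minimizing the rate error wins. The selection semantics and names here are assumptions for illustration, not the clk framework's code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative divider-table lookup: mirrors the 2..7 table added for
 * the m250 divider; round-closest semantics are assumed. */
struct div_entry { unsigned int div; unsigned int val; };

static const struct div_entry m250_div_table[] = {
	{ 2, 2 }, { 3, 3 }, { 4, 4 }, { 5, 5 }, { 6, 6 }, { 7, 7 },
};

static unsigned int pick_div(unsigned long parent_rate, unsigned long rate)
{
	unsigned int best_div = m250_div_table[0].div;
	unsigned long best_err = (unsigned long)-1;
	size_t i;

	for (i = 0; i < sizeof(m250_div_table) / sizeof(m250_div_table[0]); i++) {
		unsigned long got = parent_rate / m250_div_table[i].div;
		unsigned long err = got > rate ? got - rate : rate - got;

		if (err < best_err) {
			best_err = err;
			best_div = m250_div_table[i].div;
		}
	}
	return best_div;
}
```

Divisor 1 is simply absent from the table, so it can never be chosen, which is the point of the change.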
+3
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
··· 957 957 /* default */ 958 958 break; 959 959 case PHY_INTERFACE_MODE_RGMII: 960 + case PHY_INTERFACE_MODE_RGMII_ID: 961 + case PHY_INTERFACE_MODE_RGMII_RXID: 962 + case PHY_INTERFACE_MODE_RGMII_TXID: 960 963 reg |= SYSCON_EPIT | SYSCON_ETCS_INT_GMII; 961 964 break; 962 965 case PHY_INTERFACE_MODE_RMII:
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
··· 44 44 * rate, which then uses the auto-reparenting feature of the 45 45 * clock driver, and enabling/disabling the clock. 46 46 */ 47 - if (gmac->interface == PHY_INTERFACE_MODE_RGMII) { 47 + if (phy_interface_mode_is_rgmii(gmac->interface)) { 48 48 clk_set_rate(gmac->tx_clk, SUN7I_GMAC_GMII_RGMII_RATE); 49 49 clk_prepare_enable(gmac->tx_clk); 50 50 gmac->clk_enabled = 1;
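The dwmac-sunxi change above swaps an exact `PHY_INTERFACE_MODE_RGMII` comparison for `phy_interface_mode_is_rgmii()`, so the delayed variants take the same clock path. A sketch of that helper's idea, with illustrative enum values (not the kernel's `phy_interface_t`):

```c
#include <assert.h>
#include <stdbool.h>

/* All four RGMII variants (plain, and with RX/TX/both internal
 * delays) must be treated alike for clocking purposes. */
enum phy_mode {
	MODE_MII,
	MODE_RMII,
	MODE_RGMII,		/* keep the RGMII variants contiguous */
	MODE_RGMII_ID,
	MODE_RGMII_RXID,
	MODE_RGMII_TXID,
};

static bool mode_is_rgmii(enum phy_mode mode)
{
	return mode >= MODE_RGMII && mode <= MODE_RGMII_TXID;
}
```

The range check is why the variants must stay contiguous in the enum; adding a new RGMII mode outside that range would silently reintroduce the bug.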
+32
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 106 106 static irqreturn_t stmmac_interrupt(int irq, void *dev_id); 107 107 108 108 #ifdef CONFIG_DEBUG_FS 109 + static const struct net_device_ops stmmac_netdev_ops; 109 110 static void stmmac_init_fs(struct net_device *dev); 110 111 static void stmmac_exit_fs(struct net_device *dev); 111 112 #endif ··· 4257 4256 } 4258 4257 DEFINE_SHOW_ATTRIBUTE(stmmac_dma_cap); 4259 4258 4259 + /* Use network device events to rename debugfs file entries. 4260 + */ 4261 + static int stmmac_device_event(struct notifier_block *unused, 4262 + unsigned long event, void *ptr) 4263 + { 4264 + struct net_device *dev = netdev_notifier_info_to_dev(ptr); 4265 + struct stmmac_priv *priv = netdev_priv(dev); 4266 + 4267 + if (dev->netdev_ops != &stmmac_netdev_ops) 4268 + goto done; 4269 + 4270 + switch (event) { 4271 + case NETDEV_CHANGENAME: 4272 + if (priv->dbgfs_dir) 4273 + priv->dbgfs_dir = debugfs_rename(stmmac_fs_dir, 4274 + priv->dbgfs_dir, 4275 + stmmac_fs_dir, 4276 + dev->name); 4277 + break; 4278 + } 4279 + done: 4280 + return NOTIFY_DONE; 4281 + } 4282 + 4283 + static struct notifier_block stmmac_notifier = { 4284 + .notifier_call = stmmac_device_event, 4285 + }; 4286 + 4260 4287 static void stmmac_init_fs(struct net_device *dev) 4261 4288 { 4262 4289 struct stmmac_priv *priv = netdev_priv(dev); ··· 4299 4270 /* Entry to report the DMA HW features */ 4300 4271 debugfs_create_file("dma_cap", 0444, priv->dbgfs_dir, dev, 4301 4272 &stmmac_dma_cap_fops); 4273 + 4274 + register_netdevice_notifier(&stmmac_notifier); 4302 4275 } 4303 4276 4304 4277 static void stmmac_exit_fs(struct net_device *dev) 4305 4278 { 4306 4279 struct stmmac_priv *priv = netdev_priv(dev); 4307 4280 4281 + unregister_netdevice_notifier(&stmmac_notifier); 4308 4282 debugfs_remove_recursive(priv->dbgfs_dir); 4309 4283 } 4310 4284 #endif /* CONFIG_DEBUG_FS */
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 320 320 static int stmmac_dt_phy(struct plat_stmmacenet_data *plat, 321 321 struct device_node *np, struct device *dev) 322 322 { 323 - bool mdio = false; 323 + bool mdio = !of_phy_is_fixed_link(np); 324 324 static const struct of_device_id need_mdio_ids[] = { 325 325 { .compatible = "snps,dwc-qos-ethernet-4.10" }, 326 326 {},
+4 -3
drivers/net/gtp.c
··· 540 540 mtu = dst_mtu(&rt->dst); 541 541 } 542 542 543 - rt->dst.ops->update_pmtu(&rt->dst, NULL, skb, mtu); 543 + rt->dst.ops->update_pmtu(&rt->dst, NULL, skb, mtu, false); 544 544 545 545 if (!skb_is_gso(skb) && (iph->frag_off & htons(IP_DF)) && 546 546 mtu < ntohs(iph->tot_len)) { ··· 813 813 lock_sock(sock->sk); 814 814 if (sock->sk->sk_user_data) { 815 815 sk = ERR_PTR(-EBUSY); 816 - goto out_sock; 816 + goto out_rel_sock; 817 817 } 818 818 819 819 sk = sock->sk; ··· 826 826 827 827 setup_udp_tunnel_sock(sock_net(sock->sk), sock, &tuncfg); 828 828 829 - out_sock: 829 + out_rel_sock: 830 830 release_sock(sock->sk); 831 + out_sock: 831 832 sockfd_put(sock); 832 833 return sk; 833 834 }
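The gtp.c hunk above adds an `out_rel_sock` label so a failure after `lock_sock()` releases the lock before `sockfd_put()`. A contrived but runnable sketch of that unwind-ordering rule, with illustrative names: cleanup labels undo resources in reverse order of acquisition.

```c
#include <assert.h>
#include <stdbool.h>

/* Record the order in which the two resources are released. */
static int release_order[2];
static int n_released;

static void release_lock(void)	{ release_order[n_released++] = 1; }
static void put_fd(void)	{ release_order[n_released++] = 2; }

static int attach_socket(bool already_bound)
{
	/* fd reference taken first, then the socket lock ... */
	if (already_bound)
		goto out_rel_sock;
	release_lock();
	put_fd();
	return 0;

out_rel_sock:
	release_lock();	/* undo the most recent acquisition first */
	put_fd();	/* then the earlier one */
	return -1;
}
```

Before the fix, the busy-path jumped past the lock release, leaving the socket locked on error.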
+1 -1
drivers/net/macvlan.c
··· 259 259 struct net_device *src, 260 260 enum macvlan_mode mode) 261 261 { 262 - const struct ethhdr *eth = eth_hdr(skb); 262 + const struct ethhdr *eth = skb_eth_hdr(skb); 263 263 const struct macvlan_dev *vlan; 264 264 struct sk_buff *nskb; 265 265 unsigned int i;
+2
drivers/net/phy/aquantia_main.c
··· 627 627 .config_intr = aqr_config_intr, 628 628 .ack_interrupt = aqr_ack_interrupt, 629 629 .read_status = aqr_read_status, 630 + .suspend = aqr107_suspend, 631 + .resume = aqr107_resume, 630 632 }, 631 633 { 632 634 PHY_ID_MATCH_MODEL(PHY_ID_AQR106),
+3
drivers/net/phy/phylink.c
··· 566 566 struct sfp_bus *bus; 567 567 int ret; 568 568 569 + if (!fwnode) 570 + return 0; 571 + 569 572 bus = sfp_bus_find_fwnode(fwnode); 570 573 if (IS_ERR(bus)) { 571 574 ret = PTR_ERR(bus);
+3 -6
drivers/net/usb/lan78xx.c
··· 2724 2724 return 0; 2725 2725 } 2726 2726 2727 - static int lan78xx_linearize(struct sk_buff *skb) 2728 - { 2729 - return skb_linearize(skb); 2730 - } 2731 - 2732 2727 static struct sk_buff *lan78xx_tx_prep(struct lan78xx_net *dev, 2733 2728 struct sk_buff *skb, gfp_t flags) 2734 2729 { ··· 2735 2740 return NULL; 2736 2741 } 2737 2742 2738 - if (lan78xx_linearize(skb) < 0) 2743 + if (skb_linearize(skb)) { 2744 + dev_kfree_skb_any(skb); 2739 2745 return NULL; 2746 + } 2740 2747 2741 2748 tx_cmd_a = (u32)(skb->len & TX_CMD_A_LEN_MASK_) | TX_CMD_A_FCS_; 2742 2749
+2 -2
drivers/net/vxlan.c
··· 2541 2541 ndst = &rt->dst; 2542 2542 skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM); 2543 2543 2544 - tos = ip_tunnel_ecn_encap(tos, old_iph, skb); 2544 + tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb); 2545 2545 ttl = ttl ? : ip4_dst_hoplimit(&rt->dst); 2546 2546 err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr), 2547 2547 vni, md, flags, udp_sum); ··· 2581 2581 2582 2582 skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM); 2583 2583 2584 - tos = ip_tunnel_ecn_encap(tos, old_iph, skb); 2584 + tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb); 2585 2585 ttl = ttl ? : ip6_dst_hoplimit(ndst); 2586 2586 skb_scrub_packet(skb, xnet); 2587 2587 err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
+1 -1
drivers/net/wan/sdla.c
··· 708 708 709 709 spin_lock_irqsave(&sdla_lock, flags); 710 710 SDLA_WINDOW(dev, addr); 711 - pbuf = (void *)(((int) dev->mem_start) + (addr & SDLA_ADDR_MASK)); 711 + pbuf = (void *)(dev->mem_start + (addr & SDLA_ADDR_MASK)); 712 712 __sdla_write(dev, pbuf->buf_addr, skb->data, skb->len); 713 713 SDLA_WINDOW(dev, addr); 714 714 pbuf->opp_flag = 1;
+2
drivers/nvme/host/core.c
··· 222 222 case NVME_SC_CAP_EXCEEDED: 223 223 return BLK_STS_NOSPC; 224 224 case NVME_SC_LBA_RANGE: 225 + case NVME_SC_CMD_INTERRUPTED: 226 + case NVME_SC_NS_NOT_READY: 225 227 return BLK_STS_TARGET; 226 228 case NVME_SC_BAD_ATTRIBUTES: 227 229 case NVME_SC_ONCS_NOT_SUPPORTED:
+11 -1
drivers/nvme/target/admin-cmd.c
··· 24 24 return len; 25 25 } 26 26 27 + static u32 nvmet_feat_data_len(struct nvmet_req *req, u32 cdw10) 28 + { 29 + switch (cdw10 & 0xff) { 30 + case NVME_FEAT_HOST_ID: 31 + return sizeof(req->sq->ctrl->hostid); 32 + default: 33 + return 0; 34 + } 35 + } 36 + 27 37 u64 nvmet_get_log_page_offset(struct nvme_command *cmd) 28 38 { 29 39 return le64_to_cpu(cmd->get_log_page.lpo); ··· 788 778 u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10); 789 779 u16 status = 0; 790 780 791 - if (!nvmet_check_data_len(req, 0)) 781 + if (!nvmet_check_data_len(req, nvmet_feat_data_len(req, cdw10))) 792 782 return; 793 783 794 784 switch (cdw10 & 0xff) {
+90 -38
drivers/phy/motorola/phy-cpcap-usb.c
··· 115 115 enum cpcap_gpio_mode { 116 116 CPCAP_DM_DP, 117 117 CPCAP_MDM_RX_TX, 118 - CPCAP_UNKNOWN, 118 + CPCAP_UNKNOWN_DISABLED, /* Seems to disable USB lines */ 119 119 CPCAP_OTG_DM_DP, 120 120 }; 121 121 ··· 134 134 struct iio_channel *id; 135 135 struct regulator *vusb; 136 136 atomic_t active; 137 + unsigned int vbus_provider:1; 138 + unsigned int docked:1; 137 139 }; 138 140 139 141 static bool cpcap_usb_vbus_valid(struct cpcap_phy_ddata *ddata) ··· 209 207 static int cpcap_usb_set_uart_mode(struct cpcap_phy_ddata *ddata); 210 208 static int cpcap_usb_set_usb_mode(struct cpcap_phy_ddata *ddata); 211 209 210 + static void cpcap_usb_try_musb_mailbox(struct cpcap_phy_ddata *ddata, 211 + enum musb_vbus_id_status status) 212 + { 213 + int error; 214 + 215 + error = musb_mailbox(status); 216 + if (!error) 217 + return; 218 + 219 + dev_dbg(ddata->dev, "%s: musb_mailbox failed: %i\n", 220 + __func__, error); 221 + } 222 + 212 223 static void cpcap_usb_detect(struct work_struct *work) 213 224 { 214 225 struct cpcap_phy_ddata *ddata; ··· 235 220 if (error) 236 221 return; 237 222 238 - if (s.id_ground) { 239 - dev_dbg(ddata->dev, "id ground, USB host mode\n"); 223 + vbus = cpcap_usb_vbus_valid(ddata); 224 + 225 + /* We need to kick the VBUS as USB A-host */ 226 + if (s.id_ground && ddata->vbus_provider) { 227 + dev_dbg(ddata->dev, "still in USB A-host mode, kicking VBUS\n"); 228 + 229 + cpcap_usb_try_musb_mailbox(ddata, MUSB_ID_GROUND); 230 + 231 + error = regmap_update_bits(ddata->reg, CPCAP_REG_USBC3, 232 + CPCAP_BIT_VBUSSTBY_EN | 233 + CPCAP_BIT_VBUSEN_SPI, 234 + CPCAP_BIT_VBUSEN_SPI); 235 + if (error) 236 + goto out_err; 237 + 238 + return; 239 + } 240 + 241 + if (vbus && s.id_ground && ddata->docked) { 242 + dev_dbg(ddata->dev, "still docked as A-host, signal ID down\n"); 243 + 244 + cpcap_usb_try_musb_mailbox(ddata, MUSB_ID_GROUND); 245 + 246 + return; 247 + } 248 + 249 + /* No VBUS needed with docks */ 250 + if (vbus && s.id_ground && !ddata->vbus_provider) { 
251 + dev_dbg(ddata->dev, "connected to a dock\n"); 252 + 253 + ddata->docked = true; 254 + 240 255 error = cpcap_usb_set_usb_mode(ddata); 241 256 if (error) 242 257 goto out_err; 243 258 244 - error = musb_mailbox(MUSB_ID_GROUND); 259 + cpcap_usb_try_musb_mailbox(ddata, MUSB_ID_GROUND); 260 + 261 + /* 262 + * Force check state again after musb has reoriented, 263 + * otherwise devices won't enumerate after loading PHY 264 + * driver. 265 + */ 266 + schedule_delayed_work(&ddata->detect_work, 267 + msecs_to_jiffies(1000)); 268 + 269 + return; 270 + } 271 + 272 + if (s.id_ground && !ddata->docked) { 273 + dev_dbg(ddata->dev, "id ground, USB host mode\n"); 274 + 275 + ddata->vbus_provider = true; 276 + 277 + error = cpcap_usb_set_usb_mode(ddata); 245 278 if (error) 246 279 goto out_err; 280 + 281 + cpcap_usb_try_musb_mailbox(ddata, MUSB_ID_GROUND); 247 282 248 283 error = regmap_update_bits(ddata->reg, CPCAP_REG_USBC3, 249 284 CPCAP_BIT_VBUSSTBY_EN | ··· 313 248 314 249 vbus = cpcap_usb_vbus_valid(ddata); 315 250 251 + /* Otherwise assume we're connected to a USB host */ 316 252 if (vbus) { 317 - /* Are we connected to a docking station with vbus? 
*/ 318 - if (s.id_ground) { 319 - dev_dbg(ddata->dev, "connected to a dock\n"); 320 - 321 - /* No VBUS needed with docks */ 322 - error = cpcap_usb_set_usb_mode(ddata); 323 - if (error) 324 - goto out_err; 325 - error = musb_mailbox(MUSB_ID_GROUND); 326 - if (error) 327 - goto out_err; 328 - 329 - return; 330 - } 331 - 332 - /* Otherwise assume we're connected to a USB host */ 333 253 dev_dbg(ddata->dev, "connected to USB host\n"); 334 254 error = cpcap_usb_set_usb_mode(ddata); 335 255 if (error) 336 256 goto out_err; 337 - error = musb_mailbox(MUSB_VBUS_VALID); 338 - if (error) 339 - goto out_err; 257 + cpcap_usb_try_musb_mailbox(ddata, MUSB_VBUS_VALID); 340 258 341 259 return; 342 260 } 343 261 262 + ddata->vbus_provider = false; 263 + ddata->docked = false; 264 + cpcap_usb_try_musb_mailbox(ddata, MUSB_VBUS_OFF); 265 + 344 266 /* Default to debug UART mode */ 345 267 error = cpcap_usb_set_uart_mode(ddata); 346 - if (error) 347 - goto out_err; 348 - 349 - error = musb_mailbox(MUSB_VBUS_OFF); 350 268 if (error) 351 269 goto out_err; 352 270 ··· 424 376 { 425 377 int error; 426 378 427 - error = cpcap_usb_gpio_set_mode(ddata, CPCAP_DM_DP); 379 + /* Disable lines to prevent glitches from waking up mdm6600 */ 380 + error = cpcap_usb_gpio_set_mode(ddata, CPCAP_UNKNOWN_DISABLED); 428 381 if (error) 429 382 goto out_err; 430 383 ··· 452 403 if (error) 453 404 goto out_err; 454 405 406 + /* Enable UART mode */ 407 + error = cpcap_usb_gpio_set_mode(ddata, CPCAP_DM_DP); 408 + if (error) 409 + goto out_err; 410 + 455 411 return 0; 456 412 457 413 out_err: ··· 469 415 { 470 416 int error; 471 417 472 - error = cpcap_usb_gpio_set_mode(ddata, CPCAP_OTG_DM_DP); 418 + /* Disable lines to prevent glitches from waking up mdm6600 */ 419 + error = cpcap_usb_gpio_set_mode(ddata, CPCAP_UNKNOWN_DISABLED); 473 420 if (error) 474 421 return error; 475 422 ··· 489 434 if (error) 490 435 goto out_err; 491 436 492 - error = regmap_update_bits(ddata->reg, CPCAP_REG_USBC2, 493 - 
CPCAP_BIT_USBXCVREN, 494 - CPCAP_BIT_USBXCVREN); 495 - if (error) 496 - goto out_err; 497 - 498 437 error = regmap_update_bits(ddata->reg, CPCAP_REG_USBC3, 499 438 CPCAP_BIT_PU_SPI | 500 439 CPCAP_BIT_DMPD_SPI | ··· 501 452 error = regmap_update_bits(ddata->reg, CPCAP_REG_USBC2, 502 453 CPCAP_BIT_USBXCVREN, 503 454 CPCAP_BIT_USBXCVREN); 455 + if (error) 456 + goto out_err; 457 + 458 + /* Enable USB mode */ 459 + error = cpcap_usb_gpio_set_mode(ddata, CPCAP_OTG_DM_DP); 504 460 if (error) 505 461 goto out_err; 506 462 ··· 703 649 if (error) 704 650 dev_err(ddata->dev, "could not set UART mode\n"); 705 651 706 - error = musb_mailbox(MUSB_VBUS_OFF); 707 - if (error) 708 - dev_err(ddata->dev, "could not set mailbox\n"); 652 + cpcap_usb_try_musb_mailbox(ddata, MUSB_VBUS_OFF); 709 653 710 654 usb_remove_phy(&ddata->phy); 711 655 cancel_delayed_work_sync(&ddata->detect_work);
+3 -8
drivers/phy/motorola/phy-mapphone-mdm6600.c
··· 200 200 struct phy_mdm6600 *ddata; 201 201 struct device *dev; 202 202 DECLARE_BITMAP(values, PHY_MDM6600_NR_STATUS_LINES); 203 - int error, i, val = 0; 203 + int error; 204 204 205 205 ddata = container_of(work, struct phy_mdm6600, status_work.work); 206 206 dev = ddata->dev; ··· 212 212 if (error) 213 213 return; 214 214 215 - for (i = 0; i < PHY_MDM6600_NR_STATUS_LINES; i++) { 216 - val |= test_bit(i, values) << i; 217 - dev_dbg(ddata->dev, "XXX %s: i: %i values[i]: %i val: %i\n", 218 - __func__, i, test_bit(i, values), val); 219 - } 220 - ddata->status = values[0]; 215 + ddata->status = values[0] & ((1 << PHY_MDM6600_NR_STATUS_LINES) - 1); 221 216 222 217 dev_info(dev, "modem status: %i %s\n", 223 218 ddata->status, 224 - phy_mdm6600_status_name[ddata->status & 7]); 219 + phy_mdm6600_status_name[ddata->status]); 225 220 complete(&ddata->ack); 226 221 } 227 222
+1 -1
drivers/phy/qualcomm/phy-qcom-qmp.c
··· 66 66 /* QPHY_V3_PCS_MISC_CLAMP_ENABLE register bits */ 67 67 #define CLAMP_EN BIT(0) /* enables i/o clamp_n */ 68 68 69 - #define PHY_INIT_COMPLETE_TIMEOUT 1000 69 + #define PHY_INIT_COMPLETE_TIMEOUT 10000 70 70 #define POWER_DOWN_DELAY_US_MIN 10 71 71 #define POWER_DOWN_DELAY_US_MAX 11 72 72
+4
drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
··· 603 603 { 604 604 const struct pre_pll_config *cfg = pre_pll_cfg_table; 605 605 606 + rate = (rate / 1000) * 1000; 607 + 606 608 for (; cfg->pixclock != 0; cfg++) 607 609 if (cfg->pixclock == rate && !cfg->fracdiv) 608 610 break; ··· 756 754 unsigned long *parent_rate) 757 755 { 758 756 const struct pre_pll_config *cfg = pre_pll_cfg_table; 757 + 758 + rate = (rate / 1000) * 1000; 759 759 760 760 for (; cfg->pixclock != 0; cfg++) 761 761 if (cfg->pixclock == rate)
+1
drivers/pinctrl/cirrus/Kconfig
··· 2 2 config PINCTRL_LOCHNAGAR 3 3 tristate "Cirrus Logic Lochnagar pinctrl driver" 4 4 depends on MFD_LOCHNAGAR 5 + select GPIOLIB 5 6 select PINMUX 6 7 select PINCONF 7 8 select GENERIC_PINCONF
+1
drivers/pinctrl/meson/pinctrl-meson.c
··· 441 441 return ret; 442 442 443 443 meson_calc_reg_and_bit(bank, pin, REG_DS, &reg, &bit); 444 + bit = bit << 1; 444 445 445 446 ret = regmap_read(pc->reg_ds, reg, &val); 446 447 if (ret)
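The pinctrl-meson one-liner above doubles the computed bit offset because drive-strength fields occupy two bits per pin, while the generic helper computes a one-bit-per-pin offset. A sketch of the corrected field extraction, with an illustrative register layout:

```c
#include <assert.h>

/* DS registers pack one 2-bit field per pin, so the single-bit offset
 * must be doubled before shifting the field out. */
static unsigned int ds_field(unsigned int reg_val, unsigned int pin_bit)
{
	unsigned int bit = pin_bit << 1;	/* 2 bits per pin */

	return (reg_val >> bit) & 0x3;
}
```

Without the `<< 1`, pins past the first would read half-overlapping neighbours' fields.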
+1 -1
drivers/platform/mips/Kconfig
··· 18 18 19 19 config CPU_HWMON 20 20 tristate "Loongson-3 CPU HWMon Driver" 21 - depends on CONFIG_MACH_LOONGSON64 21 + depends on MACH_LOONGSON64 22 22 select HWMON 23 23 default y 24 24 help
+3
drivers/powercap/intel_rapl_common.c
··· 1295 1295 struct cpuinfo_x86 *c = &cpu_data(cpu); 1296 1296 int ret; 1297 1297 1298 + if (!rapl_defaults) 1299 + return ERR_PTR(-ENODEV); 1300 + 1298 1301 rp = kzalloc(sizeof(struct rapl_package), GFP_KERNEL); 1299 1302 if (!rp) 1300 1303 return ERR_PTR(-ENOMEM);
+14 -17
drivers/ptp/ptp_clock.c
··· 166 166 .read = ptp_read, 167 167 }; 168 168 169 - static void delete_ptp_clock(struct posix_clock *pc) 169 + static void ptp_clock_release(struct device *dev) 170 170 { 171 - struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock); 171 + struct ptp_clock *ptp = container_of(dev, struct ptp_clock, dev); 172 172 173 173 mutex_destroy(&ptp->tsevq_mux); 174 174 mutex_destroy(&ptp->pincfg_mux); ··· 213 213 } 214 214 215 215 ptp->clock.ops = ptp_clock_ops; 216 - ptp->clock.release = delete_ptp_clock; 217 216 ptp->info = info; 218 217 ptp->devid = MKDEV(major, index); 219 218 ptp->index = index; ··· 235 236 if (err) 236 237 goto no_pin_groups; 237 238 238 - /* Create a new device in our class. */ 239 - ptp->dev = device_create_with_groups(ptp_class, parent, ptp->devid, 240 - ptp, ptp->pin_attr_groups, 241 - "ptp%d", ptp->index); 242 - if (IS_ERR(ptp->dev)) { 243 - err = PTR_ERR(ptp->dev); 244 - goto no_device; 245 - } 246 - 247 239 /* Register a new PPS source. */ 248 240 if (info->pps) { 249 241 struct pps_source_info pps; ··· 250 260 } 251 261 } 252 262 253 - /* Create a posix clock. */ 254 - err = posix_clock_register(&ptp->clock, ptp->devid); 263 + /* Initialize a new device of our class in our clock structure. */ 264 + device_initialize(&ptp->dev); 265 + ptp->dev.devt = ptp->devid; 266 + ptp->dev.class = ptp_class; 267 + ptp->dev.parent = parent; 268 + ptp->dev.groups = ptp->pin_attr_groups; 269 + ptp->dev.release = ptp_clock_release; 270 + dev_set_drvdata(&ptp->dev, ptp); 271 + dev_set_name(&ptp->dev, "ptp%d", ptp->index); 272 + 273 + /* Create a posix clock and link it to the device. 
*/ 274 + err = posix_clock_register(&ptp->clock, &ptp->dev); 255 275 if (err) { 256 276 pr_err("failed to create posix clock\n"); 257 277 goto no_clock; ··· 273 273 if (ptp->pps_source) 274 274 pps_unregister_source(ptp->pps_source); 275 275 no_pps: 276 - device_destroy(ptp_class, ptp->devid); 277 - no_device: 278 276 ptp_cleanup_pin_groups(ptp); 279 277 no_pin_groups: 280 278 if (ptp->kworker) ··· 302 304 if (ptp->pps_source) 303 305 pps_unregister_source(ptp->pps_source); 304 306 305 - device_destroy(ptp_class, ptp->devid); 306 307 ptp_cleanup_pin_groups(ptp); 307 308 308 309 posix_clock_unregister(&ptp->clock);
+1 -1
drivers/ptp/ptp_private.h
··· 28 28 29 29 struct ptp_clock { 30 30 struct posix_clock clock; 31 - struct device *dev; 31 + struct device dev; 32 32 struct ptp_clock_info *info; 33 33 dev_t devid; 34 34 int index; /* index into clocks.map */
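The ptp_private.h hunk above turns `struct device *dev` into an embedded `struct device dev`, pairing with the `device_initialize()`/`ptp_clock_release()` changes in ptp_clock.c: the device's lifetime now drives the clock object's. A sketch of that embedded-object release pattern, using `fake_*` names as stand-ins for the kernel structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Recover the containing object from a pointer to one of its members,
 * as the kernel's container_of() does. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_device {
	void (*release)(struct fake_device *dev);
};

struct fake_clock {
	int index;
	struct fake_device dev;		/* embedded, lifetime-coupled */
};

static int releases;

static void fake_clock_release(struct fake_device *dev)
{
	struct fake_clock *clk = container_of(dev, struct fake_clock, dev);

	releases++;
	free(clk);		/* frees device and clock together */
}

/* Called when the last reference to the device is dropped. */
static void fake_put_device(struct fake_device *dev)
{
	dev->release(dev);
}
```

Because the device is embedded, freeing happens exactly once, from the release callback, instead of the clock and its device being destroyed on separate paths.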
+7 -4
drivers/regulator/axp20x-regulator.c
··· 413 413 int i; 414 414 415 415 for (i = 0; i < rate_count; i++) { 416 - if (ramp <= slew_rates[i]) 417 - cfg = AXP20X_DCDC2_LDO3_V_RAMP_LDO3_RATE(i); 418 - else 416 + if (ramp > slew_rates[i]) 419 417 break; 418 + 419 + if (id == AXP20X_DCDC2) 420 + cfg = AXP20X_DCDC2_LDO3_V_RAMP_DCDC2_RATE(i); 421 + else 422 + cfg = AXP20X_DCDC2_LDO3_V_RAMP_LDO3_RATE(i); 420 423 } 421 424 422 425 if (cfg == 0xff) { ··· 608 605 AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO1_MASK), 609 606 AXP_DESC(AXP22X, ELDO2, "eldo2", "eldoin", 700, 3300, 100, 610 607 AXP22X_ELDO2_V_OUT, AXP22X_ELDO2_V_OUT_MASK, 611 - AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO1_MASK), 608 + AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO2_MASK), 612 609 AXP_DESC(AXP22X, ELDO3, "eldo3", "eldoin", 700, 3300, 100, 613 610 AXP22X_ELDO3_V_OUT, AXP22X_ELDO3_V_OUT_MASK, 614 611 AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO3_MASK),
-1
drivers/regulator/bd70528-regulator.c
··· 101 101 .set_voltage_sel = regulator_set_voltage_sel_regmap, 102 102 .get_voltage_sel = regulator_get_voltage_sel_regmap, 103 103 .set_voltage_time_sel = regulator_set_voltage_time_sel, 104 - .set_ramp_delay = bd70528_set_ramp_delay, 105 104 }; 106 105 107 106 static const struct regulator_ops bd70528_led_ops = {
+1 -14
drivers/rtc/rtc-mc146818-lib.c
··· 172 172 save_control = CMOS_READ(RTC_CONTROL); 173 173 CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL); 174 174 save_freq_select = CMOS_READ(RTC_FREQ_SELECT); 175 - 176 - #ifdef CONFIG_X86 177 - if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD && 178 - boot_cpu_data.x86 == 0x17) || 179 - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { 180 - CMOS_WRITE((save_freq_select & (~RTC_DIV_RESET2)), 181 - RTC_FREQ_SELECT); 182 - save_freq_select &= ~RTC_DIV_RESET2; 183 - } else 184 - CMOS_WRITE((save_freq_select | RTC_DIV_RESET2), 185 - RTC_FREQ_SELECT); 186 - #else 187 - CMOS_WRITE((save_freq_select | RTC_DIV_RESET2), RTC_FREQ_SELECT); 188 - #endif 175 + CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT); 189 176 190 177 #ifdef CONFIG_MACH_DECSTATION 191 178 CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
+25 -14
drivers/rtc/rtc-mt6397.c
··· 47 47 irqen = irqsta & ~RTC_IRQ_EN_AL; 48 48 mutex_lock(&rtc->lock); 49 49 if (regmap_write(rtc->regmap, rtc->addr_base + RTC_IRQ_EN, 50 - irqen) < 0) 50 + irqen) == 0) 51 51 mtk_rtc_write_trigger(rtc); 52 52 mutex_unlock(&rtc->lock); 53 53 ··· 169 169 alm->pending = !!(pdn2 & RTC_PDN2_PWRON_ALARM); 170 170 mutex_unlock(&rtc->lock); 171 171 172 - tm->tm_sec = data[RTC_OFFSET_SEC]; 173 - tm->tm_min = data[RTC_OFFSET_MIN]; 174 - tm->tm_hour = data[RTC_OFFSET_HOUR]; 175 - tm->tm_mday = data[RTC_OFFSET_DOM]; 176 - tm->tm_mon = data[RTC_OFFSET_MTH]; 177 - tm->tm_year = data[RTC_OFFSET_YEAR]; 172 + tm->tm_sec = data[RTC_OFFSET_SEC] & RTC_AL_SEC_MASK; 173 + tm->tm_min = data[RTC_OFFSET_MIN] & RTC_AL_MIN_MASK; 174 + tm->tm_hour = data[RTC_OFFSET_HOUR] & RTC_AL_HOU_MASK; 175 + tm->tm_mday = data[RTC_OFFSET_DOM] & RTC_AL_DOM_MASK; 176 + tm->tm_mon = data[RTC_OFFSET_MTH] & RTC_AL_MTH_MASK; 177 + tm->tm_year = data[RTC_OFFSET_YEAR] & RTC_AL_YEA_MASK; 178 178 179 179 tm->tm_year += RTC_MIN_YEAR_OFFSET; 180 180 tm->tm_mon--; ··· 195 195 tm->tm_year -= RTC_MIN_YEAR_OFFSET; 196 196 tm->tm_mon++; 197 197 198 - data[RTC_OFFSET_SEC] = tm->tm_sec; 199 - data[RTC_OFFSET_MIN] = tm->tm_min; 200 - data[RTC_OFFSET_HOUR] = tm->tm_hour; 201 - data[RTC_OFFSET_DOM] = tm->tm_mday; 202 - data[RTC_OFFSET_MTH] = tm->tm_mon; 203 - data[RTC_OFFSET_YEAR] = tm->tm_year; 204 - 205 198 mutex_lock(&rtc->lock); 199 + ret = regmap_bulk_read(rtc->regmap, rtc->addr_base + RTC_AL_SEC, 200 + data, RTC_OFFSET_COUNT); 201 + if (ret < 0) 202 + goto exit; 203 + 204 + data[RTC_OFFSET_SEC] = ((data[RTC_OFFSET_SEC] & ~(RTC_AL_SEC_MASK)) | 205 + (tm->tm_sec & RTC_AL_SEC_MASK)); 206 + data[RTC_OFFSET_MIN] = ((data[RTC_OFFSET_MIN] & ~(RTC_AL_MIN_MASK)) | 207 + (tm->tm_min & RTC_AL_MIN_MASK)); 208 + data[RTC_OFFSET_HOUR] = ((data[RTC_OFFSET_HOUR] & ~(RTC_AL_HOU_MASK)) | 209 + (tm->tm_hour & RTC_AL_HOU_MASK)); 210 + data[RTC_OFFSET_DOM] = ((data[RTC_OFFSET_DOM] & ~(RTC_AL_DOM_MASK)) | 211 + (tm->tm_mday & 
RTC_AL_DOM_MASK)); 212 + data[RTC_OFFSET_MTH] = ((data[RTC_OFFSET_MTH] & ~(RTC_AL_MTH_MASK)) | 213 + (tm->tm_mon & RTC_AL_MTH_MASK)); 214 + data[RTC_OFFSET_YEAR] = ((data[RTC_OFFSET_YEAR] & ~(RTC_AL_YEA_MASK)) | 215 + (tm->tm_year & RTC_AL_YEA_MASK)); 216 + 206 217 if (alm->enabled) { 207 218 ret = regmap_bulk_write(rtc->regmap, 208 219 rtc->addr_base + RTC_AL_SEC,
+16
drivers/rtc/rtc-sun6i.c
··· 379 379 CLK_OF_DECLARE_DRIVER(sun50i_h6_rtc_clk, "allwinner,sun50i-h6-rtc", 380 380 sun50i_h6_rtc_clk_init); 381 381 382 + /* 383 + * The R40 user manual is self-conflicting on whether the prescaler is 384 + * fixed or configurable. The clock diagram shows it as fixed, but there 385 + * is also a configurable divider in the RTC block. 386 + */ 387 + static const struct sun6i_rtc_clk_data sun8i_r40_rtc_data = { 388 + .rc_osc_rate = 16000000, 389 + .fixed_prescaler = 512, 390 + }; 391 + static void __init sun8i_r40_rtc_clk_init(struct device_node *node) 392 + { 393 + sun6i_rtc_clk_init(node, &sun8i_r40_rtc_data); 394 + } 395 + CLK_OF_DECLARE_DRIVER(sun8i_r40_rtc_clk, "allwinner,sun8i-r40-rtc", 396 + sun8i_r40_rtc_clk_init); 397 + 382 398 static const struct sun6i_rtc_clk_data sun8i_v3_rtc_data = { 383 399 .rc_osc_rate = 32000, 384 400 .has_out_clk = 1,
+9 -20
drivers/s390/net/qeth_core_main.c
··· 2482 2482 rc = qeth_cm_enable(card); 2483 2483 if (rc) { 2484 2484 QETH_CARD_TEXT_(card, 2, "2err%d", rc); 2485 - goto out_qdio; 2485 + return rc; 2486 2486 } 2487 2487 rc = qeth_cm_setup(card); 2488 2488 if (rc) { 2489 2489 QETH_CARD_TEXT_(card, 2, "3err%d", rc); 2490 - goto out_qdio; 2490 + return rc; 2491 2491 } 2492 2492 rc = qeth_ulp_enable(card); 2493 2493 if (rc) { 2494 2494 QETH_CARD_TEXT_(card, 2, "4err%d", rc); 2495 - goto out_qdio; 2495 + return rc; 2496 2496 } 2497 2497 rc = qeth_ulp_setup(card); 2498 2498 if (rc) { 2499 2499 QETH_CARD_TEXT_(card, 2, "5err%d", rc); 2500 - goto out_qdio; 2500 + return rc; 2501 2501 } 2502 2502 rc = qeth_alloc_qdio_queues(card); 2503 2503 if (rc) { 2504 2504 QETH_CARD_TEXT_(card, 2, "5err%d", rc); 2505 - goto out_qdio; 2505 + return rc; 2506 2506 } 2507 2507 rc = qeth_qdio_establish(card); 2508 2508 if (rc) { 2509 2509 QETH_CARD_TEXT_(card, 2, "6err%d", rc); 2510 2510 qeth_free_qdio_queues(card); 2511 - goto out_qdio; 2511 + return rc; 2512 2512 } 2513 2513 rc = qeth_qdio_activate(card); 2514 2514 if (rc) { 2515 2515 QETH_CARD_TEXT_(card, 2, "7err%d", rc); 2516 - goto out_qdio; 2516 + return rc; 2517 2517 } 2518 2518 rc = qeth_dm_act(card); 2519 2519 if (rc) { 2520 2520 QETH_CARD_TEXT_(card, 2, "8err%d", rc); 2521 - goto out_qdio; 2521 + return rc; 2522 2522 } 2523 2523 2524 2524 return 0; 2525 - out_qdio: 2526 - qeth_qdio_clear_card(card, !IS_IQD(card)); 2527 - qdio_free(CARD_DDEV(card)); 2528 - return rc; 2529 2525 } 2530 2526 2531 2527 void qeth_print_status_message(struct qeth_card *card) ··· 3422 3426 } else { 3423 3427 if (card->options.cq == cq) { 3424 3428 rc = 0; 3425 - goto out; 3426 - } 3427 - 3428 - if (card->state != CARD_STATE_DOWN) { 3429 - rc = -1; 3430 3429 goto out; 3431 3430 } 3432 3431 ··· 5026 5035 } 5027 5036 if (qeth_adp_supported(card, IPA_SETADP_SET_DIAG_ASSIST)) { 5028 5037 rc = qeth_query_setdiagass(card); 5029 - if (rc < 0) { 5038 + if (rc) 5030 5039 QETH_CARD_TEXT_(card, 2, "8err%d", rc); 
5031 - goto out; 5032 - } 5033 5040 } 5034 5041 5035 5042 if (!qeth_is_diagass_supported(card, QETH_DIAGS_CMD_TRAP) ||
+5 -5
drivers/s390/net/qeth_l2_main.c
··· 287 287 card->state = CARD_STATE_HARDSETUP; 288 288 } 289 289 if (card->state == CARD_STATE_HARDSETUP) { 290 - qeth_qdio_clear_card(card, 0); 291 290 qeth_drain_output_queues(card); 292 291 qeth_clear_working_pool_list(card); 293 292 card->state = CARD_STATE_DOWN; 294 293 } 295 294 295 + qeth_qdio_clear_card(card, 0); 296 296 flush_workqueue(card->event_wq); 297 297 card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; 298 298 card->info.promisc_mode = 0; ··· 1952 1952 /* check if VNICC is currently enabled */ 1953 1953 bool qeth_l2_vnicc_is_in_use(struct qeth_card *card) 1954 1954 { 1955 - /* if everything is turned off, VNICC is not active */ 1956 - if (!card->options.vnicc.cur_chars) 1955 + if (!card->options.vnicc.sup_chars) 1957 1956 return false; 1958 1957 /* default values are only OK if rx_bcast was not enabled by user 1959 1958 * or the card is offline. ··· 2039 2040 /* enforce assumed default values and recover settings, if changed */ 2040 2041 error |= qeth_l2_vnicc_recover_timeout(card, QETH_VNICC_LEARNING, 2041 2042 timeout); 2042 - chars_tmp = card->options.vnicc.wanted_chars ^ QETH_VNICC_DEFAULT; 2043 - chars_tmp |= QETH_VNICC_BRIDGE_INVISIBLE; 2043 + /* Change chars, if necessary */ 2044 + chars_tmp = card->options.vnicc.wanted_chars ^ 2045 + card->options.vnicc.cur_chars; 2044 2046 chars_len = sizeof(card->options.vnicc.wanted_chars) * BITS_PER_BYTE; 2045 2047 for_each_set_bit(i, &chars_tmp, chars_len) { 2046 2048 vnicc = BIT(i);
+1 -1
drivers/s390/net/qeth_l3_main.c
··· 1307 1307 card->state = CARD_STATE_HARDSETUP; 1308 1308 } 1309 1309 if (card->state == CARD_STATE_HARDSETUP) { 1310 - qeth_qdio_clear_card(card, 0); 1311 1310 qeth_drain_output_queues(card); 1312 1311 qeth_clear_working_pool_list(card); 1313 1312 card->state = CARD_STATE_DOWN; 1314 1313 } 1315 1314 1315 + qeth_qdio_clear_card(card, 0); 1316 1316 flush_workqueue(card->event_wq); 1317 1317 card->info.promisc_mode = 0; 1318 1318 }
+28 -12
drivers/s390/net/qeth_l3_sys.c
··· 242 242 struct device_attribute *attr, const char *buf, size_t count) 243 243 { 244 244 struct qeth_card *card = dev_get_drvdata(dev); 245 + int rc = 0; 245 246 char *tmp; 246 - int rc; 247 247 248 248 if (!IS_IQD(card)) 249 249 return -EPERM; 250 - if (card->state != CARD_STATE_DOWN) 251 - return -EPERM; 252 - if (card->options.sniffer) 253 - return -EPERM; 254 - if (card->options.cq == QETH_CQ_NOTAVAILABLE) 255 - return -EPERM; 250 + 251 + mutex_lock(&card->conf_mutex); 252 + if (card->state != CARD_STATE_DOWN) { 253 + rc = -EPERM; 254 + goto out; 255 + } 256 + 257 + if (card->options.sniffer) { 258 + rc = -EPERM; 259 + goto out; 260 + } 261 + 262 + if (card->options.cq == QETH_CQ_NOTAVAILABLE) { 263 + rc = -EPERM; 264 + goto out; 265 + } 256 266 257 267 tmp = strsep((char **)&buf, "\n"); 258 - if (strlen(tmp) > 8) 259 - return -EINVAL; 268 + if (strlen(tmp) > 8) { 269 + rc = -EINVAL; 270 + goto out; 271 + } 260 272 261 273 if (card->options.hsuid[0]) 262 274 /* delete old ip address */ ··· 279 267 card->options.hsuid[0] = '\0'; 280 268 memcpy(card->dev->perm_addr, card->options.hsuid, 9); 281 269 qeth_configure_cq(card, QETH_CQ_DISABLED); 282 - return count; 270 + goto out; 283 271 } 284 272 285 - if (qeth_configure_cq(card, QETH_CQ_ENABLED)) 286 - return -EPERM; 273 + if (qeth_configure_cq(card, QETH_CQ_ENABLED)) { 274 + rc = -EPERM; 275 + goto out; 276 + } 287 277 288 278 snprintf(card->options.hsuid, sizeof(card->options.hsuid), 289 279 "%-8s", tmp); ··· 294 280 295 281 rc = qeth_l3_modify_hsuid(card, true); 296 282 283 + out: 284 + mutex_unlock(&card->conf_mutex); 297 285 return rc ? rc : count; 298 286 } 299 287
+2 -1
drivers/scsi/cxgbi/libcxgbi.c
··· 121 121 "cdev 0x%p, p# %u.\n", cdev, cdev->nports); 122 122 cxgbi_hbas_remove(cdev); 123 123 cxgbi_device_portmap_cleanup(cdev); 124 - cxgbi_ppm_release(cdev->cdev2ppm(cdev)); 124 + if (cdev->cdev2ppm) 125 + cxgbi_ppm_release(cdev->cdev2ppm(cdev)); 125 126 if (cdev->pmap.max_connect) 126 127 cxgbi_free_big_mem(cdev->pmap.port_csk); 127 128 kfree(cdev);
+1 -2
drivers/scsi/lpfc/lpfc_debugfs.c
··· 5385 5385 .read = lpfc_debugfs_read, 5386 5386 .release = lpfc_debugfs_ras_log_release, 5387 5387 }; 5388 - #endif 5389 5388 5390 5389 #undef lpfc_debugfs_op_dumpHBASlim 5391 5390 static const struct file_operations lpfc_debugfs_op_dumpHBASlim = { ··· 5556 5557 .write = lpfc_idiag_extacc_write, 5557 5558 .release = lpfc_idiag_cmd_release, 5558 5559 }; 5559 - 5560 + #endif 5560 5561 5561 5562 /* lpfc_idiag_mbxacc_dump_bsg_mbox - idiag debugfs dump bsg mailbox command 5562 5563 * @phba: Pointer to HBA context object.
+1 -1
drivers/scsi/lpfc/lpfc_init.c
··· 5883 5883 break; 5884 5884 default: 5885 5885 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 5886 - "1804 Invalid asynchrous event code: " 5886 + "1804 Invalid asynchronous event code: " 5887 5887 "x%x\n", bf_get(lpfc_trailer_code, 5888 5888 &cq_event->cqe.mcqe_cmpl)); 5889 5889 break;
+5 -5
drivers/scsi/lpfc/lpfc_sli.c
··· 8555 8555 psli->sli_flag &= ~LPFC_SLI_ASYNC_MBX_BLK; 8556 8556 spin_unlock_irq(&phba->hbalock); 8557 8557 8558 - /* wake up worker thread to post asynchronlous mailbox command */ 8558 + /* wake up worker thread to post asynchronous mailbox command */ 8559 8559 lpfc_worker_wake_up(phba); 8560 8560 } 8561 8561 ··· 8823 8823 return rc; 8824 8824 } 8825 8825 8826 - /* Now, interrupt mode asynchrous mailbox command */ 8826 + /* Now, interrupt mode asynchronous mailbox command */ 8827 8827 rc = lpfc_mbox_cmd_check(phba, mboxq); 8828 8828 if (rc) { 8829 8829 lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI, ··· 13112 13112 } 13113 13113 13114 13114 /** 13115 - * lpfc_sli4_sp_handle_async_event - Handle an asynchroous event 13115 + * lpfc_sli4_sp_handle_async_event - Handle an asynchronous event 13116 13116 * @phba: Pointer to HBA context object. 13117 13117 * @cqe: Pointer to mailbox completion queue entry. 13118 13118 * 13119 - * This routine process a mailbox completion queue entry with asynchrous 13119 + * This routine process a mailbox completion queue entry with asynchronous 13120 13120 * event. 13121 13121 * 13122 13122 * Return: true if work posted to worker thread, otherwise false. ··· 13270 13270 * @cqe: Pointer to mailbox completion queue entry. 13271 13271 * 13272 13272 * This routine process a mailbox completion queue entry, it invokes the 13273 - * proper mailbox complete handling or asynchrous event handling routine 13273 + * proper mailbox complete handling or asynchronous event handling routine 13274 13274 * according to the MCQE's async bit. 13275 13275 * 13276 13276 * Return: true if work posted to worker thread, otherwise false.
-1
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 5248 5248 &ct->chain_buffer_dma); 5249 5249 if (!ct->chain_buffer) { 5250 5250 ioc_err(ioc, "chain_lookup: pci_pool_alloc failed\n"); 5251 - _base_release_memory_pools(ioc); 5252 5251 goto out; 5253 5252 } 5254 5253 }
+1 -1
drivers/soc/sifive/sifive_l2_cache.c
··· 9 9 #include <linux/interrupt.h> 10 10 #include <linux/of_irq.h> 11 11 #include <linux/of_address.h> 12 - #include <asm/sifive_l2_cache.h> 12 + #include <soc/sifive/sifive_l2_cache.h> 13 13 14 14 #define SIFIVE_L2_DIRECCFIX_LOW 0x100 15 15 #define SIFIVE_L2_DIRECCFIX_HIGH 0x104
+12 -3
drivers/spi/spi-dw.c
··· 172 172 173 173 static void dw_writer(struct dw_spi *dws) 174 174 { 175 - u32 max = tx_max(dws); 175 + u32 max; 176 176 u16 txw = 0; 177 177 178 + spin_lock(&dws->buf_lock); 179 + max = tx_max(dws); 178 180 while (max--) { 179 181 /* Set the tx word if the transfer's original "tx" is not null */ 180 182 if (dws->tx_end - dws->len) { ··· 188 186 dw_write_io_reg(dws, DW_SPI_DR, txw); 189 187 dws->tx += dws->n_bytes; 190 188 } 189 + spin_unlock(&dws->buf_lock); 191 190 } 192 191 193 192 static void dw_reader(struct dw_spi *dws) 194 193 { 195 - u32 max = rx_max(dws); 194 + u32 max; 196 195 u16 rxw; 197 196 197 + spin_lock(&dws->buf_lock); 198 + max = rx_max(dws); 198 199 while (max--) { 199 200 rxw = dw_read_io_reg(dws, DW_SPI_DR); 200 201 /* Care rx only if the transfer's original "rx" is not null */ ··· 209 204 } 210 205 dws->rx += dws->n_bytes; 211 206 } 207 + spin_unlock(&dws->buf_lock); 212 208 } 213 209 214 210 static void int_error_stop(struct dw_spi *dws, const char *msg) ··· 282 276 { 283 277 struct dw_spi *dws = spi_controller_get_devdata(master); 284 278 struct chip_data *chip = spi_get_ctldata(spi); 279 + unsigned long flags; 285 280 u8 imask = 0; 286 281 u16 txlevel = 0; 287 282 u32 cr0; 288 283 int ret; 289 284 290 285 dws->dma_mapped = 0; 291 - 286 + spin_lock_irqsave(&dws->buf_lock, flags); 292 287 dws->tx = (void *)transfer->tx_buf; 293 288 dws->tx_end = dws->tx + transfer->len; 294 289 dws->rx = transfer->rx_buf; 295 290 dws->rx_end = dws->rx + transfer->len; 296 291 dws->len = transfer->len; 292 + spin_unlock_irqrestore(&dws->buf_lock, flags); 297 293 298 294 spi_enable_chip(dws, 0); 299 295 ··· 479 471 dws->type = SSI_MOTO_SPI; 480 472 dws->dma_inited = 0; 481 473 dws->dma_addr = (dma_addr_t)(dws->paddr + DW_SPI_DR); 474 + spin_lock_init(&dws->buf_lock); 482 475 483 476 spi_controller_set_devdata(master, dws); 484 477
+1
drivers/spi/spi-dw.h
··· 119 119 size_t len; 120 120 void *tx; 121 121 void *tx_end; 122 + spinlock_t buf_lock; 122 123 void *rx; 123 124 void *rx_end; 124 125 int dma_mapped;
+10 -14
drivers/spi/spi-fsl-dspi.c
··· 185 185 struct spi_transfer *cur_transfer; 186 186 struct spi_message *cur_msg; 187 187 struct chip_data *cur_chip; 188 + size_t progress; 188 189 size_t len; 189 190 const void *tx; 190 191 void *rx; ··· 587 586 dspi->tx_cmd |= SPI_PUSHR_CMD_CTCNT; 588 587 589 588 if (dspi->devtype_data->xspi_mode && dspi->bits_per_word > 16) { 590 - /* Write two TX FIFO entries first, and then the corresponding 591 - * CMD FIFO entry. 589 + /* Write the CMD FIFO entry first, and then the two 590 + * corresponding TX FIFO entries. 592 591 */ 593 592 u32 data = dspi_pop_tx(dspi); 594 593 595 - if (dspi->cur_chip->ctar_val & SPI_CTAR_LSBFE) { 596 - /* LSB */ 597 - tx_fifo_write(dspi, data & 0xFFFF); 598 - tx_fifo_write(dspi, data >> 16); 599 - } else { 600 - /* MSB */ 601 - tx_fifo_write(dspi, data >> 16); 602 - tx_fifo_write(dspi, data & 0xFFFF); 603 - } 604 594 cmd_fifo_write(dspi); 595 + tx_fifo_write(dspi, data & 0xFFFF); 596 + tx_fifo_write(dspi, data >> 16); 605 597 } else { 606 598 /* Write one entry to both TX FIFO and CMD FIFO 607 599 * simultaneously. ··· 652 658 u32 spi_tcr; 653 659 654 660 spi_take_timestamp_post(dspi->ctlr, dspi->cur_transfer, 655 - dspi->tx - dspi->bytes_per_word, !dspi->irq); 661 + dspi->progress, !dspi->irq); 656 662 657 663 /* Get transfer counter (in number of SPI transfers). It was 658 664 * reset to 0 when transfer(s) were started. 
··· 661 667 spi_tcnt = SPI_TCR_GET_TCNT(spi_tcr); 662 668 /* Update total number of bytes that were transferred */ 663 669 msg->actual_length += spi_tcnt * dspi->bytes_per_word; 670 + dspi->progress += spi_tcnt; 664 671 665 672 trans_mode = dspi->devtype_data->trans_mode; 666 673 if (trans_mode == DSPI_EOQ_MODE) ··· 674 679 return 0; 675 680 676 681 spi_take_timestamp_pre(dspi->ctlr, dspi->cur_transfer, 677 - dspi->tx, !dspi->irq); 682 + dspi->progress, !dspi->irq); 678 683 679 684 if (trans_mode == DSPI_EOQ_MODE) 680 685 dspi_eoq_write(dspi); ··· 763 768 dspi->rx = transfer->rx_buf; 764 769 dspi->rx_end = dspi->rx + transfer->len; 765 770 dspi->len = transfer->len; 771 + dspi->progress = 0; 766 772 /* Validated transfer specific frame size (defaults applied) */ 767 773 dspi->bits_per_word = transfer->bits_per_word; 768 774 if (transfer->bits_per_word <= 8) ··· 785 789 SPI_CTARE_DTCP(1)); 786 790 787 791 spi_take_timestamp_pre(dspi->ctlr, dspi->cur_transfer, 788 - dspi->tx, !dspi->irq); 792 + dspi->progress, !dspi->irq); 789 793 790 794 trans_mode = dspi->devtype_data->trans_mode; 791 795 switch (trans_mode) {
+19 -12
drivers/spi/spi-uniphier.c
··· 290 290 } 291 291 } 292 292 293 - static void uniphier_spi_fill_tx_fifo(struct uniphier_spi_priv *priv) 293 + static void uniphier_spi_set_fifo_threshold(struct uniphier_spi_priv *priv, 294 + unsigned int threshold) 294 295 { 295 - unsigned int fifo_threshold, fill_bytes; 296 296 u32 val; 297 297 298 - fifo_threshold = DIV_ROUND_UP(priv->rx_bytes, 299 - bytes_per_word(priv->bits_per_word)); 300 - fifo_threshold = min(fifo_threshold, SSI_FIFO_DEPTH); 301 - 302 - fill_bytes = fifo_threshold - (priv->rx_bytes - priv->tx_bytes); 303 - 304 - /* set fifo threshold */ 305 298 val = readl(priv->base + SSI_FC); 306 299 val &= ~(SSI_FC_TXFTH_MASK | SSI_FC_RXFTH_MASK); 307 - val |= FIELD_PREP(SSI_FC_TXFTH_MASK, fifo_threshold); 308 - val |= FIELD_PREP(SSI_FC_RXFTH_MASK, fifo_threshold); 300 + val |= FIELD_PREP(SSI_FC_TXFTH_MASK, SSI_FIFO_DEPTH - threshold); 301 + val |= FIELD_PREP(SSI_FC_RXFTH_MASK, threshold); 309 302 writel(val, priv->base + SSI_FC); 303 + } 310 304 311 - while (fill_bytes--) 305 + static void uniphier_spi_fill_tx_fifo(struct uniphier_spi_priv *priv) 306 + { 307 + unsigned int fifo_threshold, fill_words; 308 + unsigned int bpw = bytes_per_word(priv->bits_per_word); 309 + 310 + fifo_threshold = DIV_ROUND_UP(priv->rx_bytes, bpw); 311 + fifo_threshold = min(fifo_threshold, SSI_FIFO_DEPTH); 312 + 313 + uniphier_spi_set_fifo_threshold(priv, fifo_threshold); 314 + 315 + fill_words = fifo_threshold - 316 + DIV_ROUND_UP(priv->rx_bytes - priv->tx_bytes, bpw); 317 + 318 + while (fill_words--) 312 319 uniphier_spi_send(priv); 313 320 } 314 321
+8 -14
drivers/spi/spi.c
··· 1499 1499 * advances its @tx buffer pointer monotonically. 1500 1500 * @ctlr: Pointer to the spi_controller structure of the driver 1501 1501 * @xfer: Pointer to the transfer being timestamped 1502 - * @tx: Pointer to the current word within the xfer->tx_buf that the driver is 1503 - * preparing to transmit right now. 1502 + * @progress: How many words (not bytes) have been transferred so far 1504 1503 * @irqs_off: If true, will disable IRQs and preemption for the duration of the 1505 1504 * transfer, for less jitter in time measurement. Only compatible 1506 1505 * with PIO drivers. If true, must follow up with ··· 1509 1510 */ 1510 1511 void spi_take_timestamp_pre(struct spi_controller *ctlr, 1511 1512 struct spi_transfer *xfer, 1512 - const void *tx, bool irqs_off) 1513 + size_t progress, bool irqs_off) 1513 1514 { 1514 - u8 bytes_per_word = DIV_ROUND_UP(xfer->bits_per_word, 8); 1515 - 1516 1515 if (!xfer->ptp_sts) 1517 1516 return; 1518 1517 1519 1518 if (xfer->timestamped_pre) 1520 1519 return; 1521 1520 1522 - if (tx < (xfer->tx_buf + xfer->ptp_sts_word_pre * bytes_per_word)) 1521 + if (progress < xfer->ptp_sts_word_pre) 1523 1522 return; 1524 1523 1525 1524 /* Capture the resolution of the timestamp */ 1526 - xfer->ptp_sts_word_pre = (tx - xfer->tx_buf) / bytes_per_word; 1525 + xfer->ptp_sts_word_pre = progress; 1527 1526 1528 1527 xfer->timestamped_pre = true; 1529 1528 ··· 1543 1546 * timestamped. 1544 1547 * @ctlr: Pointer to the spi_controller structure of the driver 1545 1548 * @xfer: Pointer to the transfer being timestamped 1546 - * @tx: Pointer to the current word within the xfer->tx_buf that the driver has 1547 - * just transmitted. 1549 + * @progress: How many words (not bytes) have been transferred so far 1548 1550 * @irqs_off: If true, will re-enable IRQs and preemption for the local CPU. 
1549 1551 */ 1550 1552 void spi_take_timestamp_post(struct spi_controller *ctlr, 1551 1553 struct spi_transfer *xfer, 1552 - const void *tx, bool irqs_off) 1554 + size_t progress, bool irqs_off) 1553 1555 { 1554 - u8 bytes_per_word = DIV_ROUND_UP(xfer->bits_per_word, 8); 1555 - 1556 1556 if (!xfer->ptp_sts) 1557 1557 return; 1558 1558 1559 1559 if (xfer->timestamped_post) 1560 1560 return; 1561 1561 1562 - if (tx < (xfer->tx_buf + xfer->ptp_sts_word_post * bytes_per_word)) 1562 + if (progress < xfer->ptp_sts_word_post) 1563 1563 return; 1564 1564 1565 1565 ptp_read_system_postts(xfer->ptp_sts); ··· 1567 1573 } 1568 1574 1569 1575 /* Capture the resolution of the timestamp */ 1570 - xfer->ptp_sts_word_post = (tx - xfer->tx_buf) / bytes_per_word; 1576 + xfer->ptp_sts_word_post = progress; 1571 1577 1572 1578 xfer->timestamped_post = true; 1573 1579 }
+2 -2
drivers/staging/comedi/drivers/adv_pci1710.c
··· 46 46 #define PCI171X_RANGE_UNI BIT(4) 47 47 #define PCI171X_RANGE_GAIN(x) (((x) & 0x7) << 0) 48 48 #define PCI171X_MUX_REG 0x04 /* W: A/D multiplexor control */ 49 - #define PCI171X_MUX_CHANH(x) (((x) & 0xf) << 8) 50 - #define PCI171X_MUX_CHANL(x) (((x) & 0xf) << 0) 49 + #define PCI171X_MUX_CHANH(x) (((x) & 0xff) << 8) 50 + #define PCI171X_MUX_CHANL(x) (((x) & 0xff) << 0) 51 51 #define PCI171X_MUX_CHAN(x) (PCI171X_MUX_CHANH(x) | PCI171X_MUX_CHANL(x)) 52 52 #define PCI171X_STATUS_REG 0x06 /* R: status register */ 53 53 #define PCI171X_STATUS_IRQ BIT(11) /* 1=IRQ occurred */
+1 -1
drivers/staging/media/ipu3/include/intel-ipu3.h
··· 449 449 __u16 reserved1; 450 450 __u32 bayer_sign; 451 451 __u8 bayer_nf; 452 - __u8 reserved2[3]; 452 + __u8 reserved2[7]; 453 453 } __attribute__((aligned(32))) __packed; 454 454 455 455 /**
+1
drivers/staging/rtl8188eu/os_dep/usb_intf.c
··· 37 37 {USB_DEVICE(0x2001, 0x3311)}, /* DLink GO-USB-N150 REV B1 */ 38 38 {USB_DEVICE(0x2001, 0x331B)}, /* D-Link DWA-121 rev B1 */ 39 39 {USB_DEVICE(0x2357, 0x010c)}, /* TP-Link TL-WN722N v2 */ 40 + {USB_DEVICE(0x2357, 0x0111)}, /* TP-Link TL-WN727N v5.21 */ 40 41 {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ 41 42 {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */ 42 43 {} /* Terminating entry */
+2 -2
drivers/staging/vt6656/baseband.c
··· 449 449 450 450 memcpy(array, addr, length); 451 451 452 - ret = vnt_control_out(priv, MESSAGE_TYPE_WRITE, 0, 453 - MESSAGE_REQUEST_BBREG, length, array); 452 + ret = vnt_control_out_blocks(priv, VNT_REG_BLOCK_SIZE, 453 + MESSAGE_REQUEST_BBREG, length, array); 454 454 if (ret) 455 455 goto end; 456 456
+1 -1
drivers/staging/vt6656/card.c
··· 719 719 */ 720 720 int vnt_radio_power_on(struct vnt_private *priv) 721 721 { 722 - int ret = true; 722 + int ret = 0; 723 723 724 724 vnt_exit_deep_sleep(priv); 725 725
+1
drivers/staging/vt6656/device.h
··· 259 259 u8 mac_hw; 260 260 /* netdev */ 261 261 struct usb_device *usb; 262 + struct usb_interface *intf; 262 263 263 264 u64 tsf_time; 264 265 u8 rx_rate;
+2 -1
drivers/staging/vt6656/main_usb.c
··· 949 949 950 950 int vnt_init(struct vnt_private *priv) 951 951 { 952 - if (!(vnt_init_registers(priv))) 952 + if (vnt_init_registers(priv)) 953 953 return -EAGAIN; 954 954 955 955 SET_IEEE80211_PERM_ADDR(priv->hw, priv->permanent_net_addr); ··· 992 992 priv = hw->priv; 993 993 priv->hw = hw; 994 994 priv->usb = udev; 995 + priv->intf = intf; 995 996 996 997 vnt_set_options(priv); 997 998
+23 -2
drivers/staging/vt6656/usbpipe.c
··· 59 59 60 60 kfree(usb_buffer); 61 61 62 - if (ret >= 0 && ret < (int)length) 62 + if (ret == (int)length) 63 + ret = 0; 64 + else 63 65 ret = -EIO; 64 66 65 67 end_unlock: ··· 74 72 { 75 73 return vnt_control_out(priv, MESSAGE_TYPE_WRITE, 76 74 reg_off, reg, sizeof(u8), &data); 75 + } 76 + 77 + int vnt_control_out_blocks(struct vnt_private *priv, 78 + u16 block, u8 reg, u16 length, u8 *data) 79 + { 80 + int ret = 0, i; 81 + 82 + for (i = 0; i < length; i += block) { 83 + u16 len = min_t(int, length - i, block); 84 + 85 + ret = vnt_control_out(priv, MESSAGE_TYPE_WRITE, 86 + i, reg, len, data + i); 87 + if (ret) 88 + goto end; 89 + } 90 + end: 91 + return ret; 77 92 } 78 93 79 94 int vnt_control_in(struct vnt_private *priv, u8 request, u16 value, ··· 122 103 123 104 kfree(usb_buffer); 124 105 125 - if (ret >= 0 && ret < (int)length) 106 + if (ret == (int)length) 107 + ret = 0; 108 + else 126 109 ret = -EIO; 127 110 128 111 end_unlock:
+5
drivers/staging/vt6656/usbpipe.h
··· 18 18 19 19 #include "device.h" 20 20 21 + #define VNT_REG_BLOCK_SIZE 64 22 + 21 23 int vnt_control_out(struct vnt_private *priv, u8 request, u16 value, 22 24 u16 index, u16 length, u8 *buffer); 23 25 int vnt_control_in(struct vnt_private *priv, u8 request, u16 value, ··· 27 25 28 26 int vnt_control_out_u8(struct vnt_private *priv, u8 reg, u8 ref_off, u8 data); 29 27 int vnt_control_in_u8(struct vnt_private *priv, u8 reg, u8 reg_off, u8 *data); 28 + 29 + int vnt_control_out_blocks(struct vnt_private *priv, 30 + u16 block, u8 reg, u16 len, u8 *data); 30 31 31 32 int vnt_start_interrupt_urb(struct vnt_private *priv); 32 33 int vnt_submit_rx_urb(struct vnt_private *priv, struct vnt_rcb *rcb);
+1
drivers/staging/vt6656/wcmd.c
··· 99 99 if (vnt_init(priv)) { 100 100 /* If fail all ends TODO retry */ 101 101 dev_err(&priv->usb->dev, "failed to start\n"); 102 + usb_set_intfdata(priv->intf, NULL); 102 103 ieee80211_free_hw(priv->hw); 103 104 return; 104 105 }
+3 -1
drivers/target/target_core_iblock.c
··· 646 646 } 647 647 648 648 bip->bip_iter.bi_size = bio_integrity_bytes(bi, bio_sectors(bio)); 649 - bip_set_seed(bip, bio->bi_iter.bi_sector); 649 + /* virtual start sector must be in integrity interval units */ 650 + bip_set_seed(bip, bio->bi_iter.bi_sector >> 651 + (bi->interval_exp - SECTOR_SHIFT)); 650 652 651 653 pr_debug("IBLOCK BIP Size: %u Sector: %llu\n", bip->bip_iter.bi_size, 652 654 (unsigned long long)bip->bip_iter.bi_sector);
+3
drivers/thermal/qcom/tsens.c
··· 110 110 irq = platform_get_irq_byname(pdev, "uplow"); 111 111 if (irq < 0) { 112 112 ret = irq; 113 + /* For old DTs with no IRQ defined */ 114 + if (irq == -ENXIO) 115 + ret = 0; 113 116 goto err_put_device; 114 117 } 115 118
+10
drivers/tty/serdev/core.c
··· 663 663 return AE_OK; 664 664 } 665 665 666 + static const struct acpi_device_id serdev_acpi_devices_blacklist[] = { 667 + { "INT3511", 0 }, 668 + { "INT3512", 0 }, 669 + { }, 670 + }; 671 + 666 672 static acpi_status acpi_serdev_add_device(acpi_handle handle, u32 level, 667 673 void *data, void **return_value) 668 674 { ··· 679 673 return AE_OK; 680 674 681 675 if (acpi_device_enumerated(adev)) 676 + return AE_OK; 677 + 678 + /* Skip if black listed */ 679 + if (!acpi_match_device_ids(adev, serdev_acpi_devices_blacklist)) 682 680 return AE_OK; 683 681 684 682 if (acpi_serdev_check_resources(ctrl, adev))
+1 -2
drivers/tty/tty_port.c
··· 89 89 { 90 90 if (WARN_ON(index >= driver->num)) 91 91 return; 92 - if (!driver->ports[index]) 93 - driver->ports[index] = port; 92 + driver->ports[index] = port; 94 93 } 95 94 EXPORT_SYMBOL_GPL(tty_port_link_device); 96 95
+5 -9
drivers/usb/cdns3/gadget.c
··· 1375 1375 */ 1376 1376 static irqreturn_t cdns3_device_irq_handler(int irq, void *data) 1377 1377 { 1378 - struct cdns3_device *priv_dev; 1379 - struct cdns3 *cdns = data; 1378 + struct cdns3_device *priv_dev = data; 1380 1379 irqreturn_t ret = IRQ_NONE; 1381 1380 u32 reg; 1382 - 1383 - priv_dev = cdns->gadget_dev; 1384 1381 1385 1382 /* check USB device interrupt */ 1386 1383 reg = readl(&priv_dev->regs->usb_ists); ··· 1416 1419 */ 1417 1420 static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data) 1418 1421 { 1419 - struct cdns3_device *priv_dev; 1420 - struct cdns3 *cdns = data; 1422 + struct cdns3_device *priv_dev = data; 1421 1423 irqreturn_t ret = IRQ_NONE; 1422 1424 unsigned long flags; 1423 1425 int bit; 1424 1426 u32 reg; 1425 1427 1426 - priv_dev = cdns->gadget_dev; 1427 1428 spin_lock_irqsave(&priv_dev->lock, flags); 1428 1429 1429 1430 reg = readl(&priv_dev->regs->usb_ists); ··· 2533 2538 2534 2539 priv_dev = cdns->gadget_dev; 2535 2540 2536 - devm_free_irq(cdns->dev, cdns->dev_irq, cdns); 2541 + devm_free_irq(cdns->dev, cdns->dev_irq, priv_dev); 2537 2542 2538 2543 pm_runtime_mark_last_busy(cdns->dev); 2539 2544 pm_runtime_put_autosuspend(cdns->dev); ··· 2704 2709 ret = devm_request_threaded_irq(cdns->dev, cdns->dev_irq, 2705 2710 cdns3_device_irq_handler, 2706 2711 cdns3_device_thread_irq_handler, 2707 - IRQF_SHARED, dev_name(cdns->dev), cdns); 2712 + IRQF_SHARED, dev_name(cdns->dev), 2713 + cdns->gadget_dev); 2708 2714 2709 2715 if (ret) 2710 2716 goto err0;
+3 -1
drivers/usb/chipidea/host.c
··· 26 26 27 27 struct ehci_ci_priv { 28 28 struct regulator *reg_vbus; 29 + bool enabled; 29 30 }; 30 31 31 32 static int ehci_ci_portpower(struct usb_hcd *hcd, int portnum, bool enable) ··· 38 37 int ret = 0; 39 38 int port = HCS_N_PORTS(ehci->hcs_params); 40 39 41 - if (priv->reg_vbus) { 40 + if (priv->reg_vbus && enable != priv->enabled) { 42 41 if (port > 1) { 43 42 dev_warn(dev, 44 43 "Not support multi-port regulator control\n"); ··· 54 53 enable ? "enable" : "disable", ret); 55 54 return ret; 56 55 } 56 + priv->enabled = enable; 57 57 } 58 58 59 59 if (enable && (ci->platdata->phy_mode == USBPHY_INTERFACE_MODE_HSIC)) {
+66 -16
drivers/usb/core/config.c
··· 203 203 [USB_ENDPOINT_XFER_INT] = 1024, 204 204 }; 205 205 206 - static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum, 207 - int asnum, struct usb_host_interface *ifp, int num_ep, 208 - unsigned char *buffer, int size) 206 + static bool endpoint_is_duplicate(struct usb_endpoint_descriptor *e1, 207 + struct usb_endpoint_descriptor *e2) 208 + { 209 + if (e1->bEndpointAddress == e2->bEndpointAddress) 210 + return true; 211 + 212 + if (usb_endpoint_xfer_control(e1) || usb_endpoint_xfer_control(e2)) { 213 + if (usb_endpoint_num(e1) == usb_endpoint_num(e2)) 214 + return true; 215 + } 216 + 217 + return false; 218 + } 219 + 220 + /* 221 + * Check for duplicate endpoint addresses in other interfaces and in the 222 + * altsetting currently being parsed. 223 + */ 224 + static bool config_endpoint_is_duplicate(struct usb_host_config *config, 225 + int inum, int asnum, struct usb_endpoint_descriptor *d) 226 + { 227 + struct usb_endpoint_descriptor *epd; 228 + struct usb_interface_cache *intfc; 229 + struct usb_host_interface *alt; 230 + int i, j, k; 231 + 232 + for (i = 0; i < config->desc.bNumInterfaces; ++i) { 233 + intfc = config->intf_cache[i]; 234 + 235 + for (j = 0; j < intfc->num_altsetting; ++j) { 236 + alt = &intfc->altsetting[j]; 237 + 238 + if (alt->desc.bInterfaceNumber == inum && 239 + alt->desc.bAlternateSetting != asnum) 240 + continue; 241 + 242 + for (k = 0; k < alt->desc.bNumEndpoints; ++k) { 243 + epd = &alt->endpoint[k].desc; 244 + 245 + if (endpoint_is_duplicate(epd, d)) 246 + return true; 247 + } 248 + } 249 + } 250 + 251 + return false; 252 + } 253 + 254 + static int usb_parse_endpoint(struct device *ddev, int cfgno, 255 + struct usb_host_config *config, int inum, int asnum, 256 + struct usb_host_interface *ifp, int num_ep, 257 + unsigned char *buffer, int size) 209 258 { 210 259 unsigned char *buffer0 = buffer; 211 260 struct usb_endpoint_descriptor *d; ··· 291 242 goto skip_to_next_endpoint_or_interface_descriptor; 292 243 293 244 
/* Check for duplicate endpoint addresses */ 294 - for (i = 0; i < ifp->desc.bNumEndpoints; ++i) { 295 - if (ifp->endpoint[i].desc.bEndpointAddress == 296 - d->bEndpointAddress) { 297 - dev_warn(ddev, "config %d interface %d altsetting %d has a duplicate endpoint with address 0x%X, skipping\n", 298 - cfgno, inum, asnum, d->bEndpointAddress); 299 - goto skip_to_next_endpoint_or_interface_descriptor; 300 - } 245 + if (config_endpoint_is_duplicate(config, inum, asnum, d)) { 246 + dev_warn(ddev, "config %d interface %d altsetting %d has a duplicate endpoint with address 0x%X, skipping\n", 247 + cfgno, inum, asnum, d->bEndpointAddress); 248 + goto skip_to_next_endpoint_or_interface_descriptor; 301 249 } 302 250 303 251 endpoint = &ifp->endpoint[ifp->desc.bNumEndpoints]; ··· 392 346 endpoint->desc.wMaxPacketSize = cpu_to_le16(8); 393 347 } 394 348 395 - /* Validate the wMaxPacketSize field */ 349 + /* 350 + * Validate the wMaxPacketSize field. 351 + * Some devices have isochronous endpoints in altsetting 0; 352 + * the USB-2 spec requires such endpoints to have wMaxPacketSize = 0 353 + * (see the end of section 5.6.3), so don't warn about them. 
354 + */ 396 355 maxp = usb_endpoint_maxp(&endpoint->desc); 397 - if (maxp == 0) { 398 - dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has wMaxPacketSize 0, skipping\n", 356 + if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) { 357 + dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n", 399 358 cfgno, inum, asnum, d->bEndpointAddress); 400 - goto skip_to_next_endpoint_or_interface_descriptor; 401 359 } 402 360 403 361 /* Find the highest legal maxpacket size for this endpoint */ ··· 572 522 if (((struct usb_descriptor_header *) buffer)->bDescriptorType 573 523 == USB_DT_INTERFACE) 574 524 break; 575 - retval = usb_parse_endpoint(ddev, cfgno, inum, asnum, alt, 576 - num_ep, buffer, size); 525 + retval = usb_parse_endpoint(ddev, cfgno, config, inum, asnum, 526 + alt, num_ep, buffer, size); 577 527 if (retval < 0) 578 528 return retval; 579 529 ++n;
+1 -1
drivers/usb/core/hub.c
··· 2692 2692 #define SET_ADDRESS_TRIES 2 2693 2693 #define GET_DESCRIPTOR_TRIES 2 2694 2694 #define SET_CONFIG_TRIES (2 * (use_both_schemes + 1)) 2695 - #define USE_NEW_SCHEME(i, scheme) ((i) / 2 == (int)scheme) 2695 + #define USE_NEW_SCHEME(i, scheme) ((i) / 2 == (int)(scheme)) 2696 2696 2697 2697 #define HUB_ROOT_RESET_TIME 60 /* times are in msec */ 2698 2698 #define HUB_SHORT_RESET_TIME 10
+7
drivers/usb/dwc3/gadget.c
··· 2467 2467 2468 2468 static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req) 2469 2469 { 2470 + /* 2471 + * For OUT direction, host may send less than the setup 2472 + * length. Return true for all OUT requests. 2473 + */ 2474 + if (!req->direction) 2475 + return true; 2476 + 2470 2477 return req->request.actual == req->request.length; 2471 2478 } 2472 2479
+1
drivers/usb/gadget/udc/Kconfig
··· 445 445 tristate "NVIDIA Tegra Superspeed USB 3.0 Device Controller" 446 446 depends on ARCH_TEGRA || COMPILE_TEST 447 447 depends on PHY_TEGRA_XUSB 448 + select USB_ROLE_SWITCH 448 449 help 449 450 Enables NVIDIA Tegra USB 3.0 device mode controller driver. 450 451
+6 -2
drivers/usb/host/ohci-da8xx.c
··· 415 415 } 416 416 417 417 da8xx_ohci->oc_gpio = devm_gpiod_get_optional(dev, "oc", GPIOD_IN); 418 - if (IS_ERR(da8xx_ohci->oc_gpio)) 418 + if (IS_ERR(da8xx_ohci->oc_gpio)) { 419 + error = PTR_ERR(da8xx_ohci->oc_gpio); 419 420 goto err; 421 + } 420 422 421 423 if (da8xx_ohci->oc_gpio) { 422 424 oc_irq = gpiod_to_irq(da8xx_ohci->oc_gpio); 423 - if (oc_irq < 0) 425 + if (oc_irq < 0) { 426 + error = oc_irq; 424 427 goto err; 428 + } 425 429 426 430 error = devm_request_threaded_irq(dev, oc_irq, NULL, 427 431 ohci_da8xx_oc_thread, IRQF_TRIGGER_RISING |
+5 -2
drivers/usb/musb/jz4740.c
··· 75 75 static int jz4740_musb_init(struct musb *musb) 76 76 { 77 77 struct device *dev = musb->controller->parent; 78 + int err; 78 79 79 80 if (dev->of_node) 80 81 musb->xceiv = devm_usb_get_phy_by_phandle(dev, "phys", 0); 81 82 else 82 83 musb->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2); 83 84 if (IS_ERR(musb->xceiv)) { 84 - dev_err(dev, "No transceiver configured\n"); 85 - return PTR_ERR(musb->xceiv); 85 + err = PTR_ERR(musb->xceiv); 86 + if (err != -EPROBE_DEFER) 87 + dev_err(dev, "No transceiver configured: %d", err); 88 + return err; 86 89 } 87 90 88 91 /* Silicon does not implement ConfigData register.
+11
drivers/usb/musb/musb_core.c
··· 1840 1840 #define MUSB_QUIRK_B_INVALID_VBUS_91 (MUSB_DEVCTL_BDEVICE | \ 1841 1841 (2 << MUSB_DEVCTL_VBUS_SHIFT) | \ 1842 1842 MUSB_DEVCTL_SESSION) 1843 + #define MUSB_QUIRK_B_DISCONNECT_99 (MUSB_DEVCTL_BDEVICE | \ 1844 + (3 << MUSB_DEVCTL_VBUS_SHIFT) | \ 1845 + MUSB_DEVCTL_SESSION) 1843 1846 #define MUSB_QUIRK_A_DISCONNECT_19 ((3 << MUSB_DEVCTL_VBUS_SHIFT) | \ 1844 1847 MUSB_DEVCTL_SESSION) 1845 1848 ··· 1865 1862 s = MUSB_DEVCTL_FSDEV | MUSB_DEVCTL_LSDEV | 1866 1863 MUSB_DEVCTL_HR; 1867 1864 switch (devctl & ~s) { 1865 + case MUSB_QUIRK_B_DISCONNECT_99: 1866 + musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n"); 1867 + schedule_delayed_work(&musb->irq_work, 1868 + msecs_to_jiffies(1000)); 1869 + break; 1868 1870 case MUSB_QUIRK_B_INVALID_VBUS_91: 1869 1871 if (musb->quirk_retries && !musb->flush_irq_work) { 1870 1872 musb_dbg(musb, ··· 2317 2309 musb_platform_disable(musb); 2318 2310 musb_disable_interrupts(musb); 2319 2311 musb_writeb(musb->mregs, MUSB_DEVCTL, 0); 2312 + 2313 + /* MUSB_POWER_SOFTCONN might be already set, JZ4740 does this. */ 2314 + musb_writeb(musb->mregs, MUSB_POWER, 0); 2320 2315 2321 2316 /* Init IRQ workqueue before request_irq */ 2322 2317 INIT_DELAYED_WORK(&musb->irq_work, musb_irq_work);
+1 -1
drivers/usb/musb/musbhsdma.c
··· 425 425 controller->controller.channel_abort = dma_channel_abort; 426 426 427 427 if (request_irq(irq, dma_controller_irq, 0, 428 - dev_name(musb->controller), &controller->controller)) { 428 + dev_name(musb->controller), controller)) { 429 429 dev_err(dev, "request_irq %d failed!\n", irq); 430 430 musb_dma_controller_destroy(&controller->controller); 431 431
+10
drivers/usb/serial/option.c
··· 567 567 /* Interface must have two endpoints */ 568 568 #define NUMEP2 BIT(16) 569 569 570 + /* Device needs ZLP */ 571 + #define ZLP BIT(17) 572 + 570 573 571 574 static const struct usb_device_id option_ids[] = { 572 575 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) }, ··· 1175 1172 .driver_info = NCTRL(0) | RSVD(3) }, 1176 1173 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1102, 0xff), /* Telit ME910 (ECM) */ 1177 1174 .driver_info = NCTRL(0) }, 1175 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x110a, 0xff), /* Telit ME910G1 */ 1176 + .driver_info = NCTRL(0) | RSVD(3) }, 1178 1177 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), 1179 1178 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1180 1179 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4), ··· 1201 1196 .driver_info = NCTRL(0) | RSVD(1) }, 1202 1197 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff), /* Telit LN940 (MBIM) */ 1203 1198 .driver_info = NCTRL(0) }, 1199 + { USB_DEVICE(TELIT_VENDOR_ID, 0x9010), /* Telit SBL FN980 flashing device */ 1200 + .driver_info = NCTRL(0) | ZLP }, 1204 1201 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ 1205 1202 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff), 1206 1203 .driver_info = RSVD(1) }, ··· 2103 2096 2104 2097 if (!(device_flags & NCTRL(iface_desc->bInterfaceNumber))) 2105 2098 data->use_send_setup = 1; 2099 + 2100 + if (device_flags & ZLP) 2101 + data->use_zlp = 1; 2106 2102 2107 2103 spin_lock_init(&data->susp_lock); 2108 2104
+1
drivers/usb/serial/usb-wwan.h
··· 38 38 spinlock_t susp_lock; 39 39 unsigned int suspended:1; 40 40 unsigned int use_send_setup:1; 41 + unsigned int use_zlp:1; 41 42 int in_flight; 42 43 unsigned int open_ports; 43 44 void *private;
+4
drivers/usb/serial/usb_wwan.c
··· 461 461 void (*callback) (struct urb *)) 462 462 { 463 463 struct usb_serial *serial = port->serial; 464 + struct usb_wwan_intf_private *intfdata = usb_get_serial_data(serial); 464 465 struct urb *urb; 465 466 466 467 urb = usb_alloc_urb(0, GFP_KERNEL); /* No ISO */ ··· 471 470 usb_fill_bulk_urb(urb, serial->dev, 472 471 usb_sndbulkpipe(serial->dev, endpoint) | dir, 473 472 buf, len, callback, ctx); 473 + 474 + if (intfdata->use_zlp && dir == USB_DIR_OUT) 475 + urb->transfer_flags |= URB_ZERO_PACKET; 474 476 475 477 return urb; 476 478 }
+15 -5
drivers/usb/typec/tcpm/tcpci.c
··· 432 432 433 433 if (status & TCPC_ALERT_RX_STATUS) { 434 434 struct pd_message msg; 435 - unsigned int cnt; 435 + unsigned int cnt, payload_cnt; 436 436 u16 header; 437 437 438 438 regmap_read(tcpci->regmap, TCPC_RX_BYTE_CNT, &cnt); 439 + /* 440 + * 'cnt' corresponds to READABLE_BYTE_COUNT in section 4.4.14 441 + * of the TCPCI spec [Rev 2.0 Ver 1.0 October 2017] and is 442 + * defined in table 4-36 as one greater than the number of 443 + * bytes received. And that number includes the header. So: 444 + */ 445 + if (cnt > 3) 446 + payload_cnt = cnt - (1 + sizeof(msg.header)); 447 + else 448 + payload_cnt = 0; 439 449 440 450 tcpci_read16(tcpci, TCPC_RX_HDR, &header); 441 451 msg.header = cpu_to_le16(header); 442 452 443 - if (WARN_ON(cnt > sizeof(msg.payload))) 444 - cnt = sizeof(msg.payload); 453 + if (WARN_ON(payload_cnt > sizeof(msg.payload))) 454 + payload_cnt = sizeof(msg.payload); 445 455 446 - if (cnt > 0) 456 + if (payload_cnt > 0) 447 457 regmap_raw_read(tcpci->regmap, TCPC_RX_DATA, 448 - &msg.payload, cnt); 458 + &msg.payload, payload_cnt); 449 459 450 460 /* Read complete, clear RX status alert bit */ 451 461 tcpci_write16(tcpci, TCPC_ALERT, TCPC_ALERT_RX_STATUS);
+9 -9
drivers/usb/typec/ucsi/ucsi.h
··· 99 99 #define UCSI_ENABLE_NTFY_CMD_COMPLETE BIT(16) 100 100 #define UCSI_ENABLE_NTFY_EXT_PWR_SRC_CHANGE BIT(17) 101 101 #define UCSI_ENABLE_NTFY_PWR_OPMODE_CHANGE BIT(18) 102 - #define UCSI_ENABLE_NTFY_CAP_CHANGE BIT(19) 103 - #define UCSI_ENABLE_NTFY_PWR_LEVEL_CHANGE BIT(20) 104 - #define UCSI_ENABLE_NTFY_PD_RESET_COMPLETE BIT(21) 105 - #define UCSI_ENABLE_NTFY_CAM_CHANGE BIT(22) 106 - #define UCSI_ENABLE_NTFY_BAT_STATUS_CHANGE BIT(23) 107 - #define UCSI_ENABLE_NTFY_PARTNER_CHANGE BIT(24) 108 - #define UCSI_ENABLE_NTFY_PWR_DIR_CHANGE BIT(25) 109 - #define UCSI_ENABLE_NTFY_CONNECTOR_CHANGE BIT(26) 110 - #define UCSI_ENABLE_NTFY_ERROR BIT(27) 102 + #define UCSI_ENABLE_NTFY_CAP_CHANGE BIT(21) 103 + #define UCSI_ENABLE_NTFY_PWR_LEVEL_CHANGE BIT(22) 104 + #define UCSI_ENABLE_NTFY_PD_RESET_COMPLETE BIT(23) 105 + #define UCSI_ENABLE_NTFY_CAM_CHANGE BIT(24) 106 + #define UCSI_ENABLE_NTFY_BAT_STATUS_CHANGE BIT(25) 107 + #define UCSI_ENABLE_NTFY_PARTNER_CHANGE BIT(27) 108 + #define UCSI_ENABLE_NTFY_PWR_DIR_CHANGE BIT(28) 109 + #define UCSI_ENABLE_NTFY_CONNECTOR_CHANGE BIT(30) 110 + #define UCSI_ENABLE_NTFY_ERROR BIT(31) 111 111 #define UCSI_ENABLE_NTFY_ALL 0xdbe70000 112 112 113 113 /* SET_UOR command bits */
+2
drivers/watchdog/Kconfig
··· 687 687 config MAX77620_WATCHDOG 688 688 tristate "Maxim Max77620 Watchdog Timer" 689 689 depends on MFD_MAX77620 || COMPILE_TEST 690 + select WATCHDOG_CORE 690 691 help 691 692 This is the driver for the Max77620 watchdog timer. 692 693 Say 'Y' here to enable the watchdog timer support for ··· 1445 1444 config TQMX86_WDT 1446 1445 tristate "TQ-Systems TQMX86 Watchdog Timer" 1447 1446 depends on X86 1447 + select WATCHDOG_CORE 1448 1448 help 1449 1449 This is the driver for the hardware watchdog timer in the TQMX86 IO 1450 1450 controller found on some of their ComExpress Modules.
+1 -1
drivers/watchdog/imx7ulp_wdt.c
··· 112 112 { 113 113 struct imx7ulp_wdt_device *wdt = watchdog_get_drvdata(wdog); 114 114 115 - imx7ulp_wdt_enable(wdt->base, true); 115 + imx7ulp_wdt_enable(wdog, true); 116 116 imx7ulp_wdt_set_timeout(&wdt->wdd, 1); 117 117 118 118 /* wait for wdog to fire */
+2 -2
drivers/watchdog/orion_wdt.c
··· 602 602 set_bit(WDOG_HW_RUNNING, &dev->wdt.status); 603 603 604 604 /* Request the IRQ only after the watchdog is disabled */ 605 - irq = platform_get_irq(pdev, 0); 605 + irq = platform_get_irq_optional(pdev, 0); 606 606 if (irq > 0) { 607 607 /* 608 608 * Not all supported platforms specify an interrupt for the ··· 617 617 } 618 618 619 619 /* Optional 2nd interrupt for pretimeout */ 620 - irq = platform_get_irq(pdev, 1); 620 + irq = platform_get_irq_optional(pdev, 1); 621 621 if (irq > 0) { 622 622 orion_wdt_info.options |= WDIOF_PRETIMEOUT; 623 623 ret = devm_request_irq(&pdev->dev, irq, orion_wdt_pre_irq,
+1
drivers/watchdog/rn5t618_wdt.c
··· 188 188 189 189 module_platform_driver(rn5t618_wdt_driver); 190 190 191 + MODULE_ALIAS("platform:rn5t618-wdt"); 191 192 MODULE_AUTHOR("Beniamino Galvani <b.galvani@gmail.com>"); 192 193 MODULE_DESCRIPTION("RN5T618 watchdog driver"); 193 194 MODULE_LICENSE("GPL v2");
+1 -1
drivers/watchdog/w83627hf_wdt.c
··· 420 420 cr_wdt_csr = NCT6102D_WDT_CSR; 421 421 break; 422 422 case NCT6116_ID: 423 - ret = nct6102; 423 + ret = nct6116; 424 424 cr_wdt_timeout = NCT6102D_WDT_TIMEOUT; 425 425 cr_wdt_control = NCT6102D_WDT_CONTROL; 426 426 cr_wdt_csr = NCT6102D_WDT_CSR;
+6 -1
fs/btrfs/compression.c
··· 447 447 448 448 if (blkcg_css) { 449 449 bio->bi_opf |= REQ_CGROUP_PUNT; 450 - bio_associate_blkg_from_css(bio, blkcg_css); 450 + kthread_associate_blkcg(blkcg_css); 451 451 } 452 452 refcount_set(&cb->pending_bios, 1); 453 453 ··· 491 491 bio->bi_opf = REQ_OP_WRITE | write_flags; 492 492 bio->bi_private = cb; 493 493 bio->bi_end_io = end_compressed_bio_write; 494 + if (blkcg_css) 495 + bio->bi_opf |= REQ_CGROUP_PUNT; 494 496 bio_add_page(bio, page, PAGE_SIZE, 0); 495 497 } 496 498 if (bytes_left < PAGE_SIZE) { ··· 518 516 bio->bi_status = ret; 519 517 bio_endio(bio); 520 518 } 519 + 520 + if (blkcg_css) 521 + kthread_associate_blkcg(NULL); 521 522 522 523 return 0; 523 524 }
+3 -3
fs/btrfs/inode.c
··· 1479 1479 disk_num_bytes = 1480 1480 btrfs_file_extent_disk_num_bytes(leaf, fi); 1481 1481 /* 1482 - * If extent we got ends before our range starts, skip 1483 - * to next extent 1482 + * If the extent we got ends before our current offset, 1483 + * skip to the next extent. 1484 1484 */ 1485 - if (extent_end <= start) { 1485 + if (extent_end <= cur_offset) { 1486 1486 path->slots[0]++; 1487 1487 goto next_slot; 1488 1488 }
+5 -28
fs/buffer.c
··· 3031 3031 * errors, this only handles the "we need to be able to 3032 3032 * do IO at the final sector" case. 3033 3033 */ 3034 - void guard_bio_eod(int op, struct bio *bio) 3034 + void guard_bio_eod(struct bio *bio) 3035 3035 { 3036 3036 sector_t maxsector; 3037 - struct bio_vec *bvec = bio_last_bvec_all(bio); 3038 - unsigned truncated_bytes; 3039 3037 struct hd_struct *part; 3040 3038 3041 3039 rcu_read_lock(); ··· 3059 3061 if (likely((bio->bi_iter.bi_size >> 9) <= maxsector)) 3060 3062 return; 3061 3063 3062 - /* Uhhuh. We've got a bio that straddles the device size! */ 3063 - truncated_bytes = bio->bi_iter.bi_size - (maxsector << 9); 3064 - 3065 - /* 3066 - * The bio contains more than one segment which spans EOD, just return 3067 - * and let IO layer turn it into an EIO 3068 - */ 3069 - if (truncated_bytes > bvec->bv_len) 3070 - return; 3071 - 3072 - /* Truncate the bio.. */ 3073 - bio->bi_iter.bi_size -= truncated_bytes; 3074 - bvec->bv_len -= truncated_bytes; 3075 - 3076 - /* ..and clear the end of the buffer for reads */ 3077 - if (op == REQ_OP_READ) { 3078 - struct bio_vec bv; 3079 - 3080 - mp_bvec_last_segment(bvec, &bv); 3081 - zero_user(bv.bv_page, bv.bv_offset + bv.bv_len, 3082 - truncated_bytes); 3083 - } 3064 + bio_truncate(bio, maxsector << 9); 3084 3065 } 3085 3066 3086 3067 static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, ··· 3095 3118 bio->bi_end_io = end_bio_bh_io_sync; 3096 3119 bio->bi_private = bh; 3097 3120 3098 - /* Take care of bh's that straddle the end of the device */ 3099 - guard_bio_eod(op, bio); 3100 - 3101 3121 if (buffer_meta(bh)) 3102 3122 op_flags |= REQ_META; 3103 3123 if (buffer_prio(bh)) 3104 3124 op_flags |= REQ_PRIO; 3105 3125 bio_set_op_attrs(bio, op, op_flags); 3126 + 3127 + /* Take care of bh's that straddle the end of the device */ 3128 + guard_bio_eod(bio); 3106 3129 3107 3130 if (wbc) { 3108 3131 wbc_init_bio(wbc, bio);
+1 -1
fs/char_dev.c
··· 352 352 353 353 if (owner && !try_module_get(owner)) 354 354 return NULL; 355 - kobj = kobject_get(&p->kobj); 355 + kobj = kobject_get_unless_zero(&p->kobj); 356 356 if (!kobj) 357 357 module_put(owner); 358 358 return kobj;
+1
fs/cifs/cifsglob.h
··· 1693 1693 struct timespec64 cf_atime; 1694 1694 struct timespec64 cf_mtime; 1695 1695 struct timespec64 cf_ctime; 1696 + u32 cf_cifstag; 1696 1697 }; 1697 1698 1698 1699 static inline void free_dfs_info_param(struct dfs_info3_param *param)
+54 -9
fs/cifs/readdir.c
··· 139 139 dput(dentry); 140 140 } 141 141 142 + static bool reparse_file_needs_reval(const struct cifs_fattr *fattr) 143 + { 144 + if (!(fattr->cf_cifsattrs & ATTR_REPARSE)) 145 + return false; 146 + /* 147 + * The DFS tags should be only intepreted by server side as per 148 + * MS-FSCC 2.1.2.1, but let's include them anyway. 149 + * 150 + * Besides, if cf_cifstag is unset (0), then we still need it to be 151 + * revalidated to know exactly what reparse point it is. 152 + */ 153 + switch (fattr->cf_cifstag) { 154 + case IO_REPARSE_TAG_DFS: 155 + case IO_REPARSE_TAG_DFSR: 156 + case IO_REPARSE_TAG_SYMLINK: 157 + case IO_REPARSE_TAG_NFS: 158 + case 0: 159 + return true; 160 + } 161 + return false; 162 + } 163 + 142 164 static void 143 165 cifs_fill_common_info(struct cifs_fattr *fattr, struct cifs_sb_info *cifs_sb) 144 166 { ··· 180 158 * is a symbolic link, DFS referral or a reparse point with a direct 181 159 * access like junctions, deduplicated files, NFS symlinks. 182 160 */ 183 - if (fattr->cf_cifsattrs & ATTR_REPARSE) 161 + if (reparse_file_needs_reval(fattr)) 184 162 fattr->cf_flags |= CIFS_FATTR_NEED_REVAL; 185 163 186 164 /* non-unix readdir doesn't provide nlink */ ··· 216 194 } 217 195 } 218 196 197 + static void __dir_info_to_fattr(struct cifs_fattr *fattr, const void *info) 198 + { 199 + const FILE_DIRECTORY_INFO *fi = info; 200 + 201 + memset(fattr, 0, sizeof(*fattr)); 202 + fattr->cf_cifsattrs = le32_to_cpu(fi->ExtFileAttributes); 203 + fattr->cf_eof = le64_to_cpu(fi->EndOfFile); 204 + fattr->cf_bytes = le64_to_cpu(fi->AllocationSize); 205 + fattr->cf_createtime = le64_to_cpu(fi->CreationTime); 206 + fattr->cf_atime = cifs_NTtimeToUnix(fi->LastAccessTime); 207 + fattr->cf_ctime = cifs_NTtimeToUnix(fi->ChangeTime); 208 + fattr->cf_mtime = cifs_NTtimeToUnix(fi->LastWriteTime); 209 + } 210 + 219 211 void 220 212 cifs_dir_info_to_fattr(struct cifs_fattr *fattr, FILE_DIRECTORY_INFO *info, 221 213 struct cifs_sb_info *cifs_sb) 222 214 { 223 - 
memset(fattr, 0, sizeof(*fattr)); 224 - fattr->cf_cifsattrs = le32_to_cpu(info->ExtFileAttributes); 225 - fattr->cf_eof = le64_to_cpu(info->EndOfFile); 226 - fattr->cf_bytes = le64_to_cpu(info->AllocationSize); 227 - fattr->cf_createtime = le64_to_cpu(info->CreationTime); 228 - fattr->cf_atime = cifs_NTtimeToUnix(info->LastAccessTime); 229 - fattr->cf_ctime = cifs_NTtimeToUnix(info->ChangeTime); 230 - fattr->cf_mtime = cifs_NTtimeToUnix(info->LastWriteTime); 215 + __dir_info_to_fattr(fattr, info); 216 + cifs_fill_common_info(fattr, cifs_sb); 217 + } 231 218 219 + static void cifs_fulldir_info_to_fattr(struct cifs_fattr *fattr, 220 + SEARCH_ID_FULL_DIR_INFO *info, 221 + struct cifs_sb_info *cifs_sb) 222 + { 223 + __dir_info_to_fattr(fattr, info); 224 + 225 + /* See MS-FSCC 2.4.18 FileIdFullDirectoryInformation */ 226 + if (fattr->cf_cifsattrs & ATTR_REPARSE) 227 + fattr->cf_cifstag = le32_to_cpu(info->EaSize); 232 228 cifs_fill_common_info(fattr, cifs_sb); 233 229 } 234 230 ··· 794 754 cifs_std_info_to_fattr(&fattr, 795 755 (FIND_FILE_STANDARD_INFO *)find_entry, 796 756 cifs_sb); 757 + break; 758 + case SMB_FIND_FILE_ID_FULL_DIR_INFO: 759 + cifs_fulldir_info_to_fattr(&fattr, 760 + (SEARCH_ID_FULL_DIR_INFO *)find_entry, 761 + cifs_sb); 797 762 break; 798 763 default: 799 764 cifs_dir_info_to_fattr(&fattr,
+1 -1
fs/cifs/smb2file.c
··· 67 67 goto out; 68 68 69 69 70 - if (oparms->tcon->use_resilient) { 70 + if (oparms->tcon->use_resilient) { 71 71 /* default timeout is 0, servers pick default (120 seconds) */ 72 72 nr_ioctl_req.Timeout = 73 73 cpu_to_le32(oparms->tcon->handle_timeout);
+2
fs/direct-io.c
··· 39 39 #include <linux/atomic.h> 40 40 #include <linux/prefetch.h> 41 41 42 + #include "internal.h" 43 + 42 44 /* 43 45 * How many user pages to map in one call to get_user_pages(). This determines 44 46 * the size of a structure in the slab cache
+6 -1
fs/file.c
··· 960 960 return ksys_dup3(oldfd, newfd, 0); 961 961 } 962 962 963 - SYSCALL_DEFINE1(dup, unsigned int, fildes) 963 + int ksys_dup(unsigned int fildes) 964 964 { 965 965 int ret = -EBADF; 966 966 struct file *file = fget_raw(fildes); ··· 973 973 fput(file); 974 974 } 975 975 return ret; 976 + } 977 + 978 + SYSCALL_DEFINE1(dup, unsigned int, fildes) 979 + { 980 + return ksys_dup(fildes); 976 981 } 977 982 978 983 int f_dupfd(unsigned int from, struct file *file, unsigned flags)
+3 -1
fs/hugetlbfs/inode.c
··· 1498 1498 /* other hstates are optional */ 1499 1499 i = 0; 1500 1500 for_each_hstate(h) { 1501 - if (i == default_hstate_idx) 1501 + if (i == default_hstate_idx) { 1502 + i++; 1502 1503 continue; 1504 + } 1503 1505 1504 1506 mnt = mount_one_hugetlbfs(h); 1505 1507 if (IS_ERR(mnt))
+1 -1
fs/internal.h
··· 38 38 /* 39 39 * buffer.c 40 40 */ 41 - extern void guard_bio_eod(int rw, struct bio *bio); 41 + extern void guard_bio_eod(struct bio *bio); 42 42 extern int __block_write_begin_int(struct page *page, loff_t pos, unsigned len, 43 43 get_block_t *get_block, struct iomap *iomap); 44 44
+2 -8
fs/io-wq.c
··· 92 92 struct io_wqe_acct acct[2]; 93 93 94 94 struct hlist_nulls_head free_list; 95 - struct hlist_nulls_head busy_list; 96 95 struct list_head all_list; 97 96 98 97 struct io_wq *wq; ··· 326 327 if (worker->flags & IO_WORKER_F_FREE) { 327 328 worker->flags &= ~IO_WORKER_F_FREE; 328 329 hlist_nulls_del_init_rcu(&worker->nulls_node); 329 - hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->busy_list); 330 330 } 331 331 332 332 /* ··· 363 365 { 364 366 if (!(worker->flags & IO_WORKER_F_FREE)) { 365 367 worker->flags |= IO_WORKER_F_FREE; 366 - hlist_nulls_del_init_rcu(&worker->nulls_node); 367 368 hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list); 368 369 } 369 370 ··· 428 431 /* flush any pending signals before assigning new work */ 429 432 if (signal_pending(current)) 430 433 flush_signals(current); 434 + 435 + cond_resched(); 431 436 432 437 spin_lock_irq(&worker->lock); 433 438 worker->cur_work = work; ··· 797 798 798 799 set_bit(IO_WQ_BIT_CANCEL, &wq->state); 799 800 800 - /* 801 - * Browse both lists, as there's a gap between handing work off 802 - * to a worker and the worker putting itself on the busy_list 803 - */ 804 801 rcu_read_lock(); 805 802 for_each_node(node) { 806 803 struct io_wqe *wqe = wq->wqes[node]; ··· 1044 1049 spin_lock_init(&wqe->lock); 1045 1050 INIT_WQ_LIST(&wqe->work_list); 1046 1051 INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0); 1047 - INIT_HLIST_NULLS_HEAD(&wqe->busy_list, 1); 1048 1052 INIT_LIST_HEAD(&wqe->all_list); 1049 1053 } 1050 1054
+355 -347
fs/io_uring.c
··· 330 330 struct file *file; 331 331 u64 addr; 332 332 int flags; 333 + unsigned count; 334 + }; 335 + 336 + struct io_rw { 337 + /* NOTE: kiocb has the file as the first member, so don't do it here */ 338 + struct kiocb kiocb; 339 + u64 addr; 340 + u64 len; 341 + }; 342 + 343 + struct io_connect { 344 + struct file *file; 345 + struct sockaddr __user *addr; 346 + int addr_len; 347 + }; 348 + 349 + struct io_sr_msg { 350 + struct file *file; 351 + struct user_msghdr __user *msg; 352 + int msg_flags; 333 353 }; 334 354 335 355 struct io_async_connect { ··· 371 351 }; 372 352 373 353 struct io_async_ctx { 374 - struct io_uring_sqe sqe; 375 354 union { 376 355 struct io_async_rw rw; 377 356 struct io_async_msghdr msg; ··· 388 369 struct io_kiocb { 389 370 union { 390 371 struct file *file; 391 - struct kiocb rw; 372 + struct io_rw rw; 392 373 struct io_poll_iocb poll; 393 374 struct io_accept accept; 394 375 struct io_sync sync; 395 376 struct io_cancel cancel; 396 377 struct io_timeout timeout; 378 + struct io_connect connect; 379 + struct io_sr_msg sr_msg; 397 380 }; 398 381 399 - const struct io_uring_sqe *sqe; 400 382 struct io_async_ctx *io; 401 383 struct file *ring_file; 402 384 int ring_fd; ··· 431 411 #define REQ_F_INFLIGHT 16384 /* on inflight list */ 432 412 #define REQ_F_COMP_LOCKED 32768 /* completion under lock */ 433 413 #define REQ_F_HARDLINK 65536 /* doesn't sever on completion < 0 */ 434 - #define REQ_F_PREPPED 131072 /* request already opcode prepared */ 435 414 u64 user_data; 436 415 u32 result; 437 416 u32 sequence; ··· 628 609 { 629 610 bool do_hashed = false; 630 611 631 - if (req->sqe) { 632 - switch (req->opcode) { 633 - case IORING_OP_WRITEV: 634 - case IORING_OP_WRITE_FIXED: 635 - /* only regular files should be hashed for writes */ 636 - if (req->flags & REQ_F_ISREG) 637 - do_hashed = true; 638 - /* fall-through */ 639 - case IORING_OP_READV: 640 - case IORING_OP_READ_FIXED: 641 - case IORING_OP_SENDMSG: 642 - case IORING_OP_RECVMSG: 643 
- case IORING_OP_ACCEPT: 644 - case IORING_OP_POLL_ADD: 645 - case IORING_OP_CONNECT: 646 - /* 647 - * We know REQ_F_ISREG is not set on some of these 648 - * opcodes, but this enables us to keep the check in 649 - * just one place. 650 - */ 651 - if (!(req->flags & REQ_F_ISREG)) 652 - req->work.flags |= IO_WQ_WORK_UNBOUND; 653 - break; 654 - } 655 - if (io_req_needs_user(req)) 656 - req->work.flags |= IO_WQ_WORK_NEEDS_USER; 612 + switch (req->opcode) { 613 + case IORING_OP_WRITEV: 614 + case IORING_OP_WRITE_FIXED: 615 + /* only regular files should be hashed for writes */ 616 + if (req->flags & REQ_F_ISREG) 617 + do_hashed = true; 618 + /* fall-through */ 619 + case IORING_OP_READV: 620 + case IORING_OP_READ_FIXED: 621 + case IORING_OP_SENDMSG: 622 + case IORING_OP_RECVMSG: 623 + case IORING_OP_ACCEPT: 624 + case IORING_OP_POLL_ADD: 625 + case IORING_OP_CONNECT: 626 + /* 627 + * We know REQ_F_ISREG is not set on some of these 628 + * opcodes, but this enables us to keep the check in 629 + * just one place. 630 + */ 631 + if (!(req->flags & REQ_F_ISREG)) 632 + req->work.flags |= IO_WQ_WORK_UNBOUND; 633 + break; 657 634 } 635 + if (io_req_needs_user(req)) 636 + req->work.flags |= IO_WQ_WORK_NEEDS_USER; 658 637 659 638 *link = io_prep_linked_timeout(req); 660 639 return do_hashed; ··· 1197 1180 1198 1181 ret = 0; 1199 1182 list_for_each_entry_safe(req, tmp, &ctx->poll_list, list) { 1200 - struct kiocb *kiocb = &req->rw; 1183 + struct kiocb *kiocb = &req->rw.kiocb; 1201 1184 1202 1185 /* 1203 1186 * Move completed entries to our local list. 
If we find a ··· 1352 1335 1353 1336 static void io_complete_rw_common(struct kiocb *kiocb, long res) 1354 1337 { 1355 - struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw); 1338 + struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb); 1356 1339 1357 1340 if (kiocb->ki_flags & IOCB_WRITE) 1358 1341 kiocb_end_write(req); ··· 1364 1347 1365 1348 static void io_complete_rw(struct kiocb *kiocb, long res, long res2) 1366 1349 { 1367 - struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw); 1350 + struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb); 1368 1351 1369 1352 io_complete_rw_common(kiocb, res); 1370 1353 io_put_req(req); ··· 1372 1355 1373 1356 static struct io_kiocb *__io_complete_rw(struct kiocb *kiocb, long res) 1374 1357 { 1375 - struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw); 1358 + struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb); 1376 1359 struct io_kiocb *nxt = NULL; 1377 1360 1378 1361 io_complete_rw_common(kiocb, res); ··· 1383 1366 1384 1367 static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2) 1385 1368 { 1386 - struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw); 1369 + struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb); 1387 1370 1388 1371 if (kiocb->ki_flags & IOCB_WRITE) 1389 1372 kiocb_end_write(req); ··· 1417 1400 1418 1401 list_req = list_first_entry(&ctx->poll_list, struct io_kiocb, 1419 1402 list); 1420 - if (list_req->rw.ki_filp != req->rw.ki_filp) 1403 + if (list_req->file != req->file) 1421 1404 ctx->poll_multi_file = true; 1422 1405 } 1423 1406 ··· 1488 1471 return false; 1489 1472 } 1490 1473 1491 - static int io_prep_rw(struct io_kiocb *req, bool force_nonblock) 1474 + static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe, 1475 + bool force_nonblock) 1492 1476 { 1493 - const struct io_uring_sqe *sqe = req->sqe; 1494 1477 struct io_ring_ctx *ctx = req->ctx; 1495 - struct 
kiocb *kiocb = &req->rw; 1478 + struct kiocb *kiocb = &req->rw.kiocb; 1496 1479 unsigned ioprio; 1497 1480 int ret; 1498 1481 ··· 1541 1524 return -EINVAL; 1542 1525 kiocb->ki_complete = io_complete_rw; 1543 1526 } 1527 + 1528 + req->rw.addr = READ_ONCE(sqe->addr); 1529 + req->rw.len = READ_ONCE(sqe->len); 1530 + /* we own ->private, reuse it for the buffer index */ 1531 + req->rw.kiocb.private = (void *) (unsigned long) 1532 + READ_ONCE(sqe->buf_index); 1544 1533 return 0; 1545 1534 } 1546 1535 ··· 1580 1557 io_rw_done(kiocb, ret); 1581 1558 } 1582 1559 1583 - static ssize_t io_import_fixed(struct io_ring_ctx *ctx, int rw, 1584 - const struct io_uring_sqe *sqe, 1560 + static ssize_t io_import_fixed(struct io_kiocb *req, int rw, 1585 1561 struct iov_iter *iter) 1586 1562 { 1587 - size_t len = READ_ONCE(sqe->len); 1563 + struct io_ring_ctx *ctx = req->ctx; 1564 + size_t len = req->rw.len; 1588 1565 struct io_mapped_ubuf *imu; 1589 1566 unsigned index, buf_index; 1590 1567 size_t offset; ··· 1594 1571 if (unlikely(!ctx->user_bufs)) 1595 1572 return -EFAULT; 1596 1573 1597 - buf_index = READ_ONCE(sqe->buf_index); 1574 + buf_index = (unsigned long) req->rw.kiocb.private; 1598 1575 if (unlikely(buf_index >= ctx->nr_user_bufs)) 1599 1576 return -EFAULT; 1600 1577 1601 1578 index = array_index_nospec(buf_index, ctx->nr_user_bufs); 1602 1579 imu = &ctx->user_bufs[index]; 1603 - buf_addr = READ_ONCE(sqe->addr); 1580 + buf_addr = req->rw.addr; 1604 1581 1605 1582 /* overflow */ 1606 1583 if (buf_addr + len < buf_addr) ··· 1657 1634 static ssize_t io_import_iovec(int rw, struct io_kiocb *req, 1658 1635 struct iovec **iovec, struct iov_iter *iter) 1659 1636 { 1660 - const struct io_uring_sqe *sqe = req->sqe; 1661 - void __user *buf = u64_to_user_ptr(READ_ONCE(sqe->addr)); 1662 - size_t sqe_len = READ_ONCE(sqe->len); 1637 + void __user *buf = u64_to_user_ptr(req->rw.addr); 1638 + size_t sqe_len = req->rw.len; 1663 1639 u8 opcode; 1664 1640 1665 - /* 1666 - * We're reading 
->opcode for the second time, but the first read 1667 - * doesn't care whether it's _FIXED or not, so it doesn't matter 1668 - * whether ->opcode changes concurrently. The first read does care 1669 - * about whether it is a READ or a WRITE, so we don't trust this read 1670 - * for that purpose and instead let the caller pass in the read/write 1671 - * flag. 1672 - */ 1673 1641 opcode = req->opcode; 1674 1642 if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) { 1675 1643 *iovec = NULL; 1676 - return io_import_fixed(req->ctx, rw, sqe, iter); 1644 + return io_import_fixed(req, rw, iter); 1677 1645 } 1646 + 1647 + /* buffer index only valid with fixed read/write */ 1648 + if (req->rw.kiocb.private) 1649 + return -EINVAL; 1678 1650 1679 1651 if (req->io) { 1680 1652 struct io_async_rw *iorw = &req->io->rw; ··· 1768 1750 static int io_alloc_async_ctx(struct io_kiocb *req) 1769 1751 { 1770 1752 req->io = kmalloc(sizeof(*req->io), GFP_KERNEL); 1771 - if (req->io) { 1772 - memcpy(&req->io->sqe, req->sqe, sizeof(req->io->sqe)); 1773 - req->sqe = &req->io->sqe; 1774 - return 0; 1775 - } 1776 - 1777 - return 1; 1753 + return req->io == NULL; 1778 1754 } 1779 1755 1780 1756 static void io_rw_async(struct io_wq_work **workptr) ··· 1794 1782 return 0; 1795 1783 } 1796 1784 1797 - static int io_read_prep(struct io_kiocb *req, struct iovec **iovec, 1798 - struct iov_iter *iter, bool force_nonblock) 1785 + static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe, 1786 + bool force_nonblock) 1799 1787 { 1788 + struct io_async_ctx *io; 1789 + struct iov_iter iter; 1800 1790 ssize_t ret; 1801 1791 1802 - ret = io_prep_rw(req, force_nonblock); 1792 + ret = io_prep_rw(req, sqe, force_nonblock); 1803 1793 if (ret) 1804 1794 return ret; 1805 1795 1806 1796 if (unlikely(!(req->file->f_mode & FMODE_READ))) 1807 1797 return -EBADF; 1808 1798 1809 - return io_import_iovec(READ, req, iovec, iter); 1799 + if (!req->io) 1800 + return 0; 1801 + 1802 + io = 
req->io; 1803 + io->rw.iov = io->rw.fast_iov; 1804 + req->io = NULL; 1805 + ret = io_import_iovec(READ, req, &io->rw.iov, &iter); 1806 + req->io = io; 1807 + if (ret < 0) 1808 + return ret; 1809 + 1810 + io_req_map_rw(req, ret, io->rw.iov, io->rw.fast_iov, &iter); 1811 + return 0; 1810 1812 } 1811 1813 1812 1814 static int io_read(struct io_kiocb *req, struct io_kiocb **nxt, 1813 1815 bool force_nonblock) 1814 1816 { 1815 1817 struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; 1816 - struct kiocb *kiocb = &req->rw; 1818 + struct kiocb *kiocb = &req->rw.kiocb; 1817 1819 struct iov_iter iter; 1818 - struct file *file; 1819 1820 size_t iov_count; 1820 1821 ssize_t io_size, ret; 1821 1822 1822 - if (!req->io) { 1823 - ret = io_read_prep(req, &iovec, &iter, force_nonblock); 1824 - if (ret < 0) 1825 - return ret; 1826 - } else { 1827 - ret = io_import_iovec(READ, req, &iovec, &iter); 1828 - if (ret < 0) 1829 - return ret; 1830 - } 1823 + ret = io_import_iovec(READ, req, &iovec, &iter); 1824 + if (ret < 0) 1825 + return ret; 1831 1826 1832 1827 /* Ensure we clear previously set non-block flag */ 1833 1828 if (!force_nonblock) 1834 - req->rw.ki_flags &= ~IOCB_NOWAIT; 1829 + req->rw.kiocb.ki_flags &= ~IOCB_NOWAIT; 1835 1830 1836 - file = req->file; 1837 1831 io_size = ret; 1838 1832 if (req->flags & REQ_F_LINK) 1839 1833 req->result = io_size; ··· 1848 1830 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so 1849 1831 * we know to async punt it even if it was opened O_NONBLOCK 1850 1832 */ 1851 - if (force_nonblock && !io_file_supports_async(file)) { 1833 + if (force_nonblock && !io_file_supports_async(req->file)) { 1852 1834 req->flags |= REQ_F_MUST_PUNT; 1853 1835 goto copy_iov; 1854 1836 } 1855 1837 1856 1838 iov_count = iov_iter_count(&iter); 1857 - ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_count); 1839 + ret = rw_verify_area(READ, req->file, &kiocb->ki_pos, iov_count); 1858 1840 if (!ret) { 1859 1841 ssize_t ret2; 1860 1842 1861 - if 
(file->f_op->read_iter) 1862 - ret2 = call_read_iter(file, kiocb, &iter); 1843 + if (req->file->f_op->read_iter) 1844 + ret2 = call_read_iter(req->file, kiocb, &iter); 1863 1845 else 1864 - ret2 = loop_rw_iter(READ, file, kiocb, &iter); 1846 + ret2 = loop_rw_iter(READ, req->file, kiocb, &iter); 1865 1847 1866 - /* 1867 - * In case of a short read, punt to async. This can happen 1868 - * if we have data partially cached. Alternatively we can 1869 - * return the short read, in which case the application will 1870 - * need to issue another SQE and wait for it. That SQE will 1871 - * need async punt anyway, so it's more efficient to do it 1872 - * here. 1873 - */ 1874 - if (force_nonblock && !(req->flags & REQ_F_NOWAIT) && 1875 - (req->flags & REQ_F_ISREG) && 1876 - ret2 > 0 && ret2 < io_size) 1877 - ret2 = -EAGAIN; 1878 1848 /* Catch -EAGAIN return for forced non-blocking submission */ 1879 1849 if (!force_nonblock || ret2 != -EAGAIN) { 1880 1850 kiocb_done(kiocb, ret2, nxt, req->in_async); ··· 1881 1875 return ret; 1882 1876 } 1883 1877 1884 - static int io_write_prep(struct io_kiocb *req, struct iovec **iovec, 1885 - struct iov_iter *iter, bool force_nonblock) 1878 + static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe, 1879 + bool force_nonblock) 1886 1880 { 1881 + struct io_async_ctx *io; 1882 + struct iov_iter iter; 1887 1883 ssize_t ret; 1888 1884 1889 - ret = io_prep_rw(req, force_nonblock); 1885 + ret = io_prep_rw(req, sqe, force_nonblock); 1890 1886 if (ret) 1891 1887 return ret; 1892 1888 1893 1889 if (unlikely(!(req->file->f_mode & FMODE_WRITE))) 1894 1890 return -EBADF; 1895 1891 1896 - return io_import_iovec(WRITE, req, iovec, iter); 1892 + if (!req->io) 1893 + return 0; 1894 + 1895 + io = req->io; 1896 + io->rw.iov = io->rw.fast_iov; 1897 + req->io = NULL; 1898 + ret = io_import_iovec(WRITE, req, &io->rw.iov, &iter); 1899 + req->io = io; 1900 + if (ret < 0) 1901 + return ret; 1902 + 1903 + io_req_map_rw(req, ret, io->rw.iov, 
io->rw.fast_iov, &iter); 1904 + return 0; 1897 1905 } 1898 1906 1899 1907 static int io_write(struct io_kiocb *req, struct io_kiocb **nxt, 1900 1908 bool force_nonblock) 1901 1909 { 1902 1910 struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; 1903 - struct kiocb *kiocb = &req->rw; 1911 + struct kiocb *kiocb = &req->rw.kiocb; 1904 1912 struct iov_iter iter; 1905 - struct file *file; 1906 1913 size_t iov_count; 1907 1914 ssize_t ret, io_size; 1908 1915 1909 - if (!req->io) { 1910 - ret = io_write_prep(req, &iovec, &iter, force_nonblock); 1911 - if (ret < 0) 1912 - return ret; 1913 - } else { 1914 - ret = io_import_iovec(WRITE, req, &iovec, &iter); 1915 - if (ret < 0) 1916 - return ret; 1917 - } 1916 + ret = io_import_iovec(WRITE, req, &iovec, &iter); 1917 + if (ret < 0) 1918 + return ret; 1918 1919 1919 1920 /* Ensure we clear previously set non-block flag */ 1920 1921 if (!force_nonblock) 1921 - req->rw.ki_flags &= ~IOCB_NOWAIT; 1922 + req->rw.kiocb.ki_flags &= ~IOCB_NOWAIT; 1922 1923 1923 - file = kiocb->ki_filp; 1924 1924 io_size = ret; 1925 1925 if (req->flags & REQ_F_LINK) 1926 1926 req->result = io_size; ··· 1946 1934 goto copy_iov; 1947 1935 1948 1936 iov_count = iov_iter_count(&iter); 1949 - ret = rw_verify_area(WRITE, file, &kiocb->ki_pos, iov_count); 1937 + ret = rw_verify_area(WRITE, req->file, &kiocb->ki_pos, iov_count); 1950 1938 if (!ret) { 1951 1939 ssize_t ret2; 1952 1940 ··· 1958 1946 * we return to userspace. 
1959 1947 */ 1960 1948 if (req->flags & REQ_F_ISREG) { 1961 - __sb_start_write(file_inode(file)->i_sb, 1949 + __sb_start_write(file_inode(req->file)->i_sb, 1962 1950 SB_FREEZE_WRITE, true); 1963 - __sb_writers_release(file_inode(file)->i_sb, 1951 + __sb_writers_release(file_inode(req->file)->i_sb, 1964 1952 SB_FREEZE_WRITE); 1965 1953 } 1966 1954 kiocb->ki_flags |= IOCB_WRITE; 1967 1955 1968 - if (file->f_op->write_iter) 1969 - ret2 = call_write_iter(file, kiocb, &iter); 1956 + if (req->file->f_op->write_iter) 1957 + ret2 = call_write_iter(req->file, kiocb, &iter); 1970 1958 else 1971 - ret2 = loop_rw_iter(WRITE, file, kiocb, &iter); 1959 + ret2 = loop_rw_iter(WRITE, req->file, kiocb, &iter); 1972 1960 if (!force_nonblock || ret2 != -EAGAIN) { 1973 1961 kiocb_done(kiocb, ret2, nxt, req->in_async); 1974 1962 } else { ··· 2001 1989 return 0; 2002 1990 } 2003 1991 2004 - static int io_prep_fsync(struct io_kiocb *req) 1992 + static int io_prep_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe) 2005 1993 { 2006 - const struct io_uring_sqe *sqe = req->sqe; 2007 1994 struct io_ring_ctx *ctx = req->ctx; 2008 1995 2009 - if (req->flags & REQ_F_PREPPED) 2010 - return 0; 2011 1996 if (!req->file) 2012 1997 return -EBADF; 2013 1998 ··· 2019 2010 2020 2011 req->sync.off = READ_ONCE(sqe->off); 2021 2012 req->sync.len = READ_ONCE(sqe->len); 2022 - req->flags |= REQ_F_PREPPED; 2023 2013 return 0; 2024 2014 } 2025 2015 ··· 2044 2036 if (io_req_cancelled(req)) 2045 2037 return; 2046 2038 2047 - ret = vfs_fsync_range(req->rw.ki_filp, req->sync.off, 2039 + ret = vfs_fsync_range(req->file, req->sync.off, 2048 2040 end > 0 ? 
end : LLONG_MAX, 2049 2041 req->sync.flags & IORING_FSYNC_DATASYNC); 2050 2042 if (ret < 0) ··· 2059 2051 bool force_nonblock) 2060 2052 { 2061 2053 struct io_wq_work *work, *old_work; 2062 - int ret; 2063 - 2064 - ret = io_prep_fsync(req); 2065 - if (ret) 2066 - return ret; 2067 2054 2068 2055 /* fsync always requires a blocking context */ 2069 2056 if (force_nonblock) { ··· 2074 2071 return 0; 2075 2072 } 2076 2073 2077 - static int io_prep_sfr(struct io_kiocb *req) 2074 + static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe) 2078 2075 { 2079 - const struct io_uring_sqe *sqe = req->sqe; 2080 2076 struct io_ring_ctx *ctx = req->ctx; 2081 2077 2082 - if (req->flags & REQ_F_PREPPED) 2083 - return 0; 2084 2078 if (!req->file) 2085 2079 return -EBADF; 2086 2080 ··· 2089 2089 req->sync.off = READ_ONCE(sqe->off); 2090 2090 req->sync.len = READ_ONCE(sqe->len); 2091 2091 req->sync.flags = READ_ONCE(sqe->sync_range_flags); 2092 - req->flags |= REQ_F_PREPPED; 2093 2092 return 0; 2094 2093 } 2095 2094 ··· 2101 2102 if (io_req_cancelled(req)) 2102 2103 return; 2103 2104 2104 - ret = sync_file_range(req->rw.ki_filp, req->sync.off, req->sync.len, 2105 + ret = sync_file_range(req->file, req->sync.off, req->sync.len, 2105 2106 req->sync.flags); 2106 2107 if (ret < 0) 2107 2108 req_set_fail_links(req); ··· 2115 2116 bool force_nonblock) 2116 2117 { 2117 2118 struct io_wq_work *work, *old_work; 2118 - int ret; 2119 - 2120 - ret = io_prep_sfr(req); 2121 - if (ret) 2122 - return ret; 2123 2119 2124 2120 /* sync_file_range always requires a blocking context */ 2125 2121 if (force_nonblock) { ··· 2143 2149 } 2144 2150 #endif 2145 2151 2146 - static int io_sendmsg_prep(struct io_kiocb *req, struct io_async_ctx *io) 2152 + static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 2147 2153 { 2148 2154 #if defined(CONFIG_NET) 2149 - const struct io_uring_sqe *sqe = req->sqe; 2150 - struct user_msghdr __user *msg; 2151 - unsigned flags; 2155 + 
struct io_sr_msg *sr = &req->sr_msg; 2156 + struct io_async_ctx *io = req->io; 2152 2157 2153 - flags = READ_ONCE(sqe->msg_flags); 2154 - msg = (struct user_msghdr __user *)(unsigned long) READ_ONCE(sqe->addr); 2158 + sr->msg_flags = READ_ONCE(sqe->msg_flags); 2159 + sr->msg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 2160 + 2161 + if (!io) 2162 + return 0; 2163 + 2155 2164 io->msg.iov = io->msg.fast_iov; 2156 - return sendmsg_copy_msghdr(&io->msg.msg, msg, flags, &io->msg.iov); 2165 + return sendmsg_copy_msghdr(&io->msg.msg, sr->msg, sr->msg_flags, 2166 + &io->msg.iov); 2157 2167 #else 2158 - return 0; 2168 + return -EOPNOTSUPP; 2159 2169 #endif 2160 2170 } 2161 2171 ··· 2167 2169 bool force_nonblock) 2168 2170 { 2169 2171 #if defined(CONFIG_NET) 2170 - const struct io_uring_sqe *sqe = req->sqe; 2171 2172 struct io_async_msghdr *kmsg = NULL; 2172 2173 struct socket *sock; 2173 2174 int ret; ··· 2180 2183 struct sockaddr_storage addr; 2181 2184 unsigned flags; 2182 2185 2183 - flags = READ_ONCE(sqe->msg_flags); 2184 - if (flags & MSG_DONTWAIT) 2185 - req->flags |= REQ_F_NOWAIT; 2186 - else if (force_nonblock) 2187 - flags |= MSG_DONTWAIT; 2188 - 2189 2186 if (req->io) { 2190 2187 kmsg = &req->io->msg; 2191 2188 kmsg->msg.msg_name = &addr; ··· 2188 2197 kmsg->iov = kmsg->fast_iov; 2189 2198 kmsg->msg.msg_iter.iov = kmsg->iov; 2190 2199 } else { 2200 + struct io_sr_msg *sr = &req->sr_msg; 2201 + 2191 2202 kmsg = &io.msg; 2192 2203 kmsg->msg.msg_name = &addr; 2193 - ret = io_sendmsg_prep(req, &io); 2204 + 2205 + io.msg.iov = io.msg.fast_iov; 2206 + ret = sendmsg_copy_msghdr(&io.msg.msg, sr->msg, 2207 + sr->msg_flags, &io.msg.iov); 2194 2208 if (ret) 2195 - goto out; 2209 + return ret; 2196 2210 } 2211 + 2212 + flags = req->sr_msg.msg_flags; 2213 + if (flags & MSG_DONTWAIT) 2214 + req->flags |= REQ_F_NOWAIT; 2215 + else if (force_nonblock) 2216 + flags |= MSG_DONTWAIT; 2197 2217 2198 2218 ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags); 2199 2219 if (force_nonblock && 
ret == -EAGAIN) { ··· 2220 2218 ret = -EINTR; 2221 2219 } 2222 2220 2223 - out: 2224 2221 if (!io_wq_current_is_worker() && kmsg && kmsg->iov != kmsg->fast_iov) 2225 2222 kfree(kmsg->iov); 2226 2223 io_cqring_add_event(req, ret); ··· 2232 2231 #endif 2233 2232 } 2234 2233 2235 - static int io_recvmsg_prep(struct io_kiocb *req, struct io_async_ctx *io) 2234 + static int io_recvmsg_prep(struct io_kiocb *req, 2235 + const struct io_uring_sqe *sqe) 2236 2236 { 2237 2237 #if defined(CONFIG_NET) 2238 - const struct io_uring_sqe *sqe = req->sqe; 2239 - struct user_msghdr __user *msg; 2240 - unsigned flags; 2238 + struct io_sr_msg *sr = &req->sr_msg; 2239 + struct io_async_ctx *io = req->io; 2241 2240 2242 - flags = READ_ONCE(sqe->msg_flags); 2243 - msg = (struct user_msghdr __user *)(unsigned long) READ_ONCE(sqe->addr); 2241 + sr->msg_flags = READ_ONCE(sqe->msg_flags); 2242 + sr->msg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 2243 + 2244 + if (!io) 2245 + return 0; 2246 + 2244 2247 io->msg.iov = io->msg.fast_iov; 2245 - return recvmsg_copy_msghdr(&io->msg.msg, msg, flags, &io->msg.uaddr, 2246 - &io->msg.iov); 2248 + return recvmsg_copy_msghdr(&io->msg.msg, sr->msg, sr->msg_flags, 2249 + &io->msg.uaddr, &io->msg.iov); 2247 2250 #else 2248 - return 0; 2251 + return -EOPNOTSUPP; 2249 2252 #endif 2250 2253 } 2251 2254 ··· 2257 2252 bool force_nonblock) 2258 2253 { 2259 2254 #if defined(CONFIG_NET) 2260 - const struct io_uring_sqe *sqe = req->sqe; 2261 2255 struct io_async_msghdr *kmsg = NULL; 2262 2256 struct socket *sock; 2263 2257 int ret; ··· 2266 2262 2267 2263 sock = sock_from_file(req->file, &ret); 2268 2264 if (sock) { 2269 - struct user_msghdr __user *msg; 2270 2265 struct io_async_ctx io; 2271 2266 struct sockaddr_storage addr; 2272 2267 unsigned flags; 2273 2268 2274 - flags = READ_ONCE(sqe->msg_flags); 2275 - if (flags & MSG_DONTWAIT) 2276 - req->flags |= REQ_F_NOWAIT; 2277 - else if (force_nonblock) 2278 - flags |= MSG_DONTWAIT; 2279 - 2280 - msg = (struct 
user_msghdr __user *) (unsigned long) 2281 - READ_ONCE(sqe->addr); 2282 2269 if (req->io) { 2283 2270 kmsg = &req->io->msg; 2284 2271 kmsg->msg.msg_name = &addr; ··· 2278 2283 kmsg->iov = kmsg->fast_iov; 2279 2284 kmsg->msg.msg_iter.iov = kmsg->iov; 2280 2285 } else { 2286 + struct io_sr_msg *sr = &req->sr_msg; 2287 + 2281 2288 kmsg = &io.msg; 2282 2289 kmsg->msg.msg_name = &addr; 2283 - ret = io_recvmsg_prep(req, &io); 2290 + 2291 + io.msg.iov = io.msg.fast_iov; 2292 + ret = recvmsg_copy_msghdr(&io.msg.msg, sr->msg, 2293 + sr->msg_flags, &io.msg.uaddr, 2294 + &io.msg.iov); 2284 2295 if (ret) 2285 - goto out; 2296 + return ret; 2286 2297 } 2287 2298 2288 - ret = __sys_recvmsg_sock(sock, &kmsg->msg, msg, kmsg->uaddr, flags); 2299 + flags = req->sr_msg.msg_flags; 2300 + if (flags & MSG_DONTWAIT) 2301 + req->flags |= REQ_F_NOWAIT; 2302 + else if (force_nonblock) 2303 + flags |= MSG_DONTWAIT; 2304 + 2305 + ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.msg, 2306 + kmsg->uaddr, flags); 2289 2307 if (force_nonblock && ret == -EAGAIN) { 2290 2308 if (req->io) 2291 2309 return -EAGAIN; ··· 2312 2304 ret = -EINTR; 2313 2305 } 2314 2306 2315 - out: 2316 2307 if (!io_wq_current_is_worker() && kmsg && kmsg->iov != kmsg->fast_iov) 2317 2308 kfree(kmsg->iov); 2318 2309 io_cqring_add_event(req, ret); ··· 2324 2317 #endif 2325 2318 } 2326 2319 2327 - static int io_accept_prep(struct io_kiocb *req) 2320 + static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 2328 2321 { 2329 2322 #if defined(CONFIG_NET) 2330 - const struct io_uring_sqe *sqe = req->sqe; 2331 2323 struct io_accept *accept = &req->accept; 2332 - 2333 - if (req->flags & REQ_F_PREPPED) 2334 - return 0; 2335 2324 2336 2325 if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL))) 2337 2326 return -EINVAL; 2338 2327 if (sqe->ioprio || sqe->len || sqe->buf_index) 2339 2328 return -EINVAL; 2340 2329 2341 - accept->addr = (struct sockaddr __user *) 2342 - (unsigned long) 
READ_ONCE(sqe->addr); 2343 - accept->addr_len = (int __user *) (unsigned long) READ_ONCE(sqe->addr2); 2330 + accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr)); 2331 + accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2)); 2344 2332 accept->flags = READ_ONCE(sqe->accept_flags); 2345 - req->flags |= REQ_F_PREPPED; 2346 2333 return 0; 2347 2334 #else 2348 2335 return -EOPNOTSUPP; ··· 2384 2383 #if defined(CONFIG_NET) 2385 2384 int ret; 2386 2385 2387 - ret = io_accept_prep(req); 2388 - if (ret) 2389 - return ret; 2390 - 2391 2386 ret = __io_accept(req, nxt, force_nonblock); 2392 2387 if (ret == -EAGAIN && force_nonblock) { 2393 2388 req->work.func = io_accept_finish; ··· 2397 2400 #endif 2398 2401 } 2399 2402 2400 - static int io_connect_prep(struct io_kiocb *req, struct io_async_ctx *io) 2403 + static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 2401 2404 { 2402 2405 #if defined(CONFIG_NET) 2403 - const struct io_uring_sqe *sqe = req->sqe; 2404 - struct sockaddr __user *addr; 2405 - int addr_len; 2406 + struct io_connect *conn = &req->connect; 2407 + struct io_async_ctx *io = req->io; 2406 2408 2407 - addr = (struct sockaddr __user *) (unsigned long) READ_ONCE(sqe->addr); 2408 - addr_len = READ_ONCE(sqe->addr2); 2409 - return move_addr_to_kernel(addr, addr_len, &io->connect.address); 2409 + if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL))) 2410 + return -EINVAL; 2411 + if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags) 2412 + return -EINVAL; 2413 + 2414 + conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr)); 2415 + conn->addr_len = READ_ONCE(sqe->addr2); 2416 + 2417 + if (!io) 2418 + return 0; 2419 + 2420 + return move_addr_to_kernel(conn->addr, conn->addr_len, 2421 + &io->connect.address); 2410 2422 #else 2411 - return 0; 2423 + return -EOPNOTSUPP; 2412 2424 #endif 2413 2425 } 2414 2426 ··· 2425 2419 bool force_nonblock) 2426 2420 { 2427 2421 #if defined(CONFIG_NET) 2428 - const struct 
io_uring_sqe *sqe = req->sqe; 2429 2422 struct io_async_ctx __io, *io; 2430 2423 unsigned file_flags; 2431 - int addr_len, ret; 2432 - 2433 - if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL))) 2434 - return -EINVAL; 2435 - if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags) 2436 - return -EINVAL; 2437 - 2438 - addr_len = READ_ONCE(sqe->addr2); 2439 - file_flags = force_nonblock ? O_NONBLOCK : 0; 2424 + int ret; 2440 2425 2441 2426 if (req->io) { 2442 2427 io = req->io; 2443 2428 } else { 2444 - ret = io_connect_prep(req, &__io); 2429 + ret = move_addr_to_kernel(req->connect.addr, 2430 + req->connect.addr_len, 2431 + &__io.connect.address); 2445 2432 if (ret) 2446 2433 goto out; 2447 2434 io = &__io; 2448 2435 } 2449 2436 2450 - ret = __sys_connect_file(req->file, &io->connect.address, addr_len, 2451 - file_flags); 2437 + file_flags = force_nonblock ? O_NONBLOCK : 0; 2438 + 2439 + ret = __sys_connect_file(req->file, &io->connect.address, 2440 + req->connect.addr_len, file_flags); 2452 2441 if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) { 2453 2442 if (req->io) 2454 2443 return -EAGAIN; ··· 2514 2513 return -ENOENT; 2515 2514 } 2516 2515 2517 - static int io_poll_remove_prep(struct io_kiocb *req) 2516 + static int io_poll_remove_prep(struct io_kiocb *req, 2517 + const struct io_uring_sqe *sqe) 2518 2518 { 2519 - const struct io_uring_sqe *sqe = req->sqe; 2520 - 2521 - if (req->flags & REQ_F_PREPPED) 2522 - return 0; 2523 2519 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 2524 2520 return -EINVAL; 2525 2521 if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index || ··· 2524 2526 return -EINVAL; 2525 2527 2526 2528 req->poll.addr = READ_ONCE(sqe->addr); 2527 - req->flags |= REQ_F_PREPPED; 2528 2529 return 0; 2529 2530 } 2530 2531 ··· 2536 2539 struct io_ring_ctx *ctx = req->ctx; 2537 2540 u64 addr; 2538 2541 int ret; 2539 - 2540 - ret = io_poll_remove_prep(req); 2541 - if (ret) 2542 - return ret; 2543 2542 
2544 2543 addr = req->poll.addr; 2545 2544 spin_lock_irq(&ctx->completion_lock); ··· 2674 2681 hlist_add_head(&req->hash_node, list); 2675 2682 } 2676 2683 2677 - static int io_poll_add_prep(struct io_kiocb *req) 2684 + static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 2678 2685 { 2679 - const struct io_uring_sqe *sqe = req->sqe; 2680 2686 struct io_poll_iocb *poll = &req->poll; 2681 2687 u16 events; 2682 2688 2683 - if (req->flags & REQ_F_PREPPED) 2684 - return 0; 2685 2689 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 2686 2690 return -EINVAL; 2687 2691 if (sqe->addr || sqe->ioprio || sqe->off || sqe->len || sqe->buf_index) ··· 2686 2696 if (!poll->file) 2687 2697 return -EBADF; 2688 2698 2689 - req->flags |= REQ_F_PREPPED; 2690 2699 events = READ_ONCE(sqe->poll_events); 2691 2700 poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP; 2692 2701 return 0; ··· 2698 2709 struct io_poll_table ipt; 2699 2710 bool cancel = false; 2700 2711 __poll_t mask; 2701 - int ret; 2702 - 2703 - ret = io_poll_add_prep(req); 2704 - if (ret) 2705 - return ret; 2706 2712 2707 2713 INIT_IO_WORK(&req->work, io_poll_complete_work); 2708 2714 INIT_HLIST_NODE(&req->hash_node); ··· 2816 2832 return 0; 2817 2833 } 2818 2834 2819 - static int io_timeout_remove_prep(struct io_kiocb *req) 2835 + static int io_timeout_remove_prep(struct io_kiocb *req, 2836 + const struct io_uring_sqe *sqe) 2820 2837 { 2821 - const struct io_uring_sqe *sqe = req->sqe; 2822 - 2823 - if (req->flags & REQ_F_PREPPED) 2824 - return 0; 2825 2838 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 2826 2839 return -EINVAL; 2827 2840 if (sqe->flags || sqe->ioprio || sqe->buf_index || sqe->len) ··· 2829 2848 if (req->timeout.flags) 2830 2849 return -EINVAL; 2831 2850 2832 - req->flags |= REQ_F_PREPPED; 2833 2851 return 0; 2834 2852 } 2835 2853 ··· 2839 2859 { 2840 2860 struct io_ring_ctx *ctx = req->ctx; 2841 2861 int ret; 2842 - 2843 - ret = io_timeout_remove_prep(req); 2844 
- if (ret) 2845 - return ret; 2846 2862 2847 2863 spin_lock_irq(&ctx->completion_lock); 2848 2864 ret = io_timeout_cancel(ctx, req->timeout.addr); ··· 2853 2877 return 0; 2854 2878 } 2855 2879 2856 - static int io_timeout_prep(struct io_kiocb *req, struct io_async_ctx *io, 2880 + static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe, 2857 2881 bool is_timeout_link) 2858 2882 { 2859 - const struct io_uring_sqe *sqe = req->sqe; 2860 2883 struct io_timeout_data *data; 2861 2884 unsigned flags; 2862 2885 ··· 2869 2894 if (flags & ~IORING_TIMEOUT_ABS) 2870 2895 return -EINVAL; 2871 2896 2872 - data = &io->timeout; 2897 + req->timeout.count = READ_ONCE(sqe->off); 2898 + 2899 + if (!req->io && io_alloc_async_ctx(req)) 2900 + return -ENOMEM; 2901 + 2902 + data = &req->io->timeout; 2873 2903 data->req = req; 2874 2904 req->flags |= REQ_F_TIMEOUT; 2875 2905 ··· 2892 2912 2893 2913 static int io_timeout(struct io_kiocb *req) 2894 2914 { 2895 - const struct io_uring_sqe *sqe = req->sqe; 2896 2915 unsigned count; 2897 2916 struct io_ring_ctx *ctx = req->ctx; 2898 2917 struct io_timeout_data *data; 2899 2918 struct list_head *entry; 2900 2919 unsigned span = 0; 2901 - int ret; 2902 2920 2903 - if (!req->io) { 2904 - if (io_alloc_async_ctx(req)) 2905 - return -ENOMEM; 2906 - ret = io_timeout_prep(req, req->io, false); 2907 - if (ret) 2908 - return ret; 2909 - } 2910 2921 data = &req->io->timeout; 2911 2922 2912 2923 /* ··· 2905 2934 * timeout event to be satisfied. If it isn't set, then this is 2906 2935 * a pure timeout request, sequence isn't used. 
2907 2936 */ 2908 - count = READ_ONCE(sqe->off); 2937 + count = req->timeout.count; 2909 2938 if (!count) { 2910 2939 req->flags |= REQ_F_TIMEOUT_NOSEQ; 2911 2940 spin_lock_irq(&ctx->completion_lock); ··· 3023 3052 io_put_req_find_next(req, nxt); 3024 3053 } 3025 3054 3026 - static int io_async_cancel_prep(struct io_kiocb *req) 3055 + static int io_async_cancel_prep(struct io_kiocb *req, 3056 + const struct io_uring_sqe *sqe) 3027 3057 { 3028 - const struct io_uring_sqe *sqe = req->sqe; 3029 - 3030 - if (req->flags & REQ_F_PREPPED) 3031 - return 0; 3032 3058 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 3033 3059 return -EINVAL; 3034 3060 if (sqe->flags || sqe->ioprio || sqe->off || sqe->len || 3035 3061 sqe->cancel_flags) 3036 3062 return -EINVAL; 3037 3063 3038 - req->flags |= REQ_F_PREPPED; 3039 3064 req->cancel.addr = READ_ONCE(sqe->addr); 3040 3065 return 0; 3041 3066 } ··· 3039 3072 static int io_async_cancel(struct io_kiocb *req, struct io_kiocb **nxt) 3040 3073 { 3041 3074 struct io_ring_ctx *ctx = req->ctx; 3042 - int ret; 3043 - 3044 - ret = io_async_cancel_prep(req); 3045 - if (ret) 3046 - return ret; 3047 3075 3048 3076 io_async_find_and_cancel(ctx, req, req->cancel.addr, nxt, 0); 3049 3077 return 0; 3050 3078 } 3051 3079 3052 - static int io_req_defer_prep(struct io_kiocb *req) 3080 + static int io_req_defer_prep(struct io_kiocb *req, 3081 + const struct io_uring_sqe *sqe) 3053 3082 { 3054 - struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; 3055 - struct io_async_ctx *io = req->io; 3056 - struct iov_iter iter; 3057 3083 ssize_t ret = 0; 3058 3084 3059 3085 switch (req->opcode) { ··· 3054 3094 break; 3055 3095 case IORING_OP_READV: 3056 3096 case IORING_OP_READ_FIXED: 3057 - /* ensure prep does right import */ 3058 - req->io = NULL; 3059 - ret = io_read_prep(req, &iovec, &iter, true); 3060 - req->io = io; 3061 - if (ret < 0) 3062 - break; 3063 - io_req_map_rw(req, ret, iovec, inline_vecs, &iter); 3064 - ret = 0; 3097 + ret = 
io_read_prep(req, sqe, true); 3065 3098 break; 3066 3099 case IORING_OP_WRITEV: 3067 3100 case IORING_OP_WRITE_FIXED: 3068 - /* ensure prep does right import */ 3069 - req->io = NULL; 3070 - ret = io_write_prep(req, &iovec, &iter, true); 3071 - req->io = io; 3072 - if (ret < 0) 3073 - break; 3074 - io_req_map_rw(req, ret, iovec, inline_vecs, &iter); 3075 - ret = 0; 3101 + ret = io_write_prep(req, sqe, true); 3076 3102 break; 3077 3103 case IORING_OP_POLL_ADD: 3078 - ret = io_poll_add_prep(req); 3104 + ret = io_poll_add_prep(req, sqe); 3079 3105 break; 3080 3106 case IORING_OP_POLL_REMOVE: 3081 - ret = io_poll_remove_prep(req); 3107 + ret = io_poll_remove_prep(req, sqe); 3082 3108 break; 3083 3109 case IORING_OP_FSYNC: 3084 - ret = io_prep_fsync(req); 3110 + ret = io_prep_fsync(req, sqe); 3085 3111 break; 3086 3112 case IORING_OP_SYNC_FILE_RANGE: 3087 - ret = io_prep_sfr(req); 3113 + ret = io_prep_sfr(req, sqe); 3088 3114 break; 3089 3115 case IORING_OP_SENDMSG: 3090 - ret = io_sendmsg_prep(req, io); 3116 + ret = io_sendmsg_prep(req, sqe); 3091 3117 break; 3092 3118 case IORING_OP_RECVMSG: 3093 - ret = io_recvmsg_prep(req, io); 3119 + ret = io_recvmsg_prep(req, sqe); 3094 3120 break; 3095 3121 case IORING_OP_CONNECT: 3096 - ret = io_connect_prep(req, io); 3122 + ret = io_connect_prep(req, sqe); 3097 3123 break; 3098 3124 case IORING_OP_TIMEOUT: 3099 - ret = io_timeout_prep(req, io, false); 3125 + ret = io_timeout_prep(req, sqe, false); 3100 3126 break; 3101 3127 case IORING_OP_TIMEOUT_REMOVE: 3102 - ret = io_timeout_remove_prep(req); 3128 + ret = io_timeout_remove_prep(req, sqe); 3103 3129 break; 3104 3130 case IORING_OP_ASYNC_CANCEL: 3105 - ret = io_async_cancel_prep(req); 3131 + ret = io_async_cancel_prep(req, sqe); 3106 3132 break; 3107 3133 case IORING_OP_LINK_TIMEOUT: 3108 - ret = io_timeout_prep(req, io, true); 3134 + ret = io_timeout_prep(req, sqe, true); 3109 3135 break; 3110 3136 case IORING_OP_ACCEPT: 3111 - ret = io_accept_prep(req); 3137 + ret = 
io_accept_prep(req, sqe); 3112 3138 break; 3113 3139 default: 3114 3140 printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n", ··· 3106 3160 return ret; 3107 3161 } 3108 3162 3109 - static int io_req_defer(struct io_kiocb *req) 3163 + static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe) 3110 3164 { 3111 3165 struct io_ring_ctx *ctx = req->ctx; 3112 3166 int ret; ··· 3115 3169 if (!req_need_defer(req) && list_empty(&ctx->defer_list)) 3116 3170 return 0; 3117 3171 3118 - if (io_alloc_async_ctx(req)) 3172 + if (!req->io && io_alloc_async_ctx(req)) 3119 3173 return -EAGAIN; 3120 3174 3121 - ret = io_req_defer_prep(req); 3175 + ret = io_req_defer_prep(req, sqe); 3122 3176 if (ret < 0) 3123 3177 return ret; 3124 3178 ··· 3134 3188 return -EIOCBQUEUED; 3135 3189 } 3136 3190 3137 - __attribute__((nonnull)) 3138 - static int io_issue_sqe(struct io_kiocb *req, struct io_kiocb **nxt, 3139 - bool force_nonblock) 3191 + static int io_issue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe, 3192 + struct io_kiocb **nxt, bool force_nonblock) 3140 3193 { 3141 3194 struct io_ring_ctx *ctx = req->ctx; 3142 3195 int ret; ··· 3145 3200 ret = io_nop(req); 3146 3201 break; 3147 3202 case IORING_OP_READV: 3148 - if (unlikely(req->sqe->buf_index)) 3149 - return -EINVAL; 3203 + case IORING_OP_READ_FIXED: 3204 + if (sqe) { 3205 + ret = io_read_prep(req, sqe, force_nonblock); 3206 + if (ret < 0) 3207 + break; 3208 + } 3150 3209 ret = io_read(req, nxt, force_nonblock); 3151 3210 break; 3152 3211 case IORING_OP_WRITEV: 3153 - if (unlikely(req->sqe->buf_index)) 3154 - return -EINVAL; 3155 - ret = io_write(req, nxt, force_nonblock); 3156 - break; 3157 - case IORING_OP_READ_FIXED: 3158 - ret = io_read(req, nxt, force_nonblock); 3159 - break; 3160 3212 case IORING_OP_WRITE_FIXED: 3213 + if (sqe) { 3214 + ret = io_write_prep(req, sqe, force_nonblock); 3215 + if (ret < 0) 3216 + break; 3217 + } 3161 3218 ret = io_write(req, nxt, force_nonblock); 3162 3219 break; 
3163 3220 case IORING_OP_FSYNC: 3221 + if (sqe) { 3222 + ret = io_prep_fsync(req, sqe); 3223 + if (ret < 0) 3224 + break; 3225 + } 3164 3226 ret = io_fsync(req, nxt, force_nonblock); 3165 3227 break; 3166 3228 case IORING_OP_POLL_ADD: 3229 + if (sqe) { 3230 + ret = io_poll_add_prep(req, sqe); 3231 + if (ret) 3232 + break; 3233 + } 3167 3234 ret = io_poll_add(req, nxt); 3168 3235 break; 3169 3236 case IORING_OP_POLL_REMOVE: 3237 + if (sqe) { 3238 + ret = io_poll_remove_prep(req, sqe); 3239 + if (ret < 0) 3240 + break; 3241 + } 3170 3242 ret = io_poll_remove(req); 3171 3243 break; 3172 3244 case IORING_OP_SYNC_FILE_RANGE: 3245 + if (sqe) { 3246 + ret = io_prep_sfr(req, sqe); 3247 + if (ret < 0) 3248 + break; 3249 + } 3173 3250 ret = io_sync_file_range(req, nxt, force_nonblock); 3174 3251 break; 3175 3252 case IORING_OP_SENDMSG: 3253 + if (sqe) { 3254 + ret = io_sendmsg_prep(req, sqe); 3255 + if (ret < 0) 3256 + break; 3257 + } 3176 3258 ret = io_sendmsg(req, nxt, force_nonblock); 3177 3259 break; 3178 3260 case IORING_OP_RECVMSG: 3261 + if (sqe) { 3262 + ret = io_recvmsg_prep(req, sqe); 3263 + if (ret) 3264 + break; 3265 + } 3179 3266 ret = io_recvmsg(req, nxt, force_nonblock); 3180 3267 break; 3181 3268 case IORING_OP_TIMEOUT: 3269 + if (sqe) { 3270 + ret = io_timeout_prep(req, sqe, false); 3271 + if (ret) 3272 + break; 3273 + } 3182 3274 ret = io_timeout(req); 3183 3275 break; 3184 3276 case IORING_OP_TIMEOUT_REMOVE: 3277 + if (sqe) { 3278 + ret = io_timeout_remove_prep(req, sqe); 3279 + if (ret) 3280 + break; 3281 + } 3185 3282 ret = io_timeout_remove(req); 3186 3283 break; 3187 3284 case IORING_OP_ACCEPT: 3285 + if (sqe) { 3286 + ret = io_accept_prep(req, sqe); 3287 + if (ret) 3288 + break; 3289 + } 3188 3290 ret = io_accept(req, nxt, force_nonblock); 3189 3291 break; 3190 3292 case IORING_OP_CONNECT: 3293 + if (sqe) { 3294 + ret = io_connect_prep(req, sqe); 3295 + if (ret) 3296 + break; 3297 + } 3191 3298 ret = io_connect(req, nxt, force_nonblock); 3192 3299 
break; 3193 3300 case IORING_OP_ASYNC_CANCEL: 3301 + if (sqe) { 3302 + ret = io_async_cancel_prep(req, sqe); 3303 + if (ret) 3304 + break; 3305 + } 3194 3306 ret = io_async_cancel(req, nxt); 3195 3307 break; 3196 3308 default: ··· 3291 3289 req->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0; 3292 3290 req->in_async = true; 3293 3291 do { 3294 - ret = io_issue_sqe(req, &nxt, false); 3292 + ret = io_issue_sqe(req, NULL, &nxt, false); 3295 3293 /* 3296 3294 * We can get EAGAIN for polled IO even though we're 3297 3295 * forcing a sync submission from here, since we can't ··· 3357 3355 return table->files[index & IORING_FILE_TABLE_MASK]; 3358 3356 } 3359 3357 3360 - static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req) 3358 + static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req, 3359 + const struct io_uring_sqe *sqe) 3361 3360 { 3362 3361 struct io_ring_ctx *ctx = req->ctx; 3363 3362 unsigned flags; 3364 3363 int fd, ret; 3365 3364 3366 - flags = READ_ONCE(req->sqe->flags); 3367 - fd = READ_ONCE(req->sqe->fd); 3365 + flags = READ_ONCE(sqe->flags); 3366 + fd = READ_ONCE(sqe->fd); 3368 3367 3369 3368 if (flags & IOSQE_IO_DRAIN) 3370 3369 req->flags |= REQ_F_IO_DRAIN; ··· 3497 3494 return nxt; 3498 3495 } 3499 3496 3500 - static void __io_queue_sqe(struct io_kiocb *req) 3497 + static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe) 3501 3498 { 3502 3499 struct io_kiocb *linked_timeout; 3503 3500 struct io_kiocb *nxt = NULL; ··· 3506 3503 again: 3507 3504 linked_timeout = io_prep_linked_timeout(req); 3508 3505 3509 - ret = io_issue_sqe(req, &nxt, true); 3506 + ret = io_issue_sqe(req, sqe, &nxt, true); 3510 3507 3511 3508 /* 3512 3509 * We async punt it if the file wasn't marked NOWAIT, or if the file ··· 3553 3550 } 3554 3551 } 3555 3552 3556 - static void io_queue_sqe(struct io_kiocb *req) 3553 + static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe) 3557 3554 { 3558 
3555 int ret; 3559 3556 ··· 3563 3560 } 3564 3561 req->ctx->drain_next = (req->flags & REQ_F_DRAIN_LINK); 3565 3562 3566 - ret = io_req_defer(req); 3563 + ret = io_req_defer(req, sqe); 3567 3564 if (ret) { 3568 3565 if (ret != -EIOCBQUEUED) { 3569 3566 io_cqring_add_event(req, ret); ··· 3571 3568 io_double_put_req(req); 3572 3569 } 3573 3570 } else 3574 - __io_queue_sqe(req); 3571 + __io_queue_sqe(req, sqe); 3575 3572 } 3576 3573 3577 3574 static inline void io_queue_link_head(struct io_kiocb *req) ··· 3580 3577 io_cqring_add_event(req, -ECANCELED); 3581 3578 io_double_put_req(req); 3582 3579 } else 3583 - io_queue_sqe(req); 3580 + io_queue_sqe(req, NULL); 3584 3581 } 3585 3582 3586 3583 #define SQE_VALID_FLAGS (IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \ 3587 3584 IOSQE_IO_HARDLINK) 3588 3585 3589 - static bool io_submit_sqe(struct io_kiocb *req, struct io_submit_state *state, 3590 - struct io_kiocb **link) 3586 + static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe, 3587 + struct io_submit_state *state, struct io_kiocb **link) 3591 3588 { 3592 3589 struct io_ring_ctx *ctx = req->ctx; 3593 3590 int ret; 3594 3591 3595 3592 /* enforce forwards compatibility on users */ 3596 - if (unlikely(req->sqe->flags & ~SQE_VALID_FLAGS)) { 3593 + if (unlikely(sqe->flags & ~SQE_VALID_FLAGS)) { 3597 3594 ret = -EINVAL; 3598 3595 goto err_req; 3599 3596 } 3600 3597 3601 - ret = io_req_set_file(state, req); 3598 + ret = io_req_set_file(state, req, sqe); 3602 3599 if (unlikely(ret)) { 3603 3600 err_req: 3604 3601 io_cqring_add_event(req, ret); ··· 3616 3613 if (*link) { 3617 3614 struct io_kiocb *prev = *link; 3618 3615 3619 - if (req->sqe->flags & IOSQE_IO_DRAIN) 3616 + if (sqe->flags & IOSQE_IO_DRAIN) 3620 3617 (*link)->flags |= REQ_F_DRAIN_LINK | REQ_F_IO_DRAIN; 3621 3618 3622 - if (req->sqe->flags & IOSQE_IO_HARDLINK) 3619 + if (sqe->flags & IOSQE_IO_HARDLINK) 3623 3620 req->flags |= REQ_F_HARDLINK; 3624 3621 3625 3622 if (io_alloc_async_ctx(req)) 
{ ··· 3627 3624 goto err_req; 3628 3625 } 3629 3626 3630 - ret = io_req_defer_prep(req); 3627 + ret = io_req_defer_prep(req, sqe); 3631 3628 if (ret) { 3632 3629 /* fail even hard links since we don't submit */ 3633 3630 prev->flags |= REQ_F_FAIL_LINK; ··· 3635 3632 } 3636 3633 trace_io_uring_link(ctx, req, prev); 3637 3634 list_add_tail(&req->link_list, &prev->link_list); 3638 - } else if (req->sqe->flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) { 3635 + } else if (sqe->flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) { 3639 3636 req->flags |= REQ_F_LINK; 3640 - if (req->sqe->flags & IOSQE_IO_HARDLINK) 3637 + if (sqe->flags & IOSQE_IO_HARDLINK) 3641 3638 req->flags |= REQ_F_HARDLINK; 3642 3639 3643 3640 INIT_LIST_HEAD(&req->link_list); 3641 + ret = io_req_defer_prep(req, sqe); 3642 + if (ret) 3643 + req->flags |= REQ_F_FAIL_LINK; 3644 3644 *link = req; 3645 3645 } else { 3646 - io_queue_sqe(req); 3646 + io_queue_sqe(req, sqe); 3647 3647 } 3648 3648 3649 3649 return true; ··· 3691 3685 } 3692 3686 3693 3687 /* 3694 - * Fetch an sqe, if one is available. Note that req->sqe will point to memory 3688 + * Fetch an sqe, if one is available. Note that sqe_ptr will point to memory 3695 3689 * that is mapped by userspace. This means that care needs to be taken to 3696 3690 * ensure that reads are stable, as we cannot rely on userspace always 3697 3691 * being a good citizen. If members of the sqe are validated and then later 3698 3692 * used, it's important that those reads are done through READ_ONCE() to 3699 3693 * prevent a re-load down the line. 3700 3694 */ 3701 - static bool io_get_sqring(struct io_ring_ctx *ctx, struct io_kiocb *req) 3695 + static bool io_get_sqring(struct io_ring_ctx *ctx, struct io_kiocb *req, 3696 + const struct io_uring_sqe **sqe_ptr) 3702 3697 { 3703 3698 struct io_rings *rings = ctx->rings; 3704 3699 u32 *sq_array = ctx->sq_array; ··· 3726 3719 * link list. 
3727 3720 */ 3728 3721 req->sequence = ctx->cached_sq_head; 3729 - req->sqe = &ctx->sq_sqes[head]; 3730 - req->opcode = READ_ONCE(req->sqe->opcode); 3731 - req->user_data = READ_ONCE(req->sqe->user_data); 3722 + *sqe_ptr = &ctx->sq_sqes[head]; 3723 + req->opcode = READ_ONCE((*sqe_ptr)->opcode); 3724 + req->user_data = READ_ONCE((*sqe_ptr)->user_data); 3732 3725 ctx->cached_sq_head++; 3733 3726 return true; 3734 3727 } ··· 3760 3753 } 3761 3754 3762 3755 for (i = 0; i < nr; i++) { 3756 + const struct io_uring_sqe *sqe; 3763 3757 struct io_kiocb *req; 3764 3758 unsigned int sqe_flags; 3765 3759 ··· 3770 3762 submitted = -EAGAIN; 3771 3763 break; 3772 3764 } 3773 - if (!io_get_sqring(ctx, req)) { 3765 + if (!io_get_sqring(ctx, req, &sqe)) { 3774 3766 __io_free_req(req); 3775 3767 break; 3776 3768 } ··· 3784 3776 } 3785 3777 3786 3778 submitted++; 3787 - sqe_flags = req->sqe->flags; 3779 + sqe_flags = sqe->flags; 3788 3780 3789 3781 req->ring_file = ring_file; 3790 3782 req->ring_fd = ring_fd; ··· 3792 3784 req->in_async = async; 3793 3785 req->needs_fixed_file = async; 3794 3786 trace_io_uring_submit_sqe(ctx, req->user_data, true, async); 3795 - if (!io_submit_sqe(req, statep, &link)) 3787 + if (!io_submit_sqe(req, sqe, statep, &link)) 3796 3788 break; 3797 3789 /* 3798 3790 * If previous wasn't linked and we have a linked command, ··· 4710 4702 if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov))) 4711 4703 return -EFAULT; 4712 4704 4713 - dst->iov_base = (void __user *) (unsigned long) ciov.iov_base; 4705 + dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base); 4714 4706 dst->iov_len = ciov.iov_len; 4715 4707 return 0; 4716 4708 }
+1 -1
fs/locks.c
··· 2853 2853 } 2854 2854 if (inode) { 2855 2855 /* userspace relies on this representation of dev_t */ 2856 - seq_printf(f, "%d %02x:%02x:%ld ", fl_pid, 2856 + seq_printf(f, "%d %02x:%02x:%lu ", fl_pid, 2857 2857 MAJOR(inode->i_sb->s_dev), 2858 2858 MINOR(inode->i_sb->s_dev), inode->i_ino); 2859 2859 } else {
+1 -1
fs/mpage.c
··· 62 62 { 63 63 bio->bi_end_io = mpage_end_io; 64 64 bio_set_op_attrs(bio, op, op_flags); 65 - guard_bio_eod(op, bio); 65 + guard_bio_eod(bio); 66 66 submit_bio(bio); 67 67 return NULL; 68 68 }
+1 -1
fs/namespace.c
··· 1728 1728 dentry->d_fsdata == &mntns_operations; 1729 1729 } 1730 1730 1731 - struct mnt_namespace *to_mnt_ns(struct ns_common *ns) 1731 + static struct mnt_namespace *to_mnt_ns(struct ns_common *ns) 1732 1732 { 1733 1733 return container_of(ns, struct mnt_namespace, ns); 1734 1734 }
+3
fs/nsfs.c
··· 3 3 #include <linux/pseudo_fs.h> 4 4 #include <linux/file.h> 5 5 #include <linux/fs.h> 6 + #include <linux/proc_fs.h> 6 7 #include <linux/proc_ns.h> 7 8 #include <linux/magic.h> 8 9 #include <linux/ktime.h> ··· 11 10 #include <linux/user_namespace.h> 12 11 #include <linux/nsfs.h> 13 12 #include <linux/uaccess.h> 13 + 14 + #include "internal.h" 14 15 15 16 static struct vfsmount *nsfs_mnt; 16 17
+1
fs/ocfs2/dlmglue.c
··· 3282 3282 3283 3283 debugfs_create_u32("locking_filter", 0600, osb->osb_debug_root, 3284 3284 &dlm_debug->d_filter_secs); 3285 + ocfs2_get_dlm_debug(dlm_debug); 3285 3286 } 3286 3287 3287 3288 static void ocfs2_dlm_shutdown_debug(struct ocfs2_super *osb)
+8
fs/ocfs2/journal.c
··· 1066 1066 1067 1067 ocfs2_clear_journal_error(osb->sb, journal->j_journal, osb->slot_num); 1068 1068 1069 + if (replayed) { 1070 + jbd2_journal_lock_updates(journal->j_journal); 1071 + status = jbd2_journal_flush(journal->j_journal); 1072 + jbd2_journal_unlock_updates(journal->j_journal); 1073 + if (status < 0) 1074 + mlog_errno(status); 1075 + } 1076 + 1069 1077 status = ocfs2_journal_toggle_dirty(osb, 1, replayed); 1070 1078 if (status < 0) { 1071 1079 mlog_errno(status);
+5 -2
fs/posix_acl.c
··· 631 631 632 632 /** 633 633 * posix_acl_update_mode - update mode in set_acl 634 + * @inode: target inode 635 + * @mode_p: mode (pointer) for update 636 + * @acl: acl pointer 634 637 * 635 638 * Update the file mode when setting an ACL: compute the new file permission 636 639 * bits based on the ACL. In addition, if the ACL is equivalent to the new 637 - * file mode, set *acl to NULL to indicate that no ACL should be set. 640 + * file mode, set *@acl to NULL to indicate that no ACL should be set. 638 641 * 639 - * As with chmod, clear the setgit bit if the caller is not in the owning group 642 + * As with chmod, clear the setgid bit if the caller is not in the owning group 640 643 * or capable of CAP_FSETID (see inode_change_ok). 641 644 * 642 645 * Called from set_acl inode operations.
+13
fs/pstore/ram.c
··· 407 407 408 408 prz = cxt->dprzs[cxt->dump_write_cnt]; 409 409 410 + /* 411 + * Since this is a new crash dump, we need to reset the buffer in 412 + * case it still has an old dump present. Without this, the new dump 413 + * will get appended, which would seriously confuse anything trying 414 + * to check dump file contents. Specifically, ramoops_read_kmsg_hdr() 415 + * expects to find a dump header in the beginning of buffer data, so 416 + * we must to reset the buffer values, in order to ensure that the 417 + * header will be written to the beginning of the buffer. 418 + */ 419 + persistent_ram_zap(prz); 420 + 410 421 /* Build header and append record contents. */ 411 422 hlen = ramoops_write_kmsg_hdr(prz, record); 412 423 if (!hlen) ··· 583 572 prz_ar[i] = persistent_ram_new(*paddr, zone_sz, sig, 584 573 &cxt->ecc_info, 585 574 cxt->memtype, flags, label); 575 + kfree(label); 586 576 if (IS_ERR(prz_ar[i])) { 587 577 err = PTR_ERR(prz_ar[i]); 588 578 dev_err(dev, "failed to request %s mem region (0x%zx@0x%llx): %d\n", ··· 629 617 label = kasprintf(GFP_KERNEL, "ramoops:%s", name); 630 618 *prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info, 631 619 cxt->memtype, PRZ_FLAG_ZAP_OLD, label); 620 + kfree(label); 632 621 if (IS_ERR(*prz)) { 633 622 int err = PTR_ERR(*prz); 634 623
+1 -1
fs/pstore/ram_core.c
··· 574 574 /* Initialize general buffer state. */ 575 575 raw_spin_lock_init(&prz->buffer_lock); 576 576 prz->flags = flags; 577 - prz->label = label; 577 + prz->label = kstrdup(label, GFP_KERNEL); 578 578 579 579 ret = persistent_ram_buffer_map(start, size, prz, memtype); 580 580 if (ret)
+2
include/linux/ahci_platform.h
··· 19 19 struct platform_device; 20 20 struct scsi_host_template; 21 21 22 + int ahci_platform_enable_phys(struct ahci_host_priv *hpriv); 23 + void ahci_platform_disable_phys(struct ahci_host_priv *hpriv); 22 24 int ahci_platform_enable_clks(struct ahci_host_priv *hpriv); 23 25 void ahci_platform_disable_clks(struct ahci_host_priv *hpriv); 24 26 int ahci_platform_enable_regulators(struct ahci_host_priv *hpriv);
+1
include/linux/bio.h
··· 470 470 gfp_t); 471 471 extern int bio_uncopy_user(struct bio *); 472 472 void zero_fill_bio_iter(struct bio *bio, struct bvec_iter iter); 473 + void bio_truncate(struct bio *bio, unsigned new_size); 473 474 474 475 static inline void zero_fill_bio(struct bio *bio) 475 476 {
-22
include/linux/bvec.h
··· 153 153 } 154 154 } 155 155 156 - /* 157 - * Get the last single-page segment from the multi-page bvec and store it 158 - * in @seg 159 - */ 160 - static inline void mp_bvec_last_segment(const struct bio_vec *bvec, 161 - struct bio_vec *seg) 162 - { 163 - unsigned total = bvec->bv_offset + bvec->bv_len; 164 - unsigned last_page = (total - 1) / PAGE_SIZE; 165 - 166 - seg->bv_page = bvec->bv_page + last_page; 167 - 168 - /* the whole segment is inside the last page */ 169 - if (bvec->bv_offset >= last_page * PAGE_SIZE) { 170 - seg->bv_offset = bvec->bv_offset % PAGE_SIZE; 171 - seg->bv_len = bvec->bv_len; 172 - } else { 173 - seg->bv_offset = 0; 174 - seg->bv_len = total - last_page * PAGE_SIZE; 175 - } 176 - } 177 - 178 156 #endif /* __LINUX_BVEC_ITER_H */
+34
include/linux/can/dev.h
··· 18 18 #include <linux/can/error.h> 19 19 #include <linux/can/led.h> 20 20 #include <linux/can/netlink.h> 21 + #include <linux/can/skb.h> 21 22 #include <linux/netdevice.h> 22 23 23 24 /* ··· 92 91 #define get_can_dlc(i) (min_t(__u8, (i), CAN_MAX_DLC)) 93 92 #define get_canfd_dlc(i) (min_t(__u8, (i), CANFD_MAX_DLC)) 94 93 94 + /* Check for outgoing skbs that have not been created by the CAN subsystem */ 95 + static inline bool can_skb_headroom_valid(struct net_device *dev, 96 + struct sk_buff *skb) 97 + { 98 + /* af_packet creates a headroom of HH_DATA_MOD bytes which is fine */ 99 + if (WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct can_skb_priv))) 100 + return false; 101 + 102 + /* af_packet does not apply CAN skb specific settings */ 103 + if (skb->ip_summed == CHECKSUM_NONE) { 104 + /* init headroom */ 105 + can_skb_prv(skb)->ifindex = dev->ifindex; 106 + can_skb_prv(skb)->skbcnt = 0; 107 + 108 + skb->ip_summed = CHECKSUM_UNNECESSARY; 109 + 110 + /* preform proper loopback on capable devices */ 111 + if (dev->flags & IFF_ECHO) 112 + skb->pkt_type = PACKET_LOOPBACK; 113 + else 114 + skb->pkt_type = PACKET_HOST; 115 + 116 + skb_reset_mac_header(skb); 117 + skb_reset_network_header(skb); 118 + skb_reset_transport_header(skb); 119 + } 120 + 121 + return true; 122 + } 123 + 95 124 /* Drop a given socketbuffer if it does not contain a valid CAN frame. */ 96 125 static inline bool can_dropped_invalid_skb(struct net_device *dev, 97 126 struct sk_buff *skb) ··· 137 106 cfd->len > CANFD_MAX_DLEN)) 138 107 goto inval_skb; 139 108 } else 109 + goto inval_skb; 110 + 111 + if (!can_skb_headroom_valid(dev, skb)) 140 112 goto inval_skb; 141 113 142 114 return false;
+4 -1
include/linux/dmaengine.h
··· 1364 1364 static inline int dmaengine_desc_set_reuse(struct dma_async_tx_descriptor *tx) 1365 1365 { 1366 1366 struct dma_slave_caps caps; 1367 + int ret; 1367 1368 1368 - dma_get_slave_caps(tx->chan, &caps); 1369 + ret = dma_get_slave_caps(tx->chan, &caps); 1370 + if (ret) 1371 + return ret; 1369 1372 1370 1373 if (caps.descriptor_reuse) { 1371 1374 tx->flags |= DMA_CTRL_REUSE;
+8
include/linux/if_ether.h
··· 24 24 return (struct ethhdr *)skb_mac_header(skb); 25 25 } 26 26 27 + /* Prefer this version in TX path, instead of 28 + * skb_reset_mac_header() + eth_hdr() 29 + */ 30 + static inline struct ethhdr *skb_eth_hdr(const struct sk_buff *skb) 31 + { 32 + return (struct ethhdr *)skb->data; 33 + } 34 + 27 35 static inline struct ethhdr *inner_eth_hdr(const struct sk_buff *skb) 28 36 { 29 37 return (struct ethhdr *)skb_inner_mac_header(skb);
-9
include/linux/kernel.h
··· 79 79 */ 80 80 #define round_down(x, y) ((x) & ~__round_mask(x, y)) 81 81 82 - /** 83 - * FIELD_SIZEOF - get the size of a struct's field 84 - * @t: the target struct 85 - * @f: the target struct's field 86 - * Return: the size of @f in the struct definition without having a 87 - * declared instance of @t. 88 - */ 89 - #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f)) 90 - 91 82 #define typeof_member(T, m) typeof(((T*)0)->m) 92 83 93 84 #define DIV_ROUND_UP __KERNEL_DIV_ROUND_UP
+1
include/linux/libata.h
··· 1175 1175 struct ata_taskfile *tf, u16 *id); 1176 1176 extern void ata_qc_complete(struct ata_queued_cmd *qc); 1177 1177 extern int ata_qc_complete_multiple(struct ata_port *ap, u64 qc_active); 1178 + extern u64 ata_qc_get_active(struct ata_port *ap); 1178 1179 extern void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd); 1179 1180 extern int ata_std_bios_param(struct scsi_device *sdev, 1180 1181 struct block_device *bdev,
+5 -2
include/linux/memory_hotplug.h
··· 122 122 123 123 extern void arch_remove_memory(int nid, u64 start, u64 size, 124 124 struct vmem_altmap *altmap); 125 - extern void __remove_pages(struct zone *zone, unsigned long start_pfn, 126 - unsigned long nr_pages, struct vmem_altmap *altmap); 125 + extern void __remove_pages(unsigned long start_pfn, unsigned long nr_pages, 126 + struct vmem_altmap *altmap); 127 127 128 128 /* reasonably generic interface to expand the physical pages */ 129 129 extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, ··· 342 342 extern int add_memory_resource(int nid, struct resource *resource); 343 343 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn, 344 344 unsigned long nr_pages, struct vmem_altmap *altmap); 345 + extern void remove_pfn_range_from_zone(struct zone *zone, 346 + unsigned long start_pfn, 347 + unsigned long nr_pages); 345 348 extern bool is_memblock_offlined(struct memory_block *mem); 346 349 extern int sparse_add_section(int nid, unsigned long pfn, 347 350 unsigned long nr_pages, struct vmem_altmap *altmap);
+8
include/linux/mfd/mt6397/rtc.h
··· 46 46 47 47 #define RTC_AL_SEC 0x0018 48 48 49 + #define RTC_AL_SEC_MASK 0x003f 50 + #define RTC_AL_MIN_MASK 0x003f 51 + #define RTC_AL_HOU_MASK 0x001f 52 + #define RTC_AL_DOM_MASK 0x001f 53 + #define RTC_AL_DOW_MASK 0x0007 54 + #define RTC_AL_MTH_MASK 0x000f 55 + #define RTC_AL_YEA_MASK 0x007f 56 + 49 57 #define RTC_PDN2 0x002e 50 58 #define RTC_PDN2_PWRON_ALARM BIT(4) 51 59
+1 -1
include/linux/mtd/flashchip.h
··· 40 40 FL_READING, 41 41 FL_CACHEDPRG, 42 42 /* These 4 come from onenand_state_t, which has been unified here */ 43 - FL_RESETING, 43 + FL_RESETTING, 44 44 FL_OTPING, 45 45 FL_PREPARING_ERASE, 46 46 FL_VERIFYING_ERASE,
+1 -1
include/linux/of_mdio.h
··· 55 55 } 56 56 57 57 #else /* CONFIG_OF_MDIO */ 58 - static bool of_mdiobus_child_is_phy(struct device_node *child) 58 + static inline bool of_mdiobus_child_is_phy(struct device_node *child) 59 59 { 60 60 return false; 61 61 }
+11 -8
include/linux/posix-clock.h
··· 69 69 * 70 70 * @ops: Functional interface to the clock 71 71 * @cdev: Character device instance for this clock 72 - * @kref: Reference count. 72 + * @dev: Pointer to the clock's device. 73 73 * @rwsem: Protects the 'zombie' field from concurrent access. 74 74 * @zombie: If 'zombie' is true, then the hardware has disappeared. 75 - * @release: A function to free the structure when the reference count reaches 76 - * zero. May be NULL if structure is statically allocated. 77 75 * 78 76 * Drivers should embed their struct posix_clock within a private 79 77 * structure, obtaining a reference to it during callbacks using 80 78 * container_of(). 79 + * 80 + * Drivers should supply an initialized but not exposed struct device 81 + * to posix_clock_register(). It is used to manage lifetime of the 82 + * driver's private structure. It's 'release' field should be set to 83 + * a release function for this private structure. 81 84 */ 82 85 struct posix_clock { 83 86 struct posix_clock_operations ops; 84 87 struct cdev cdev; 85 - struct kref kref; 88 + struct device *dev; 86 89 struct rw_semaphore rwsem; 87 90 bool zombie; 88 - void (*release)(struct posix_clock *clk); 89 91 }; 90 92 91 93 /** 92 94 * posix_clock_register() - register a new clock 93 - * @clk: Pointer to the clock. Caller must provide 'ops' and 'release' 94 - * @devid: Allocated device id 95 + * @clk: Pointer to the clock. Caller must provide 'ops' field 96 + * @dev: Pointer to the initialized device. Caller must provide 97 + * 'release' field 95 98 * 96 99 * A clock driver calls this function to register itself with the 97 100 * clock device subsystem. If 'clk' points to dynamically allocated ··· 103 100 * 104 101 * Returns zero on success, non-zero otherwise. 105 102 */ 106 - int posix_clock_register(struct posix_clock *clk, dev_t devid); 103 + int posix_clock_register(struct posix_clock *clk, struct device *dev); 107 104 108 105 /** 109 106 * posix_clock_unregister() - unregister a clock
+2 -2
include/linux/spi/spi.h
··· 689 689 /* Helper calls for driver to timestamp transfer */ 690 690 void spi_take_timestamp_pre(struct spi_controller *ctlr, 691 691 struct spi_transfer *xfer, 692 - const void *tx, bool irqs_off); 692 + size_t progress, bool irqs_off); 693 693 void spi_take_timestamp_post(struct spi_controller *ctlr, 694 694 struct spi_transfer *xfer, 695 - const void *tx, bool irqs_off); 695 + size_t progress, bool irqs_off); 696 696 697 697 /* the spi driver core manages memory for the spi_controller classdev */ 698 698 extern struct spi_controller *__spi_alloc_controller(struct device *host,
+1 -1
include/linux/sxgbe_platform.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * 10G controller driver for Samsung EXYNOS SoCs 3 + * 10G controller driver for Samsung Exynos SoCs 4 4 * 5 5 * Copyright (C) 2013 Samsung Electronics Co., Ltd. 6 6 * http://www.samsung.com
+1
include/linux/syscalls.h
··· 1232 1232 */ 1233 1233 1234 1234 int ksys_umount(char __user *name, int flags); 1235 + int ksys_dup(unsigned int fildes); 1235 1236 int ksys_chroot(const char __user *filename); 1236 1237 ssize_t ksys_write(unsigned int fd, const char __user *buf, size_t count); 1237 1238 int ksys_chdir(const char __user *filename);
+11 -2
include/net/dst.h
··· 516 516 struct dst_entry *dst = skb_dst(skb); 517 517 518 518 if (dst && dst->ops->update_pmtu) 519 - dst->ops->update_pmtu(dst, NULL, skb, mtu); 519 + dst->ops->update_pmtu(dst, NULL, skb, mtu, true); 520 + } 521 + 522 + /* update dst pmtu but not do neighbor confirm */ 523 + static inline void skb_dst_update_pmtu_no_confirm(struct sk_buff *skb, u32 mtu) 524 + { 525 + struct dst_entry *dst = skb_dst(skb); 526 + 527 + if (dst && dst->ops->update_pmtu) 528 + dst->ops->update_pmtu(dst, NULL, skb, mtu, false); 520 529 } 521 530 522 531 static inline void skb_tunnel_check_pmtu(struct sk_buff *skb, ··· 535 526 u32 encap_mtu = dst_mtu(encap_dst); 536 527 537 528 if (skb->len > encap_mtu - headroom) 538 - skb_dst_update_pmtu(skb, encap_mtu - headroom); 529 + skb_dst_update_pmtu_no_confirm(skb, encap_mtu - headroom); 539 530 } 540 531 541 532 #endif /* _NET_DST_H */
+2 -1
include/net/dst_ops.h
··· 27 27 struct dst_entry * (*negative_advice)(struct dst_entry *); 28 28 void (*link_failure)(struct sk_buff *); 29 29 void (*update_pmtu)(struct dst_entry *dst, struct sock *sk, 30 - struct sk_buff *skb, u32 mtu); 30 + struct sk_buff *skb, u32 mtu, 31 + bool confirm_neigh); 31 32 void (*redirect)(struct dst_entry *dst, struct sock *sk, 32 33 struct sk_buff *skb); 33 34 int (*local_out)(struct net *net, struct sock *sk, struct sk_buff *skb);
+6
include/net/netfilter/nf_flow_table.h
··· 106 106 }; 107 107 108 108 #define NF_FLOW_TIMEOUT (30 * HZ) 109 + #define nf_flowtable_time_stamp (u32)jiffies 110 + 111 + static inline __s32 nf_flow_timeout_delta(unsigned int timeout) 112 + { 113 + return (__s32)(timeout - nf_flowtable_time_stamp); 114 + } 109 115 110 116 struct nf_flow_route { 111 117 struct {
+5
include/net/sch_generic.h
··· 308 308 int (*delete)(struct tcf_proto *tp, void *arg, 309 309 bool *last, bool rtnl_held, 310 310 struct netlink_ext_ack *); 311 + bool (*delete_empty)(struct tcf_proto *tp); 311 312 void (*walk)(struct tcf_proto *tp, 312 313 struct tcf_walker *arg, bool rtnl_held); 313 314 int (*reoffload)(struct tcf_proto *tp, bool add, ··· 337 336 int flags; 338 337 }; 339 338 339 + /* Classifiers setting TCF_PROTO_OPS_DOIT_UNLOCKED in tcf_proto_ops->flags 340 + * are expected to implement tcf_proto_ops->delete_empty(), otherwise race 341 + * conditions can occur when filters are inserted/deleted simultaneously. 342 + */ 340 343 enum tcf_proto_ops_flags { 341 344 TCF_PROTO_OPS_DOIT_UNLOCKED = 1, 342 345 };
+4 -4
include/trace/events/preemptirq.h
··· 18 18 TP_ARGS(ip, parent_ip), 19 19 20 20 TP_STRUCT__entry( 21 - __field(u32, caller_offs) 22 - __field(u32, parent_offs) 21 + __field(s32, caller_offs) 22 + __field(s32, parent_offs) 23 23 ), 24 24 25 25 TP_fast_assign( 26 - __entry->caller_offs = (u32)(ip - (unsigned long)_stext); 27 - __entry->parent_offs = (u32)(parent_ip - (unsigned long)_stext); 26 + __entry->caller_offs = (s32)(ip - (unsigned long)_stext); 27 + __entry->parent_offs = (s32)(parent_ip - (unsigned long)_stext); 28 28 ), 29 29 30 30 TP_printk("caller=%pS parent=%pS",
+1
include/uapi/linux/input.h
··· 34 34 __kernel_ulong_t __sec; 35 35 #if defined(__sparc__) && defined(__arch64__) 36 36 unsigned int __usec; 37 + unsigned int __pad; 37 38 #else 38 39 __kernel_ulong_t __usec; 39 40 #endif
+5 -5
include/uapi/linux/kcov.h
··· 9 9 * and the comment before kcov_remote_start() for usage details. 10 10 */ 11 11 struct kcov_remote_arg { 12 - unsigned int trace_mode; /* KCOV_TRACE_PC or KCOV_TRACE_CMP */ 13 - unsigned int area_size; /* Length of coverage buffer in words */ 14 - unsigned int num_handles; /* Size of handles array */ 15 - __u64 common_handle; 16 - __u64 handles[0]; 12 + __u32 trace_mode; /* KCOV_TRACE_PC or KCOV_TRACE_CMP */ 13 + __u32 area_size; /* Length of coverage buffer in words */ 14 + __u32 num_handles; /* Size of handles array */ 15 + __aligned_u64 common_handle; 16 + __aligned_u64 handles[0]; 17 17 }; 18 18 19 19 #define KCOV_REMOTE_MAX_HANDLES 0x100
+6 -20
init/main.c
··· 93 93 #include <linux/rodata_test.h> 94 94 #include <linux/jump_label.h> 95 95 #include <linux/mem_encrypt.h> 96 - #include <linux/file.h> 97 96 98 97 #include <asm/io.h> 99 98 #include <asm/bugs.h> ··· 1157 1158 1158 1159 void console_on_rootfs(void) 1159 1160 { 1160 - struct file *file; 1161 - unsigned int i; 1161 + /* Open the /dev/console as stdin, this should never fail */ 1162 + if (ksys_open((const char __user *) "/dev/console", O_RDWR, 0) < 0) 1163 + pr_err("Warning: unable to open an initial console.\n"); 1162 1164 1163 - /* Open /dev/console in kernelspace, this should never fail */ 1164 - file = filp_open("/dev/console", O_RDWR, 0); 1165 - if (IS_ERR(file)) 1166 - goto err_out; 1167 - 1168 - /* create stdin/stdout/stderr, this should never fail */ 1169 - for (i = 0; i < 3; i++) { 1170 - if (f_dupfd(i, file, 0) != i) 1171 - goto err_out; 1172 - } 1173 - 1174 - return; 1175 - 1176 - err_out: 1177 - /* no panic -- this might not be fatal */ 1178 - pr_err("Warning: unable to open an initial console.\n"); 1179 - return; 1165 + /* create stdout/stderr */ 1166 + (void) ksys_dup(0); 1167 + (void) ksys_dup(0); 1180 1168 } 1181 1169 1182 1170 static noinline void __init kernel_init_freeable(void)
+9 -2
kernel/bpf/cgroup.c
··· 35 35 */ 36 36 static void cgroup_bpf_release(struct work_struct *work) 37 37 { 38 - struct cgroup *cgrp = container_of(work, struct cgroup, 39 - bpf.release_work); 38 + struct cgroup *p, *cgrp = container_of(work, struct cgroup, 39 + bpf.release_work); 40 40 enum bpf_cgroup_storage_type stype; 41 41 struct bpf_prog_array *old_array; 42 42 unsigned int type; ··· 64 64 } 65 65 66 66 mutex_unlock(&cgroup_mutex); 67 + 68 + for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p)) 69 + cgroup_bpf_put(p); 67 70 68 71 percpu_ref_exit(&cgrp->bpf.refcnt); 69 72 cgroup_put(cgrp); ··· 202 199 */ 203 200 #define NR ARRAY_SIZE(cgrp->bpf.effective) 204 201 struct bpf_prog_array *arrays[NR] = {}; 202 + struct cgroup *p; 205 203 int ret, i; 206 204 207 205 ret = percpu_ref_init(&cgrp->bpf.refcnt, cgroup_bpf_release_fn, 0, 208 206 GFP_KERNEL); 209 207 if (ret) 210 208 return ret; 209 + 210 + for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p)) 211 + cgroup_bpf_get(p); 211 212 212 213 for (i = 0; i < NR; i++) 213 214 INIT_LIST_HEAD(&cgrp->bpf.progs[i]);
+29 -23
kernel/bpf/verifier.c
··· 907 907 BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5 908 908 }; 909 909 910 - static void __mark_reg_not_init(struct bpf_reg_state *reg); 910 + static void __mark_reg_not_init(const struct bpf_verifier_env *env, 911 + struct bpf_reg_state *reg); 911 912 912 913 /* Mark the unknown part of a register (variable offset or scalar value) as 913 914 * known to have the value @imm. ··· 946 945 verbose(env, "mark_reg_known_zero(regs, %u)\n", regno); 947 946 /* Something bad happened, let's kill all regs */ 948 947 for (regno = 0; regno < MAX_BPF_REG; regno++) 949 - __mark_reg_not_init(regs + regno); 948 + __mark_reg_not_init(env, regs + regno); 950 949 return; 951 950 } 952 951 __mark_reg_known_zero(regs + regno); ··· 1055 1054 } 1056 1055 1057 1056 /* Mark a register as having a completely unknown (scalar) value. */ 1058 - static void __mark_reg_unknown(struct bpf_reg_state *reg) 1057 + static void __mark_reg_unknown(const struct bpf_verifier_env *env, 1058 + struct bpf_reg_state *reg) 1059 1059 { 1060 1060 /* 1061 1061 * Clear type, id, off, and union(map_ptr, range) and ··· 1066 1064 reg->type = SCALAR_VALUE; 1067 1065 reg->var_off = tnum_unknown; 1068 1066 reg->frameno = 0; 1067 + reg->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks ? 1068 + true : false; 1069 1069 __mark_reg_unbounded(reg); 1070 1070 } 1071 1071 ··· 1078 1074 verbose(env, "mark_reg_unknown(regs, %u)\n", regno); 1079 1075 /* Something bad happened, let's kill all regs except FP */ 1080 1076 for (regno = 0; regno < BPF_REG_FP; regno++) 1081 - __mark_reg_not_init(regs + regno); 1077 + __mark_reg_not_init(env, regs + regno); 1082 1078 return; 1083 1079 } 1084 - regs += regno; 1085 - __mark_reg_unknown(regs); 1086 - /* constant backtracking is enabled for root without bpf2bpf calls */ 1087 - regs->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks ? 
1088 - true : false; 1080 + __mark_reg_unknown(env, regs + regno); 1089 1081 } 1090 1082 1091 - static void __mark_reg_not_init(struct bpf_reg_state *reg) 1083 + static void __mark_reg_not_init(const struct bpf_verifier_env *env, 1084 + struct bpf_reg_state *reg) 1092 1085 { 1093 - __mark_reg_unknown(reg); 1086 + __mark_reg_unknown(env, reg); 1094 1087 reg->type = NOT_INIT; 1095 1088 } 1096 1089 ··· 1098 1097 verbose(env, "mark_reg_not_init(regs, %u)\n", regno); 1099 1098 /* Something bad happened, let's kill all regs except FP */ 1100 1099 for (regno = 0; regno < BPF_REG_FP; regno++) 1101 - __mark_reg_not_init(regs + regno); 1100 + __mark_reg_not_init(env, regs + regno); 1102 1101 return; 1103 1102 } 1104 - __mark_reg_not_init(regs + regno); 1103 + __mark_reg_not_init(env, regs + regno); 1105 1104 } 1106 1105 1107 1106 #define DEF_NOT_SUBREG (0) ··· 3235 3234 } 3236 3235 if (state->stack[spi].slot_type[0] == STACK_SPILL && 3237 3236 state->stack[spi].spilled_ptr.type == SCALAR_VALUE) { 3238 - __mark_reg_unknown(&state->stack[spi].spilled_ptr); 3237 + __mark_reg_unknown(env, &state->stack[spi].spilled_ptr); 3239 3238 for (j = 0; j < BPF_REG_SIZE; j++) 3240 3239 state->stack[spi].slot_type[j] = STACK_MISC; 3241 3240 goto mark; ··· 3893 3892 if (!reg) 3894 3893 continue; 3895 3894 if (reg_is_pkt_pointer_any(reg)) 3896 - __mark_reg_unknown(reg); 3895 + __mark_reg_unknown(env, reg); 3897 3896 } 3898 3897 } 3899 3898 ··· 3921 3920 if (!reg) 3922 3921 continue; 3923 3922 if (reg->ref_obj_id == ref_obj_id) 3924 - __mark_reg_unknown(reg); 3923 + __mark_reg_unknown(env, reg); 3925 3924 } 3926 3925 } 3927 3926 ··· 4583 4582 /* Taint dst register if offset had invalid bounds derived from 4584 4583 * e.g. dead branches. 4585 4584 */ 4586 - __mark_reg_unknown(dst_reg); 4585 + __mark_reg_unknown(env, dst_reg); 4587 4586 return 0; 4588 4587 } 4589 4588 ··· 4835 4834 /* Taint dst register if offset had invalid bounds derived from 4836 4835 * e.g. dead branches. 
4837 4836 */ 4838 - __mark_reg_unknown(dst_reg); 4837 + __mark_reg_unknown(env, dst_reg); 4839 4838 return 0; 4840 4839 } 4841 4840 4842 4841 if (!src_known && 4843 4842 opcode != BPF_ADD && opcode != BPF_SUB && opcode != BPF_AND) { 4844 - __mark_reg_unknown(dst_reg); 4843 + __mark_reg_unknown(env, dst_reg); 4845 4844 return 0; 4846 4845 } 4847 4846 ··· 6264 6263 static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn) 6265 6264 { 6266 6265 struct bpf_reg_state *regs = cur_regs(env); 6266 + static const int ctx_reg = BPF_REG_6; 6267 6267 u8 mode = BPF_MODE(insn->code); 6268 6268 int i, err; 6269 6269 ··· 6298 6296 } 6299 6297 6300 6298 /* check whether implicit source operand (register R6) is readable */ 6301 - err = check_reg_arg(env, BPF_REG_6, SRC_OP); 6299 + err = check_reg_arg(env, ctx_reg, SRC_OP); 6302 6300 if (err) 6303 6301 return err; 6304 6302 ··· 6317 6315 return -EINVAL; 6318 6316 } 6319 6317 6320 - if (regs[BPF_REG_6].type != PTR_TO_CTX) { 6318 + if (regs[ctx_reg].type != PTR_TO_CTX) { 6321 6319 verbose(env, 6322 6320 "at the time of BPF_LD_ABS|IND R6 != pointer to skb\n"); 6323 6321 return -EINVAL; ··· 6329 6327 if (err) 6330 6328 return err; 6331 6329 } 6330 + 6331 + err = check_ctx_reg(env, &regs[ctx_reg], ctx_reg); 6332 + if (err < 0) 6333 + return err; 6332 6334 6333 6335 /* reset caller saved regs to unreadable */ 6334 6336 for (i = 0; i < CALLER_SAVED_REGS; i++) { ··· 6988 6982 /* since the register is unused, clear its state 6989 6983 * to make further comparison simpler 6990 6984 */ 6991 - __mark_reg_not_init(&st->regs[i]); 6985 + __mark_reg_not_init(env, &st->regs[i]); 6992 6986 } 6993 6987 6994 6988 for (i = 0; i < st->allocated_stack / BPF_REG_SIZE; i++) { ··· 6996 6990 /* liveness must not touch this stack slot anymore */ 6997 6991 st->stack[i].spilled_ptr.live |= REG_LIVE_DONE; 6998 6992 if (!(live & REG_LIVE_READ)) { 6999 - __mark_reg_not_init(&st->stack[i].spilled_ptr); 6993 + __mark_reg_not_init(env, 
&st->stack[i].spilled_ptr); 7000 6994 for (j = 0; j < BPF_REG_SIZE; j++) 7001 6995 st->stack[i].slot_type[j] = STACK_INVALID; 7002 6996 }
+3 -3
kernel/cred.c
··· 223 223 new->magic = CRED_MAGIC; 224 224 #endif 225 225 226 - if (security_cred_alloc_blank(new, GFP_KERNEL) < 0) 226 + if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0) 227 227 goto error; 228 228 229 229 return new; ··· 282 282 new->security = NULL; 283 283 #endif 284 284 285 - if (security_prepare_creds(new, old, GFP_KERNEL) < 0) 285 + if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0) 286 286 goto error; 287 287 validate_creds(new); 288 288 return new; ··· 715 715 #ifdef CONFIG_SECURITY 716 716 new->security = NULL; 717 717 #endif 718 - if (security_prepare_creds(new, old, GFP_KERNEL) < 0) 718 + if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0) 719 719 goto error; 720 720 721 721 put_cred(old);
+8 -4
kernel/exit.c
··· 517 517 } 518 518 519 519 write_unlock_irq(&tasklist_lock); 520 - if (unlikely(pid_ns == &init_pid_ns)) { 521 - panic("Attempted to kill init! exitcode=0x%08x\n", 522 - father->signal->group_exit_code ?: father->exit_code); 523 - } 524 520 525 521 list_for_each_entry_safe(p, n, dead, ptrace_entry) { 526 522 list_del_init(&p->ptrace_entry); ··· 762 766 acct_update_integrals(tsk); 763 767 group_dead = atomic_dec_and_test(&tsk->signal->live); 764 768 if (group_dead) { 769 + /* 770 + * If the last thread of global init has exited, panic 771 + * immediately to get a useable coredump. 772 + */ 773 + if (unlikely(is_global_init(tsk))) 774 + panic("Attempted to kill init! exitcode=0x%08x\n", 775 + tsk->signal->group_exit_code ?: (int)code); 776 + 765 777 #ifdef CONFIG_POSIX_TIMERS 766 778 hrtimer_cancel(&tsk->signal->real_timer); 767 779 exit_itimers(tsk->signal);
+10
kernel/fork.c
··· 2578 2578 #endif 2579 2579 2580 2580 #ifdef __ARCH_WANT_SYS_CLONE3 2581 + 2582 + /* 2583 + * copy_thread implementations handle CLONE_SETTLS by reading the TLS value from 2584 + * the registers containing the syscall arguments for clone. This doesn't work 2585 + * with clone3 since the TLS value is passed in clone_args instead. 2586 + */ 2587 + #ifndef CONFIG_HAVE_COPY_THREAD_TLS 2588 + #error clone3 requires copy_thread_tls support in arch 2589 + #endif 2590 + 2581 2591 noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs, 2582 2592 struct clone_args __user *uargs, 2583 2593 size_t usize)
+7
kernel/seccomp.c
··· 1026 1026 struct seccomp_notif unotif; 1027 1027 ssize_t ret; 1028 1028 1029 + /* Verify that we're not given garbage to keep struct extensible. */ 1030 + ret = check_zeroed_user(buf, sizeof(unotif)); 1031 + if (ret < 0) 1032 + return ret; 1033 + if (!ret) 1034 + return -EINVAL; 1035 + 1029 1036 memset(&unotif, 0, sizeof(unotif)); 1030 1037 1031 1038 ret = down_interruptible(&filter->notif->request);
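The new `check_zeroed_user()` call rejects notification buffers containing stray non-zero bytes, which keeps `struct seccomp_notif` extensible: a future kernel can add fields without old userspace garbage being misread as valid data. A rough userspace sketch of the idea (plain memory instead of a `__user` pointer; `notify_recv` and its error codes are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified in-memory analogue of check_zeroed_user():
 * returns 1 if every byte of buf[0..size) is zero, else 0. */
static int check_zeroed(const unsigned char *buf, size_t size)
{
	size_t i;

	for (i = 0; i < size; i++)
		if (buf[i])
			return 0;
	return 1;
}

/* Hypothetical receive path: require a zeroed buffer before
 * filling it in, so new struct fields can be added later. */
static int notify_recv(unsigned char *buf, size_t size)
{
	if (!check_zeroed(buf, size))
		return -1;	/* stands in for -EINVAL */
	buf[0] = 42;		/* ...then write the notification */
	return 0;
}
```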
+19 -11
kernel/taskstats.c
··· 554 554 static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk) 555 555 { 556 556 struct signal_struct *sig = tsk->signal; 557 - struct taskstats *stats; 557 + struct taskstats *stats_new, *stats; 558 558 559 - if (sig->stats || thread_group_empty(tsk)) 560 - goto ret; 559 + /* Pairs with smp_store_release() below. */ 560 + stats = smp_load_acquire(&sig->stats); 561 + if (stats || thread_group_empty(tsk)) 562 + return stats; 561 563 562 564 /* No problem if kmem_cache_zalloc() fails */ 563 - stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL); 565 + stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL); 564 566 565 567 spin_lock_irq(&tsk->sighand->siglock); 566 - if (!sig->stats) { 567 - sig->stats = stats; 568 - stats = NULL; 568 + stats = sig->stats; 569 + if (!stats) { 570 + /* 571 + * Pairs with smp_store_release() above and order the 572 + * kmem_cache_zalloc(). 573 + */ 574 + smp_store_release(&sig->stats, stats_new); 575 + stats = stats_new; 576 + stats_new = NULL; 569 577 } 570 578 spin_unlock_irq(&tsk->sighand->siglock); 571 579 572 - if (stats) 573 - kmem_cache_free(taskstats_cache, stats); 574 - ret: 575 - return sig->stats; 580 + if (stats_new) 581 + kmem_cache_free(taskstats_cache, stats_new); 582 + 583 + return stats; 576 584 } 577 585 578 586 /* Send pid data out on exit */
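The taskstats change pairs `smp_load_acquire()` with `smp_store_release()` so a reader that observes the published `sig->stats` pointer is also guaranteed to observe the zero-initialized allocation behind it. A loose userspace analogue using C11 atomics (names invented; the kernel's siglock is elided, so this sketch is only correct as shown when called from a single thread):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

struct stats { int counter; };

static _Atomic(struct stats *) shared_stats;

/* Allocate once, publish with release semantics: an acquire load
 * that sees the pointer also sees the zeroed contents. */
static struct stats *stats_get(void)
{
	struct stats *s, *s_new;

	s = atomic_load_explicit(&shared_stats, memory_order_acquire);
	if (s)
		return s;

	s_new = calloc(1, sizeof(*s_new));
	if (!s_new)
		return NULL;

	/* In the kernel the re-check runs under siglock; a real
	 * multithreaded version would hold a mutex here. */
	s = atomic_load_explicit(&shared_stats, memory_order_relaxed);
	if (!s) {
		atomic_store_explicit(&shared_stats, s_new,
				      memory_order_release);
		return s_new;
	}
	free(s_new);
	return s;
}
```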
+13 -18
kernel/time/posix-clock.c
··· 14 14 15 15 #include "posix-timers.h" 16 16 17 - static void delete_clock(struct kref *kref); 18 - 19 17 /* 20 18 * Returns NULL if the posix_clock instance attached to 'fp' is old and stale. 21 19 */ ··· 123 125 err = 0; 124 126 125 127 if (!err) { 126 - kref_get(&clk->kref); 128 + get_device(clk->dev); 127 129 fp->private_data = clk; 128 130 } 129 131 out: ··· 139 141 if (clk->ops.release) 140 142 err = clk->ops.release(clk); 141 143 142 - kref_put(&clk->kref, delete_clock); 144 + put_device(clk->dev); 143 145 144 146 fp->private_data = NULL; 145 147 ··· 159 161 #endif 160 162 }; 161 163 162 - int posix_clock_register(struct posix_clock *clk, dev_t devid) 164 + int posix_clock_register(struct posix_clock *clk, struct device *dev) 163 165 { 164 166 int err; 165 167 166 - kref_init(&clk->kref); 167 168 init_rwsem(&clk->rwsem); 168 169 169 170 cdev_init(&clk->cdev, &posix_clock_file_operations); 171 + err = cdev_device_add(&clk->cdev, dev); 172 + if (err) { 173 + pr_err("%s unable to add device %d:%d\n", 174 + dev_name(dev), MAJOR(dev->devt), MINOR(dev->devt)); 175 + return err; 176 + } 170 177 clk->cdev.owner = clk->ops.owner; 171 - err = cdev_add(&clk->cdev, devid, 1); 178 + clk->dev = dev; 172 179 173 - return err; 180 + return 0; 174 181 } 175 182 EXPORT_SYMBOL_GPL(posix_clock_register); 176 183 177 - static void delete_clock(struct kref *kref) 178 - { 179 - struct posix_clock *clk = container_of(kref, struct posix_clock, kref); 180 - 181 - if (clk->release) 182 - clk->release(clk); 183 - } 184 - 185 184 void posix_clock_unregister(struct posix_clock *clk) 186 185 { 187 - cdev_del(&clk->cdev); 186 + cdev_device_del(&clk->cdev, clk->dev); 188 187 189 188 down_write(&clk->rwsem); 190 189 clk->zombie = true; 191 190 up_write(&clk->rwsem); 192 191 193 - kref_put(&clk->kref, delete_clock); 192 + put_device(clk->dev); 194 193 } 195 194 EXPORT_SYMBOL_GPL(posix_clock_unregister); 196 195
+14
kernel/trace/fgraph.c
··· 96 96 return 0; 97 97 } 98 98 99 + /* 100 + * Not all archs define MCOUNT_INSN_SIZE which is used to look for direct 101 + * functions. But those archs currently don't support direct functions 102 + * anyway, and ftrace_find_rec_direct() is just a stub for them. 103 + * Define MCOUNT_INSN_SIZE to keep those archs compiling. 104 + */ 105 + #ifndef MCOUNT_INSN_SIZE 106 + /* Make sure this only works without direct calls */ 107 + # ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 108 + # error MCOUNT_INSN_SIZE not defined with direct calls enabled 109 + # endif 110 + # define MCOUNT_INSN_SIZE 0 111 + #endif 112 + 99 113 int function_graph_enter(unsigned long ret, unsigned long func, 100 114 unsigned long frame_pointer, unsigned long *retp) 101 115 {
+3 -3
kernel/trace/ftrace.c
··· 526 526 } 527 527 528 528 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 529 - avg = rec->time; 530 - do_div(avg, rec->counter); 529 + avg = div64_ul(rec->time, rec->counter); 531 530 if (tracing_thresh && (avg < tracing_thresh)) 532 531 goto out; 533 532 #endif ··· 552 553 * Divide only 1000 for ns^2 -> us^2 conversion. 553 554 * trace_print_graph_duration will divide 1000 again. 554 555 */ 555 - do_div(stddev, rec->counter * (rec->counter - 1) * 1000); 556 + stddev = div64_ul(stddev, 557 + rec->counter * (rec->counter - 1) * 1000); 556 558 } 557 559 558 560 trace_seq_init(&s);
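`div64_ul()` differs from `do_div()` in two ways that matter here: it returns the quotient instead of modifying the dividend in place, and it takes a full `unsigned long` divisor, so a product like `rec->counter * (rec->counter - 1) * 1000` is not silently truncated to 32 bits. A minimal userspace stand-in for the averaging arithmetic (stdint types approximating the kernel's, `profile_avg` invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's div64_ul(): 64-bit dividend,
 * unsigned long divisor, quotient returned, dividend untouched. */
static uint64_t div64_ul_sketch(uint64_t dividend, unsigned long divisor)
{
	return dividend / divisor;
}

/* Average time per hit, as in function_stat_show(). */
static uint64_t profile_avg(uint64_t total_ns, unsigned long counter)
{
	return div64_ul_sketch(total_ns, counter);
}
```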
+1 -1
kernel/trace/trace_events_inject.c
··· 195 195 unsigned long irq_flags; 196 196 void *entry = NULL; 197 197 int entry_size; 198 - u64 val; 198 + u64 val = 0; 199 199 int len; 200 200 201 201 entry = trace_alloc_entry(call, &entry_size);
+3 -1
kernel/trace/trace_sched_wakeup.c
··· 630 630 if (ret) { 631 631 pr_info("wakeup trace: Couldn't activate tracepoint" 632 632 " probe to kernel_sched_migrate_task\n"); 633 - return; 633 + goto fail_deprobe_sched_switch; 634 634 } 635 635 636 636 wakeup_reset(tr); ··· 648 648 printk(KERN_ERR "failed to start wakeup tracer\n"); 649 649 650 650 return; 651 + fail_deprobe_sched_switch: 652 + unregister_trace_sched_switch(probe_wakeup_sched_switch, NULL); 651 653 fail_deprobe_wake_new: 652 654 unregister_trace_sched_wakeup_new(probe_wakeup, NULL); 653 655 fail_deprobe:
+1 -1
kernel/trace/trace_seq.c
··· 381 381 int prefix_type, int rowsize, int groupsize, 382 382 const void *buf, size_t len, bool ascii) 383 383 { 384 - unsigned int save_len = s->seq.len; 384 + unsigned int save_len = s->seq.len; 385 385 386 386 if (s->full) 387 387 return 0;
+5
kernel/trace/trace_stack.c
··· 283 283 local_irq_restore(flags); 284 284 } 285 285 286 + /* Some archs may not define MCOUNT_INSN_SIZE */ 287 + #ifndef MCOUNT_INSN_SIZE 288 + # define MCOUNT_INSN_SIZE 0 289 + #endif 290 + 286 291 static void 287 292 stack_trace_call(unsigned long ip, unsigned long parent_ip, 288 293 struct ftrace_ops *op, struct pt_regs *pt_regs)
+6 -2
mm/gup_benchmark.c
··· 26 26 unsigned long i, nr_pages, addr, next; 27 27 int nr; 28 28 struct page **pages; 29 + int ret = 0; 29 30 30 31 if (gup->size > ULONG_MAX) 31 32 return -EINVAL; ··· 64 63 NULL); 65 64 break; 66 65 default: 67 - return -1; 66 + kvfree(pages); 67 + ret = -EINVAL; 68 + goto out; 68 69 } 69 70 70 71 if (nr <= 0) ··· 88 85 gup->put_delta_usec = ktime_us_delta(end_time, start_time); 89 86 90 87 kvfree(pages); 91 - return 0; 88 + out: 89 + return ret; 92 90 } 93 91 94 92 static long gup_benchmark_ioctl(struct file *filep, unsigned int cmd,
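The gup_benchmark fix converts an early `return -1` into the kernel's usual single-exit pattern so the `pages` buffer is also freed on the `-EINVAL` path. A simplified sketch of that shape (hypothetical function, numeric stand-ins for errno values, and a single `free()` at the exit label rather than the patch's two `kvfree()` calls):

```c
#include <assert.h>
#include <stdlib.h>

/* Single-exit error handling in the style of the fixed
 * __gup_benchmark_ioctl(): every path frees the buffer exactly once. */
static int run_benchmark(int cmd, size_t nr_pages)
{
	void **pages;
	int ret = 0;

	pages = calloc(nr_pages, sizeof(*pages));
	if (!pages)
		return -12;		/* stands in for -ENOMEM */

	switch (cmd) {
	case 0:
		/* ... do the timed pin/unpin work ... */
		break;
	default:
		ret = -22;		/* stands in for -EINVAL */
		goto out;		/* pages still freed below */
	}

	/* ... gather results ... */
out:
	free(pages);
	return ret;
}
```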
+50 -1
mm/hugetlb.c
··· 27 27 #include <linux/swapops.h> 28 28 #include <linux/jhash.h> 29 29 #include <linux/numa.h> 30 + #include <linux/llist.h> 30 31 31 32 #include <asm/page.h> 32 33 #include <asm/pgtable.h> ··· 1137 1136 page[2].mapping = NULL; 1138 1137 } 1139 1138 1140 - void free_huge_page(struct page *page) 1139 + static void __free_huge_page(struct page *page) 1141 1140 { 1142 1141 /* 1143 1142 * Can't pass hstate in here because it is called from the ··· 1198 1197 enqueue_huge_page(h, page); 1199 1198 } 1200 1199 spin_unlock(&hugetlb_lock); 1200 + } 1201 + 1202 + /* 1203 + * As free_huge_page() can be called from a non-task context, we have 1204 + * to defer the actual freeing in a workqueue to prevent potential 1205 + * hugetlb_lock deadlock. 1206 + * 1207 + * free_hpage_workfn() locklessly retrieves the linked list of pages to 1208 + * be freed and frees them one-by-one. As the page->mapping pointer is 1209 + * going to be cleared in __free_huge_page() anyway, it is reused as the 1210 + * llist_node structure of a lockless linked list of huge pages to be freed. 1211 + */ 1212 + static LLIST_HEAD(hpage_freelist); 1213 + 1214 + static void free_hpage_workfn(struct work_struct *work) 1215 + { 1216 + struct llist_node *node; 1217 + struct page *page; 1218 + 1219 + node = llist_del_all(&hpage_freelist); 1220 + 1221 + while (node) { 1222 + page = container_of((struct address_space **)node, 1223 + struct page, mapping); 1224 + node = node->next; 1225 + __free_huge_page(page); 1226 + } 1227 + } 1228 + static DECLARE_WORK(free_hpage_work, free_hpage_workfn); 1229 + 1230 + void free_huge_page(struct page *page) 1231 + { 1232 + /* 1233 + * Defer freeing if in non-task context to avoid hugetlb_lock deadlock. 1234 + */ 1235 + if (!in_task()) { 1236 + /* 1237 + * Only call schedule_work() if hpage_freelist is previously 1238 + * empty. Otherwise, schedule_work() had been called but the 1239 + * workfn hasn't retrieved the list yet. 
1240 + */ 1241 + if (llist_add((struct llist_node *)&page->mapping, 1242 + &hpage_freelist)) 1243 + schedule_work(&free_hpage_work); 1244 + return; 1245 + } 1246 + 1247 + __free_huge_page(page); 1201 1248 } 1202 1249 1203 1250 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
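Because `free_huge_page()` may now run in non-task (e.g. softirq) context where taking `hugetlb_lock` could deadlock, pages are pushed onto a lock-free llist and drained from a workqueue, and `llist_add()` reports whether the list was previously empty so the work is scheduled only on that transition. A single-threaded userspace sketch of the push/drain halves, with C11 atomics standing in for the kernel's llist (all names invented):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct node { struct node *next; int id; };

static _Atomic(struct node *) freelist;

/* llist_add() analogue: push, return 1 if the list was empty —
 * the caller only schedules the worker on that transition. */
static int list_push(struct node *n)
{
	struct node *head;

	head = atomic_load_explicit(&freelist, memory_order_relaxed);
	do {
		n->next = head;
	} while (!atomic_compare_exchange_weak_explicit(&freelist, &head, n,
							memory_order_release,
							memory_order_relaxed));
	return head == NULL;
}

/* llist_del_all() analogue: detach the whole list in one exchange,
 * then walk it at leisure; returns how many nodes were drained. */
static int drain(void)
{
	struct node *n, *next;
	int freed = 0;

	n = atomic_exchange_explicit(&freelist, NULL, memory_order_acquire);
	while (n) {
		next = n->next;
		/* __free_huge_page(page) would run here */
		freed++;
		n = next;
	}
	return freed;
}
```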
+16 -15
mm/memory_hotplug.c
··· 483 483 pgdat->node_spanned_pages = node_end_pfn - node_start_pfn; 484 484 } 485 485 486 - static void __remove_zone(struct zone *zone, unsigned long start_pfn, 487 - unsigned long nr_pages) 486 + void __ref remove_pfn_range_from_zone(struct zone *zone, 487 + unsigned long start_pfn, 488 + unsigned long nr_pages) 488 489 { 489 490 struct pglist_data *pgdat = zone->zone_pgdat; 490 491 unsigned long flags; ··· 500 499 return; 501 500 #endif 502 501 502 + clear_zone_contiguous(zone); 503 + 503 504 pgdat_resize_lock(zone->zone_pgdat, &flags); 504 505 shrink_zone_span(zone, start_pfn, start_pfn + nr_pages); 505 506 update_pgdat_span(pgdat); 506 507 pgdat_resize_unlock(zone->zone_pgdat, &flags); 508 + 509 + set_zone_contiguous(zone); 507 510 } 508 511 509 - static void __remove_section(struct zone *zone, unsigned long pfn, 510 - unsigned long nr_pages, unsigned long map_offset, 511 - struct vmem_altmap *altmap) 512 + static void __remove_section(unsigned long pfn, unsigned long nr_pages, 513 + unsigned long map_offset, 514 + struct vmem_altmap *altmap) 512 515 { 513 516 struct mem_section *ms = __nr_to_section(pfn_to_section_nr(pfn)); 514 517 515 518 if (WARN_ON_ONCE(!valid_section(ms))) 516 519 return; 517 520 518 - __remove_zone(zone, pfn, nr_pages); 519 521 sparse_remove_section(ms, pfn, nr_pages, map_offset, altmap); 520 522 } 521 523 522 524 /** 523 - * __remove_pages() - remove sections of pages from a zone 524 - * @zone: zone from which pages need to be removed 525 + * __remove_pages() - remove sections of pages 525 526 * @pfn: starting pageframe (must be aligned to start of a section) 526 527 * @nr_pages: number of pages to remove (must be multiple of section size) 527 528 * @altmap: alternative device page map or %NULL if default memmap is used ··· 533 530 * sure that pages are marked reserved and zones are adjust properly by 534 531 * calling offline_pages(). 
535 532 */ 536 - void __remove_pages(struct zone *zone, unsigned long pfn, 537 - unsigned long nr_pages, struct vmem_altmap *altmap) 533 + void __remove_pages(unsigned long pfn, unsigned long nr_pages, 534 + struct vmem_altmap *altmap) 538 535 { 539 536 unsigned long map_offset = 0; 540 537 unsigned long nr, start_sec, end_sec; 541 538 542 539 map_offset = vmem_altmap_offset(altmap); 543 - 544 - clear_zone_contiguous(zone); 545 540 546 541 if (check_pfn_span(pfn, nr_pages, "remove")) 547 542 return; ··· 552 551 cond_resched(); 553 552 pfns = min(nr_pages, PAGES_PER_SECTION 554 553 - (pfn & ~PAGE_SECTION_MASK)); 555 - __remove_section(zone, pfn, pfns, map_offset, altmap); 554 + __remove_section(pfn, pfns, map_offset, altmap); 556 555 pfn += pfns; 557 556 nr_pages -= pfns; 558 557 map_offset = 0; 559 558 } 560 - 561 - set_zone_contiguous(zone); 562 559 } 563 560 564 561 int set_online_page_callback(online_page_callback_t callback) ··· 868 869 (unsigned long long) pfn << PAGE_SHIFT, 869 870 (((unsigned long long) pfn + nr_pages) << PAGE_SHIFT) - 1); 870 871 memory_notify(MEM_CANCEL_ONLINE, &arg); 872 + remove_pfn_range_from_zone(zone, pfn, nr_pages); 871 873 mem_hotplug_done(); 872 874 return ret; 873 875 } ··· 1628 1628 writeback_set_ratelimit(); 1629 1629 1630 1630 memory_notify(MEM_OFFLINE, &arg); 1631 + remove_pfn_range_from_zone(zone, start_pfn, nr_pages); 1631 1632 mem_hotplug_done(); 1632 1633 return 0; 1633 1634
+1 -1
mm/memremap.c
··· 120 120 121 121 mem_hotplug_begin(); 122 122 if (pgmap->type == MEMORY_DEVICE_PRIVATE) { 123 - __remove_pages(page_zone(first_page), PHYS_PFN(res->start), 123 + __remove_pages(PHYS_PFN(res->start), 124 124 PHYS_PFN(resource_size(res)), NULL); 125 125 } else { 126 126 arch_remove_memory(nid, res->start, resource_size(res),
+17 -6
mm/migrate.c
··· 1512 1512 /* 1513 1513 * Resolves the given address to a struct page, isolates it from the LRU and 1514 1514 * puts it to the given pagelist. 1515 - * Returns -errno if the page cannot be found/isolated or 0 when it has been 1516 - * queued or the page doesn't need to be migrated because it is already on 1517 - * the target node 1515 + * Returns: 1516 + * errno - if the page cannot be found/isolated 1517 + * 0 - when it doesn't have to be migrated because it is already on the 1518 + * target node 1519 + * 1 - when it has been queued 1518 1520 */ 1519 1521 static int add_page_for_migration(struct mm_struct *mm, unsigned long addr, 1520 1522 int node, struct list_head *pagelist, bool migrate_all) ··· 1555 1553 if (PageHuge(page)) { 1556 1554 if (PageHead(page)) { 1557 1555 isolate_huge_page(page, pagelist); 1558 - err = 0; 1556 + err = 1; 1559 1557 } 1560 1558 } else { 1561 1559 struct page *head; ··· 1565 1563 if (err) 1566 1564 goto out_putpage; 1567 1565 1568 - err = 0; 1566 + err = 1; 1569 1567 list_add_tail(&head->lru, pagelist); 1570 1568 mod_node_page_state(page_pgdat(head), 1571 1569 NR_ISOLATED_ANON + page_is_file_cache(head), ··· 1642 1640 */ 1643 1641 err = add_page_for_migration(mm, addr, current_node, 1644 1642 &pagelist, flags & MPOL_MF_MOVE_ALL); 1645 - if (!err) 1643 + 1644 + if (!err) { 1645 + /* The page is already on the target node */ 1646 + err = store_status(status, i, current_node, 1); 1647 + if (err) 1648 + goto out_flush; 1646 1649 continue; 1650 + } else if (err > 0) { 1651 + /* The page is successfully queued for migration */ 1652 + continue; 1653 + } 1647 1654 1648 1655 err = store_status(status, i, err, 1); 1649 1656 if (err)
-6
mm/mmap.c
··· 90 90 * MAP_PRIVATE r: (no) no r: (yes) yes r: (no) yes r: (no) yes 91 91 * w: (no) no w: (no) no w: (copy) copy w: (no) no 92 92 * x: (no) no x: (no) yes x: (no) yes x: (yes) yes 93 - * 94 - * On arm64, PROT_EXEC has the following behaviour for both MAP_SHARED and 95 - * MAP_PRIVATE: 96 - * r: (no) no 97 - * w: (no) no 98 - * x: (yes) yes 99 93 */ 100 94 pgprot_t protection_map[16] __ro_after_init = { 101 95 __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
+1 -1
mm/oom_kill.c
··· 890 890 K(get_mm_counter(mm, MM_FILEPAGES)), 891 891 K(get_mm_counter(mm, MM_SHMEMPAGES)), 892 892 from_kuid(&init_user_ns, task_uid(victim)), 893 - mm_pgtables_bytes(mm), victim->signal->oom_score_adj); 893 + mm_pgtables_bytes(mm) >> 10, victim->signal->oom_score_adj); 894 894 task_unlock(victim); 895 895 896 896 /*
+5
mm/zsmalloc.c
··· 2069 2069 zs_pool_dec_isolated(pool); 2070 2070 } 2071 2071 2072 + if (page_zone(newpage) != page_zone(page)) { 2073 + dec_zone_page_state(page, NR_ZSPAGES); 2074 + inc_zone_page_state(newpage, NR_ZSPAGES); 2075 + } 2076 + 2072 2077 reset_page(page); 2073 2078 put_page(page); 2074 2079 page = newpage;
+1
net/8021q/vlan.h
··· 126 126 void vlan_setup(struct net_device *dev); 127 127 int register_vlan_dev(struct net_device *dev, struct netlink_ext_ack *extack); 128 128 void unregister_vlan_dev(struct net_device *dev, struct list_head *head); 129 + void vlan_dev_uninit(struct net_device *dev); 129 130 bool vlan_dev_inherit_address(struct net_device *dev, 130 131 struct net_device *real_dev); 131 132
+2 -1
net/8021q/vlan_dev.c
··· 586 586 return 0; 587 587 } 588 588 589 - static void vlan_dev_uninit(struct net_device *dev) 589 + /* Note: this function might be called multiple times for the same device. */ 590 + void vlan_dev_uninit(struct net_device *dev) 590 591 { 591 592 struct vlan_priority_tci_mapping *pm; 592 593 struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
+12 -7
net/8021q/vlan_netlink.c
··· 108 108 struct ifla_vlan_flags *flags; 109 109 struct ifla_vlan_qos_mapping *m; 110 110 struct nlattr *attr; 111 - int rem; 111 + int rem, err; 112 112 113 113 if (data[IFLA_VLAN_FLAGS]) { 114 114 flags = nla_data(data[IFLA_VLAN_FLAGS]); 115 - vlan_dev_change_flags(dev, flags->flags, flags->mask); 115 + err = vlan_dev_change_flags(dev, flags->flags, flags->mask); 116 + if (err) 117 + return err; 116 118 } 117 119 if (data[IFLA_VLAN_INGRESS_QOS]) { 118 120 nla_for_each_nested(attr, data[IFLA_VLAN_INGRESS_QOS], rem) { ··· 125 123 if (data[IFLA_VLAN_EGRESS_QOS]) { 126 124 nla_for_each_nested(attr, data[IFLA_VLAN_EGRESS_QOS], rem) { 127 125 m = nla_data(attr); 128 - vlan_dev_set_egress_priority(dev, m->from, m->to); 126 + err = vlan_dev_set_egress_priority(dev, m->from, m->to); 127 + if (err) 128 + return err; 129 129 } 130 130 } 131 131 return 0; ··· 183 179 return -EINVAL; 184 180 185 181 err = vlan_changelink(dev, tb, data, extack); 186 - if (err < 0) 187 - return err; 188 - 189 - return register_vlan_dev(dev, extack); 182 + if (!err) 183 + err = register_vlan_dev(dev, extack); 184 + if (err) 185 + vlan_dev_uninit(dev); 186 + return err; 190 187 } 191 188 192 189 static inline size_t vlan_qos_map_size(unsigned int n)
+2 -1
net/bridge/br_nf_core.c
··· 22 22 #endif 23 23 24 24 static void fake_update_pmtu(struct dst_entry *dst, struct sock *sk, 25 - struct sk_buff *skb, u32 mtu) 25 + struct sk_buff *skb, u32 mtu, 26 + bool confirm_neigh) 26 27 { 27 28 } 28 29
+17 -18
net/bridge/netfilter/ebtables.c
··· 1867 1867 } 1868 1868 1869 1869 static int ebt_buf_add(struct ebt_entries_buf_state *state, 1870 - void *data, unsigned int sz) 1870 + const void *data, unsigned int sz) 1871 1871 { 1872 1872 if (state->buf_kern_start == NULL) 1873 1873 goto count_only; ··· 1901 1901 EBT_COMPAT_TARGET, 1902 1902 }; 1903 1903 1904 - static int compat_mtw_from_user(struct compat_ebt_entry_mwt *mwt, 1904 + static int compat_mtw_from_user(const struct compat_ebt_entry_mwt *mwt, 1905 1905 enum compat_mwt compat_mwt, 1906 1906 struct ebt_entries_buf_state *state, 1907 1907 const unsigned char *base) ··· 1979 1979 /* return size of all matches, watchers or target, including necessary 1980 1980 * alignment and padding. 1981 1981 */ 1982 - static int ebt_size_mwt(struct compat_ebt_entry_mwt *match32, 1982 + static int ebt_size_mwt(const struct compat_ebt_entry_mwt *match32, 1983 1983 unsigned int size_left, enum compat_mwt type, 1984 1984 struct ebt_entries_buf_state *state, const void *base) 1985 1985 { 1986 + const char *buf = (const char *)match32; 1986 1987 int growth = 0; 1987 - char *buf; 1988 1988 1989 1989 if (size_left == 0) 1990 1990 return 0; 1991 1991 1992 - buf = (char *) match32; 1993 - 1994 - while (size_left >= sizeof(*match32)) { 1992 + do { 1995 1993 struct ebt_entry_match *match_kern; 1996 1994 int ret; 1995 + 1996 + if (size_left < sizeof(*match32)) 1997 + return -EINVAL; 1997 1998 1998 1999 match_kern = (struct ebt_entry_match *) state->buf_kern_start; 1999 2000 if (match_kern) { ··· 2032 2031 if (match_kern) 2033 2032 match_kern->match_size = ret; 2034 2033 2035 - /* rule should have no remaining data after target */ 2036 - if (type == EBT_COMPAT_TARGET && size_left) 2037 - return -EINVAL; 2038 - 2039 2034 match32 = (struct compat_ebt_entry_mwt *) buf; 2040 - } 2035 + } while (size_left); 2041 2036 2042 2037 return growth; 2043 2038 } 2044 2039 2045 2040 /* called for all ebt_entry structures. 
*/ 2046 - static int size_entry_mwt(struct ebt_entry *entry, const unsigned char *base, 2041 + static int size_entry_mwt(const struct ebt_entry *entry, const unsigned char *base, 2047 2042 unsigned int *total, 2048 2043 struct ebt_entries_buf_state *state) 2049 2044 { 2050 - unsigned int i, j, startoff, new_offset = 0; 2045 + unsigned int i, j, startoff, next_expected_off, new_offset = 0; 2051 2046 /* stores match/watchers/targets & offset of next struct ebt_entry: */ 2052 2047 unsigned int offsets[4]; 2053 2048 unsigned int *offsets_update = NULL; ··· 2129 2132 return ret; 2130 2133 } 2131 2134 2132 - startoff = state->buf_user_offset - startoff; 2133 - 2134 - if (WARN_ON(*total < startoff)) 2135 + next_expected_off = state->buf_user_offset - startoff; 2136 + if (next_expected_off != entry->next_offset) 2135 2137 return -EINVAL; 2136 - *total -= startoff; 2138 + 2139 + if (*total < entry->next_offset) 2140 + return -EINVAL; 2141 + *total -= entry->next_offset; 2137 2142 return 0; 2138 2143 } 2139 2144
+4 -2
net/decnet/dn_route.c
··· 110 110 static struct dst_entry *dn_dst_negative_advice(struct dst_entry *); 111 111 static void dn_dst_link_failure(struct sk_buff *); 112 112 static void dn_dst_update_pmtu(struct dst_entry *dst, struct sock *sk, 113 - struct sk_buff *skb , u32 mtu); 113 + struct sk_buff *skb , u32 mtu, 114 + bool confirm_neigh); 114 115 static void dn_dst_redirect(struct dst_entry *dst, struct sock *sk, 115 116 struct sk_buff *skb); 116 117 static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst, ··· 252 251 * advertise to the other end). 253 252 */ 254 253 static void dn_dst_update_pmtu(struct dst_entry *dst, struct sock *sk, 255 - struct sk_buff *skb, u32 mtu) 254 + struct sk_buff *skb, u32 mtu, 255 + bool confirm_neigh) 256 256 { 257 257 struct dn_route *rt = (struct dn_route *) dst; 258 258 struct neighbour *n = rt->n;
+40 -12
net/hsr/hsr_debugfs.c
··· 20 20 #include "hsr_main.h" 21 21 #include "hsr_framereg.h" 22 22 23 + static struct dentry *hsr_debugfs_root_dir; 24 + 23 25 static void print_mac_address(struct seq_file *sfp, unsigned char *mac) 24 26 { 25 27 seq_printf(sfp, "%02x:%02x:%02x:%02x:%02x:%02x:", ··· 65 63 return single_open(filp, hsr_node_table_show, inode->i_private); 66 64 } 67 65 66 + void hsr_debugfs_rename(struct net_device *dev) 67 + { 68 + struct hsr_priv *priv = netdev_priv(dev); 69 + struct dentry *d; 70 + 71 + d = debugfs_rename(hsr_debugfs_root_dir, priv->node_tbl_root, 72 + hsr_debugfs_root_dir, dev->name); 73 + if (IS_ERR(d)) 74 + netdev_warn(dev, "failed to rename\n"); 75 + else 76 + priv->node_tbl_root = d; 77 + } 78 + 68 79 static const struct file_operations hsr_fops = { 69 - .owner = THIS_MODULE, 70 80 .open = hsr_node_table_open, 71 81 .read = seq_read, 72 82 .llseek = seq_lseek, ··· 92 78 * When debugfs is configured this routine sets up the node_table file per 93 79 * hsr device for dumping the node_table entries 94 80 */ 95 - int hsr_debugfs_init(struct hsr_priv *priv, struct net_device *hsr_dev) 81 + void hsr_debugfs_init(struct hsr_priv *priv, struct net_device *hsr_dev) 96 82 { 97 - int rc = -1; 98 83 struct dentry *de = NULL; 99 84 100 - de = debugfs_create_dir(hsr_dev->name, NULL); 101 - if (!de) { 102 - pr_err("Cannot create hsr debugfs root\n"); 103 - return rc; 85 + de = debugfs_create_dir(hsr_dev->name, hsr_debugfs_root_dir); 86 + if (IS_ERR(de)) { 87 + pr_err("Cannot create hsr debugfs directory\n"); 88 + return; 104 89 } 105 90 106 91 priv->node_tbl_root = de; ··· 107 94 de = debugfs_create_file("node_table", S_IFREG | 0444, 108 95 priv->node_tbl_root, priv, 109 96 &hsr_fops); 110 - if (!de) { 111 - pr_err("Cannot create hsr node_table directory\n"); 112 - return rc; 97 + if (IS_ERR(de)) { 98 + pr_err("Cannot create hsr node_table file\n"); 99 + debugfs_remove(priv->node_tbl_root); 100 + priv->node_tbl_root = NULL; 101 + return; 113 102 } 114 103 
priv->node_tbl_file = de; 115 - 116 - return 0; 117 104 } 118 105 119 106 /* hsr_debugfs_term - Tear down debugfs infrastructure ··· 129 116 priv->node_tbl_file = NULL; 130 117 debugfs_remove(priv->node_tbl_root); 131 118 priv->node_tbl_root = NULL; 119 + } 120 + 121 + void hsr_debugfs_create_root(void) 122 + { 123 + hsr_debugfs_root_dir = debugfs_create_dir("hsr", NULL); 124 + if (IS_ERR(hsr_debugfs_root_dir)) { 125 + pr_err("Cannot create hsr debugfs root directory\n"); 126 + hsr_debugfs_root_dir = NULL; 127 + } 128 + } 129 + 130 + void hsr_debugfs_remove_root(void) 131 + { 132 + /* debugfs_remove() internally checks NULL and ERROR */ 133 + debugfs_remove(hsr_debugfs_root_dir); 132 134 }
+16 -12
net/hsr/hsr_device.c
··· 272 272 skb->dev->dev_addr, skb->len) <= 0) 273 273 goto out; 274 274 skb_reset_mac_header(skb); 275 + skb_reset_network_header(skb); 276 + skb_reset_transport_header(skb); 275 277 276 278 if (hsr_ver > 0) { 277 279 hsr_tag = skb_put(skb, sizeof(struct hsr_tag)); ··· 370 368 del_timer_sync(&hsr->prune_timer); 371 369 del_timer_sync(&hsr->announce_timer); 372 370 373 - hsr_del_self_node(&hsr->self_node_db); 371 + hsr_del_self_node(hsr); 374 372 hsr_del_nodes(&hsr->node_db); 375 373 } 376 374 ··· 442 440 INIT_LIST_HEAD(&hsr->ports); 443 441 INIT_LIST_HEAD(&hsr->node_db); 444 442 INIT_LIST_HEAD(&hsr->self_node_db); 443 + spin_lock_init(&hsr->list_lock); 445 444 446 445 ether_addr_copy(hsr_dev->dev_addr, slave[0]->dev_addr); 447 446 448 447 /* Make sure we recognize frames from ourselves in hsr_rcv() */ 449 - res = hsr_create_self_node(&hsr->self_node_db, hsr_dev->dev_addr, 448 + res = hsr_create_self_node(hsr, hsr_dev->dev_addr, 450 449 slave[1]->dev_addr); 451 450 if (res < 0) 452 451 return res; ··· 480 477 481 478 res = hsr_add_port(hsr, hsr_dev, HSR_PT_MASTER); 482 479 if (res) 483 - goto err_add_port; 480 + goto err_add_master; 484 481 485 482 res = register_netdevice(hsr_dev); 486 483 if (res) 487 - goto fail; 484 + goto err_unregister; 488 485 489 486 res = hsr_add_port(hsr, slave[0], HSR_PT_SLAVE_A); 490 487 if (res) 491 - goto fail; 488 + goto err_add_slaves; 489 + 492 490 res = hsr_add_port(hsr, slave[1], HSR_PT_SLAVE_B); 493 491 if (res) 494 - goto fail; 492 + goto err_add_slaves; 495 493 494 + hsr_debugfs_init(hsr, hsr_dev); 496 495 mod_timer(&hsr->prune_timer, jiffies + msecs_to_jiffies(PRUNE_PERIOD)); 497 - res = hsr_debugfs_init(hsr, hsr_dev); 498 - if (res) 499 - goto fail; 500 496 501 497 return 0; 502 498 503 - fail: 499 + err_add_slaves: 500 + unregister_netdevice(hsr_dev); 501 + err_unregister: 504 502 list_for_each_entry_safe(port, tmp, &hsr->ports, port_list) 505 503 hsr_del_port(port); 506 - err_add_port: 507 - 
hsr_del_self_node(&hsr->self_node_db); 504 + err_add_master: 505 + hsr_del_self_node(hsr); 508 506 509 507 return res; 510 508 }
+46 -27
net/hsr/hsr_framereg.c
··· 75 75 /* Helper for device init; the self_node_db is used in hsr_rcv() to recognize 76 76 * frames from self that's been looped over the HSR ring. 77 77 */ 78 - int hsr_create_self_node(struct list_head *self_node_db, 78 + int hsr_create_self_node(struct hsr_priv *hsr, 79 79 unsigned char addr_a[ETH_ALEN], 80 80 unsigned char addr_b[ETH_ALEN]) 81 81 { 82 + struct list_head *self_node_db = &hsr->self_node_db; 82 83 struct hsr_node *node, *oldnode; 83 84 84 85 node = kmalloc(sizeof(*node), GFP_KERNEL); ··· 89 88 ether_addr_copy(node->macaddress_A, addr_a); 90 89 ether_addr_copy(node->macaddress_B, addr_b); 91 90 92 - rcu_read_lock(); 91 + spin_lock_bh(&hsr->list_lock); 93 92 oldnode = list_first_or_null_rcu(self_node_db, 94 93 struct hsr_node, mac_list); 95 94 if (oldnode) { 96 95 list_replace_rcu(&oldnode->mac_list, &node->mac_list); 97 - rcu_read_unlock(); 98 - synchronize_rcu(); 99 - kfree(oldnode); 96 + spin_unlock_bh(&hsr->list_lock); 97 + kfree_rcu(oldnode, rcu_head); 100 98 } else { 101 - rcu_read_unlock(); 102 99 list_add_tail_rcu(&node->mac_list, self_node_db); 100 + spin_unlock_bh(&hsr->list_lock); 103 101 } 104 102 105 103 return 0; 106 104 } 107 105 108 - void hsr_del_self_node(struct list_head *self_node_db) 106 + void hsr_del_self_node(struct hsr_priv *hsr) 109 107 { 108 + struct list_head *self_node_db = &hsr->self_node_db; 110 109 struct hsr_node *node; 111 110 112 - rcu_read_lock(); 111 + spin_lock_bh(&hsr->list_lock); 113 112 node = list_first_or_null_rcu(self_node_db, struct hsr_node, mac_list); 114 - rcu_read_unlock(); 115 113 if (node) { 116 114 list_del_rcu(&node->mac_list); 117 - kfree(node); 115 + kfree_rcu(node, rcu_head); 118 116 } 117 + spin_unlock_bh(&hsr->list_lock); 119 118 } 120 119 121 120 void hsr_del_nodes(struct list_head *node_db) ··· 131 130 * seq_out is used to initialize filtering of outgoing duplicate frames 132 131 * originating from the newly added node. 
133 132 */ 134 - struct hsr_node *hsr_add_node(struct list_head *node_db, unsigned char addr[], 135 - u16 seq_out) 133 + static struct hsr_node *hsr_add_node(struct hsr_priv *hsr, 134 + struct list_head *node_db, 135 + unsigned char addr[], 136 + u16 seq_out) 136 137 { 137 - struct hsr_node *node; 138 + struct hsr_node *new_node, *node; 138 139 unsigned long now; 139 140 int i; 140 141 141 - node = kzalloc(sizeof(*node), GFP_ATOMIC); 142 - if (!node) 142 + new_node = kzalloc(sizeof(*new_node), GFP_ATOMIC); 143 + if (!new_node) 143 144 return NULL; 144 145 145 - ether_addr_copy(node->macaddress_A, addr); 146 + ether_addr_copy(new_node->macaddress_A, addr); 146 147 147 148 /* We are only interested in time diffs here, so use current jiffies 148 149 * as initialization. (0 could trigger an spurious ring error warning). 149 150 */ 150 151 now = jiffies; 151 152 for (i = 0; i < HSR_PT_PORTS; i++) 152 - node->time_in[i] = now; 153 + new_node->time_in[i] = now; 153 154 for (i = 0; i < HSR_PT_PORTS; i++) 154 - node->seq_out[i] = seq_out; 155 + new_node->seq_out[i] = seq_out; 155 156 156 - list_add_tail_rcu(&node->mac_list, node_db); 157 - 157 + spin_lock_bh(&hsr->list_lock); 158 + list_for_each_entry_rcu(node, node_db, mac_list) { 159 + if (ether_addr_equal(node->macaddress_A, addr)) 160 + goto out; 161 + if (ether_addr_equal(node->macaddress_B, addr)) 162 + goto out; 163 + } 164 + list_add_tail_rcu(&new_node->mac_list, node_db); 165 + spin_unlock_bh(&hsr->list_lock); 166 + return new_node; 167 + out: 168 + spin_unlock_bh(&hsr->list_lock); 169 + kfree(new_node); 158 170 return node; 159 171 } 160 172 ··· 177 163 bool is_sup) 178 164 { 179 165 struct list_head *node_db = &port->hsr->node_db; 166 + struct hsr_priv *hsr = port->hsr; 180 167 struct hsr_node *node; 181 168 struct ethhdr *ethhdr; 182 169 u16 seq_out; ··· 211 196 seq_out = HSR_SEQNR_START; 212 197 } 213 198 214 - return hsr_add_node(node_db, ethhdr->h_source, seq_out); 199 + return hsr_add_node(hsr, node_db, 
ethhdr->h_source, seq_out); 215 200 } 216 201 217 202 /* Use the Supervision frame's info about an eventual macaddress_B for merging ··· 221 206 void hsr_handle_sup_frame(struct sk_buff *skb, struct hsr_node *node_curr, 222 207 struct hsr_port *port_rcv) 223 208 { 224 - struct ethhdr *ethhdr; 225 - struct hsr_node *node_real; 209 + struct hsr_priv *hsr = port_rcv->hsr; 226 210 struct hsr_sup_payload *hsr_sp; 211 + struct hsr_node *node_real; 227 212 struct list_head *node_db; 213 + struct ethhdr *ethhdr; 228 214 int i; 229 215 230 216 ethhdr = (struct ethhdr *)skb_mac_header(skb); ··· 247 231 node_real = find_node_by_addr_A(node_db, hsr_sp->macaddress_A); 248 232 if (!node_real) 249 233 /* No frame received from AddrA of this node yet */ 250 - node_real = hsr_add_node(node_db, hsr_sp->macaddress_A, 234 + node_real = hsr_add_node(hsr, node_db, hsr_sp->macaddress_A, 251 235 HSR_SEQNR_START - 1); 252 236 if (!node_real) 253 237 goto done; /* No mem */ ··· 268 252 } 269 253 node_real->addr_B_port = port_rcv->type; 270 254 255 + spin_lock_bh(&hsr->list_lock); 271 256 list_del_rcu(&node_curr->mac_list); 257 + spin_unlock_bh(&hsr->list_lock); 272 258 kfree_rcu(node_curr, rcu_head); 273 259 274 260 done: ··· 386 368 { 387 369 struct hsr_priv *hsr = from_timer(hsr, t, prune_timer); 388 370 struct hsr_node *node; 371 + struct hsr_node *tmp; 389 372 struct hsr_port *port; 390 373 unsigned long timestamp; 391 374 unsigned long time_a, time_b; 392 375 393 - rcu_read_lock(); 394 - list_for_each_entry_rcu(node, &hsr->node_db, mac_list) { 376 + spin_lock_bh(&hsr->list_lock); 377 + list_for_each_entry_safe(node, tmp, &hsr->node_db, mac_list) { 395 378 /* Don't prune own node. Neither time_in[HSR_PT_SLAVE_A] 396 379 * nor time_in[HSR_PT_SLAVE_B], will ever be updated for 397 380 * the master port. 
Thus the master node will be repeatedly ··· 440 421 kfree_rcu(node, rcu_head); 441 422 } 442 423 } 443 - rcu_read_unlock(); 424 + spin_unlock_bh(&hsr->list_lock); 444 425 445 426 /* Restart timer */ 446 427 mod_timer(&hsr->prune_timer,
+2 -4
net/hsr/hsr_framereg.h
··· 12 12 13 13 struct hsr_node; 14 14 15 - void hsr_del_self_node(struct list_head *self_node_db); 15 + void hsr_del_self_node(struct hsr_priv *hsr); 16 16 void hsr_del_nodes(struct list_head *node_db); 17 - struct hsr_node *hsr_add_node(struct list_head *node_db, unsigned char addr[], 18 - u16 seq_out); 19 17 struct hsr_node *hsr_get_node(struct hsr_port *port, struct sk_buff *skb, 20 18 bool is_sup); 21 19 void hsr_handle_sup_frame(struct sk_buff *skb, struct hsr_node *node_curr, ··· 31 33 32 34 void hsr_prune_nodes(struct timer_list *t); 33 35 34 - int hsr_create_self_node(struct list_head *self_node_db, 36 + int hsr_create_self_node(struct hsr_priv *hsr, 35 37 unsigned char addr_a[ETH_ALEN], 36 38 unsigned char addr_b[ETH_ALEN]); 37 39
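The hsr_add_node() rework in hsr_framereg.c above follows a common kernel pattern: allocate the new entry outside the lock, then re-scan the list under the lock and either insert the new entry or discard it when a concurrent caller already added the same address. A minimal userspace sketch of that pattern (a plain singly linked list and a pthread mutex stand in for the kernel's RCU list and spin_lock_bh(); all names here are illustrative, not the kernel's):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct node {
	unsigned char addr[6];
	struct node *next;
};

static struct node *node_db;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Allocate outside the lock; under the lock, check for a duplicate
 * inserted by a concurrent caller.  On a hit, free our allocation and
 * return the existing entry instead of inserting a second one. */
static struct node *add_node(const unsigned char addr[6])
{
	struct node *new_node, *n;

	new_node = calloc(1, sizeof(*new_node));
	if (!new_node)
		return NULL;
	memcpy(new_node->addr, addr, 6);

	pthread_mutex_lock(&list_lock);
	for (n = node_db; n; n = n->next) {
		if (!memcmp(n->addr, addr, 6)) {
			pthread_mutex_unlock(&list_lock);
			free(new_node);
			return n;
		}
	}
	new_node->next = node_db;
	node_db = new_node;
	pthread_mutex_unlock(&list_lock);
	return new_node;
}
```

Inserting the same address twice returns the first entry, so the table never grows duplicate nodes — the race the kernel change closes.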
+6 -1
net/hsr/hsr_main.c
··· 45 45 case NETDEV_CHANGE: /* Link (carrier) state changes */ 46 46 hsr_check_carrier_and_operstate(hsr); 47 47 break; 48 + case NETDEV_CHANGENAME: 49 + if (is_hsr_master(dev)) 50 + hsr_debugfs_rename(dev); 51 + break; 48 52 case NETDEV_CHANGEADDR: 49 53 if (port->type == HSR_PT_MASTER) { 50 54 /* This should not happen since there's no ··· 68 64 69 65 /* Make sure we recognize frames from ourselves in hsr_rcv() */ 70 66 port = hsr_port_get_hsr(hsr, HSR_PT_SLAVE_B); 71 - res = hsr_create_self_node(&hsr->self_node_db, 67 + res = hsr_create_self_node(hsr, 72 68 master->dev->dev_addr, 73 69 port ? 74 70 port->dev->dev_addr : ··· 127 123 { 128 124 unregister_netdevice_notifier(&hsr_nb); 129 125 hsr_netlink_exit(); 126 + hsr_debugfs_remove_root(); 130 127 } 131 128 132 129 module_init(hsr_init);
+15 -7
net/hsr/hsr_main.h
··· 160 160 int announce_count; 161 161 u16 sequence_nr; 162 162 u16 sup_sequence_nr; /* For HSRv1 separate seq_nr for supervision */ 163 - u8 prot_version; /* Indicate if HSRv0 or HSRv1. */ 164 - spinlock_t seqnr_lock; /* locking for sequence_nr */ 163 + u8 prot_version; /* Indicate if HSRv0 or HSRv1. */ 164 + spinlock_t seqnr_lock; /* locking for sequence_nr */ 165 + spinlock_t list_lock; /* locking for node list */ 165 166 unsigned char sup_multicast_addr[ETH_ALEN]; 166 167 #ifdef CONFIG_DEBUG_FS 167 168 struct dentry *node_tbl_root; ··· 185 184 } 186 185 187 186 #if IS_ENABLED(CONFIG_DEBUG_FS) 188 - int hsr_debugfs_init(struct hsr_priv *priv, struct net_device *hsr_dev); 187 + void hsr_debugfs_rename(struct net_device *dev); 188 + void hsr_debugfs_init(struct hsr_priv *priv, struct net_device *hsr_dev); 189 189 void hsr_debugfs_term(struct hsr_priv *priv); 190 + void hsr_debugfs_create_root(void); 191 + void hsr_debugfs_remove_root(void); 190 192 #else 191 - static inline int hsr_debugfs_init(struct hsr_priv *priv, 192 - struct net_device *hsr_dev) 193 + static inline void hsr_debugfs_rename(struct net_device *dev) 193 194 { 194 - return 0; 195 195 } 196 - 196 + static inline void hsr_debugfs_init(struct hsr_priv *priv, 197 197 struct net_device *hsr_dev) 198 + {} 197 199 static inline void hsr_debugfs_term(struct hsr_priv *priv) 200 + {} 201 + static inline void hsr_debugfs_create_root(void) 202 + {} 203 + static inline void hsr_debugfs_remove_root(void) 198 204 {} 199 205 #endif 200 206
+1
net/hsr/hsr_netlink.c
··· 476 476 if (rc) 477 477 goto fail_genl_register_family; 478 478 479 + hsr_debugfs_create_root(); 479 480 return 0; 480 481 481 482 fail_genl_register_family:
+1 -1
net/ipv4/inet_connection_sock.c
··· 1086 1086 if (!dst) 1087 1087 goto out; 1088 1088 } 1089 - dst->ops->update_pmtu(dst, sk, NULL, mtu); 1089 + dst->ops->update_pmtu(dst, sk, NULL, mtu, true); 1090 1090 1091 1091 dst = __sk_dst_check(sk, 0); 1092 1092 if (!dst)
+1 -1
net/ipv4/ip_tunnel.c
··· 505 505 mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu; 506 506 507 507 if (skb_valid_dst(skb)) 508 - skb_dst_update_pmtu(skb, mtu); 508 + skb_dst_update_pmtu_no_confirm(skb, mtu); 509 509 510 510 if (skb->protocol == htons(ETH_P_IP)) { 511 511 if (!skb_is_gso(skb) &&
+1 -1
net/ipv4/ip_vti.c
··· 214 214 215 215 mtu = dst_mtu(dst); 216 216 if (skb->len > mtu) { 217 - skb_dst_update_pmtu(skb, mtu); 217 + skb_dst_update_pmtu_no_confirm(skb, mtu); 218 218 if (skb->protocol == htons(ETH_P_IP)) { 219 219 icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 220 220 htonl(mtu));
+16 -11
net/ipv4/netfilter/arp_tables.c
··· 384 384 return 1; 385 385 } 386 386 387 - static inline int check_target(struct arpt_entry *e, const char *name) 387 + static int check_target(struct arpt_entry *e, struct net *net, const char *name) 388 388 { 389 389 struct xt_entry_target *t = arpt_get_target(e); 390 390 struct xt_tgchk_param par = { 391 + .net = net, 391 392 .table = name, 392 393 .entryinfo = e, 393 394 .target = t->u.kernel.target, ··· 400 399 return xt_check_target(&par, t->u.target_size - sizeof(*t), 0, false); 401 400 } 402 401 403 - static inline int 404 - find_check_entry(struct arpt_entry *e, const char *name, unsigned int size, 402 + static int 403 + find_check_entry(struct arpt_entry *e, struct net *net, const char *name, 404 + unsigned int size, 405 405 struct xt_percpu_counter_alloc_state *alloc_state) 406 406 { 407 407 struct xt_entry_target *t; ··· 421 419 } 422 420 t->u.kernel.target = target; 423 421 424 - ret = check_target(e, name); 422 + ret = check_target(e, net, name); 425 423 if (ret) 426 424 goto err; 427 425 return 0; ··· 514 512 /* Checks and translates the user-supplied table segment (held in 515 513 * newinfo). 
516 514 */ 517 - static int translate_table(struct xt_table_info *newinfo, void *entry0, 515 + static int translate_table(struct net *net, 516 + struct xt_table_info *newinfo, 517 + void *entry0, 518 518 const struct arpt_replace *repl) 519 519 { 520 520 struct xt_percpu_counter_alloc_state alloc_state = { 0 }; ··· 573 569 /* Finally, each sanity check must pass */ 574 570 i = 0; 575 571 xt_entry_foreach(iter, entry0, newinfo->size) { 576 - ret = find_check_entry(iter, repl->name, repl->size, 572 + ret = find_check_entry(iter, net, repl->name, repl->size, 577 573 &alloc_state); 578 574 if (ret != 0) 579 575 break; ··· 978 974 goto free_newinfo; 979 975 } 980 976 981 - ret = translate_table(newinfo, loc_cpu_entry, &tmp); 977 + ret = translate_table(net, newinfo, loc_cpu_entry, &tmp); 982 978 if (ret != 0) 983 979 goto free_newinfo; 984 980 ··· 1153 1149 } 1154 1150 } 1155 1151 1156 - static int translate_compat_table(struct xt_table_info **pinfo, 1152 + static int translate_compat_table(struct net *net, 1153 + struct xt_table_info **pinfo, 1157 1154 void **pentry0, 1158 1155 const struct compat_arpt_replace *compatr) 1159 1156 { ··· 1222 1217 repl.num_counters = 0; 1223 1218 repl.counters = NULL; 1224 1219 repl.size = newinfo->size; 1225 - ret = translate_table(newinfo, entry1, &repl); 1220 + ret = translate_table(net, newinfo, entry1, &repl); 1226 1221 if (ret) 1227 1222 goto free_newinfo; 1228 1223 ··· 1275 1270 goto free_newinfo; 1276 1271 } 1277 1272 1278 - ret = translate_compat_table(&newinfo, &loc_cpu_entry, &tmp); 1273 + ret = translate_compat_table(net, &newinfo, &loc_cpu_entry, &tmp); 1279 1274 if (ret != 0) 1280 1275 goto free_newinfo; 1281 1276 ··· 1551 1546 loc_cpu_entry = newinfo->entries; 1552 1547 memcpy(loc_cpu_entry, repl->entries, repl->size); 1553 1548 1554 - ret = translate_table(newinfo, loc_cpu_entry, repl); 1549 + ret = translate_table(net, newinfo, loc_cpu_entry, repl); 1555 1550 if (ret != 0) 1556 1551 goto out_free; 1557 1552
+6 -3
net/ipv4/route.c
··· 139 139 static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst); 140 140 static void ipv4_link_failure(struct sk_buff *skb); 141 141 static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, 142 - struct sk_buff *skb, u32 mtu); 142 + struct sk_buff *skb, u32 mtu, 143 + bool confirm_neigh); 143 144 static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, 144 145 struct sk_buff *skb); 145 146 static void ipv4_dst_destroy(struct dst_entry *dst); ··· 1044 1043 } 1045 1044 1046 1045 static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, 1047 - struct sk_buff *skb, u32 mtu) 1046 + struct sk_buff *skb, u32 mtu, 1047 + bool confirm_neigh) 1048 1048 { 1049 1049 struct rtable *rt = (struct rtable *) dst; 1050 1050 struct flowi4 fl4; ··· 2689 2687 } 2690 2688 2691 2689 static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk, 2692 - struct sk_buff *skb, u32 mtu) 2690 + struct sk_buff *skb, u32 mtu, 2691 + bool confirm_neigh) 2693 2692 { 2694 2693 } 2695 2694
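Many of the hunks in this merge thread a new `confirm_neigh` flag through the `update_pmtu` dst op, so that tunnel transmit paths can update the path MTU without confirming the neighbour entry, while socket-driven callers keep passing `true`. The shape of the change, sketched with a stand-in ops table (the struct and helper names here mimic, but are not, the kernel's `dst_ops` API):

```c
#include <stdbool.h>

struct fake_dst;

/* A dst_ops-style table whose update_pmtu callback now carries the flag. */
struct fake_dst_ops {
	void (*update_pmtu)(struct fake_dst *dst, unsigned int mtu,
			    bool confirm_neigh);
};

struct fake_dst {
	const struct fake_dst_ops *ops;
	unsigned int mtu;
	int neigh_confirmed;
};

static void demo_update_pmtu(struct fake_dst *dst, unsigned int mtu,
			     bool confirm_neigh)
{
	if (confirm_neigh)
		dst->neigh_confirmed = 1;	/* stands in for dst_confirm_neigh() */
	if (mtu < dst->mtu)
		dst->mtu = mtu;
}

static const struct fake_dst_ops demo_ops = { .update_pmtu = demo_update_pmtu };

/* Socket callers confirm the neighbour; tunnel forwarding paths use a
 * _no_confirm variant that passes false, as the diffs above do. */
static void dst_update_pmtu(struct fake_dst *dst, unsigned int mtu)
{
	dst->ops->update_pmtu(dst, mtu, true);
}

static void dst_update_pmtu_no_confirm(struct fake_dst *dst, unsigned int mtu)
{
	dst->ops->update_pmtu(dst, mtu, false);
}
```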
+4 -1
net/ipv4/tcp_input.c
··· 1727 1727 } 1728 1728 1729 1729 /* Ignore very old stuff early */ 1730 - if (!after(sp[used_sacks].end_seq, prior_snd_una)) 1730 + if (!after(sp[used_sacks].end_seq, prior_snd_una)) { 1731 + if (i == 0) 1732 + first_sack_index = -1; 1731 1733 continue; 1734 + } 1732 1735 1733 1736 used_sacks++; 1734 1737 }
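The tcp_input.c change above invalidates `first_sack_index` when the very first SACK block is discarded as stale, so later processing cannot treat a dropped block as the cached one. A toy version of that filtering step (the wraparound-safe sequence comparison matches the kernel's `after()` idiom; the loop structure and names are simplified stand-ins):

```c
#include <stdint.h>

/* Wraparound-safe "a comes after b" for 32-bit sequence numbers. */
static int seq_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

struct sack_block { uint32_t start_seq, end_seq; };

/* Keep only blocks ending after prior_una.  If block 0 is dropped,
 * flag first_sack_index as invalid (-1) instead of letting it point
 * at a block that no longer exists in the compacted array. */
static int filter_sacks(struct sack_block *sp, int num, uint32_t prior_una,
			int *first_sack_index)
{
	int i, used = 0;

	for (i = 0; i < num; i++) {
		if (!seq_after(sp[i].end_seq, prior_una)) {
			if (i == 0)
				*first_sack_index = -1;
			continue;
		}
		sp[used++] = sp[i];
	}
	return used;
}
```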
+3
net/ipv4/tcp_output.c
··· 72 72 __skb_unlink(skb, &sk->sk_write_queue); 73 73 tcp_rbtree_insert(&sk->tcp_rtx_queue, skb); 74 74 75 + if (tp->highest_sack == NULL) 76 + tp->highest_sack = skb; 77 + 75 78 tp->packets_out += tcp_skb_pcount(skb); 76 79 if (!prior_packets || icsk->icsk_pending == ICSK_TIME_LOSS_PROBE) 77 80 tcp_rearm_rto(sk);
+1 -1
net/ipv4/udp.c
··· 1475 1475 * queue contains some other skb 1476 1476 */ 1477 1477 rmem = atomic_add_return(size, &sk->sk_rmem_alloc); 1478 - if (rmem > (size + sk->sk_rcvbuf)) 1478 + if (rmem > (size + (unsigned int)sk->sk_rcvbuf)) 1479 1479 goto uncharge_drop; 1480 1480 1481 1481 spin_lock(&list->lock);
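The one-character udp.c change works because C's usual arithmetic conversions make the whole right-hand side unsigned once one operand is: with `sk_rcvbuf` near INT_MAX, a signed `size + sk_rcvbuf` can overflow (undefined behaviour, typically wrapping negative so every packet looks over-limit), while the unsigned sum stays well-defined. A compact illustration of the type behaviour only — none of this is kernel code:

```c
#include <limits.h>

/* With rcvbuf near INT_MAX, size + rcvbuf overflows if computed in
 * signed int.  Promoting rcvbuf to unsigned keeps the sum well-defined
 * and the bound check meaningful; rmem is converted to unsigned by the
 * comparison, mirroring what happens in the fixed kernel expression. */
static int over_rcvbuf(int rmem, unsigned int size, int rcvbuf)
{
	return (unsigned int)rmem > (size + (unsigned int)rcvbuf);
}
```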
+3 -2
net/ipv4/xfrm4_policy.c
··· 100 100 } 101 101 102 102 static void xfrm4_update_pmtu(struct dst_entry *dst, struct sock *sk, 103 - struct sk_buff *skb, u32 mtu) 103 + struct sk_buff *skb, u32 mtu, 104 + bool confirm_neigh) 104 105 { 105 106 struct xfrm_dst *xdst = (struct xfrm_dst *)dst; 106 107 struct dst_entry *path = xdst->route; 107 108 108 - path->ops->update_pmtu(path, sk, skb, mtu); 109 + path->ops->update_pmtu(path, sk, skb, mtu, confirm_neigh); 109 110 } 110 111 111 112 static void xfrm4_redirect(struct dst_entry *dst, struct sock *sk,
+1 -1
net/ipv6/inet6_connection_sock.c
··· 146 146 147 147 if (IS_ERR(dst)) 148 148 return NULL; 149 - dst->ops->update_pmtu(dst, sk, NULL, mtu); 149 + dst->ops->update_pmtu(dst, sk, NULL, mtu, true); 150 150 151 151 dst = inet6_csk_route_socket(sk, &fl6); 152 152 return IS_ERR(dst) ? NULL : dst;
+1 -1
net/ipv6/ip6_gre.c
··· 1040 1040 1041 1041 /* TooBig packet may have updated dst->dev's mtu */ 1042 1042 if (!t->parms.collect_md && dst && dst_mtu(dst) > dst->dev->mtu) 1043 - dst->ops->update_pmtu(dst, NULL, skb, dst->dev->mtu); 1043 + dst->ops->update_pmtu(dst, NULL, skb, dst->dev->mtu, false); 1044 1044 1045 1045 err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu, 1046 1046 NEXTHDR_GRE);
+2 -2
net/ipv6/ip6_tunnel.c
··· 640 640 if (rel_info > dst_mtu(skb_dst(skb2))) 641 641 goto out; 642 642 643 - skb_dst_update_pmtu(skb2, rel_info); 643 + skb_dst_update_pmtu_no_confirm(skb2, rel_info); 644 644 } 645 645 646 646 icmp_send(skb2, rel_type, rel_code, htonl(rel_info)); ··· 1132 1132 mtu = max(mtu, skb->protocol == htons(ETH_P_IPV6) ? 1133 1133 IPV6_MIN_MTU : IPV4_MIN_MTU); 1134 1134 1135 - skb_dst_update_pmtu(skb, mtu); 1135 + skb_dst_update_pmtu_no_confirm(skb, mtu); 1136 1136 if (skb->len - t->tun_hlen - eth_hlen > mtu && !skb_is_gso(skb)) { 1137 1137 *pmtu = mtu; 1138 1138 err = -EMSGSIZE;
+1 -1
net/ipv6/ip6_vti.c
··· 479 479 480 480 mtu = dst_mtu(dst); 481 481 if (skb->len > mtu) { 482 - skb_dst_update_pmtu(skb, mtu); 482 + skb_dst_update_pmtu_no_confirm(skb, mtu); 483 483 484 484 if (skb->protocol == htons(ETH_P_IPV6)) { 485 485 if (mtu < IPV6_MIN_MTU)
+15 -7
net/ipv6/route.c
··· 95 95 static int ip6_pkt_prohibit_out(struct net *net, struct sock *sk, struct sk_buff *skb); 96 96 static void ip6_link_failure(struct sk_buff *skb); 97 97 static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, 98 - struct sk_buff *skb, u32 mtu); 98 + struct sk_buff *skb, u32 mtu, 99 + bool confirm_neigh); 99 100 static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, 100 101 struct sk_buff *skb); 101 102 static int rt6_score_route(const struct fib6_nh *nh, u32 fib6_flags, int oif, ··· 265 264 } 266 265 267 266 static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk, 268 - struct sk_buff *skb, u32 mtu) 267 + struct sk_buff *skb, u32 mtu, 268 + bool confirm_neigh) 269 269 { 270 270 } 271 271 ··· 2694 2692 } 2695 2693 2696 2694 static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk, 2697 - const struct ipv6hdr *iph, u32 mtu) 2695 + const struct ipv6hdr *iph, u32 mtu, 2696 + bool confirm_neigh) 2698 2697 { 2699 2698 const struct in6_addr *daddr, *saddr; 2700 2699 struct rt6_info *rt6 = (struct rt6_info *)dst; ··· 2713 2710 daddr = NULL; 2714 2711 saddr = NULL; 2715 2712 } 2716 - dst_confirm_neigh(dst, daddr); 2713 + 2714 + if (confirm_neigh) 2715 + dst_confirm_neigh(dst, daddr); 2716 + 2717 2717 mtu = max_t(u32, mtu, IPV6_MIN_MTU); 2718 2718 if (mtu >= dst_mtu(dst)) 2719 2719 return; ··· 2770 2764 } 2771 2765 2772 2766 static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, 2773 - struct sk_buff *skb, u32 mtu) 2767 + struct sk_buff *skb, u32 mtu, 2768 + bool confirm_neigh) 2774 2769 { 2775 - __ip6_rt_update_pmtu(dst, sk, skb ? ipv6_hdr(skb) : NULL, mtu); 2770 + __ip6_rt_update_pmtu(dst, sk, skb ? 
ipv6_hdr(skb) : NULL, mtu, 2771 + confirm_neigh); 2776 2772 } 2777 2773 2778 2774 void ip6_update_pmtu(struct sk_buff *skb, struct net *net, __be32 mtu, ··· 2793 2785 2794 2786 dst = ip6_route_output(net, NULL, &fl6); 2795 2787 if (!dst->error) 2796 - __ip6_rt_update_pmtu(dst, NULL, iph, ntohl(mtu)); 2788 + __ip6_rt_update_pmtu(dst, NULL, iph, ntohl(mtu), true); 2797 2789 dst_release(dst); 2798 2790 } 2799 2791 EXPORT_SYMBOL_GPL(ip6_update_pmtu);
+1 -1
net/ipv6/sit.c
··· 944 944 } 945 945 946 946 if (tunnel->parms.iph.daddr) 947 - skb_dst_update_pmtu(skb, mtu); 947 + skb_dst_update_pmtu_no_confirm(skb, mtu); 948 948 949 949 if (skb->len > mtu && !skb_is_gso(skb)) { 950 950 icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+3 -2
net/ipv6/xfrm6_policy.c
··· 98 98 } 99 99 100 100 static void xfrm6_update_pmtu(struct dst_entry *dst, struct sock *sk, 101 - struct sk_buff *skb, u32 mtu) 101 + struct sk_buff *skb, u32 mtu, 102 + bool confirm_neigh) 102 103 { 103 104 struct xfrm_dst *xdst = (struct xfrm_dst *)dst; 104 105 struct dst_entry *path = xdst->route; 105 106 106 - path->ops->update_pmtu(path, sk, skb, mtu); 107 + path->ops->update_pmtu(path, sk, skb, mtu, confirm_neigh); 107 108 } 108 109 109 110 static void xfrm6_redirect(struct dst_entry *dst, struct sock *sk,
+2 -1
net/netfilter/ipset/ip_set_core.c
··· 1848 1848 struct ip_set *set; 1849 1849 struct nlattr *tb[IPSET_ATTR_ADT_MAX + 1] = {}; 1850 1850 int ret = 0; 1851 + u32 lineno; 1851 1852 1852 1853 if (unlikely(protocol_min_failed(attr) || 1853 1854 !attr[IPSET_ATTR_SETNAME] || ··· 1865 1864 return -IPSET_ERR_PROTOCOL; 1866 1865 1867 1866 rcu_read_lock_bh(); 1868 - ret = set->variant->uadt(set, tb, IPSET_TEST, NULL, 0, 0); 1867 + ret = set->variant->uadt(set, tb, IPSET_TEST, &lineno, 0, 0); 1869 1868 rcu_read_unlock_bh(); 1870 1869 /* Userspace can't trigger element to be re-added */ 1871 1870 if (ret == -EAGAIN)
+1 -1
net/netfilter/ipvs/ip_vs_xmit.c
··· 208 208 struct rtable *ort = skb_rtable(skb); 209 209 210 210 if (!skb->dev && sk && sk_fullsock(sk)) 211 - ort->dst.ops->update_pmtu(&ort->dst, sk, NULL, mtu); 211 + ort->dst.ops->update_pmtu(&ort->dst, sk, NULL, mtu, true); 212 212 } 213 213 214 214 static inline bool ensure_mtu_is_adequate(struct netns_ipvs *ipvs, int skb_af,
+3
net/netfilter/nf_conntrack_proto_dccp.c
··· 677 677 unsigned int *timeouts = data; 678 678 int i; 679 679 680 + if (!timeouts) 681 + timeouts = dn->dccp_timeout; 682 + 680 683 /* set default DCCP timeouts. */ 681 684 for (i=0; i<CT_DCCP_MAX; i++) 682 685 timeouts[i] = dn->dccp_timeout[i];
+3
net/netfilter/nf_conntrack_proto_sctp.c
··· 594 594 struct nf_sctp_net *sn = nf_sctp_pernet(net); 595 595 int i; 596 596 597 + if (!timeouts) 598 + timeouts = sn->timeouts; 599 + 597 600 /* set default SCTP timeouts. */ 598 601 for (i=0; i<SCTP_CONNTRACK_MAX; i++) 599 602 timeouts[i] = sn->timeouts[i];
+1 -6
net/netfilter/nf_flow_table_core.c
··· 134 134 #define NF_FLOWTABLE_TCP_PICKUP_TIMEOUT (120 * HZ) 135 135 #define NF_FLOWTABLE_UDP_PICKUP_TIMEOUT (30 * HZ) 136 136 137 - static inline __s32 nf_flow_timeout_delta(unsigned int timeout) 138 - { 139 - return (__s32)(timeout - (u32)jiffies); 140 - } 141 - 142 137 static void flow_offload_fixup_ct_timeout(struct nf_conn *ct) 143 138 { 144 139 const struct nf_conntrack_l4proto *l4proto; ··· 227 232 { 228 233 int err; 229 234 230 - flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 235 + flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT; 231 236 232 237 err = rhashtable_insert_fast(&flow_table->rhashtable, 233 238 &flow->tuplehash[0].node,
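nf_flow_timeout_delta() disappears from nf_flow_table_core.c above because it moves next to the new `nf_flowtable_time_stamp`, so the offload code further down can use the same wraparound-safe arithmetic instead of computing `flow->timeout - jiffies` in a 64-bit type. The trick in isolation (illustrative names, not the kernel header):

```c
#include <stdint.h>

/* Signed difference of two u32 tick counters: positive means the
 * timeout is in the future, negative means it already expired, and
 * the sign stays correct even when the counter wraps past 2^32. */
static int32_t flow_timeout_delta(uint32_t timeout, uint32_t now)
{
	return (int32_t)(timeout - now);
}
```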
+2 -2
net/netfilter/nf_flow_table_ip.c
··· 280 280 if (nf_flow_nat_ip(flow, skb, thoff, dir) < 0) 281 281 return NF_DROP; 282 282 283 - flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 283 + flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT; 284 284 iph = ip_hdr(skb); 285 285 ip_decrease_ttl(iph); 286 286 skb->tstamp = 0; ··· 509 509 if (nf_flow_nat_ipv6(flow, skb, dir) < 0) 510 510 return NF_DROP; 511 511 512 - flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 512 + flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT; 513 513 ip6h = ipv6_hdr(skb); 514 514 ip6h->hop_limit--; 515 515 skb->tstamp = 0;
+37 -15
net/netfilter/nf_flow_table_offload.c
··· 88 88 switch (tuple->l4proto) { 89 89 case IPPROTO_TCP: 90 90 key->tcp.flags = 0; 91 - mask->tcp.flags = TCP_FLAG_RST | TCP_FLAG_FIN; 91 + mask->tcp.flags = cpu_to_be16(be32_to_cpu(TCP_FLAG_RST | TCP_FLAG_FIN) >> 16); 92 92 match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP); 93 93 break; 94 94 case IPPROTO_UDP: ··· 166 166 enum flow_offload_tuple_dir dir, 167 167 struct nf_flow_rule *flow_rule) 168 168 { 169 - const struct flow_offload_tuple *tuple = &flow->tuplehash[dir].tuple; 170 169 struct flow_action_entry *entry0 = flow_action_entry_next(flow_rule); 171 170 struct flow_action_entry *entry1 = flow_action_entry_next(flow_rule); 171 + const void *daddr = &flow->tuplehash[!dir].tuple.src_v4; 172 + const struct dst_entry *dst_cache; 173 + unsigned char ha[ETH_ALEN]; 172 174 struct neighbour *n; 173 175 u32 mask, val; 176 + u8 nud_state; 174 177 u16 val16; 175 178 176 - n = dst_neigh_lookup(tuple->dst_cache, &tuple->dst_v4); 179 + dst_cache = flow->tuplehash[dir].tuple.dst_cache; 180 + n = dst_neigh_lookup(dst_cache, daddr); 177 181 if (!n) 178 182 return -ENOENT; 179 183 184 + read_lock_bh(&n->lock); 185 + nud_state = n->nud_state; 186 + ether_addr_copy(ha, n->ha); 187 + read_unlock_bh(&n->lock); 188 + 189 + if (!(nud_state & NUD_VALID)) { 190 + neigh_release(n); 191 + return -ENOENT; 192 + } 193 + 180 194 mask = ~0xffffffff; 181 - memcpy(&val, n->ha, 4); 195 + memcpy(&val, ha, 4); 182 196 flow_offload_mangle(entry0, FLOW_ACT_MANGLE_HDR_TYPE_ETH, 0, 183 197 &val, &mask); 184 198 185 199 mask = ~0x0000ffff; 186 - memcpy(&val16, n->ha + 4, 2); 200 + memcpy(&val16, ha + 4, 2); 187 201 val = val16; 188 202 flow_offload_mangle(entry1, FLOW_ACT_MANGLE_HDR_TYPE_ETH, 4, 189 203 &val, &mask); ··· 349 335 struct nf_flow_rule *flow_rule) 350 336 { 351 337 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 352 - u32 mask = ~htonl(0xffff0000), port; 338 + u32 mask, port; 353 339 u32 offset; 354 340 355 341 switch (dir) { 356 342 case 
FLOW_OFFLOAD_DIR_ORIGINAL: 357 343 port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port); 358 344 offset = 0; /* offsetof(struct tcphdr, source); */ 345 + port = htonl(port << 16); 346 + mask = ~htonl(0xffff0000); 359 347 break; 360 348 case FLOW_OFFLOAD_DIR_REPLY: 361 349 port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port); 362 350 offset = 0; /* offsetof(struct tcphdr, dest); */ 351 + port = htonl(port); 352 + mask = ~htonl(0xffff); 363 353 break; 364 354 default: 365 355 return; 366 356 } 367 - port = htonl(port << 16); 357 + 368 358 flow_offload_mangle(entry, flow_offload_l4proto(flow), offset, 369 359 &port, &mask); 370 360 } ··· 379 361 struct nf_flow_rule *flow_rule) 380 362 { 381 363 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 382 - u32 mask = ~htonl(0xffff), port; 364 + u32 mask, port; 383 365 u32 offset; 384 366 385 367 switch (dir) { 386 368 case FLOW_OFFLOAD_DIR_ORIGINAL: 387 - port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port); 388 - offset = 0; /* offsetof(struct tcphdr, source); */ 369 + port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_port); 370 + offset = 0; /* offsetof(struct tcphdr, dest); */ 371 + port = htonl(port); 372 + mask = ~htonl(0xffff); 389 373 break; 390 374 case FLOW_OFFLOAD_DIR_REPLY: 391 - port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port); 392 - offset = 0; /* offsetof(struct tcphdr, dest); */ 375 + port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_port); 376 + offset = 0; /* offsetof(struct tcphdr, source); */ 377 + port = htonl(port << 16); 378 + mask = ~htonl(0xffff0000); 393 379 break; 394 380 default: 395 381 return; 396 382 } 397 - port = htonl(port); 383 + 398 384 flow_offload_mangle(entry, flow_offload_l4proto(flow), offset, 399 385 &port, &mask); 400 386 } ··· 781 759 struct flow_offload *flow) 782 760 { 783 761 struct flow_offload_work *offload; 784 - s64 delta; 762 + __s32 delta; 785 763 786 - 
delta = flow->timeout - jiffies; 764 + delta = nf_flow_timeout_delta(flow->timeout); 787 765 if ((delta >= (9 * NF_FLOW_TIMEOUT) / 10) || 788 766 flow->flags & FLOW_OFFLOAD_HW_DYING) 789 767 return;
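The port-mangling fixes in nf_flow_table_offload.c above come down to placing a 16-bit port in the correct half of the 32-bit word that the mangle action writes over the start of the L4 header — source port in the high half, destination port in the low half of that word — with a mask that preserves the other half. The packing, sketched host-independently on plain integers (names are illustrative; the kernel works on network-byte-order words):

```c
#include <stdint.h>

/* The first 32 bits of a TCP/UDP header hold the source port (high
 * half) and destination port (low half), viewed as a big-endian word. */
static uint32_t mangle_src_port(uint32_t word, uint16_t port)
{
	uint32_t val  = (uint32_t)port << 16;
	uint32_t keep = 0x0000ffffu;	/* bits left untouched */

	return (word & keep) | val;
}

static uint32_t mangle_dst_port(uint32_t word, uint16_t port)
{
	uint32_t val  = port;
	uint32_t keep = 0xffff0000u;	/* bits left untouched */

	return (word & keep) | val;
}
```

Swapping the two halves (the bug being fixed) would rewrite the wrong port while leaving the intended one intact.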
+6 -2
net/netfilter/nf_tables_api.c
··· 5984 5984 return ERR_PTR(-ENOENT); 5985 5985 } 5986 5986 5987 + /* Only called from error and netdev event paths. */ 5987 5988 static void nft_unregister_flowtable_hook(struct net *net, 5988 5989 struct nft_flowtable *flowtable, 5989 5990 struct nft_hook *hook) ··· 6000 5999 struct nft_hook *hook; 6001 6000 6002 6001 list_for_each_entry(hook, &flowtable->hook_list, list) 6003 - nft_unregister_flowtable_hook(net, flowtable, hook); 6002 + nf_unregister_net_hook(net, &hook->ops); 6004 6003 } 6005 6004 6006 6005 static int nft_register_flowtable_net_hooks(struct net *net, ··· 6449 6448 { 6450 6449 struct nft_hook *hook, *next; 6451 6450 6451 + flowtable->data.type->free(&flowtable->data); 6452 6452 list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { 6453 + flowtable->data.type->setup(&flowtable->data, hook->ops.dev, 6454 + FLOW_BLOCK_UNBIND); 6453 6455 list_del_rcu(&hook->list); 6454 6456 kfree(hook); 6455 6457 } 6456 6458 kfree(flowtable->name); 6457 - flowtable->data.type->free(&flowtable->data); 6458 6459 module_put(flowtable->data.type->owner); 6459 6460 kfree(flowtable); 6460 6461 } ··· 6500 6497 if (hook->ops.dev != dev) 6501 6498 continue; 6502 6499 6500 + /* flow_offload_netdev_event() cleans up entries for us. */ 6503 6501 nft_unregister_flowtable_hook(dev_net(dev), flowtable, hook); 6504 6502 list_del_rcu(&hook->list); 6505 6503 kfree_rcu(hook, rcu);
-3
net/netfilter/nft_flow_offload.c
··· 200 200 static void nft_flow_offload_destroy(const struct nft_ctx *ctx, 201 201 const struct nft_expr *expr) 202 202 { 203 - struct nft_flow_offload *priv = nft_expr_priv(expr); 204 - 205 - priv->flowtable->use--; 206 203 nf_ct_netns_put(ctx->net, ctx->family); 207 204 } 208 205
+2 -2
net/netfilter/nft_tproxy.c
··· 50 50 taddr = nf_tproxy_laddr4(skb, taddr, iph->daddr); 51 51 52 52 if (priv->sreg_port) 53 - tport = regs->data[priv->sreg_port]; 53 + tport = nft_reg_load16(&regs->data[priv->sreg_port]); 54 54 if (!tport) 55 55 tport = hp->dest; 56 56 ··· 117 117 taddr = *nf_tproxy_laddr6(skb, &taddr, &iph->daddr); 118 118 119 119 if (priv->sreg_port) 120 - tport = regs->data[priv->sreg_port]; 120 + tport = nft_reg_load16(&regs->data[priv->sreg_port]); 121 121 if (!tport) 122 122 tport = hp->dest; 123 123
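nft registers are 32-bit slots, but a transport port stored in one occupies only its first 16 bits. Reading the whole u32, as the pre-fix nft_tproxy.c code did, drags in whatever sits in the rest of the slot; the fix loads exactly 16 bits. A sketch of the difference (plain memcpy loads stand in for what nft_reg_load16() does; all names are illustrative):

```c
#include <stdint.h>
#include <string.h>

/* Load only the 16-bit value stored at the start of a 32-bit slot. */
static uint16_t reg_load16(const uint32_t *reg)
{
	uint16_t v;

	memcpy(&v, reg, sizeof(v));
	return v;
}

/* Store a 16-bit value without touching the rest of the slot. */
static void reg_store16(uint32_t *reg, uint16_t v)
{
	memcpy(reg, &v, sizeof(v));
}
```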
+1 -1
net/qrtr/qrtr.c
··· 196 196 hdr->size = cpu_to_le32(len); 197 197 hdr->confirm_rx = 0; 198 198 199 - skb_put_padto(skb, ALIGN(len, 4)); 199 + skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr)); 200 200 201 201 mutex_lock(&node->ep_lock); 202 202 if (node->ep)
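The qrtr.c fix pads to `ALIGN(len, 4) + sizeof(*hdr)`: the 4-byte alignment applies to the payload length, but skb_put_padto() measures from the start of the buffer, which already contains the header — so the header size must be added back or the last payload bytes get truncated. The round-up arithmetic itself, with a hypothetical header struct for the size term (this is not the kernel's ALIGN macro or qrtr header definition, just the common power-of-two form):

```c
#include <stddef.h>

/* Round x up to the next multiple of a (a must be a power of two). */
#define ALIGN_UP(x, a)	(((x) + ((a) - 1)) & ~((size_t)(a) - 1))

/* Hypothetical fixed-size header already written in front of the payload. */
struct demo_hdr {
	unsigned int version, type, src, dst, size, confirm_rx;
};

/* Total length the buffer must be padded to: aligned payload plus the
 * header that precedes it in the same buffer. */
static size_t padded_len(size_t payload_len)
{
	return ALIGN_UP(payload_len, 4) + sizeof(struct demo_hdr);
}
```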
+7 -3
net/rxrpc/ar-internal.h
··· 209 209 struct rxrpc_security { 210 210 const char *name; /* name of this service */ 211 211 u8 security_index; /* security type provided */ 212 + u32 no_key_abort; /* Abort code indicating no key */ 212 213 213 214 /* Initialise a security service */ 214 215 int (*init)(void); ··· 978 977 struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *, 979 978 struct sk_buff *); 980 979 struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *, gfp_t); 981 - void rxrpc_new_incoming_connection(struct rxrpc_sock *, 982 - struct rxrpc_connection *, struct sk_buff *); 980 + void rxrpc_new_incoming_connection(struct rxrpc_sock *, struct rxrpc_connection *, 981 + const struct rxrpc_security *, struct key *, 982 + struct sk_buff *); 983 983 void rxrpc_unpublish_service_conn(struct rxrpc_connection *); 984 984 985 985 /* ··· 1105 1103 int __init rxrpc_init_security(void); 1106 1104 void rxrpc_exit_security(void); 1107 1105 int rxrpc_init_client_conn_security(struct rxrpc_connection *); 1108 - int rxrpc_init_server_conn_security(struct rxrpc_connection *); 1106 + bool rxrpc_look_up_server_security(struct rxrpc_local *, struct rxrpc_sock *, 1107 + const struct rxrpc_security **, struct key **, 1108 + struct sk_buff *); 1109 1109 1110 1110 /* 1111 1111 * sendmsg.c
+37 -23
net/rxrpc/call_accept.c
··· 240 240 } 241 241 242 242 /* 243 + * Ping the other end to fill our RTT cache and to retrieve the rwind 244 + * and MTU parameters. 245 + */ 246 + static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb) 247 + { 248 + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 249 + ktime_t now = skb->tstamp; 250 + 251 + if (call->peer->rtt_usage < 3 || 252 + ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now)) 253 + rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial, 254 + true, true, 255 + rxrpc_propose_ack_ping_for_params); 256 + } 257 + 258 + /* 243 259 * Allocate a new incoming call from the prealloc pool, along with a connection 244 260 * and a peer as necessary. 245 261 */ ··· 263 247 struct rxrpc_local *local, 264 248 struct rxrpc_peer *peer, 265 249 struct rxrpc_connection *conn, 250 + const struct rxrpc_security *sec, 251 + struct key *key, 266 252 struct sk_buff *skb) 267 253 { 268 254 struct rxrpc_backlog *b = rx->backlog; ··· 312 294 conn->params.local = rxrpc_get_local(local); 313 295 conn->params.peer = peer; 314 296 rxrpc_see_connection(conn); 315 - rxrpc_new_incoming_connection(rx, conn, skb); 297 + rxrpc_new_incoming_connection(rx, conn, sec, key, skb); 316 298 } else { 317 299 rxrpc_get_connection(conn); 318 300 } ··· 351 333 struct sk_buff *skb) 352 334 { 353 335 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 336 + const struct rxrpc_security *sec = NULL; 354 337 struct rxrpc_connection *conn; 355 338 struct rxrpc_peer *peer = NULL; 356 - struct rxrpc_call *call; 339 + struct rxrpc_call *call = NULL; 340 + struct key *key = NULL; 357 341 358 342 _enter(""); 359 343 ··· 366 346 sp->hdr.seq, RX_INVALID_OPERATION, ESHUTDOWN); 367 347 skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; 368 348 skb->priority = RX_INVALID_OPERATION; 369 - _leave(" = NULL [close]"); 370 - call = NULL; 371 - goto out; 349 + goto no_call; 372 350 } 373 351 374 352 /* The peer, connection and call may all have sprung into existence due ··· 376 358 */ 377 359 conn = 
rxrpc_find_connection_rcu(local, skb, &peer); 378 360 379 - call = rxrpc_alloc_incoming_call(rx, local, peer, conn, skb); 361 + if (!conn && !rxrpc_look_up_server_security(local, rx, &sec, &key, skb)) 362 + goto no_call; 363 + 364 + call = rxrpc_alloc_incoming_call(rx, local, peer, conn, sec, key, skb); 365 + key_put(key); 380 366 if (!call) { 381 367 skb->mark = RXRPC_SKB_MARK_REJECT_BUSY; 382 - _leave(" = NULL [busy]"); 383 - call = NULL; 384 - goto out; 368 + goto no_call; 385 369 } 386 370 387 371 trace_rxrpc_receive(call, rxrpc_receive_incoming, 388 372 sp->hdr.serial, sp->hdr.seq); 389 - 390 - /* Lock the call to prevent rxrpc_kernel_send/recv_data() and 391 - * sendmsg()/recvmsg() inconveniently stealing the mutex once the 392 - * notification is generated. 393 - * 394 - * The BUG should never happen because the kernel should be well 395 - * behaved enough not to access the call before the first notification 396 - * event and userspace is prevented from doing so until the state is 397 - * appropriate. 398 - */ 399 - if (!mutex_trylock(&call->user_mutex)) 400 - BUG(); 401 373 402 374 /* Make the call live. */ 403 375 rxrpc_incoming_call(rx, call, skb); ··· 429 421 BUG(); 430 422 } 431 423 spin_unlock(&conn->state_lock); 424 + spin_unlock(&rx->incoming_lock); 425 + 426 + rxrpc_send_ping(call, skb); 432 427 433 428 if (call->state == RXRPC_CALL_SERVER_ACCEPTING) 434 429 rxrpc_notify_socket(call); ··· 444 433 rxrpc_put_call(call, rxrpc_call_put); 445 434 446 435 _leave(" = %p{%d}", call, call->debug_id); 447 - out: 448 - spin_unlock(&rx->incoming_lock); 449 436 return call; 437 + 438 + no_call: 439 + spin_unlock(&rx->incoming_lock); 440 + _leave(" = NULL [%u]", skb->mark); 441 + return NULL; 450 442 } 451 443 452 444 /*
+1 -15
net/rxrpc/conn_event.c
··· 376 376 _enter("{%d}", conn->debug_id); 377 377 378 378 ASSERT(conn->security_ix != 0); 379 - 380 - if (!conn->params.key) { 381 - _debug("set up security"); 382 - ret = rxrpc_init_server_conn_security(conn); 383 - switch (ret) { 384 - case 0: 385 - break; 386 - case -ENOENT: 387 - abort_code = RX_CALL_DEAD; 388 - goto abort; 389 - default: 390 - abort_code = RXKADNOAUTH; 391 - goto abort; 392 - } 393 - } 379 + ASSERT(conn->server_key); 394 380 395 381 if (conn->security->issue_challenge(conn) < 0) { 396 382 abort_code = RX_CALL_DEAD;
+4
net/rxrpc/conn_service.c
··· 148 148 */ 149 149 void rxrpc_new_incoming_connection(struct rxrpc_sock *rx, 150 150 struct rxrpc_connection *conn, 151 + const struct rxrpc_security *sec, 152 + struct key *key, 151 153 struct sk_buff *skb) 152 154 { 153 155 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); ··· 162 160 conn->service_id = sp->hdr.serviceId; 163 161 conn->security_ix = sp->hdr.securityIndex; 164 162 conn->out_clientflag = 0; 163 + conn->security = sec; 164 + conn->server_key = key_get(key); 165 165 if (conn->security_ix) 166 166 conn->state = RXRPC_CONN_SERVICE_UNSECURED; 167 167 else
-18
net/rxrpc/input.c
··· 193 193 } 194 194 195 195 /* 196 - * Ping the other end to fill our RTT cache and to retrieve the rwind 197 - * and MTU parameters. 198 - */ 199 - static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb) 200 - { 201 - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 202 - ktime_t now = skb->tstamp; 203 - 204 - if (call->peer->rtt_usage < 3 || 205 - ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now)) 206 - rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial, 207 - true, true, 208 - rxrpc_propose_ack_ping_for_params); 209 - } 210 - 211 - /* 212 196 * Apply a hard ACK by advancing the Tx window. 213 197 */ 214 198 static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to, ··· 1380 1396 call = rxrpc_new_incoming_call(local, rx, skb); 1381 1397 if (!call) 1382 1398 goto reject_packet; 1383 - rxrpc_send_ping(call, skb); 1384 - mutex_unlock(&call->user_mutex); 1385 1399 } 1386 1400 1387 1401 /* Process a call packet; this either discards or passes on the ref
+3 -2
net/rxrpc/rxkad.c
··· 648 648 u32 serial; 649 649 int ret; 650 650 651 - _enter("{%d,%x}", conn->debug_id, key_serial(conn->params.key)); 651 + _enter("{%d,%x}", conn->debug_id, key_serial(conn->server_key)); 652 652 653 - ret = key_validate(conn->params.key); 653 + ret = key_validate(conn->server_key); 654 654 if (ret < 0) 655 655 return ret; 656 656 ··· 1293 1293 const struct rxrpc_security rxkad = { 1294 1294 .name = "rxkad", 1295 1295 .security_index = RXRPC_SECURITY_RXKAD, 1296 + .no_key_abort = RXKADUNKNOWNKEY, 1296 1297 .init = rxkad_init, 1297 1298 .exit = rxkad_exit, 1298 1299 .init_connection_security = rxkad_init_connection_security,
+33 -37
net/rxrpc/security.c
··· 101 101 } 102 102 103 103 /* 104 - * initialise the security on a server connection 104 + * Find the security key for a server connection. 105 105 */ 106 - int rxrpc_init_server_conn_security(struct rxrpc_connection *conn) 106 + bool rxrpc_look_up_server_security(struct rxrpc_local *local, struct rxrpc_sock *rx, 107 + const struct rxrpc_security **_sec, 108 + struct key **_key, 109 + struct sk_buff *skb) 107 110 { 108 111 const struct rxrpc_security *sec; 109 - struct rxrpc_local *local = conn->params.local; 110 - struct rxrpc_sock *rx; 111 - struct key *key; 112 - key_ref_t kref; 112 + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 113 + key_ref_t kref = NULL; 113 114 char kdesc[5 + 1 + 3 + 1]; 114 115 115 116 _enter(""); 116 117 117 - sprintf(kdesc, "%u:%u", conn->service_id, conn->security_ix); 118 + sprintf(kdesc, "%u:%u", sp->hdr.serviceId, sp->hdr.securityIndex); 118 119 119 - sec = rxrpc_security_lookup(conn->security_ix); 120 + sec = rxrpc_security_lookup(sp->hdr.securityIndex); 120 121 if (!sec) { 121 - _leave(" = -ENOKEY [lookup]"); 122 - return -ENOKEY; 122 + trace_rxrpc_abort(0, "SVS", 123 + sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, 124 + RX_INVALID_OPERATION, EKEYREJECTED); 125 + skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; 126 + skb->priority = RX_INVALID_OPERATION; 127 + return false; 123 128 } 124 129 125 - /* find the service */ 126 - read_lock(&local->services_lock); 127 - rx = rcu_dereference_protected(local->service, 128 - lockdep_is_held(&local->services_lock)); 129 - if (rx && (rx->srx.srx_service == conn->service_id || 130 - rx->second_service == conn->service_id)) 131 - goto found_service; 130 + if (sp->hdr.securityIndex == RXRPC_SECURITY_NONE) 131 + goto out; 132 132 133 - /* the service appears to have died */ 134 - read_unlock(&local->services_lock); 135 - _leave(" = -ENOENT"); 136 - return -ENOENT; 137 - 138 - found_service: 139 133 if (!rx->securities) { 140 - read_unlock(&local->services_lock); 141 - _leave(" = -ENOKEY"); 142 - return -ENOKEY; 134 + trace_rxrpc_abort(0, "SVR", 135 + sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, 136 + RX_INVALID_OPERATION, EKEYREJECTED); 137 + skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; 138 + skb->priority = RX_INVALID_OPERATION; 139 + return false; 143 140 } 144 141 145 142 /* look through the service's keyring */ 146 143 kref = keyring_search(make_key_ref(rx->securities, 1UL), 147 144 &key_type_rxrpc_s, kdesc, true); 148 145 if (IS_ERR(kref)) { 149 - read_unlock(&local->services_lock); 150 - _leave(" = %ld [search]", PTR_ERR(kref)); 151 - return PTR_ERR(kref); 146 + trace_rxrpc_abort(0, "SVK", 147 + sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, 148 + sec->no_key_abort, EKEYREJECTED); 149 + skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; 150 + skb->priority = sec->no_key_abort; 151 + return false; 152 152 } 153 153 154 - key = key_ref_to_ptr(kref); 155 - read_unlock(&local->services_lock); 156 - 157 - conn->server_key = key; 158 - conn->security = sec; 159 - 160 - _leave(" = 0"); 161 - return 0; 154 + out: 155 + *_sec = sec; 156 + *_key = key_ref_to_ptr(kref); 157 + return true; 162 158 }
+12 -10
net/sched/act_mirred.c
··· 219 219 bool use_reinsert; 220 220 bool want_ingress; 221 221 bool is_redirect; 222 + bool expects_nh; 222 223 int m_eaction; 223 224 int mac_len; 225 + bool at_nh; 224 226 225 227 rec_level = __this_cpu_inc_return(mirred_rec_level); 226 228 if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) { ··· 263 261 goto out; 264 262 } 265 263 266 - /* If action's target direction differs than filter's direction, 267 - * and devices expect a mac header on xmit, then mac push/pull is 268 - * needed. 269 - */ 270 264 want_ingress = tcf_mirred_act_wants_ingress(m_eaction); 271 - if (skb_at_tc_ingress(skb) != want_ingress && m_mac_header_xmit) { 272 - if (!skb_at_tc_ingress(skb)) { 273 - /* caught at egress, act ingress: pull mac */ 274 - mac_len = skb_network_header(skb) - skb_mac_header(skb); 265 + 266 + expects_nh = want_ingress || !m_mac_header_xmit; 267 + at_nh = skb->data == skb_network_header(skb); 268 + if (at_nh != expects_nh) { 269 + mac_len = skb_at_tc_ingress(skb) ? skb->mac_len : 270 + skb_network_header(skb) - skb_mac_header(skb); 271 + if (expects_nh) { 272 + /* target device/action expect data at nh */ 275 273 skb_pull_rcsum(skb2, mac_len); 276 274 } else { 277 - /* caught at ingress, act egress: push mac */ 278 - skb_push_rcsum(skb2, skb->mac_len); 275 + /* target device/action expect data at mac */ 276 + skb_push_rcsum(skb2, mac_len); 279 277 } 280 278 } 281 279
+5 -26
net/sched/cls_api.c
··· 308 308 tcf_proto_destroy(tp, rtnl_held, true, extack); 309 309 } 310 310 311 - static int walker_check_empty(struct tcf_proto *tp, void *fh, 312 - struct tcf_walker *arg) 311 + static bool tcf_proto_check_delete(struct tcf_proto *tp) 313 312 { 314 - if (fh) { 315 - arg->nonempty = true; 316 - return -1; 317 - } 318 - return 0; 319 - } 313 + if (tp->ops->delete_empty) 314 + return tp->ops->delete_empty(tp); 320 315 321 - static bool tcf_proto_is_empty(struct tcf_proto *tp, bool rtnl_held) 322 - { 323 - struct tcf_walker walker = { .fn = walker_check_empty, }; 324 - 325 - if (tp->ops->walk) { 326 - tp->ops->walk(tp, &walker, rtnl_held); 327 - return !walker.nonempty; 328 - } 329 - return true; 330 - } 331 - 332 - static bool tcf_proto_check_delete(struct tcf_proto *tp, bool rtnl_held) 333 - { 334 - spin_lock(&tp->lock); 335 - if (tcf_proto_is_empty(tp, rtnl_held)) 336 - tp->deleting = true; 337 - spin_unlock(&tp->lock); 316 + tp->deleting = true; 338 317 return tp->deleting; 339 318 } 340 319 ··· 1730 1751 * concurrently. 1731 1752 * Mark tp for deletion if it is empty. 1732 1753 */ 1733 - if (!tp_iter || !tcf_proto_check_delete(tp, rtnl_held)) { 1754 + if (!tp_iter || !tcf_proto_check_delete(tp)) { 1734 1755 mutex_unlock(&chain->filter_chain_lock); 1735 1756 return; 1736 1757 }
+12
net/sched/cls_flower.c
··· 2773 2773 f->res.class = cl; 2774 2774 } 2775 2775 2776 + static bool fl_delete_empty(struct tcf_proto *tp) 2777 + { 2778 + struct cls_fl_head *head = fl_head_dereference(tp); 2779 + 2780 + spin_lock(&tp->lock); 2781 + tp->deleting = idr_is_empty(&head->handle_idr); 2782 + spin_unlock(&tp->lock); 2783 + 2784 + return tp->deleting; 2785 + } 2786 + 2776 2787 static struct tcf_proto_ops cls_fl_ops __read_mostly = { 2777 2788 .kind = "flower", 2778 2789 .classify = fl_classify, ··· 2793 2782 .put = fl_put, 2794 2783 .change = fl_change, 2795 2784 .delete = fl_delete, 2785 + .delete_empty = fl_delete_empty, 2796 2786 .walk = fl_walk, 2797 2787 .reoffload = fl_reoffload, 2798 2788 .hw_add = fl_hw_add,
-25
net/sched/cls_u32.c
··· 1108 1108 return err; 1109 1109 } 1110 1110 1111 - static bool u32_hnode_empty(struct tc_u_hnode *ht, bool *non_root_ht) 1112 - { 1113 - int i; 1114 - 1115 - if (!ht) 1116 - return true; 1117 - if (!ht->is_root) { 1118 - *non_root_ht = true; 1119 - return false; 1120 - } 1121 - if (*non_root_ht) 1122 - return false; 1123 - if (ht->refcnt < 2) 1124 - return true; 1125 - 1126 - for (i = 0; i <= ht->divisor; i++) { 1127 - if (rtnl_dereference(ht->ht[i])) 1128 - return false; 1129 - } 1130 - return true; 1131 - } 1132 - 1133 1111 static void u32_walk(struct tcf_proto *tp, struct tcf_walker *arg, 1134 1112 bool rtnl_held) 1135 1113 { 1136 1114 struct tc_u_common *tp_c = tp->data; 1137 - bool non_root_ht = false; 1138 1115 struct tc_u_hnode *ht; 1139 1116 struct tc_u_knode *n; 1140 1117 unsigned int h; ··· 1124 1147 ht = rtnl_dereference(ht->next)) { 1125 1148 if (ht->prio != tp->prio) 1126 1149 continue; 1127 - if (u32_hnode_empty(ht, &non_root_ht)) 1128 - return; 1129 1150 if (arg->count >= arg->skip) { 1130 1151 if (arg->fn(tp, ht, arg) < 0) { 1131 1152 arg->stop = 1;
+1 -1
net/sched/sch_cake.c
··· 1769 1769 q->avg_window_begin)); 1770 1770 u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC; 1771 1771 1772 - do_div(b, window_interval); 1772 + b = div64_u64(b, window_interval); 1773 1773 q->avg_peak_bandwidth = 1774 1774 cake_ewma(q->avg_peak_bandwidth, b, 1775 1775 b > q->avg_peak_bandwidth ? 2 : 8);
+12 -11
net/sched/sch_fq.c
··· 301 301 f->socket_hash != sk->sk_hash)) { 302 302 f->credit = q->initial_quantum; 303 303 f->socket_hash = sk->sk_hash; 304 + if (q->rate_enable) 305 + smp_store_release(&sk->sk_pacing_status, 306 + SK_PACING_FQ); 304 307 if (fq_flow_is_throttled(f)) 305 308 fq_flow_unset_throttled(q, f); 306 309 f->time_next_packet = 0ULL; ··· 325 322 326 323 fq_flow_set_detached(f); 327 324 f->sk = sk; 328 - if (skb->sk == sk) 325 + if (skb->sk == sk) { 329 326 f->socket_hash = sk->sk_hash; 327 + if (q->rate_enable) 328 + smp_store_release(&sk->sk_pacing_status, 329 + SK_PACING_FQ); 330 + } 330 331 f->credit = q->initial_quantum; 331 332 332 333 rb_link_node(&f->fq_node, parent, p); ··· 435 428 f->qlen++; 436 429 qdisc_qstats_backlog_inc(sch, skb); 437 430 if (fq_flow_is_detached(f)) { 438 - struct sock *sk = skb->sk; 439 - 440 431 fq_flow_add_tail(&q->new_flows, f); 441 432 if (time_after(jiffies, f->age + q->flow_refill_delay)) 442 433 f->credit = max_t(u32, f->credit, q->quantum); 443 - if (sk && q->rate_enable) { 444 - if (unlikely(smp_load_acquire(&sk->sk_pacing_status) != 445 - SK_PACING_FQ)) 446 - smp_store_release(&sk->sk_pacing_status, 447 - SK_PACING_FQ); 448 - } 449 434 q->inactive_flows--; 450 435 } 451 436 ··· 786 787 if (tb[TCA_FQ_QUANTUM]) { 787 788 u32 quantum = nla_get_u32(tb[TCA_FQ_QUANTUM]); 788 789 789 - if (quantum > 0) 790 + if (quantum > 0 && quantum <= (1 << 20)) { 790 791 q->quantum = quantum; 791 - else 792 + } else { 793 + NL_SET_ERR_MSG_MOD(extack, "invalid quantum"); 792 794 err = -EINVAL; 795 + } 793 796 } 794 797 795 798 if (tb[TCA_FQ_INITIAL_QUANTUM])
+8 -2
net/sched/sch_prio.c
··· 292 292 struct tc_prio_qopt_offload graft_offload; 293 293 unsigned long band = arg - 1; 294 294 295 - if (new == NULL) 296 - new = &noop_qdisc; 295 + if (!new) { 296 + new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, 297 + TC_H_MAKE(sch->handle, arg), extack); 298 + if (!new) 299 + new = &noop_qdisc; 300 + else 301 + qdisc_hash_add(new, true); 302 + } 297 303 298 304 *old = qdisc_replace(sch, new, &q->queues[band]); 299 305
+18 -10
net/sctp/sm_sideeffect.c
··· 1363 1363 /* Generate an INIT ACK chunk. */ 1364 1364 new_obj = sctp_make_init_ack(asoc, chunk, GFP_ATOMIC, 1365 1365 0); 1366 - if (!new_obj) 1367 - goto nomem; 1366 + if (!new_obj) { 1367 + error = -ENOMEM; 1368 + break; 1369 + } 1368 1370 1369 1371 sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, 1370 1372 SCTP_CHUNK(new_obj)); ··· 1388 1386 if (!new_obj) { 1389 1387 if (cmd->obj.chunk) 1390 1388 sctp_chunk_free(cmd->obj.chunk); 1391 - goto nomem; 1389 + error = -ENOMEM; 1390 + break; 1392 1391 } 1393 1392 sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, 1394 1393 SCTP_CHUNK(new_obj)); ··· 1436 1433 1437 1434 /* Generate a SHUTDOWN chunk. */ 1438 1435 new_obj = sctp_make_shutdown(asoc, chunk); 1439 - if (!new_obj) 1440 - goto nomem; 1436 + if (!new_obj) { 1437 + error = -ENOMEM; 1438 + break; 1439 + } 1441 1440 sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, 1442 1441 SCTP_CHUNK(new_obj)); 1443 1442 break; ··· 1775 1770 break; 1776 1771 } 1777 1772 1778 - if (error) 1773 + if (error) { 1774 + cmd = sctp_next_cmd(commands); 1775 + while (cmd) { 1776 + if (cmd->verb == SCTP_CMD_REPLY) 1777 + sctp_chunk_free(cmd->obj.chunk); 1778 + cmd = sctp_next_cmd(commands); 1779 + } 1779 1780 break; 1781 + } 1780 1782 } 1781 1783 1782 - out: 1783 1784 /* If this is in response to a received chunk, wait until 1784 1785 * we are done with the packet to open the queue so that we don't 1785 1786 * send multiple packets in response to a single request. ··· 1800 1789 sp->data_ready_signalled = 0; 1801 1790 1802 1791 return error; 1803 - nomem: 1804 - error = -ENOMEM; 1805 - goto out; 1806 1792 }
+15 -15
net/sctp/stream.c
··· 84 84 return 0; 85 85 86 86 ret = genradix_prealloc(&stream->out, outcnt, gfp); 87 - if (ret) { 88 - genradix_free(&stream->out); 87 + if (ret) 89 88 return ret; 90 - } 91 89 92 90 stream->outcnt = outcnt; 93 91 return 0; ··· 100 102 return 0; 101 103 102 104 ret = genradix_prealloc(&stream->in, incnt, gfp); 103 - if (ret) { 104 - genradix_free(&stream->in); 105 + if (ret) 105 106 return ret; 106 - } 107 107 108 108 stream->incnt = incnt; 109 109 return 0; ··· 119 123 * a new one with new outcnt to save memory if needed. 120 124 */ 121 125 if (outcnt == stream->outcnt) 122 - goto in; 126 + goto handle_in; 123 127 124 128 /* Filter out chunks queued on streams that won't exist anymore */ 125 129 sched->unsched_all(stream); ··· 128 132 129 133 ret = sctp_stream_alloc_out(stream, outcnt, gfp); 130 134 if (ret) 131 - goto out; 135 + goto out_err; 132 136 133 137 for (i = 0; i < stream->outcnt; i++) 134 138 SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN; 135 139 136 - in: 140 + handle_in: 137 141 sctp_stream_interleave_init(stream); 138 142 if (!incnt) 139 143 goto out; 140 144 141 145 ret = sctp_stream_alloc_in(stream, incnt, gfp); 142 - if (ret) { 143 - sched->free(stream); 144 - genradix_free(&stream->out); 145 - stream->outcnt = 0; 146 - goto out; 147 - } 146 + if (ret) 147 + goto in_err; 148 148 149 + goto out; 150 + 151 + in_err: 152 + sched->free(stream); 153 + genradix_free(&stream->in); 154 + out_err: 155 + genradix_free(&stream->out); 156 + stream->outcnt = 0; 149 157 out: 150 158 return ret; 151 159 }
+1 -1
net/sctp/transport.c
··· 263 263 264 264 pf->af->from_sk(&addr, sk); 265 265 pf->to_sk_daddr(&t->ipaddr, sk); 266 - dst->ops->update_pmtu(dst, sk, NULL, pmtu); 266 + dst->ops->update_pmtu(dst, sk, NULL, pmtu, true); 267 267 pf->to_sk_daddr(&addr, sk); 268 268 269 269 dst = sctp_transport_dst_check(t);
+1 -3
net/tipc/Makefile
··· 9 9 core.o link.o discover.o msg.o \ 10 10 name_distr.o subscr.o monitor.o name_table.o net.o \ 11 11 netlink.o netlink_compat.o node.o socket.o eth_media.o \ 12 - topsrv.o socket.o group.o trace.o 12 + topsrv.o group.o trace.o 13 13 14 14 CFLAGS_trace.o += -I$(src) 15 15 ··· 20 20 21 21 22 22 obj-$(CONFIG_TIPC_DIAG) += diag.o 23 - 24 - tipc_diag-y := diag.o
+34 -23
net/tipc/socket.c
··· 287 287 * 288 288 * Caller must hold socket lock 289 289 */ 290 - static void tsk_rej_rx_queue(struct sock *sk) 290 + static void tsk_rej_rx_queue(struct sock *sk, int error) 291 291 { 292 292 struct sk_buff *skb; 293 293 294 294 while ((skb = __skb_dequeue(&sk->sk_receive_queue))) 295 - tipc_sk_respond(sk, skb, TIPC_ERR_NO_PORT); 295 + tipc_sk_respond(sk, skb, error); 296 296 } 297 297 298 298 static bool tipc_sk_connected(struct sock *sk) ··· 545 545 /* Remove pending SYN */ 546 546 __skb_queue_purge(&sk->sk_write_queue); 547 547 548 - /* Reject all unreceived messages, except on an active connection 549 - * (which disconnects locally & sends a 'FIN+' to peer). 550 - */ 551 - while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) { 552 - if (TIPC_SKB_CB(skb)->bytes_read) { 553 - kfree_skb(skb); 554 - continue; 555 - } 556 - if (!tipc_sk_type_connectionless(sk) && 557 - sk->sk_state != TIPC_DISCONNECTING) { 558 - tipc_set_sk_state(sk, TIPC_DISCONNECTING); 559 - tipc_node_remove_conn(net, dnode, tsk->portid); 560 - } 561 - tipc_sk_respond(sk, skb, error); 548 + /* Remove partially received buffer if any */ 549 + skb = skb_peek(&sk->sk_receive_queue); 550 + if (skb && TIPC_SKB_CB(skb)->bytes_read) { 551 + __skb_unlink(skb, &sk->sk_receive_queue); 552 + kfree_skb(skb); 562 553 } 563 554 564 - if (tipc_sk_type_connectionless(sk)) 555 + /* Reject all unreceived messages if connectionless */ 556 + if (tipc_sk_type_connectionless(sk)) { 557 + tsk_rej_rx_queue(sk, error); 565 558 return; 559 + } 566 560 567 - if (sk->sk_state != TIPC_DISCONNECTING) { 561 + switch (sk->sk_state) { 562 + case TIPC_CONNECTING: 563 + case TIPC_ESTABLISHED: 564 + tipc_set_sk_state(sk, TIPC_DISCONNECTING); 565 + tipc_node_remove_conn(net, dnode, tsk->portid); 566 + /* Send a FIN+/- to its peer */ 567 + skb = __skb_dequeue(&sk->sk_receive_queue); 568 + if (skb) { 569 + __skb_queue_purge(&sk->sk_receive_queue); 570 + tipc_sk_respond(sk, skb, error); 571 + break; 572 + } 568 573 skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE, 569 574 TIPC_CONN_MSG, SHORT_H_SIZE, 0, dnode, 570 575 tsk_own_node(tsk), tsk_peer_port(tsk), 571 576 tsk->portid, error); 572 577 if (skb) 573 578 tipc_node_xmit_skb(net, skb, dnode, tsk->portid); 574 - tipc_node_remove_conn(net, dnode, tsk->portid); 575 - tipc_set_sk_state(sk, TIPC_DISCONNECTING); 579 + break; 580 + case TIPC_LISTEN: 581 + /* Reject all SYN messages */ 582 + tsk_rej_rx_queue(sk, error); 583 + break; 584 + default: 585 + __skb_queue_purge(&sk->sk_receive_queue); 586 + break; 576 587 } 577 588 } 578 589 ··· 2443 2432 return sock_intr_errno(*timeo_p); 2444 2433 2445 2434 add_wait_queue(sk_sleep(sk), &wait); 2446 - done = sk_wait_event(sk, timeo_p, 2447 - sk->sk_state != TIPC_CONNECTING, &wait); 2435 + done = sk_wait_event(sk, timeo_p, tipc_sk_connected(sk), 2436 + &wait); 2448 2437 remove_wait_queue(sk_sleep(sk), &wait); 2449 2438 } while (!done); 2450 2439 return 0; ··· 2654 2643 * Reject any stray messages received by new socket 2655 2644 * before the socket lock was taken (very, very unlikely) 2656 2645 */ 2657 - tsk_rej_rx_queue(new_sk); 2646 + tsk_rej_rx_queue(new_sk, TIPC_ERR_NO_PORT); 2658 2647 2659 2648 /* Connect new socket to it's peer */ 2660 2649 tipc_sk_finish_conn(new_tsock, msg_origport(msg), msg_orignode(msg));
+2 -2
samples/seccomp/user-trap.c
··· 298 298 req = malloc(sizes.seccomp_notif); 299 299 if (!req) 300 300 goto out_close; 301 - memset(req, 0, sizeof(*req)); 302 301 303 302 resp = malloc(sizes.seccomp_notif_resp); 304 303 if (!resp) 305 304 goto out_req; 306 - memset(resp, 0, sizeof(*resp)); 305 + memset(resp, 0, sizes.seccomp_notif_resp); 307 306 308 307 while (1) { 308 + memset(req, 0, sizes.seccomp_notif); 309 309 if (ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, req)) { 310 310 perror("ioctl recv"); 311 311 goto out_resp;
+4 -5
scripts/gcc-plugins/Kconfig
··· 14 14 An arch should select this symbol if it supports building with 15 15 GCC plugins. 16 16 17 - config GCC_PLUGINS 18 - bool 17 + menuconfig GCC_PLUGINS 18 + bool "GCC plugins" 19 19 depends on HAVE_GCC_PLUGINS 20 20 depends on PLUGIN_HOSTCC != "" 21 21 default y ··· 25 25 26 26 See Documentation/core-api/gcc-plugins.rst for details. 27 27 28 - menu "GCC plugins" 29 - depends on GCC_PLUGINS 28 + if GCC_PLUGINS 30 29 31 30 config GCC_PLUGIN_CYC_COMPLEXITY 32 31 bool "Compute the cyclomatic complexity of a function" if EXPERT ··· 112 113 bool 113 114 depends on GCC_PLUGINS && ARM 114 115 115 - endmenu 116 + endif
+1 -1
scripts/package/mkdebian
··· 136 136 echo "1.0" > debian/source/format 137 137 138 138 echo $debarch > debian/arch 139 - extra_build_depends=", $(if_enabled_echo CONFIG_UNWINDER_ORC libelf-dev)" 139 + extra_build_depends=", $(if_enabled_echo CONFIG_UNWINDER_ORC libelf-dev:native)" 140 140 extra_build_depends="$extra_build_depends, $(if_enabled_echo CONFIG_SYSTEM_TRUSTED_KEYRING libssl-dev:native)" 141 141 142 142 # Generate a simple changelog template
+1 -1
security/apparmor/apparmorfs.c
··· 623 623 624 624 void __aa_bump_ns_revision(struct aa_ns *ns) 625 625 { 626 - ns->revision++; 626 + WRITE_ONCE(ns->revision, ns->revision + 1); 627 627 wake_up_interruptible(&ns->wait); 628 628 } 629 629
+42 -38
security/apparmor/domain.c
··· 317 317 318 318 if (!bprm || !profile->xattr_count) 319 319 return 0; 320 + might_sleep(); 320 321 321 322 /* transition from exec match to xattr set */ 322 323 state = aa_dfa_null_transition(profile->xmatch, state); ··· 362 361 } 363 362 364 363 /** 365 - * __attach_match_ - find an attachment match 364 + * find_attach - do attachment search for unconfined processes 366 365 * @bprm - binprm structure of transitioning task 367 - * @name - to match against (NOT NULL) 366 + * @ns: the current namespace (NOT NULL) 368 367 * @head - profile list to walk (NOT NULL) 368 + * @name - to match against (NOT NULL) 369 369 * @info - info message if there was an error (NOT NULL) 370 370 * 371 371 * Do a linear search on the profiles in the list. There is a matching ··· 376 374 * 377 375 * Requires: @head not be shared or have appropriate locks held 378 376 * 379 - * Returns: profile or NULL if no match found 377 + * Returns: label or NULL if no match found 380 378 */ 381 - static struct aa_profile *__attach_match(const struct linux_binprm *bprm, 382 - const char *name, 383 - struct list_head *head, 384 - const char **info) 379 + static struct aa_label *find_attach(const struct linux_binprm *bprm, 380 + struct aa_ns *ns, struct list_head *head, 381 + const char *name, const char **info) 385 382 { 386 383 int candidate_len = 0, candidate_xattrs = 0; 387 384 bool conflict = false; ··· 389 388 AA_BUG(!name); 390 389 AA_BUG(!head); 391 390 391 + rcu_read_lock(); 392 + restart: 392 393 list_for_each_entry_rcu(profile, head, base.list) { 393 394 if (profile->label.flags & FLAG_NULL && 394 395 &profile->label == ns_unconfined(profile->ns)) ··· 416 413 perm = dfa_user_allow(profile->xmatch, state); 417 414 /* any accepting state means a valid match. */ 418 415 if (perm & MAY_EXEC) { 419 - int ret; 416 + int ret = 0; 420 417 421 418 if (count < candidate_len) 422 419 continue; 423 420 424 - ret = aa_xattrs_match(bprm, profile, state); 425 - /* Fail matching if the xattrs don't match */ 426 - if (ret < 0) 427 - continue; 421 + if (bprm && profile->xattr_count) { 422 + long rev = READ_ONCE(ns->revision); 428 423 424 + if (!aa_get_profile_not0(profile)) 425 + goto restart; 426 + rcu_read_unlock(); 427 + ret = aa_xattrs_match(bprm, profile, 428 + state); 429 + rcu_read_lock(); 430 + aa_put_profile(profile); 431 + if (rev != 432 + READ_ONCE(ns->revision)) 433 + /* policy changed */ 434 + goto restart; 435 + /* 436 + * Fail matching if the xattrs don't 437 + * match 438 + */ 439 + if (ret < 0) 440 + continue; 441 + } 429 442 /* 430 443 * TODO: allow for more flexible best match 431 444 * ··· 464 445 candidate_xattrs = ret; 465 446 conflict = false; 466 447 } 467 - } else if (!strcmp(profile->base.name, name)) 448 + } else if (!strcmp(profile->base.name, name)) { 468 449 /* 469 450 * old exact non-re match, without conditionals such 470 451 * as xattrs. no more searching required 471 452 */ 472 - return profile; 453 + candidate = profile; 454 + goto out; 455 + } 473 456 } 474 457 475 - if (conflict) { 476 - *info = "conflicting profile attachments"; 458 + if (!candidate || conflict) { 459 + if (conflict) 460 + *info = "conflicting profile attachments"; 461 + rcu_read_unlock(); 477 462 return NULL; 478 463 } 479 464 480 - return candidate; 481 - } 482 - 483 - /** 484 - * find_attach - do attachment search for unconfined processes 485 - * @bprm - binprm structure of transitioning task 486 - * @ns: the current namespace (NOT NULL) 487 - * @list: list to search (NOT NULL) 488 - * @name: the executable name to match against (NOT NULL) 489 - * @info: info message if there was an error 490 - * 491 - * Returns: label or NULL if no match found 492 - */ 493 - static struct aa_label *find_attach(const struct linux_binprm *bprm, 494 - struct aa_ns *ns, struct list_head *list, 495 - const char *name, const char **info) 496 - { 497 - struct aa_profile *profile; 498 - 499 - rcu_read_lock(); 500 - profile = aa_get_profile(__attach_match(bprm, name, list, info)); 465 + out: 466 + candidate = aa_get_newest_profile(candidate); 501 467 rcu_read_unlock(); 502 468 503 - return profile ? &profile->label : NULL; 469 + return &candidate->label; 504 470 } 505 471 506 472 static const char *next_name(int xtype, const char *name)
+8 -4
security/apparmor/file.c
··· 618 618 fctx = file_ctx(file); 619 619 620 620 rcu_read_lock(); 621 - flabel = aa_get_newest_label(rcu_dereference(fctx->label)); 622 - rcu_read_unlock(); 621 + flabel = rcu_dereference(fctx->label); 623 622 AA_BUG(!flabel); 624 623 625 624 /* revalidate access, if task is unconfined, or the cached cred ··· 630 631 */ 631 632 denied = request & ~fctx->allow; 632 633 if (unconfined(label) || unconfined(flabel) || 633 - (!denied && aa_label_is_subset(flabel, label))) 634 + (!denied && aa_label_is_subset(flabel, label))) { 635 + rcu_read_unlock(); 634 636 goto done; 637 + } 635 638 639 + flabel = aa_get_newest_label(flabel); 640 + rcu_read_unlock(); 636 641 /* TODO: label cross check */ 637 642 638 643 if (file->f_path.mnt && path_mediated_fs(file->f_path.dentry)) ··· 646 643 else if (S_ISSOCK(file_inode(file)->i_mode)) 647 644 error = __file_sock_perm(op, label, flabel, file, request, 648 645 denied); 649 - done: 650 646 aa_put_label(flabel); 647 + 648 + done: 651 649 return error; 652 650 } 653 651
+1 -1
security/apparmor/mount.c
··· 442 442 buffer = aa_get_buffer(false); 443 443 old_buffer = aa_get_buffer(false); 444 444 error = -ENOMEM; 445 - if (!buffer || old_buffer) 445 + if (!buffer || !old_buffer) 446 446 goto out; 447 447 448 448 error = fn_for_each_confined(label, profile,
+2 -2
security/apparmor/policy.c
··· 1125 1125 if (!name) { 1126 1126 /* remove namespace - can only happen if fqname[0] == ':' */ 1127 1127 mutex_lock_nested(&ns->parent->lock, ns->level); 1128 - __aa_remove_ns(ns); 1129 1128 __aa_bump_ns_revision(ns); 1129 + __aa_remove_ns(ns); 1130 1130 mutex_unlock(&ns->parent->lock); 1131 1131 } else { 1132 1132 /* remove profile */ ··· 1138 1138 goto fail_ns_lock; 1139 1139 } 1140 1140 name = profile->base.hname; 1141 + __aa_bump_ns_revision(ns); 1141 1142 __remove_profile(profile); 1142 1143 __aa_labelset_update_subtree(ns); 1143 - __aa_bump_ns_revision(ns); 1144 1144 mutex_unlock(&ns->lock); 1145 1145 } 1146 1146
+6 -3
security/tomoyo/common.c
··· 951 951 exe = tomoyo_get_exe(); 952 952 if (!exe) 953 953 return false; 954 - list_for_each_entry_rcu(ptr, &tomoyo_kernel_namespace.policy_list[TOMOYO_ID_MANAGER], head.list) { 954 + list_for_each_entry_rcu(ptr, &tomoyo_kernel_namespace.policy_list[TOMOYO_ID_MANAGER], head.list, 955 + srcu_read_lock_held(&tomoyo_ss)) { 955 956 if (!ptr->head.is_deleted && 956 957 (!tomoyo_pathcmp(domainname, ptr->manager) || 957 958 !strcmp(exe, ptr->manager->name))) { ··· 1096 1095 if (mutex_lock_interruptible(&tomoyo_policy_lock)) 1097 1096 return -EINTR; 1098 1097 /* Is there an active domain? */ 1099 - list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { 1098 + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list, 1099 + srcu_read_lock_held(&tomoyo_ss)) { 1100 1100 /* Never delete tomoyo_kernel_domain */ 1101 1101 if (domain == &tomoyo_kernel_domain) 1102 1102 continue; ··· 2780 2778 2781 2779 tomoyo_policy_loaded = true; 2782 2780 pr_info("TOMOYO: 2.6.0\n"); 2783 - list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { 2781 + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list, 2782 + srcu_read_lock_held(&tomoyo_ss)) { 2784 2783 const u8 profile = domain->profile; 2785 2784 struct tomoyo_policy_namespace *ns = domain->ns; 2786 2785
+10 -5
security/tomoyo/domain.c
··· 41 41 42 42 if (mutex_lock_interruptible(&tomoyo_policy_lock)) 43 43 return -ENOMEM; 44 - list_for_each_entry_rcu(entry, list, list) { 44 + list_for_each_entry_rcu(entry, list, list, 45 + srcu_read_lock_held(&tomoyo_ss)) { 45 46 if (entry->is_deleted == TOMOYO_GC_IN_PROGRESS) 46 47 continue; 47 48 if (!check_duplicate(entry, new_entry)) ··· 120 119 } 121 120 if (mutex_lock_interruptible(&tomoyo_policy_lock)) 122 121 goto out; 123 - list_for_each_entry_rcu(entry, list, list) { 122 + list_for_each_entry_rcu(entry, list, list, 123 + srcu_read_lock_held(&tomoyo_ss)) { 124 124 if (entry->is_deleted == TOMOYO_GC_IN_PROGRESS) 125 125 continue; 126 126 if (!tomoyo_same_acl_head(entry, new_entry) || ··· 168 166 u16 i = 0; 169 167 170 168 retry: 171 - list_for_each_entry_rcu(ptr, list, list) { 169 + list_for_each_entry_rcu(ptr, list, list, 170 + srcu_read_lock_held(&tomoyo_ss)) { 172 171 if (ptr->is_deleted || ptr->type != r->param_type) 173 172 continue; 174 173 if (!check_entry(r, ptr)) ··· 301 298 { 302 299 const struct tomoyo_transition_control *ptr; 303 300 304 - list_for_each_entry_rcu(ptr, list, head.list) { 301 + list_for_each_entry_rcu(ptr, list, head.list, 302 + srcu_read_lock_held(&tomoyo_ss)) { 305 303 if (ptr->head.is_deleted || ptr->type != type) 306 304 continue; 307 305 if (ptr->domainname) { ··· 739 735 740 736 /* Check 'aggregator' directive. */ 741 737 candidate = &exename; 742 - list_for_each_entry_rcu(ptr, list, head.list) { 738 + list_for_each_entry_rcu(ptr, list, head.list, 739 + srcu_read_lock_held(&tomoyo_ss)) { 743 740 if (ptr->head.is_deleted || 744 741 !tomoyo_path_matches_pattern(&exename, 745 742 ptr->original_name))
+6 -3
security/tomoyo/group.c
··· 133 133 { 134 134 struct tomoyo_path_group *member; 135 135 136 - list_for_each_entry_rcu(member, &group->member_list, head.list) { 136 + list_for_each_entry_rcu(member, &group->member_list, head.list, 137 + srcu_read_lock_held(&tomoyo_ss)) { 137 138 if (member->head.is_deleted) 138 139 continue; 139 140 if (!tomoyo_path_matches_pattern(pathname, member->member_name)) ··· 162 161 struct tomoyo_number_group *member; 163 162 bool matched = false; 164 163 165 - list_for_each_entry_rcu(member, &group->member_list, head.list) { 164 + list_for_each_entry_rcu(member, &group->member_list, head.list, 165 + srcu_read_lock_held(&tomoyo_ss)) { 166 166 if (member->head.is_deleted) 167 167 continue; 168 168 if (min > member->number.values[1] || ··· 193 191 bool matched = false; 194 192 const u8 size = is_ipv6 ? 16 : 4; 195 193 196 - list_for_each_entry_rcu(member, &group->member_list, head.list) { 194 + list_for_each_entry_rcu(member, &group->member_list, head.list, 195 + srcu_read_lock_held(&tomoyo_ss)) { 197 196 if (member->head.is_deleted) 198 197 continue; 199 198 if (member->address.is_ipv6 != is_ipv6)
+1 -31
security/tomoyo/realpath.c
··· 218 218 } 219 219 220 220 /** 221 - * tomoyo_get_socket_name - Get the name of a socket. 222 - * 223 - * @path: Pointer to "struct path". 224 - * @buffer: Pointer to buffer to return value in. 225 - * @buflen: Sizeof @buffer. 226 - * 227 - * Returns the buffer. 228 - */ 229 - static char *tomoyo_get_socket_name(const struct path *path, char * const buffer, 230 - const int buflen) 231 - { 232 - struct inode *inode = d_backing_inode(path->dentry); 233 - struct socket *sock = inode ? SOCKET_I(inode) : NULL; 234 - struct sock *sk = sock ? sock->sk : NULL; 235 - 236 - if (sk) { 237 - snprintf(buffer, buflen, "socket:[family=%u:type=%u:protocol=%u]", 238 - sk->sk_family, sk->sk_type, sk->sk_protocol); 239 - } else { 240 - snprintf(buffer, buflen, "socket:[unknown]"); 241 - } 242 - return buffer; 243 - } 244 - 245 - /** 246 221 * tomoyo_realpath_from_path - Returns realpath(3) of the given pathname but ignores chroot'ed root. 247 222 * 248 223 * @path: Pointer to "struct path". ··· 254 279 break; 255 280 /* To make sure that pos is '\0' terminated. */ 256 281 buf[buf_len - 1] = '\0'; 257 - /* Get better name for socket. */ 258 - if (sb->s_magic == SOCKFS_MAGIC) { 259 - pos = tomoyo_get_socket_name(path, buf, buf_len - 1); 260 - goto encode; 261 - } 262 - /* For "pipe:[\$]". */ 282 + /* For "pipe:[\$]" and "socket:[\$]". */ 263 283 if (dentry->d_op && dentry->d_op->d_dname) { 264 284 pos = dentry->d_op->d_dname(dentry, buf, buf_len - 1); 265 285 goto encode;
+4 -2
security/tomoyo/util.c
··· 594 594 595 595 name.name = domainname; 596 596 tomoyo_fill_path_info(&name); 597 - list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { 597 + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list, 598 + srcu_read_lock_held(&tomoyo_ss)) { 598 599 if (!domain->is_deleted && 599 600 !tomoyo_pathcmp(&name, domain->domainname)) 600 601 return domain; ··· 1029 1028 return false; 1030 1029 if (!domain) 1031 1030 return true; 1032 - list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { 1031 + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list, 1032 + srcu_read_lock_held(&tomoyo_ss)) { 1033 1033 u16 perm; 1034 1034 u8 i; 1035 1035
-1
sound/hda/hdac_regmap.c
··· 363 363 .reg_write = hda_reg_write, 364 364 .use_single_read = true, 365 365 .use_single_write = true, 366 - .disable_locking = true, 367 366 }; 368 367 369 368 /**
+16 -5
sound/pci/hda/hda_intel.c
··· 282 282 283 283 /* quirks for old Intel chipsets */ 284 284 #define AZX_DCAPS_INTEL_ICH \ 285 - (AZX_DCAPS_OLD_SSYNC | AZX_DCAPS_NO_ALIGN_BUFSIZE) 285 + (AZX_DCAPS_OLD_SSYNC | AZX_DCAPS_NO_ALIGN_BUFSIZE |\ 286 + AZX_DCAPS_SYNC_WRITE) 286 287 287 288 /* quirks for Intel PCH */ 288 289 #define AZX_DCAPS_INTEL_PCH_BASE \ 289 290 (AZX_DCAPS_NO_ALIGN_BUFSIZE | AZX_DCAPS_COUNT_LPIB_DELAY |\ 290 - AZX_DCAPS_SNOOP_TYPE(SCH)) 291 + AZX_DCAPS_SNOOP_TYPE(SCH) | AZX_DCAPS_SYNC_WRITE) 291 292 292 293 /* PCH up to IVB; no runtime PM; bind with i915 gfx */ 293 294 #define AZX_DCAPS_INTEL_PCH_NOPM \ ··· 303 302 #define AZX_DCAPS_INTEL_HASWELL \ 304 303 (/*AZX_DCAPS_ALIGN_BUFSIZE |*/ AZX_DCAPS_COUNT_LPIB_DELAY |\ 305 304 AZX_DCAPS_PM_RUNTIME | AZX_DCAPS_I915_COMPONENT |\ 306 - AZX_DCAPS_SNOOP_TYPE(SCH)) 305 + AZX_DCAPS_SNOOP_TYPE(SCH) | AZX_DCAPS_SYNC_WRITE) 307 306 308 307 /* Broadwell HDMI can't use position buffer reliably, force to use LPIB */ 309 308 #define AZX_DCAPS_INTEL_BROADWELL \ 310 309 (/*AZX_DCAPS_ALIGN_BUFSIZE |*/ AZX_DCAPS_POSFIX_LPIB |\ 311 310 AZX_DCAPS_PM_RUNTIME | AZX_DCAPS_I915_COMPONENT |\ 312 - AZX_DCAPS_SNOOP_TYPE(SCH)) 311 + AZX_DCAPS_SNOOP_TYPE(SCH) | AZX_DCAPS_SYNC_WRITE) 313 312 314 313 #define AZX_DCAPS_INTEL_BAYTRAIL \ 315 314 (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_I915_COMPONENT) ··· 1411 1410 acpi_handle dhandle, atpx_handle; 1412 1411 acpi_status status; 1413 1412 1414 - while ((pdev = pci_get_class(PCI_BASE_CLASS_DISPLAY << 16, pdev)) != NULL) { 1413 + while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 1414 + dhandle = ACPI_HANDLE(&pdev->dev); 1415 + if (dhandle) { 1416 + status = acpi_get_handle(dhandle, "ATPX", &atpx_handle); 1417 + if (!ACPI_FAILURE(status)) { 1418 + pci_dev_put(pdev); 1419 + return true; 1420 + } 1421 + } 1422 + } 1423 + while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { 1415 1424 dhandle = ACPI_HANDLE(&pdev->dev); 1416 1425 if (dhandle) { 1417 1426 status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);
+38 -15
sound/pci/hda/patch_realtek.c
··· 412 412 case 0x10ec0672: 413 413 alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */ 414 414 break; 415 + case 0x10ec0222: 415 416 case 0x10ec0623: 416 417 alc_update_coef_idx(codec, 0x19, 1<<13, 0); 417 418 break; ··· 431 430 break; 432 431 case 0x10ec0899: 433 432 case 0x10ec0900: 433 + case 0x10ec0b00: 434 434 case 0x10ec1168: 435 435 case 0x10ec1220: 436 436 alc_update_coef_idx(codec, 0x7, 1<<1, 0); ··· 503 501 struct alc_spec *spec = codec->spec; 504 502 505 503 switch (codec->core.vendor_id) { 504 + case 0x10ec0283: 506 505 case 0x10ec0286: 507 506 case 0x10ec0288: 508 507 case 0x10ec0298: ··· 2528 2525 case 0x10ec0882: 2529 2526 case 0x10ec0885: 2530 2527 case 0x10ec0900: 2528 + case 0x10ec0b00: 2531 2529 case 0x10ec1220: 2532 2530 break; 2533 2531 default: ··· 5908 5904 ALC256_FIXUP_ASUS_HEADSET_MIC, 5909 5905 ALC256_FIXUP_ASUS_MIC_NO_PRESENCE, 5910 5906 ALC299_FIXUP_PREDATOR_SPK, 5911 - ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC, 5912 5907 ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, 5913 - ALC294_FIXUP_ASUS_INTSPK_GPIO, 5908 + ALC289_FIXUP_DELL_SPK2, 5909 + ALC289_FIXUP_DUAL_SPK, 5910 + ALC294_FIXUP_SPK2_TO_DAC1, 5911 + ALC294_FIXUP_ASUS_DUAL_SPK, 5912 + 5914 5913 }; 5915 5914 5916 5915 static const struct hda_fixup alc269_fixups[] = { ··· 6988 6981 { } 6989 6982 } 6990 6983 }, 6991 - [ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC] = { 6992 - .type = HDA_FIXUP_PINS, 6993 - .v.pins = (const struct hda_pintbl[]) { 6994 - { 0x14, 0x411111f0 }, /* disable confusing internal speaker */ 6995 - { 0x19, 0x04a11150 }, /* use as headset mic, without its own jack detect */ 6996 - { } 6997 - }, 6998 - .chained = true, 6999 - .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 7000 - }, 7001 6984 [ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE] = { 7002 6985 .type = HDA_FIXUP_PINS, 7003 6986 .v.pins = (const struct hda_pintbl[]) { ··· 6998 7001 .chained = true, 6999 7002 .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE 7000 7003 }, 7001 - [ALC294_FIXUP_ASUS_INTSPK_GPIO] = { 7004 + [ALC289_FIXUP_DELL_SPK2] = { 7005 + .type = HDA_FIXUP_PINS, 7006 + .v.pins = (const struct hda_pintbl[]) { 7007 + { 0x17, 0x90170130 }, /* bass spk */ 7008 + { } 7009 + }, 7010 + .chained = true, 7011 + .chain_id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE 7012 + }, 7013 + [ALC289_FIXUP_DUAL_SPK] = { 7014 + .type = HDA_FIXUP_FUNC, 7015 + .v.func = alc285_fixup_speaker2_to_dac1, 7016 + .chained = true, 7017 + .chain_id = ALC289_FIXUP_DELL_SPK2 7018 + }, 7019 + [ALC294_FIXUP_SPK2_TO_DAC1] = { 7020 + .type = HDA_FIXUP_FUNC, 7021 + .v.func = alc285_fixup_speaker2_to_dac1, 7022 + .chained = true, 7023 + .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC 7024 + }, 7025 + [ALC294_FIXUP_ASUS_DUAL_SPK] = { 7002 7026 .type = HDA_FIXUP_FUNC, 7003 7027 /* The GPIO must be pulled to initialize the AMP */ 7004 7028 .v.func = alc_fixup_gpio4, 7005 7029 .chained = true, 7006 - .chain_id = ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC 7030 + .chain_id = ALC294_FIXUP_SPK2_TO_DAC1 7007 7031 }, 7032 + 7008 7033 }; 7009 7034 7010 7035 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 7099 7080 SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE), 7100 7081 SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE), 7101 7082 SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), 7083 + SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK), 7084 + SND_PCI_QUIRK(0x1028, 0x097d, "Dell Precision", ALC289_FIXUP_DUAL_SPK), 7102 7085 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 7103 7086 SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 7104 7087 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), ··· 7188 7167 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK), 7189 7168 SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A), 7190 7169 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), 7191 - SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_INTSPK_GPIO), 7170 + SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK), 7192 7171 SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC), 7193 7172 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 7194 7173 SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC), ··· 7260 7239 SND_PCI_QUIRK(0x17aa, 0x224c, "Thinkpad", ALC298_FIXUP_TPT470_DOCK), 7261 7240 SND_PCI_QUIRK(0x17aa, 0x224d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK), 7262 7241 SND_PCI_QUIRK(0x17aa, 0x225d, "Thinkpad T480", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 7242 + SND_PCI_QUIRK(0x17aa, 0x2292, "Thinkpad X1 Yoga 7th", ALC285_FIXUP_SPEAKER2_TO_DAC1), 7263 7243 SND_PCI_QUIRK(0x17aa, 0x2293, "Thinkpad X1 Carbon 7th", ALC285_FIXUP_SPEAKER2_TO_DAC1), 7264 7244 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 7265 7245 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), ··· 9259 9237 HDA_CODEC_ENTRY(0x10ec0892, "ALC892", patch_alc662), 9260 9238 HDA_CODEC_ENTRY(0x10ec0899, "ALC898", patch_alc882), 9261 9239 HDA_CODEC_ENTRY(0x10ec0900, "ALC1150", patch_alc882), 9240 + HDA_CODEC_ENTRY(0x10ec0b00, "ALCS1200A", patch_alc882), 9262 9241 HDA_CODEC_ENTRY(0x10ec1168, "ALC1220", patch_alc882), 9263 9242 HDA_CODEC_ENTRY(0x10ec1220, "ALC1220", patch_alc882), 9264 9243 {} /* terminator */
+6 -3
sound/pci/ice1712/ice1724.c
··· 647 647 unsigned long flags; 648 648 unsigned char mclk_change; 649 649 unsigned int i, old_rate; 650 + bool call_set_rate = false; 650 651 651 652 if (rate > ice->hw_rates->list[ice->hw_rates->count - 1]) 652 653 return -EINVAL; ··· 671 670 * setting clock rate for internal clock mode */ 672 671 old_rate = ice->get_rate(ice); 673 672 if (force || (old_rate != rate)) 674 - ice->set_rate(ice, rate); 673 + call_set_rate = true; 675 674 else if (rate == ice->cur_rate) { 676 675 spin_unlock_irqrestore(&ice->reg_lock, flags); 677 676 return 0; ··· 679 678 } 680 679 681 680 ice->cur_rate = rate; 681 + spin_unlock_irqrestore(&ice->reg_lock, flags); 682 + 683 + if (call_set_rate) 684 + ice->set_rate(ice, rate); 682 685 683 686 /* setting master clock */ 684 687 mclk_change = ice->set_mclk(ice, rate); 685 - 686 - spin_unlock_irqrestore(&ice->reg_lock, flags); 687 688 688 689 if (mclk_change && ice->gpio.i2s_mclk_changed) 689 690 ice->gpio.i2s_mclk_changed(ice);
+8 -1
sound/soc/fsl/fsl_audmix.c
··· 505 505 ARRAY_SIZE(fsl_audmix_dai)); 506 506 if (ret) { 507 507 dev_err(dev, "failed to register ASoC DAI\n"); 508 - return ret; 508 + goto err_disable_pm; 509 509 } 510 510 511 511 priv->pdev = platform_device_register_data(dev, mdrv, 0, NULL, 0); 512 512 if (IS_ERR(priv->pdev)) { 513 513 ret = PTR_ERR(priv->pdev); 514 514 dev_err(dev, "failed to register platform %s: %d\n", mdrv, ret); 515 + goto err_disable_pm; 515 516 } 516 517 518 + return 0; 519 + 520 + err_disable_pm: 521 + pm_runtime_disable(dev); 517 522 return ret; 518 523 } 519 524 520 525 static int fsl_audmix_remove(struct platform_device *pdev) 521 526 { 522 527 struct fsl_audmix *priv = dev_get_drvdata(&pdev->dev); 528 + 529 + pm_runtime_disable(&pdev->dev); 523 530 524 531 if (priv->pdev) 525 532 platform_device_unregister(priv->pdev);
-1
sound/soc/intel/boards/cml_rt1011_rt5682.c
··· 11 11 #include <linux/clk.h> 12 12 #include <linux/dmi.h> 13 13 #include <linux/slab.h> 14 - #include <asm/cpu_device_id.h> 15 14 #include <linux/acpi.h> 16 15 #include <sound/core.h> 17 16 #include <sound/jack.h>
+8 -6
sound/soc/soc-core.c
··· 479 479 goto free_rtd; 480 480 481 481 rtd->dev = dev; 482 + INIT_LIST_HEAD(&rtd->list); 483 + INIT_LIST_HEAD(&rtd->component_list); 484 + INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients); 485 + INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients); 486 + INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients); 487 + INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients); 482 488 dev_set_drvdata(dev, rtd); 483 489 INIT_DELAYED_WORK(&rtd->delayed_work, close_delayed_work); 484 490 ··· 500 494 /* 501 495 * rtd remaining settings 502 496 */ 503 - INIT_LIST_HEAD(&rtd->component_list); 504 - INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients); 505 - INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients); 506 - INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients); 507 - INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients); 508 - 509 497 rtd->card = card; 510 498 rtd->dai_link = dai_link; 511 499 if (!rtd->dai_link->ops) ··· 1871 1871 1872 1872 /* convert non BE into BE */ 1873 1873 dai_link->no_pcm = 1; 1874 + dai_link->dpcm_playback = 1; 1875 + dai_link->dpcm_capture = 1; 1874 1876 1875 1877 /* override any BE fixups */ 1876 1878 dai_link->be_hw_params_fixup =
+3 -3
sound/soc/soc-topology.c
··· 548 548 if (dobj->ops && dobj->ops->link_unload) 549 549 dobj->ops->link_unload(comp, dobj); 550 550 551 + list_del(&dobj->list); 552 + snd_soc_remove_dai_link(comp->card, link); 553 + 551 554 kfree(link->name); 552 555 kfree(link->stream_name); 553 556 kfree(link->cpus->dai_name); 554 - 555 - list_del(&dobj->list); 556 - snd_soc_remove_dai_link(comp->card, link); 557 557 kfree(link); 558 558 } 559 559
+4 -1
sound/soc/sof/imx/imx8.c
··· 209 209 210 210 priv->pd_dev = devm_kmalloc_array(&pdev->dev, priv->num_domains, 211 211 sizeof(*priv->pd_dev), GFP_KERNEL); 212 - if (!priv) 212 + if (!priv->pd_dev) 213 213 return -ENOMEM; 214 214 215 215 priv->link = devm_kmalloc_array(&pdev->dev, priv->num_domains, ··· 303 303 goto exit_pdev_unregister; 304 304 } 305 305 sdev->mailbox_bar = SOF_FW_BLK_TYPE_SRAM; 306 + 307 + /* set default mailbox offset for FW ready message */ 308 + sdev->dsp_box.offset = MBOX_OFFSET; 306 309 307 310 return 0; 308 311
+9 -2
sound/soc/sof/intel/hda-dai.c
··· 216 216 link_dev = hda_link_stream_assign(bus, substream); 217 217 if (!link_dev) 218 218 return -EBUSY; 219 + 220 + snd_soc_dai_set_dma_data(dai, substream, (void *)link_dev); 219 221 } 220 222 221 223 stream_tag = hdac_stream(link_dev)->stream_tag; ··· 229 227 substream->stream); 230 228 if (ret < 0) 231 229 return ret; 232 - 233 - snd_soc_dai_set_dma_data(dai, substream, (void *)link_dev); 234 230 235 231 link = snd_hdac_ext_bus_get_link(bus, codec_dai->component->name); 236 232 if (!link) ··· 361 361 bus = hstream->bus; 362 362 rtd = snd_pcm_substream_chip(substream); 363 363 link_dev = snd_soc_dai_get_dma_data(dai, substream); 364 + 365 + if (!link_dev) { 366 + dev_dbg(dai->dev, 367 + "%s: link_dev is not assigned\n", __func__); 368 + return -EINVAL; 369 + } 370 + 364 371 hda_stream = hstream_to_sof_hda_stream(link_dev); 365 372 366 373 /* free the link DMA channel in the FW */
+3
sound/soc/sof/ipc.c
··· 826 826 { 827 827 struct snd_sof_ipc *ipc = sdev->ipc; 828 828 829 + if (!ipc) 830 + return; 831 + 829 832 /* disable sending of ipc's */ 830 833 mutex_lock(&ipc->tx_mutex); 831 834 ipc->disable_ipc_tx = true;
+26 -14
sound/soc/stm/stm32_spdifrx.c
··· 12 12 #include <linux/delay.h> 13 13 #include <linux/module.h> 14 14 #include <linux/of_platform.h> 15 - #include <linux/pinctrl/consumer.h> 16 15 #include <linux/regmap.h> 17 16 #include <linux/reset.h> 18 17 ··· 219 220 * @slave_config: dma slave channel runtime config pointer 220 221 * @phys_addr: SPDIFRX registers physical base address 221 222 * @lock: synchronization enabling lock 223 + * @irq_lock: prevent race condition with IRQ on stream state 222 224 * @cs: channel status buffer 223 225 * @ub: user data buffer 224 226 * @irq: SPDIFRX interrupt line ··· 240 240 struct dma_slave_config slave_config; 241 241 dma_addr_t phys_addr; 242 242 spinlock_t lock; /* Sync enabling lock */ 243 + spinlock_t irq_lock; /* Prevent race condition on stream state */ 243 244 unsigned char cs[SPDIFRX_CS_BYTES_NB]; 244 245 unsigned char ub[SPDIFRX_UB_BYTES_NB]; 245 246 int irq; ··· 321 320 static int stm32_spdifrx_start_sync(struct stm32_spdifrx_data *spdifrx) 322 321 { 323 322 int cr, cr_mask, imr, ret; 323 + unsigned long flags; 324 324 325 325 /* Enable IRQs */ 326 326 imr = SPDIFRX_IMR_IFEIE | SPDIFRX_IMR_SYNCDIE | SPDIFRX_IMR_PERRIE; ··· 329 327 if (ret) 330 328 return ret; 331 329 332 - spin_lock(&spdifrx->lock); 330 + spin_lock_irqsave(&spdifrx->lock, flags); 333 331 334 332 spdifrx->refcount++; 335 333 ··· 364 362 "Failed to start synchronization\n"); 365 363 } 366 364 367 - spin_unlock(&spdifrx->lock); 365 + spin_unlock_irqrestore(&spdifrx->lock, flags); 368 366 369 367 return ret; 370 368 } ··· 372 370 static void stm32_spdifrx_stop(struct stm32_spdifrx_data *spdifrx) 373 371 { 374 372 int cr, cr_mask, reg; 373 + unsigned long flags; 375 374 376 - spin_lock(&spdifrx->lock); 375 + spin_lock_irqsave(&spdifrx->lock, flags); 377 376 378 377 if (--spdifrx->refcount) { 379 - spin_unlock(&spdifrx->lock); 378 + spin_unlock_irqrestore(&spdifrx->lock, flags); 380 379 return; 381 380 } 382 381 ··· 396 393 regmap_read(spdifrx->regmap, STM32_SPDIFRX_DR, &reg); 397 394 regmap_read(spdifrx->regmap, STM32_SPDIFRX_CSR, &reg); 398 395 399 - spin_unlock(&spdifrx->lock); 396 + spin_unlock_irqrestore(&spdifrx->lock, flags); 400 397 401 398 } 402 399 403 400 static int stm32_spdifrx_dma_ctrl_register(struct device *dev, ··· 483 480 memset(spdifrx->cs, 0, SPDIFRX_CS_BYTES_NB); 484 481 memset(spdifrx->ub, 0, SPDIFRX_UB_BYTES_NB); 485 482 486 - pinctrl_pm_select_default_state(&spdifrx->pdev->dev); 487 - 488 483 ret = stm32_spdifrx_dma_ctrl_start(spdifrx); 489 484 if (ret < 0) 490 485 return ret; ··· 514 513 515 514 end: 516 515 clk_disable_unprepare(spdifrx->kclk); 517 - pinctrl_pm_select_sleep_state(&spdifrx->pdev->dev); 518 516 519 517 return ret; 520 518 } ··· 665 665 static irqreturn_t stm32_spdifrx_isr(int irq, void *devid) 666 666 { 667 667 struct stm32_spdifrx_data *spdifrx = (struct stm32_spdifrx_data *)devid; 668 - struct snd_pcm_substream *substream = spdifrx->substream; 669 668 struct platform_device *pdev = spdifrx->pdev; 670 669 unsigned int cr, mask, sr, imr; 671 670 unsigned int flags, sync_state; ··· 744 745 return IRQ_HANDLED; 745 746 } 746 747 747 - if (substream) 748 - snd_pcm_stop(substream, SNDRV_PCM_STATE_DISCONNECTED); 748 + spin_lock(&spdifrx->irq_lock); 749 + if (spdifrx->substream) 750 + snd_pcm_stop(spdifrx->substream, 751 + SNDRV_PCM_STATE_DISCONNECTED); 752 + spin_unlock(&spdifrx->irq_lock); 749 753 750 754 return IRQ_HANDLED; 751 755 } 752 756 753 - if (err_xrun && substream) 754 - snd_pcm_stop_xrun(substream); 757 + spin_lock(&spdifrx->irq_lock); 758 + if (err_xrun && spdifrx->substream) 759 + snd_pcm_stop_xrun(spdifrx->substream); 760 + spin_unlock(&spdifrx->irq_lock); 755 761 756 762 return IRQ_HANDLED; 757 763 } ··· 765 761 struct snd_soc_dai *cpu_dai) 766 762 { 767 763 struct stm32_spdifrx_data *spdifrx = snd_soc_dai_get_drvdata(cpu_dai); 764 + unsigned long flags; 768 765 int ret; 769 766 767 + spin_lock_irqsave(&spdifrx->irq_lock, flags); 770 768 spdifrx->substream = substream; 769 + spin_unlock_irqrestore(&spdifrx->irq_lock, flags); 771 770 772 771 ret = clk_prepare_enable(spdifrx->kclk); 773 772 if (ret) ··· 846 839 struct snd_soc_dai *cpu_dai) 847 840 { 848 841 struct stm32_spdifrx_data *spdifrx = snd_soc_dai_get_drvdata(cpu_dai); 842 + unsigned long flags; 849 843 844 + spin_lock_irqsave(&spdifrx->irq_lock, flags); 850 845 spdifrx->substream = NULL; 846 + spin_unlock_irqrestore(&spdifrx->irq_lock, flags); 847 + 851 848 clk_disable_unprepare(spdifrx->kclk); 852 849 } 853 850 ··· 955 944 spdifrx->pdev = pdev; 956 945 init_completion(&spdifrx->cs_completion); 957 946 spin_lock_init(&spdifrx->lock); 947 + spin_lock_init(&spdifrx->irq_lock); 958 948 959 949 platform_set_drvdata(pdev, spdifrx); 960 950
+1
sound/usb/card.h
··· 145 145 struct snd_usb_endpoint *sync_endpoint; 146 146 unsigned long flags; 147 147 bool need_setup_ep; /* (re)configure EP at prepare? */ 148 + bool need_setup_fmt; /* (re)configure fmt after resume? */ 148 149 unsigned int speed; /* USB_SPEED_XXX */ 149 150 150 151 u64 formats; /* format bitmasks (all or'ed) */
+21 -4
sound/usb/pcm.c
··· 506 506 if (WARN_ON(!iface)) 507 507 return -EINVAL; 508 508 alts = usb_altnum_to_altsetting(iface, fmt->altsetting); 509 - altsd = get_iface_desc(alts); 510 - if (WARN_ON(altsd->bAlternateSetting != fmt->altsetting)) 509 + if (WARN_ON(!alts)) 511 510 return -EINVAL; 511 + altsd = get_iface_desc(alts); 512 512 513 - if (fmt == subs->cur_audiofmt) 513 + if (fmt == subs->cur_audiofmt && !subs->need_setup_fmt) 514 514 return 0; 515 515 516 516 /* close the old interface */ 517 - if (subs->interface >= 0 && subs->interface != fmt->iface) { 517 + if (subs->interface >= 0 && (subs->interface != fmt->iface || subs->need_setup_fmt)) { 518 518 if (!subs->stream->chip->keep_iface) { 519 519 err = usb_set_interface(subs->dev, subs->interface, 0); 520 520 if (err < 0) { ··· 527 527 subs->interface = -1; 528 528 subs->altset_idx = 0; 529 529 } 530 + 531 + if (subs->need_setup_fmt) 532 + subs->need_setup_fmt = false; 530 533 531 534 /* set interface */ 532 535 if (iface->cur_altsetting != alts) { ··· 1731 1728 subs->data_endpoint->retire_data_urb = retire_playback_urb; 1732 1729 subs->running = 0; 1733 1730 return 0; 1731 + case SNDRV_PCM_TRIGGER_SUSPEND: 1732 + if (subs->stream->chip->setup_fmt_after_resume_quirk) { 1733 + stop_endpoints(subs, true); 1734 + subs->need_setup_fmt = true; 1735 + return 0; 1736 + } 1737 + break; 1734 1738 } 1735 1739 1736 1740 return -EINVAL; ··· 1770 1760 subs->data_endpoint->retire_data_urb = retire_capture_urb; 1771 1761 subs->running = 1; 1772 1762 return 0; 1763 + case SNDRV_PCM_TRIGGER_SUSPEND: 1764 + if (subs->stream->chip->setup_fmt_after_resume_quirk) { 1765 + stop_endpoints(subs, true); 1766 + subs->need_setup_fmt = true; 1767 + return 0; 1768 + } 1769 + break; 1773 1770 } 1774 1771 1775 1772 return -EINVAL;
+2 -1
sound/usb/quirks-table.h
··· 3466 3466 .vendor_name = "Dell", 3467 3467 .product_name = "WD19 Dock", 3468 3468 .profile_name = "Dell-WD15-Dock", 3469 - .ifnum = QUIRK_NO_INTERFACE 3469 + .ifnum = QUIRK_ANY_INTERFACE, 3470 + .type = QUIRK_SETUP_FMT_AFTER_RESUME 3470 3471 } 3471 3472 }, 3472 3473 /* MOTU Microbook II */
+12
sound/usb/quirks.c
··· 508 508 return snd_usb_create_mixer(chip, quirk->ifnum, 0); 509 509 } 510 510 511 + 512 + static int setup_fmt_after_resume_quirk(struct snd_usb_audio *chip, 513 + struct usb_interface *iface, 514 + struct usb_driver *driver, 515 + const struct snd_usb_audio_quirk *quirk) 516 + { 517 + chip->setup_fmt_after_resume_quirk = 1; 518 + return 1; /* Continue with creating streams and mixer */ 519 + } 520 + 511 521 /* 512 522 * audio-interface quirks 513 523 * ··· 556 546 [QUIRK_AUDIO_EDIROL_UAXX] = create_uaxx_quirk, 557 547 [QUIRK_AUDIO_ALIGN_TRANSFER] = create_align_transfer_quirk, 558 548 [QUIRK_AUDIO_STANDARD_MIXER] = create_standard_mixer_quirk, 549 + [QUIRK_SETUP_FMT_AFTER_RESUME] = setup_fmt_after_resume_quirk, 559 550 }; 560 551 561 552 if (quirk->type < QUIRK_TYPE_COUNT) { ··· 1397 1386 case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */ 1398 1387 case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */ 1399 1388 case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */ 1389 + case USB_ID(0x05a7, 0x1020): /* Bose Companion 5 */ 1400 1390 case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */ 1401 1391 case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */ 1402 1392 case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */
+2 -1
sound/usb/usbaudio.h
··· 33 33 wait_queue_head_t shutdown_wait; 34 34 unsigned int txfr_quirk:1; /* Subframe boundaries on transfers */ 35 35 unsigned int tx_length_quirk:1; /* Put length specifier in transfers */ 36 - 36 + unsigned int setup_fmt_after_resume_quirk:1; /* setup the format to interface after resume */ 37 37 int num_interfaces; 38 38 int num_suspended_intf; 39 39 int sample_rate_read_error; ··· 98 98 QUIRK_AUDIO_EDIROL_UAXX, 99 99 QUIRK_AUDIO_ALIGN_TRANSFER, 100 100 QUIRK_AUDIO_STANDARD_MIXER, 101 + QUIRK_SETUP_FMT_AFTER_RESUME, 101 102 102 103 QUIRK_TYPE_COUNT 103 104 };
+8 -7
tools/lib/bpf/Makefile
··· 138 138 BPF_IN_SHARED := $(SHARED_OBJDIR)libbpf-in.o 139 139 BPF_IN_STATIC := $(STATIC_OBJDIR)libbpf-in.o 140 140 VERSION_SCRIPT := libbpf.map 141 + BPF_HELPER_DEFS := $(OUTPUT)bpf_helper_defs.h 141 142 142 143 LIB_TARGET := $(addprefix $(OUTPUT),$(LIB_TARGET)) 143 144 LIB_FILE := $(addprefix $(OUTPUT),$(LIB_FILE)) ··· 160 159 161 160 all_cmd: $(CMD_TARGETS) check 162 161 163 - $(BPF_IN_SHARED): force elfdep bpfdep bpf_helper_defs.h 162 + $(BPF_IN_SHARED): force elfdep bpfdep $(BPF_HELPER_DEFS) 164 163 @(test -f ../../include/uapi/linux/bpf.h -a -f ../../../include/uapi/linux/bpf.h && ( \ 165 164 (diff -B ../../include/uapi/linux/bpf.h ../../../include/uapi/linux/bpf.h >/dev/null) || \ 166 165 echo "Warning: Kernel ABI header at 'tools/include/uapi/linux/bpf.h' differs from latest version at 'include/uapi/linux/bpf.h'" >&2 )) || true ··· 178 177 echo "Warning: Kernel ABI header at 'tools/include/uapi/linux/if_xdp.h' differs from latest version at 'include/uapi/linux/if_xdp.h'" >&2 )) || true 179 178 $(Q)$(MAKE) $(build)=libbpf OUTPUT=$(SHARED_OBJDIR) CFLAGS="$(CFLAGS) $(SHLIB_FLAGS)" 180 179 181 - $(BPF_IN_STATIC): force elfdep bpfdep bpf_helper_defs.h 180 + $(BPF_IN_STATIC): force elfdep bpfdep $(BPF_HELPER_DEFS) 182 181 $(Q)$(MAKE) $(build)=libbpf OUTPUT=$(STATIC_OBJDIR) 183 182 184 - bpf_helper_defs.h: $(srctree)/tools/include/uapi/linux/bpf.h 183 + $(BPF_HELPER_DEFS): $(srctree)/tools/include/uapi/linux/bpf.h 185 184 $(Q)$(srctree)/scripts/bpf_helpers_doc.py --header \ 186 - --file $(srctree)/tools/include/uapi/linux/bpf.h > bpf_helper_defs.h 185 + --file $(srctree)/tools/include/uapi/linux/bpf.h > $(BPF_HELPER_DEFS) 187 186 188 187 $(OUTPUT)libbpf.so: $(OUTPUT)libbpf.so.$(LIBBPF_VERSION) 189 188 ··· 244 243 $(call do_install_mkdir,$(libdir_SQ)); \ 245 244 cp -fpR $(LIB_FILE) $(DESTDIR)$(libdir_SQ) 246 245 247 - install_headers: bpf_helper_defs.h 246 + install_headers: $(BPF_HELPER_DEFS) 248 247 $(call QUIET_INSTALL, headers) \ 249 248 $(call do_install,bpf.h,$(prefix)/include/bpf,644); \ 250 249 $(call do_install,libbpf.h,$(prefix)/include/bpf,644); \ ··· 252 251 $(call do_install,libbpf_util.h,$(prefix)/include/bpf,644); \ 253 252 $(call do_install,xsk.h,$(prefix)/include/bpf,644); \ 254 253 $(call do_install,bpf_helpers.h,$(prefix)/include/bpf,644); \ 255 - $(call do_install,bpf_helper_defs.h,$(prefix)/include/bpf,644); \ 254 + $(call do_install,$(BPF_HELPER_DEFS),$(prefix)/include/bpf,644); \ 256 255 $(call do_install,bpf_tracing.h,$(prefix)/include/bpf,644); \ 257 256 $(call do_install,bpf_endian.h,$(prefix)/include/bpf,644); \ 258 257 $(call do_install,bpf_core_read.h,$(prefix)/include/bpf,644); ··· 272 271 clean: 273 272 $(call QUIET_CLEAN, libbpf) $(RM) -rf $(CMD_TARGETS) \ 274 273 *.o *~ *.a *.so *.so.$(LIBBPF_MAJOR_VERSION) .*.d .*.cmd \ 275 - *.pc LIBBPF-CFLAGS bpf_helper_defs.h \ 274 + *.pc LIBBPF-CFLAGS $(BPF_HELPER_DEFS) \ 276 275 $(SHARED_OBJDIR) $(STATIC_OBJDIR) 277 276 $(call QUIET_CLEAN, core-gen) $(RM) $(OUTPUT)FEATURE-DUMP.libbpf 278 277
+11 -7
tools/testing/kunit/kunit.py
··· 31 31 TEST_FAILURE = auto() 32 32 33 33 def create_default_kunitconfig(): 34 - if not os.path.exists(kunit_kernel.KUNITCONFIG_PATH): 34 + if not os.path.exists(kunit_kernel.kunitconfig_path): 35 35 shutil.copyfile('arch/um/configs/kunit_defconfig', 36 - kunit_kernel.KUNITCONFIG_PATH) 36 + kunit_kernel.kunitconfig_path) 37 37 38 38 def run_tests(linux: kunit_kernel.LinuxSourceTree, 39 39 request: KunitRequest) -> KunitResult: 40 - if request.defconfig: 41 - create_default_kunitconfig() 42 - 43 40 config_start = time.time() 44 41 success = linux.build_reconfig(request.build_dir) 45 42 config_end = time.time() ··· 105 108 run_parser.add_argument('--build_dir', 106 109 help='As in the make command, it specifies the build ' 107 110 'directory.', 108 - type=str, default=None, metavar='build_dir') 111 + type=str, default='', metavar='build_dir') 109 112 110 113 run_parser.add_argument('--defconfig', 111 - help='Uses a default kunitconfig.', 114 + help='Uses a default .kunitconfig.', 112 115 action='store_true') 113 116 114 117 cli_args = parser.parse_args(argv) 115 118 116 119 if cli_args.subcommand == 'run': 120 + if cli_args.build_dir: 121 + if not os.path.exists(cli_args.build_dir): 122 + os.mkdir(cli_args.build_dir) 123 + kunit_kernel.kunitconfig_path = os.path.join( 124 + cli_args.build_dir, 125 + kunit_kernel.kunitconfig_path) 126 + 117 127 if cli_args.defconfig: 118 128 create_default_kunitconfig() 119 129
+5 -5
tools/testing/kunit/kunit_kernel.py
··· 14 14 import kunit_config 15 15 16 16 KCONFIG_PATH = '.config' 17 - KUNITCONFIG_PATH = 'kunitconfig' 17 + kunitconfig_path = '.kunitconfig' 18 18 19 19 class ConfigError(Exception): 20 20 """Represents an error trying to configure the Linux kernel.""" ··· 82 82 83 83 def __init__(self): 84 84 self._kconfig = kunit_config.Kconfig() 85 - self._kconfig.read_from_file(KUNITCONFIG_PATH) 85 + self._kconfig.read_from_file(kunitconfig_path) 86 86 self._ops = LinuxSourceTreeOperations() 87 87 88 88 def clean(self): ··· 111 111 return True 112 112 113 113 def build_reconfig(self, build_dir): 114 - """Creates a new .config if it is not a subset of the kunitconfig.""" 114 + """Creates a new .config if it is not a subset of the .kunitconfig.""" 115 115 kconfig_path = get_kconfig_path(build_dir) 116 116 if os.path.exists(kconfig_path): 117 117 existing_kconfig = kunit_config.Kconfig() ··· 140 140 return False 141 141 return True 142 142 143 - def run_kernel(self, args=[], timeout=None, build_dir=None): 143 + def run_kernel(self, args=[], timeout=None, build_dir=''): 144 144 args.extend(['mem=256M']) 145 145 process = self._ops.linux_bin(args, timeout, build_dir) 146 - with open('test.log', 'w') as f: 146 + with open(os.path.join(build_dir, 'test.log'), 'w') as f: 147 147 for line in process.stdout: 148 148 f.write(line.rstrip().decode('ascii') + '\n') 149 149 yield line.rstrip().decode('ascii')
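The kunit_kernel.py hunk above makes the config path a mutable module-level `kunitconfig_path` and joins `test.log` with `build_dir` (now defaulting to `''` rather than `None`). A minimal standalone sketch of that path handling; the `resolve_paths` helper is hypothetical, for illustration only:

```python
import os

KUNITCONFIG_NAME = '.kunitconfig'  # mirrors the new default of kunitconfig_path

def resolve_paths(build_dir=''):
    """With an empty build_dir, both files resolve to the current
    directory; with a build_dir set, they live inside it."""
    kunitconfig = os.path.join(build_dir, KUNITCONFIG_NAME)
    test_log = os.path.join(build_dir, 'test.log')
    return kunitconfig, test_log

print(resolve_paths())          # ('.kunitconfig', 'test.log')
print(resolve_paths('.kunit'))  # ('.kunit/.kunitconfig', '.kunit/test.log')
```

This is why `''` is a safer default than `None` here: `os.path.join('', name)` degrades gracefully to `name`, while joining with `None` raises a TypeError.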
+9 -1
tools/testing/kunit/kunit_tool_test.py
··· 174 174 kunit.main(['run'], self.linux_source_mock) 175 175 assert self.linux_source_mock.build_reconfig.call_count == 1 176 176 assert self.linux_source_mock.run_kernel.call_count == 1 177 + self.linux_source_mock.run_kernel.assert_called_once_with(build_dir='', timeout=300) 177 178 self.print_mock.assert_any_call(StrContains('Testing complete.')) 178 179 179 180 def test_run_passes_args_fail(self): ··· 200 199 timeout = 3453 201 200 kunit.main(['run', '--timeout', str(timeout)], self.linux_source_mock) 202 201 assert self.linux_source_mock.build_reconfig.call_count == 1 203 - self.linux_source_mock.run_kernel.assert_called_once_with(build_dir=None, timeout=timeout) 202 + self.linux_source_mock.run_kernel.assert_called_once_with(build_dir='', timeout=timeout) 203 + self.print_mock.assert_any_call(StrContains('Testing complete.')) 204 + 205 + def test_run_builddir(self): 206 + build_dir = '.kunit' 207 + kunit.main(['run', '--build_dir', build_dir], self.linux_source_mock) 208 + assert self.linux_source_mock.build_reconfig.call_count == 1 209 + self.linux_source_mock.run_kernel.assert_called_once_with(build_dir=build_dir, timeout=300) 204 210 self.print_mock.assert_any_call(StrContains('Testing complete.')) 205 211 206 212 if __name__ == '__main__':
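The new kunit_tool_test.py assertions lean on `unittest.mock` call verification. A self-contained sketch of the pattern, using a generic `runner` stand-in rather than the real kunit objects:

```python
from unittest import mock

# Stand-in for the mocked LinuxSourceTree used in the tests above.
runner = mock.Mock()

# Exercise the code under test (here, just call the mock directly).
runner.run_kernel(build_dir='.kunit', timeout=300)

# Passes only if run_kernel was called exactly once with exactly these
# keyword arguments; any mismatch raises AssertionError.
runner.run_kernel.assert_called_once_with(build_dir='.kunit', timeout=300)
```

Pinning keyword defaults this way is what lets the suite catch regressions like `build_dir` silently reverting from `''` to `None`.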
+1
tools/testing/selftests/bpf/.gitignore
··· 40 40 test_cpp 41 41 /no_alu32 42 42 /bpf_gcc 43 + bpf_helper_defs.h
+3 -3
tools/testing/selftests/bpf/Makefile
··· 120 120 $(BPFOBJ): force 121 121 $(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/ 122 122 123 - BPF_HELPERS := $(BPFDIR)/bpf_helper_defs.h $(wildcard $(BPFDIR)/bpf_*.h) 124 - $(BPFDIR)/bpf_helper_defs.h: 125 - $(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/ bpf_helper_defs.h 123 + BPF_HELPERS := $(OUTPUT)/bpf_helper_defs.h $(wildcard $(BPFDIR)/bpf_*.h) 124 + $(OUTPUT)/bpf_helper_defs.h: 125 + $(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/ $(OUTPUT)/bpf_helper_defs.h 126 126 127 127 # Get Clang's default includes on this system, as opposed to those seen by 128 128 # '-target bpf'. This fixes "missing" files on some architectures/distros,
+1 -1
tools/testing/selftests/filesystems/epoll/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 3 CFLAGS += -I../../../../../usr/include/ 4 - LDFLAGS += -lpthread 4 + LDLIBS += -lpthread 5 5 TEST_GEN_PROGS := epoll_wakeup_test 6 6 7 7 include ../../lib.mk
+6
tools/testing/selftests/firmware/fw_lib.sh
··· 34 34 35 35 check_mods() 36 36 { 37 + local uid=$(id -u) 38 + if [ $uid -ne 0 ]; then 39 + echo "skip all tests: must be run as root" >&2 40 + exit $ksft_skip 41 + fi 42 + 37 43 trap "test_modprobe" EXIT 38 44 if [ ! -d $DIR ]; then 39 45 modprobe test_firmware
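The root check added to fw_lib.sh above follows the common kselftest skip convention (exit code 4 means "skipped", not "failed"). A standalone sketch of the pattern; the `check_root` name and the use of `return` instead of `exit` are illustrative choices, not the selftest code itself:

```shell
#!/bin/sh
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4

# Return the skip code when not running as root; callers can then
# "check_root || exit $?" so the harness records SKIP, not FAIL.
check_root() {
    uid=$(id -u)
    if [ "$uid" -ne 0 ]; then
        echo "skip all tests: must be run as root" >&2
        return "$ksft_skip"
    fi
    return 0
}
```

Gating in setup rather than letting privileged operations fail mid-run is what keeps unprivileged CI runs green while still flagging the tests as unexecuted.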
+14 -1
tools/testing/selftests/livepatch/functions.sh
··· 7 7 MAX_RETRIES=600 8 8 RETRY_INTERVAL=".1" # seconds 9 9 10 + # Kselftest framework requirement - SKIP code is 4 11 + ksft_skip=4 12 + 10 13 # log(msg) - write message to kernel log 11 14 # msg - insightful words 12 15 function log() { ··· 21 18 function skip() { 22 19 log "SKIP: $1" 23 20 echo "SKIP: $1" >&2 24 - exit 4 21 + exit $ksft_skip 22 + } 23 + 24 + # root test 25 + function is_root() { 26 + uid=$(id -u) 27 + if [ $uid -ne 0 ]; then 28 + echo "skip all tests: must be run as root" >&2 29 + exit $ksft_skip 30 + fi 25 31 } 26 32 27 33 # die(msg) - game over, man ··· 74 62 # for verbose livepatching output and turn on 75 63 # the ftrace_enabled sysctl. 76 64 function setup_config() { 65 + is_root 77 66 push_config 78 67 set_dynamic_debug 79 68 set_ftrace_enabled 1
+1 -2
tools/testing/selftests/livepatch/test-state.sh
··· 8 8 MOD_LIVEPATCH2=test_klp_state2 9 9 MOD_LIVEPATCH3=test_klp_state3 10 10 11 - set_dynamic_debug 12 - 11 + setup_config 13 12 14 13 # TEST: Loading and removing a module that modifies the system state 15 14
+8
tools/testing/selftests/net/forwarding/loopback.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + # Kselftest framework requirement - SKIP code is 4. 5 + ksft_skip=4 6 + 4 7 ALL_TESTS="loopback_test" 5 8 NUM_NETIFS=2 6 9 source tc_common.sh ··· 75 72 76 73 h1_create 77 74 h2_create 75 + 76 + if ethtool -k $h1 | grep loopback | grep -q fixed; then 77 + log_test "SKIP: dev $h1 does not support loopback feature" 78 + exit $ksft_skip 79 + fi 78 80 } 79 81 80 82 cleanup()
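The new guard in loopback.sh keys off `ethtool -k` output, where a feature reported as `[fixed]` cannot be toggled by the driver. A sketch of that parsing against canned output (the feature list below is illustrative, not captured from a real device):

```shell
# Parse "ethtool -k"-style output: a loopback line tagged [fixed]
# means the device cannot enable the feature, so the test must skip
# rather than fail.
ksft_skip=4
features="rx-checksumming: on
loopback: off [fixed]
tx-checksumming: on"

if printf '%s\n' "$features" | grep loopback | grep -q fixed; then
	echo "SKIP: dev does not support loopback feature"
fi
```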
+34 -5
tools/testing/selftests/netfilter/nft_flowtable.sh
··· 226 226 return 0 227 227 } 228 228 229 - test_tcp_forwarding() 229 + test_tcp_forwarding_ip() 230 230 { 231 231 local nsa=$1 232 232 local nsb=$2 233 + local dstip=$3 234 + local dstport=$4 233 235 local lret=0 234 236 235 237 ip netns exec $nsb nc -w 5 -l -p 12345 < "$ns2in" > "$ns2out" & 236 238 lpid=$! 237 239 238 240 sleep 1 239 - ip netns exec $nsa nc -w 4 10.0.2.99 12345 < "$ns1in" > "$ns1out" & 241 + ip netns exec $nsa nc -w 4 "$dstip" "$dstport" < "$ns1in" > "$ns1out" & 240 242 cpid=$! 241 243 242 244 sleep 3 ··· 255 253 check_transfer "$ns2in" "$ns1out" "ns1 <- ns2" 256 254 if [ $? -ne 0 ];then 257 255 lret=1 256 + fi 257 + 258 + return $lret 259 + } 260 + 261 + test_tcp_forwarding() 262 + { 263 + test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 264 + 265 + return $? 266 + } 267 + 268 + test_tcp_forwarding_nat() 269 + { 270 + local lret 271 + 272 + test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 273 + lret=$? 274 + 275 + if [ $lret -eq 0 ] ; then 276 + test_tcp_forwarding_ip "$1" "$2" 10.6.6.6 1666 277 + lret=$? 258 278 fi 259 279 260 280 return $lret ··· 307 283 # Same, but with NAT enabled. 308 284 ip netns exec nsr1 nft -f - <<EOF 309 285 table ip nat { 286 + chain prerouting { 287 + type nat hook prerouting priority 0; policy accept; 288 + meta iif "veth0" ip daddr 10.6.6.6 tcp dport 1666 counter dnat ip to 10.0.2.99:12345 289 + } 290 + 310 291 chain postrouting { 311 292 type nat hook postrouting priority 0; policy accept; 312 - meta oifname "veth1" masquerade 293 + meta oifname "veth1" counter masquerade 313 294 } 314 295 } 315 296 EOF 316 297 317 - test_tcp_forwarding ns1 ns2 298 + test_tcp_forwarding_nat ns1 ns2 318 299 319 300 if [ $? -eq 0 ] ;then 320 301 echo "PASS: flow offloaded for ns1/ns2 with NAT" ··· 342 313 ip netns exec ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null 343 314 ip netns exec ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null 344 315 345 - test_tcp_forwarding ns1 ns2 316 + test_tcp_forwarding_nat ns1 ns2 346 317 if [ $? -eq 0 ] ;then 347 318 echo "PASS: flow offloaded for ns1/ns2 with NAT and pmtu discovery" 348 319 else
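The refactor above threads the destination through `test_tcp_forwarding_ip` so one transfer helper can hit both the direct 10.0.2.99:12345 endpoint and the DNAT'ed 10.6.6.6:1666 one. A dry-run sketch of that call structure (the real helpers drive `nc` across network namespaces; these stubs only report their target):

```shell
# Dry-run sketch of the parameterized forwarding helpers: the endpoint
# is an argument, and the NAT variant exercises the original address
# first, then the DNAT'ed one.
test_tcp_forwarding_ip() {
	echo "$1 -> $2 via $3:$4"
}

test_tcp_forwarding() {
	test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345
}

test_tcp_forwarding_nat() {
	test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 &&
	test_tcp_forwarding_ip "$1" "$2" 10.6.6.6 1666
}

test_tcp_forwarding_nat ns1 ns2
```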
+10 -8
tools/testing/selftests/rseq/param_test.c
··· 15 15 #include <errno.h> 16 16 #include <stddef.h> 17 17 18 - static inline pid_t gettid(void) 18 + static inline pid_t rseq_gettid(void) 19 19 { 20 20 return syscall(__NR_gettid); 21 21 } ··· 373 373 rseq_percpu_unlock(&data->lock, cpu); 374 374 #ifndef BENCHMARK 375 375 if (i != 0 && !(i % (reps / 10))) 376 - printf_verbose("tid %d: count %lld\n", (int) gettid(), i); 376 + printf_verbose("tid %d: count %lld\n", 377 + (int) rseq_gettid(), i); 377 378 #endif 378 379 } 379 380 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n", 380 - (int) gettid(), nr_abort, signals_delivered); 381 + (int) rseq_gettid(), nr_abort, signals_delivered); 381 382 if (!opt_disable_rseq && thread_data->reg && 382 383 rseq_unregister_current_thread()) 383 384 abort(); ··· 455 454 } while (rseq_unlikely(ret)); 456 455 #ifndef BENCHMARK 457 456 if (i != 0 && !(i % (reps / 10))) 458 - printf_verbose("tid %d: count %lld\n", (int) gettid(), i); 457 + printf_verbose("tid %d: count %lld\n", 458 + (int) rseq_gettid(), i); 459 459 #endif 460 460 } 461 461 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n", 462 - (int) gettid(), nr_abort, signals_delivered); 462 + (int) rseq_gettid(), nr_abort, signals_delivered); 463 463 if (!opt_disable_rseq && thread_data->reg && 464 464 rseq_unregister_current_thread()) 465 465 abort(); ··· 607 605 } 608 606 609 607 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n", 610 - (int) gettid(), nr_abort, signals_delivered); 608 + (int) rseq_gettid(), nr_abort, signals_delivered); 611 609 if (!opt_disable_rseq && rseq_unregister_current_thread()) 612 610 abort(); 613 611 ··· 798 796 } 799 797 800 798 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n", 801 - (int) gettid(), nr_abort, signals_delivered); 799 + (int) rseq_gettid(), nr_abort, signals_delivered); 802 800 if (!opt_disable_rseq && rseq_unregister_current_thread()) 803 801 abort(); 804 802 ··· 1013 1011 } 1014 
1012 1015 1013 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n", 1016 - (int) gettid(), nr_abort, signals_delivered); 1014 + (int) rseq_gettid(), nr_abort, signals_delivered); 1017 1015 if (!opt_disable_rseq && rseq_unregister_current_thread()) 1018 1016 abort(); 1019 1017
+7 -5
tools/testing/selftests/rseq/rseq.h
··· 149 149 /* 150 150 * rseq_prepare_unload() should be invoked by each thread executing a rseq 151 151 * critical section at least once between their last critical section and 152 - * library unload of the library defining the rseq critical section 153 - * (struct rseq_cs). This also applies to use of rseq in code generated by 154 - * JIT: rseq_prepare_unload() should be invoked at least once by each 155 - * thread executing a rseq critical section before reclaim of the memory 156 - * holding the struct rseq_cs. 152 + * library unload of the library defining the rseq critical section (struct 153 + * rseq_cs) or the code referred to by the struct rseq_cs start_ip and 154 + * post_commit_offset fields. This also applies to use of rseq in code 155 + * generated by JIT: rseq_prepare_unload() should be invoked at least once by 156 + * each thread executing a rseq critical section before reclaim of the memory 157 + * holding the struct rseq_cs or reclaim of the code pointed to by struct 158 + * rseq_cs start_ip and post_commit_offset fields. 157 159 */ 158 160 static inline void rseq_prepare_unload(void) 159 161 {
+1
tools/testing/selftests/rseq/settings
··· 1 + timeout=0
+14 -1
tools/testing/selftests/seccomp/seccomp_bpf.c
··· 3158 3158 EXPECT_GT(poll(&pollfd, 1, -1), 0); 3159 3159 EXPECT_EQ(pollfd.revents, POLLIN); 3160 3160 3161 - EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0); 3161 + /* Test that we can't pass garbage to the kernel. */ 3162 + memset(&req, 0, sizeof(req)); 3163 + req.pid = -1; 3164 + errno = 0; 3165 + ret = ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req); 3166 + EXPECT_EQ(-1, ret); 3167 + EXPECT_EQ(EINVAL, errno); 3168 + 3169 + if (ret) { 3170 + req.pid = 0; 3171 + EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0); 3172 + } 3162 3173 3163 3174 pollfd.fd = listener; 3164 3175 pollfd.events = POLLIN | POLLOUT; ··· 3289 3278 3290 3279 close(sk_pair[1]); 3291 3280 3281 + memset(&req, 0, sizeof(req)); 3292 3282 EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0); 3293 3283 3294 3284 EXPECT_EQ(kill(pid, SIGUSR1), 0); ··· 3308 3296 EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_SEND, &resp), -1); 3309 3297 EXPECT_EQ(errno, ENOENT); 3310 3298 3299 + memset(&req, 0, sizeof(req)); 3311 3300 EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0); 3312 3301 3313 3302 resp.id = req.id;
+1 -1
usr/gen_initramfs_list.sh
··· 128 128 str="${ftype} ${name} ${location} ${str}" 129 129 ;; 130 130 "nod") 131 - local dev=`LC_ALL=C ls -l "${location}"` 131 + local dev="`LC_ALL=C ls -l "${location}"`" 132 132 local maj=`field 5 ${dev}` 133 133 local min=`field 6 ${dev}` 134 134 maj=${maj%,}
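The quoting fix here is subtle: unlike a plain `var=$(cmd)` assignment, `local var=$(cmd)` is an ordinary command in POSIX-style shells, so the substitution's multi-word `ls -l` output undergoes field splitting and can break the assignment (dash, for one, rejects it). A sketch of the safe quoted form, assuming only that the captured value contains spaces (the sample string mimics `ls -l` output for a device node):

```shell
# "local" assignments field-split unquoted command substitutions;
# double quotes keep the whole multi-word value intact.
demo() {
	local dev="$(echo 'crw-rw-rw- 1 root root 1, 3 console')"
	echo "$dev"
}
demo
```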