Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.20-rc6 into staging-next

We want the staging fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3084 -1266
+1 -1
Documentation/ABI/testing/sysfs-class-net-dsa
··· 1 - What: /sys/class/net/<iface>/tagging 1 + What: /sys/class/net/<iface>/dsa/tagging 2 2 Date: August 2018 3 3 KernelVersion: 4.20 4 4 Contact: netdev@vger.kernel.org
+16
Documentation/devicetree/bindings/clock/clock-bindings.txt
··· 168 168 169 169 Configuration of common clocks, which affect multiple consumer devices can 170 170 be similarly specified in the clock provider node. 171 + 172 + ==Protected clocks== 173 + 174 + Some platforms or firmware may not fully expose all the clocks to the OS, such 175 + as in situations where those clocks are used by drivers running in ARM secure 176 + execution levels. Such a configuration can be specified in the device tree with the 177 + protected-clocks property in the form of a clock specifier list. This property should 178 + only be specified in the node that is providing the clocks being protected: 179 + 180 + clock-controller@a000f000 { 181 + compatible = "vendor,clk95"; 182 + reg = <0xa000f000 0x1000>; 183 + #clock-cells = <1>; 184 + ... 185 + protected-clocks = <UART3_CLK>, <SPI5_CLK>; 186 + };
+1 -1
Documentation/devicetree/bindings/input/input-reset.txt
··· 12 12 a set of keys. 13 13 14 14 Required property: 15 - sysrq-reset-seq: array of Linux keycodes, one keycode per cell. 15 + keyset: array of Linux keycodes, one keycode per cell. 16 16 17 17 Optional property: 18 18 timeout-ms: duration keys must be pressed together in milliseconds before
-29
Documentation/devicetree/bindings/media/rockchip-vpu.txt
··· 1 - device-tree bindings for rockchip VPU codec 2 - 3 - Rockchip (Video Processing Unit) present in various Rockchip platforms, 4 - such as RK3288 and RK3399. 5 - 6 - Required properties: 7 - - compatible: value should be one of the following 8 - "rockchip,rk3288-vpu"; 9 - "rockchip,rk3399-vpu"; 10 - - interrupts: encoding and decoding interrupt specifiers 11 - - interrupt-names: should be "vepu" and "vdpu" 12 - - clocks: phandle to VPU aclk, hclk clocks 13 - - clock-names: should be "aclk" and "hclk" 14 - - power-domains: phandle to power domain node 15 - - iommus: phandle to a iommu node 16 - 17 - Example: 18 - SoC-specific DT entry: 19 - vpu: video-codec@ff9a0000 { 20 - compatible = "rockchip,rk3288-vpu"; 21 - reg = <0x0 0xff9a0000 0x0 0x800>; 22 - interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>, 23 - <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>; 24 - interrupt-names = "vepu", "vdpu"; 25 - clocks = <&cru ACLK_VCODEC>, <&cru HCLK_VCODEC>; 26 - clock-names = "aclk", "hclk"; 27 - power-domains = <&power RK3288_PD_VIDEO>; 28 - iommus = <&vpu_mmu>; 29 - };
+25 -1
Documentation/media/uapi/mediactl/media-ioc-request-alloc.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _media_ioc_request_alloc: 4 28
+25 -1
Documentation/media/uapi/mediactl/media-request-ioc-queue.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _media_request_ioc_queue: 4 28
+25 -1
Documentation/media/uapi/mediactl/media-request-ioc-reinit.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _media_request_ioc_reinit: 4 28
+25 -1
Documentation/media/uapi/mediactl/request-api.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _media-request-api: 4 28
+25 -1
Documentation/media/uapi/mediactl/request-func-close.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _request-func-close: 4 28
+25 -1
Documentation/media/uapi/mediactl/request-func-ioctl.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _request-func-ioctl: 4 28
+25 -1
Documentation/media/uapi/mediactl/request-func-poll.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 1 + .. This file is dual-licensed: you can use it either under the terms 2 + .. of the GPL or the GFDL 1.1+ license, at your option. Note that this 3 + .. dual licensing only applies to this file, and not this project as a 4 + .. whole. 5 + .. 6 + .. a) This file is free software; you can redistribute it and/or 7 + .. modify it under the terms of the GNU General Public License as 8 + .. published by the Free Software Foundation; either version 2 of 9 + .. the License, or (at your option) any later version. 10 + .. 11 + .. This file is distributed in the hope that it will be useful, 12 + .. but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + .. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + .. GNU General Public License for more details. 15 + .. 16 + .. Or, alternatively, 17 + .. 18 + .. b) Permission is granted to copy, distribute and/or modify this 19 + .. document under the terms of the GNU Free Documentation License, 20 + .. Version 1.1 or any later version published by the Free Software 21 + .. Foundation, with no Invariant Sections, no Front-Cover Texts 22 + .. and no Back-Cover Texts. A copy of the license is included at 23 + .. Documentation/media/uapi/fdl-appendix.rst. 24 + .. 25 + .. TODO: replace it to GPL-2.0 OR GFDL-1.1-or-later WITH no-invariant-sections 2 26 3 27 .. _request-func-poll: 4 28
+15 -1
MAINTAINERS
··· 1480 1480 F: drivers/clocksource/timer-prima2.c 1481 1481 F: drivers/clocksource/timer-atlas7.c 1482 1482 N: [^a-z]sirf 1483 + X: drivers/gnss 1483 1484 1484 1485 ARM/EBSA110 MACHINE SUPPORT 1485 1486 M: Russell King <linux@armlinux.org.uk> ··· 3280 3279 F: sound/pci/oxygen/ 3281 3280 3282 3281 C-SKY ARCHITECTURE 3283 - M: Guo Ren <ren_guo@c-sky.com> 3282 + M: Guo Ren <guoren@kernel.org> 3284 3283 T: git https://github.com/c-sky/csky-linux.git 3285 3284 S: Supported 3286 3285 F: arch/csky/ 3287 3286 F: Documentation/devicetree/bindings/csky/ 3287 + F: drivers/irqchip/irq-csky-* 3288 + F: Documentation/devicetree/bindings/interrupt-controller/csky,* 3289 + F: drivers/clocksource/timer-gx6605s.c 3290 + F: drivers/clocksource/timer-mp-csky.c 3291 + F: Documentation/devicetree/bindings/timer/csky,* 3288 3292 K: csky 3289 3293 N: csky 3290 3294 ··· 6330 6324 6331 6325 GNSS SUBSYSTEM 6332 6326 M: Johan Hovold <johan@kernel.org> 6327 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/johan/gnss.git 6333 6328 S: Maintained 6334 6329 F: Documentation/ABI/testing/sysfs-class-gnss 6335 6330 F: Documentation/devicetree/bindings/gnss/ ··· 13905 13898 F: drivers/md/raid* 13906 13899 F: include/linux/raid/ 13907 13900 F: include/uapi/linux/raid/ 13901 + 13902 + SOCIONEXT (SNI) AVE NETWORK DRIVER 13903 + M: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> 13904 + L: netdev@vger.kernel.org 13905 + S: Maintained 13906 + F: drivers/net/ethernet/socionext/sni_ave.c 13907 + F: Documentation/devicetree/bindings/net/socionext,uniphier-ave4.txt 13908 13908 13909 13909 SOCIONEXT (SNI) NETSEC NETWORK DRIVER 13910 13910 M: Jassi Brar <jaswinder.singh@linaro.org>
+1 -1
Makefile
··· 2 2 VERSION = 4 3 3 PATCHLEVEL = 20 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc6 6 6 NAME = Shy Crocodile 7 7 8 8 # *DOCUMENTATION*
+1 -12
arch/arc/Kconfig
··· 109 109 110 110 choice 111 111 prompt "ARC Instruction Set" 112 - default ISA_ARCOMPACT 112 + default ISA_ARCV2 113 113 114 114 config ISA_ARCOMPACT 115 115 bool "ARCompact ISA" ··· 176 176 177 177 config CPU_BIG_ENDIAN 178 178 bool "Enable Big Endian Mode" 179 - default n 180 179 help 181 180 Build kernel for Big Endian Mode of ARC CPU 182 181 183 182 config SMP 184 183 bool "Symmetric Multi-Processing" 185 - default n 186 184 select ARC_MCIP if ISA_ARCV2 187 185 help 188 186 This enables support for systems with more than one CPU. ··· 252 254 config ARC_CACHE_VIPT_ALIASING 253 255 bool "Support VIPT Aliasing D$" 254 256 depends on ARC_HAS_DCACHE && ISA_ARCOMPACT 255 - default n 256 257 257 258 endif #ARC_CACHE 258 259 ··· 259 262 bool "Use ICCM" 260 263 help 261 264 Single Cycle RAMS to store Fast Path Code 262 - default n 263 265 264 266 config ARC_ICCM_SZ 265 267 int "ICCM Size in KB" ··· 269 273 bool "Use DCCM" 270 274 help 271 275 Single Cycle RAMS to store Fast Path Data 272 - default n 273 276 274 277 config ARC_DCCM_SZ 275 278 int "DCCM Size in KB" ··· 361 366 362 367 config ARC_COMPACT_IRQ_LEVELS 363 368 bool "Setup Timer IRQ as high Priority" 364 - default n 365 369 # if SMP, LV2 enabled ONLY if ARC implementation has LV2 re-entrancy 366 370 depends on !SMP 367 371 368 372 config ARC_FPU_SAVE_RESTORE 369 373 bool "Enable FPU state persistence across context switch" 370 - default n 371 374 help 372 375 Double Precision Floating Point unit had dedicated regs which 373 376 need to be saved/restored across context-switch. 
··· 446 453 447 454 config ARC_HAS_PAE40 448 455 bool "Support for the 40-bit Physical Address Extension" 449 - default n 450 456 depends on ISA_ARCV2 451 457 select HIGHMEM 452 458 select PHYS_ADDR_T_64BIT ··· 488 496 489 497 config ARC_METAWARE_HLINK 490 498 bool "Support for Metaware debugger assisted Host access" 491 - default n 492 499 help 493 500 This options allows a Linux userland apps to directly access 494 501 host file system (open/creat/read/write etc) with help from ··· 515 524 516 525 config ARC_DBG_TLB_PARANOIA 517 526 bool "Paranoia Checks in Low Level TLB Handlers" 518 - default n 519 527 520 528 endif 521 529 522 530 config ARC_UBOOT_SUPPORT 523 531 bool "Support uboot arg Handling" 524 - default n 525 532 help 526 533 ARC Linux by default checks for uboot provided args as pointers to 527 534 external cmdline or DTB. This however breaks in absence of uboot,
+1 -1
arch/arc/Makefile
··· 6 6 # published by the Free Software Foundation. 7 7 # 8 8 9 - KBUILD_DEFCONFIG := nsim_700_defconfig 9 + KBUILD_DEFCONFIG := nsim_hs_defconfig 10 10 11 11 cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__ 12 12 cflags-$(CONFIG_ISA_ARCOMPACT) += -mA7
+15
arch/arc/boot/dts/hsdk.dts
··· 222 222 bus-width = <4>; 223 223 dma-coherent; 224 224 }; 225 + 226 + gpio: gpio@3000 { 227 + compatible = "snps,dw-apb-gpio"; 228 + reg = <0x3000 0x20>; 229 + #address-cells = <1>; 230 + #size-cells = <0>; 231 + 232 + gpio_port_a: gpio-controller@0 { 233 + compatible = "snps,dw-apb-gpio-port"; 234 + gpio-controller; 235 + #gpio-cells = <2>; 236 + snps,nr-gpios = <24>; 237 + reg = <0>; 238 + }; 239 + }; 225 240 }; 226 241 227 242 memory@80000000 {
+2
arch/arc/configs/axs101_defconfig
··· 14 14 # CONFIG_VM_EVENT_COUNTERS is not set 15 15 # CONFIG_SLUB_DEBUG is not set 16 16 # CONFIG_COMPAT_BRK is not set 17 + CONFIG_ISA_ARCOMPACT=y 17 18 CONFIG_MODULES=y 18 19 CONFIG_MODULE_FORCE_LOAD=y 19 20 CONFIG_MODULE_UNLOAD=y ··· 96 95 CONFIG_NTFS_FS=y 97 96 CONFIG_TMPFS=y 98 97 CONFIG_NFS_FS=y 98 + CONFIG_NFS_V3_ACL=y 99 99 CONFIG_NLS_CODEPAGE_437=y 100 100 CONFIG_NLS_ISO8859_1=y 101 101 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+1
arch/arc/configs/axs103_defconfig
··· 94 94 CONFIG_NTFS_FS=y 95 95 CONFIG_TMPFS=y 96 96 CONFIG_NFS_FS=y 97 + CONFIG_NFS_V3_ACL=y 97 98 CONFIG_NLS_CODEPAGE_437=y 98 99 CONFIG_NLS_ISO8859_1=y 99 100 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+1
arch/arc/configs/axs103_smp_defconfig
··· 97 97 CONFIG_NTFS_FS=y 98 98 CONFIG_TMPFS=y 99 99 CONFIG_NFS_FS=y 100 + CONFIG_NFS_V3_ACL=y 100 101 CONFIG_NLS_CODEPAGE_437=y 101 102 CONFIG_NLS_ISO8859_1=y 102 103 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+4
arch/arc/configs/hsdk_defconfig
··· 45 45 CONFIG_SERIAL_8250_DW=y 46 46 CONFIG_SERIAL_OF_PLATFORM=y 47 47 # CONFIG_HW_RANDOM is not set 48 + CONFIG_GPIOLIB=y 49 + CONFIG_GPIO_SYSFS=y 50 + CONFIG_GPIO_DWAPB=y 48 51 # CONFIG_HWMON is not set 49 52 CONFIG_DRM=y 50 53 # CONFIG_DRM_FBDEV_EMULATION is not set ··· 68 65 CONFIG_VFAT_FS=y 69 66 CONFIG_TMPFS=y 70 67 CONFIG_NFS_FS=y 68 + CONFIG_NFS_V3_ACL=y 71 69 CONFIG_NLS_CODEPAGE_437=y 72 70 CONFIG_NLS_ISO8859_1=y 73 71 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+2
arch/arc/configs/nps_defconfig
··· 15 15 CONFIG_EMBEDDED=y 16 16 CONFIG_PERF_EVENTS=y 17 17 # CONFIG_COMPAT_BRK is not set 18 + CONFIG_ISA_ARCOMPACT=y 18 19 CONFIG_KPROBES=y 19 20 CONFIG_MODULES=y 20 21 CONFIG_MODULE_FORCE_LOAD=y ··· 74 73 CONFIG_TMPFS=y 75 74 # CONFIG_MISC_FILESYSTEMS is not set 76 75 CONFIG_NFS_FS=y 76 + CONFIG_NFS_V3_ACL=y 77 77 CONFIG_ROOT_NFS=y 78 78 CONFIG_DEBUG_INFO=y 79 79 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+1
arch/arc/configs/nsim_700_defconfig
··· 15 15 CONFIG_PERF_EVENTS=y 16 16 # CONFIG_SLUB_DEBUG is not set 17 17 # CONFIG_COMPAT_BRK is not set 18 + CONFIG_ISA_ARCOMPACT=y 18 19 CONFIG_KPROBES=y 19 20 CONFIG_MODULES=y 20 21 # CONFIG_LBDAF is not set
+2
arch/arc/configs/nsimosci_defconfig
··· 15 15 CONFIG_PERF_EVENTS=y 16 16 # CONFIG_SLUB_DEBUG is not set 17 17 # CONFIG_COMPAT_BRK is not set 18 + CONFIG_ISA_ARCOMPACT=y 18 19 CONFIG_KPROBES=y 19 20 CONFIG_MODULES=y 20 21 # CONFIG_LBDAF is not set ··· 67 66 CONFIG_TMPFS=y 68 67 # CONFIG_MISC_FILESYSTEMS is not set 69 68 CONFIG_NFS_FS=y 69 + CONFIG_NFS_V3_ACL=y 70 70 # CONFIG_ENABLE_WARN_DEPRECATED is not set 71 71 # CONFIG_ENABLE_MUST_CHECK is not set
+1
arch/arc/configs/nsimosci_hs_defconfig
··· 65 65 CONFIG_TMPFS=y 66 66 # CONFIG_MISC_FILESYSTEMS is not set 67 67 CONFIG_NFS_FS=y 68 + CONFIG_NFS_V3_ACL=y 68 69 # CONFIG_ENABLE_WARN_DEPRECATED is not set 69 70 # CONFIG_ENABLE_MUST_CHECK is not set
+1
arch/arc/configs/nsimosci_hs_smp_defconfig
··· 76 76 CONFIG_TMPFS=y 77 77 # CONFIG_MISC_FILESYSTEMS is not set 78 78 CONFIG_NFS_FS=y 79 + CONFIG_NFS_V3_ACL=y 79 80 # CONFIG_ENABLE_WARN_DEPRECATED is not set 80 81 # CONFIG_ENABLE_MUST_CHECK is not set 81 82 CONFIG_FTRACE=y
+1
arch/arc/configs/tb10x_defconfig
··· 19 19 # CONFIG_AIO is not set 20 20 CONFIG_EMBEDDED=y 21 21 # CONFIG_COMPAT_BRK is not set 22 + CONFIG_ISA_ARCOMPACT=y 22 23 CONFIG_SLAB=y 23 24 CONFIG_MODULES=y 24 25 CONFIG_MODULE_FORCE_LOAD=y
+1
arch/arc/configs/vdk_hs38_defconfig
··· 85 85 CONFIG_TMPFS=y 86 86 CONFIG_JFFS2_FS=y 87 87 CONFIG_NFS_FS=y 88 + CONFIG_NFS_V3_ACL=y 88 89 CONFIG_NLS_CODEPAGE_437=y 89 90 CONFIG_NLS_ISO8859_1=y 90 91 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+1
arch/arc/configs/vdk_hs38_smp_defconfig
··· 90 90 CONFIG_TMPFS=y 91 91 CONFIG_JFFS2_FS=y 92 92 CONFIG_NFS_FS=y 93 + CONFIG_NFS_V3_ACL=y 93 94 CONFIG_NLS_CODEPAGE_437=y 94 95 CONFIG_NLS_ISO8859_1=y 95 96 # CONFIG_ENABLE_WARN_DEPRECATED is not set
+2
arch/arc/include/asm/cache.h
··· 113 113 114 114 /* IO coherency related Auxiliary registers */ 115 115 #define ARC_REG_IO_COH_ENABLE 0x500 116 + #define ARC_IO_COH_ENABLE_BIT BIT(0) 116 117 #define ARC_REG_IO_COH_PARTIAL 0x501 118 + #define ARC_IO_COH_PARTIAL_BIT BIT(0) 117 119 #define ARC_REG_IO_COH_AP0_BASE 0x508 118 120 #define ARC_REG_IO_COH_AP0_SIZE 0x509 119 121
+72
arch/arc/include/asm/io.h
··· 12 12 #include <linux/types.h> 13 13 #include <asm/byteorder.h> 14 14 #include <asm/page.h> 15 + #include <asm/unaligned.h> 15 16 16 17 #ifdef CONFIG_ISA_ARCV2 17 18 #include <asm/barrier.h> ··· 95 94 return w; 96 95 } 97 96 97 + /* 98 + * {read,write}s{b,w,l}() repeatedly access the same IO address in 99 + * native endianness in 8-, 16-, 32-bit chunks {into,from} memory, 100 + * @count times 101 + */ 102 + #define __raw_readsx(t,f) \ 103 + static inline void __raw_reads##f(const volatile void __iomem *addr, \ 104 + void *ptr, unsigned int count) \ 105 + { \ 106 + bool is_aligned = ((unsigned long)ptr % ((t) / 8)) == 0; \ 107 + u##t *buf = ptr; \ 108 + \ 109 + if (!count) \ 110 + return; \ 111 + \ 112 + /* Some ARC CPU's don't support unaligned accesses */ \ 113 + if (is_aligned) { \ 114 + do { \ 115 + u##t x = __raw_read##f(addr); \ 116 + *buf++ = x; \ 117 + } while (--count); \ 118 + } else { \ 119 + do { \ 120 + u##t x = __raw_read##f(addr); \ 121 + put_unaligned(x, buf++); \ 122 + } while (--count); \ 123 + } \ 124 + } 125 + 126 + #define __raw_readsb __raw_readsb 127 + __raw_readsx(8, b) 128 + #define __raw_readsw __raw_readsw 129 + __raw_readsx(16, w) 130 + #define __raw_readsl __raw_readsl 131 + __raw_readsx(32, l) 132 + 98 133 #define __raw_writeb __raw_writeb 99 134 static inline void __raw_writeb(u8 b, volatile void __iomem *addr) 100 135 { ··· 163 126 164 127 } 165 128 129 + #define __raw_writesx(t,f) \ 130 + static inline void __raw_writes##f(volatile void __iomem *addr, \ 131 + const void *ptr, unsigned int count) \ 132 + { \ 133 + bool is_aligned = ((unsigned long)ptr % ((t) / 8)) == 0; \ 134 + const u##t *buf = ptr; \ 135 + \ 136 + if (!count) \ 137 + return; \ 138 + \ 139 + /* Some ARC CPU's don't support unaligned accesses */ \ 140 + if (is_aligned) { \ 141 + do { \ 142 + __raw_write##f(*buf++, addr); \ 143 + } while (--count); \ 144 + } else { \ 145 + do { \ 146 + __raw_write##f(get_unaligned(buf++), addr); \ 147 + } while (--count); \ 148 + } \ 149 + } 150 + 151 + #define __raw_writesb __raw_writesb 152 + __raw_writesx(8, b) 153 + #define __raw_writesw __raw_writesw 154 + __raw_writesx(16, w) 155 + #define __raw_writesl __raw_writesl 156 + __raw_writesx(32, l) 157 + 166 158 /* 167 159 * MMIO can also get buffered/optimized in micro-arch, so barriers needed 168 160 * Based on ARM model for the typical use case ··· 207 141 #define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; }) 208 142 #define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; }) 209 143 #define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; }) 144 + #define readsb(p,d,l) ({ __raw_readsb(p,d,l); __iormb(); }) 145 + #define readsw(p,d,l) ({ __raw_readsw(p,d,l); __iormb(); }) 146 + #define readsl(p,d,l) ({ __raw_readsl(p,d,l); __iormb(); }) 210 147 211 148 #define writeb(v,c) ({ __iowmb(); writeb_relaxed(v,c); }) 212 149 #define writew(v,c) ({ __iowmb(); writew_relaxed(v,c); }) 213 150 #define writel(v,c) ({ __iowmb(); writel_relaxed(v,c); }) 151 + #define writesb(p,d,l) ({ __iowmb(); __raw_writesb(p,d,l); }) 152 + #define writesw(p,d,l) ({ __iowmb(); __raw_writesw(p,d,l); }) 153 + #define writesl(p,d,l) ({ __iowmb(); __raw_writesl(p,d,l); }) 214 154 215 155 /* 216 156 * Relaxed API for drivers which can handle barrier ordering themselves
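The core idea of the new `reads{b,w,l}()`/`writes{b,w,l}()` helpers is that one fixed IO address is accessed `@count` times, with a separate code path when the memory buffer is not naturally aligned. A hosted C sketch of the read side (illustrative only: `fifo_readl()` and `readsl_sim()` are made-up stand-ins for a device FIFO register and `__raw_readsl()`, with `memcpy()` in place of `put_unaligned()`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hosted model of a 32-bit read FIFO: every read of the "register" pops the
 * next word, which is why the loop must not advance the address itself. */
static const uint32_t fifo_data[4] = { 0x11111111u, 0x22222222u,
				       0x33333333u, 0x44444444u };
static unsigned int fifo_pos;

static uint32_t fifo_readl(void)
{
	return fifo_data[fifo_pos++];
}

/* Same shape as the kernel macro: probe the buffer's alignment once, then
 * take either the direct-store path or a byte-wise path for unaligned
 * destinations, since some ARC CPUs fault on unaligned accesses. */
static void readsl_sim(void *ptr, unsigned int count)
{
	int is_aligned = ((uintptr_t)ptr % sizeof(uint32_t)) == 0;
	uint32_t *buf = ptr;

	if (!count)
		return;

	if (is_aligned) {
		do {
			*buf++ = fifo_readl();
		} while (--count);
	} else {
		unsigned char *b = ptr;

		do {
			uint32_t x = fifo_readl();

			memcpy(b, &x, sizeof(x));	/* put_unaligned() stand-in */
			b += sizeof(x);
		} while (--count);
	}
}
```

The alignment test is hoisted out of the loop on purpose: the common aligned case pays nothing for the unaligned fallback.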
+6 -3
arch/arc/kernel/setup.c
··· 243 243 { 244 244 struct cpuinfo_arc *cpu = &cpuinfo_arc700[cpu_id]; 245 245 struct bcr_identity *core = &cpu->core; 246 - int i, n = 0; 246 + int i, n = 0, ua = 0; 247 247 248 248 FIX_PTR(cpu); 249 249 ··· 263 263 IS_AVAIL2(cpu->extn.rtc, "RTC [UP 64-bit] ", CONFIG_ARC_TIMERS_64BIT), 264 264 IS_AVAIL2(cpu->extn.gfrc, "GFRC [SMP 64-bit] ", CONFIG_ARC_TIMERS_64BIT)); 265 265 266 - n += i = scnprintf(buf + n, len - n, "%s%s%s%s%s", 266 + #ifdef __ARC_UNALIGNED__ 267 + ua = 1; 268 + #endif 269 + n += i = scnprintf(buf + n, len - n, "%s%s%s%s%s%s", 267 270 IS_AVAIL2(cpu->isa.atomic, "atomic ", CONFIG_ARC_HAS_LLSC), 268 271 IS_AVAIL2(cpu->isa.ldd, "ll64 ", CONFIG_ARC_HAS_LL64), 269 - IS_AVAIL1(cpu->isa.unalign, "unalign (not used)")); 272 + IS_AVAIL1(cpu->isa.unalign, "unalign "), IS_USED_RUN(ua)); 270 273 271 274 if (i) 272 275 n += scnprintf(buf + n, len - n, "\n\t\t: ");
+17 -3
arch/arc/mm/cache.c
··· 1145 1145 unsigned int ioc_base, mem_sz; 1146 1146 1147 1147 /* 1148 + * If IOC was already enabled (due to bootloader) it technically needs to 1149 + * be reconfigured with aperture base,size corresponding to Linux memory map 1150 + * which will certainly be different than uboot's. But disabling and 1151 + * reenabling IOC when DMA might be potentially active is tricky business. 1152 + * To avoid random memory issues later, just panic here and ask user to 1153 + * upgrade bootloader to one which doesn't enable IOC 1154 + */ 1155 + if (read_aux_reg(ARC_REG_IO_COH_ENABLE) & ARC_IO_COH_ENABLE_BIT) 1156 + panic("IOC already enabled, please upgrade bootloader!\n"); 1157 + 1158 + if (!ioc_enable) 1159 + return; 1160 + 1161 + /* 1148 1162 * As for today we don't support both IOC and ZONE_HIGHMEM enabled 1149 1163 * simultaneously. This happens because as of today IOC aperture covers 1150 1164 * only ZONE_NORMAL (low mem) and any dma transactions outside this ··· 1201 1187 panic("IOC Aperture start must be aligned to the size of the aperture"); 1202 1188 1203 1189 write_aux_reg(ARC_REG_IO_COH_AP0_BASE, ioc_base >> 12); 1204 - write_aux_reg(ARC_REG_IO_COH_PARTIAL, 1); 1205 - write_aux_reg(ARC_REG_IO_COH_ENABLE, 1); 1190 + write_aux_reg(ARC_REG_IO_COH_PARTIAL, ARC_IO_COH_PARTIAL_BIT); 1191 + write_aux_reg(ARC_REG_IO_COH_ENABLE, ARC_IO_COH_ENABLE_BIT); 1206 1192 1207 1193 /* Re-enable L1 dcache */ 1208 1194 __dc_enable(); ··· 1279 1265 if (is_isa_arcv2() && l2_line_sz && !slc_enable) 1280 1266 arc_slc_disable(); 1281 1267 1282 - if (is_isa_arcv2() && ioc_enable) 1268 + if (is_isa_arcv2() && ioc_exists) 1283 1269 arc_ioc_setup(); 1284 1270 1285 1271 if (is_isa_arcv2() && l2_line_sz && slc_enable) {
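The aperture programming at the end of this hunk depends on two arithmetic invariants that the surrounding panics enforce. A minimal sketch of those checks (`aperture_ok()` is a made-up helper, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* The two constraints checked before writing ARC_REG_IO_COH_AP0_BASE/SIZE:
 * the aperture size must be a power of two, and the aperture base must be
 * aligned to that size (so the base/size pair describes one clean window). */
static int aperture_ok(uint32_t base, uint32_t size)
{
	if (size == 0 || (size & (size - 1)) != 0)
		return 0;			/* size is not a power of two */
	return (base & (size - 1)) == 0;	/* base aligned to the size */
}
```

Both tests use the standard `x & (x - 1)` trick: a power of two has a single set bit, and ANDing with `size - 1` isolates the offset below the aperture boundary.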
+1 -1
arch/arc/mm/fault.c
··· 66 66 struct vm_area_struct *vma = NULL; 67 67 struct task_struct *tsk = current; 68 68 struct mm_struct *mm = tsk->mm; 69 - int si_code; 69 + int si_code = 0; 70 70 int ret; 71 71 vm_fault_t fault; 72 72 int write = regs->ecr_cause & ECR_C_PROTV_STORE; /* ST/EX */
+5 -3
arch/arm/mm/cache-v7.S
··· 360 360 ALT_UP(W(nop)) 361 361 #endif 362 362 mcrne p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line 363 + addne r0, r0, r2 363 364 364 365 tst r1, r3 365 366 bic r1, r1, r3 366 367 mcrne p15, 0, r1, c7, c14, 1 @ clean & invalidate D / U line 367 - 1: 368 - mcr p15, 0, r0, c7, c6, 1 @ invalidate D / U line 369 - add r0, r0, r2 370 368 cmp r0, r1 369 + 1: 370 + mcrlo p15, 0, r0, c7, c6, 1 @ invalidate D / U line 371 + addlo r0, r0, r2 372 + cmplo r0, r1 371 373 blo 1b 372 374 dsb st 373 375 ret lr
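The `addne`/`cmplo` rework above can be modeled in C to show what changed: a line only partially inside the range is cleaned and invalidated (so bystander data on that line survives) and then skipped, and the invalidate loop compares before its first iteration. This is an illustrative model, not kernel code; the counters stand in for the two `mcr` cache operations:

```c
#include <assert.h>
#include <stdint.h>

/* Counters standing in for the two ops in the hunk: "clean & invalidate"
 * (mcrne ... c7, c14, 1) and plain "invalidate" (mcrlo ... c7, c6, 1). */
static int n_clean_inv, n_inv;

/* Model of the fixed v7_dma_inv_range() flow for a power-of-two line size. */
static void dma_inv_range_model(uintptr_t start, uintptr_t end,
				uintptr_t line_sz)
{
	uintptr_t mask = line_sz - 1;

	if (start & mask) {
		start &= ~mask;
		n_clean_inv++;
		start += line_sz;	/* the addne fix: skip this line, don't
					 * invalidate it a second time */
	}
	if (end & mask) {
		end &= ~mask;
		n_clean_inv++;
	}
	while (start < end) {		/* the cmplo fix: no stray invalidate
					 * when nothing is left in the range */
		n_inv++;
		start += line_sz;
	}
}
```

With a 64-byte line and the range `[0x10, 0xd0)`, the two boundary lines are cleaned and invalidated and only the two fully covered lines in between are invalidated outright.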
+9 -5
arch/arm/mm/cache-v7m.S
··· 73 73 /* 74 74 * dcimvac: Invalidate data cache line by MVA to PoC 75 75 */ 76 - .macro dcimvac, rt, tmp 77 - v7m_cacheop \rt, \tmp, V7M_SCB_DCIMVAC 76 + .irp c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo 77 + .macro dcimvac\c, rt, tmp 78 + v7m_cacheop \rt, \tmp, V7M_SCB_DCIMVAC, \c 78 79 .endm 80 + .endr 79 81 80 82 /* 81 83 * dccmvau: Clean data cache line by MVA to PoU ··· 371 369 tst r0, r3 372 370 bic r0, r0, r3 373 371 dccimvacne r0, r3 372 + addne r0, r0, r2 374 373 subne r3, r2, #1 @ restore r3, corrupted by v7m's dccimvac 375 374 tst r1, r3 376 375 bic r1, r1, r3 377 376 dccimvacne r1, r3 378 - 1: 379 - dcimvac r0, r3 380 - add r0, r0, r2 381 377 cmp r0, r1 378 + 1: 379 + dcimvaclo r0, r3 380 + addlo r0, r0, r2 381 + cmplo r0, r1 382 382 blo 1b 383 383 dsb st 384 384 ret lr
+1 -1
arch/arm/mm/dma-mapping.c
··· 829 829 void *cpu_addr, dma_addr_t dma_addr, size_t size, 830 830 unsigned long attrs) 831 831 { 832 - int ret; 832 + int ret = -ENXIO; 833 833 unsigned long nr_vma_pages = vma_pages(vma); 834 834 unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; 835 835 unsigned long pfn = dma_to_pfn(dev, dma_addr);
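The one-line change guards against the classic uninitialized-return pattern: a bail-out path that returns `ret` before anything has been stored in it. A made-up reduction of the bug (`mmap_model()` is illustrative, not the actual ARM mmap helper):

```c
#include <assert.h>
#include <errno.h>

/* If the requested mapping is larger than what was allocated, the function
 * bails out early; before the fix that path returned whatever garbage
 * happened to be in ret. Initializing ret to -ENXIO gives the early exit a
 * defined error code, which is what callers expect from a failed mmap. */
static int mmap_model(unsigned long want_pages, unsigned long have_pages)
{
	int ret = -ENXIO;	/* the fix: defined value on the bail-out path */

	if (want_pages > have_pages)
		return ret;	/* previously returned an uninitialized value */

	ret = 0;		/* mapping succeeded */
	return ret;
}
```

Compilers only sometimes warn about this, because `ret` *is* assigned on the other paths; initializing at declaration closes the gap for every path.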
+10
arch/arm/mm/proc-macros.S
··· 274 274 .endm 275 275 276 276 .macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0 277 + /* 278 + * If we are building for big.Little with branch predictor hardening, 279 + * we need the processor function tables to remain available after boot. 280 + */ 281 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) 282 + .section ".rodata" 283 + #endif 277 284 .type \name\()_processor_functions, #object 278 285 .align 2 279 286 ENTRY(\name\()_processor_functions) ··· 316 309 .endif 317 310 318 311 .size \name\()_processor_functions, . - \name\()_processor_functions 312 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) 313 + .previous 314 + #endif 319 315 .endm 320 316 321 317 .macro define_cache_functions name:req
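Hold on, this anchor is intentionally unused.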
+1 -1
arch/arm/probes/kprobes/opt-arm.c
··· 247 247 } 248 248 249 249 /* Copy arch-dep-instance from template. */ 250 - memcpy(code, &optprobe_template_entry, 250 + memcpy(code, (unsigned char *)optprobe_template_entry, 251 251 TMPL_END_IDX * sizeof(kprobe_opcode_t)); 252 252 253 253 /* Adjust buffer according to instruction. */
+6
arch/arm64/boot/dts/qcom/sdm845-mtp.dts
··· 343 343 }; 344 344 }; 345 345 346 + &gcc { 347 + protected-clocks = <GCC_QSPI_CORE_CLK>, 348 + <GCC_QSPI_CORE_CLK_SRC>, 349 + <GCC_QSPI_CNOC_PERIPH_AHB_CLK>; 350 + }; 351 + 346 352 &i2c10 { 347 353 status = "okay"; 348 354 clock-frequency = <400000>;
+1 -1
arch/arm64/kernel/hibernate.c
··· 214 214 } 215 215 216 216 memcpy((void *)dst, src_start, length); 217 - flush_icache_range(dst, dst + length); 217 + __flush_icache_range(dst, dst + length); 218 218 219 219 pgdp = pgd_offset_raw(allocator(mask), dst_addr); 220 220 if (pgd_none(READ_ONCE(*pgdp))) {
+2 -2
arch/csky/include/asm/mmu_context.h
··· 16 16 17 17 static inline void tlbmiss_handler_setup_pgd(unsigned long pgd, bool kernel) 18 18 { 19 - pgd &= ~(1<<31); 19 + pgd -= PAGE_OFFSET; 20 20 pgd += PHYS_OFFSET; 21 21 pgd |= 1; 22 22 setup_pgd(pgd, kernel); ··· 29 29 30 30 static inline unsigned long tlb_get_pgd(void) 31 31 { 32 - return ((get_pgd()|(1<<31)) - PHYS_OFFSET) & ~1; 32 + return ((get_pgd() - PHYS_OFFSET) & ~1) + PAGE_OFFSET; 33 33 } 34 34 35 35 #define cpu_context(cpu, mm) ((mm)->context.asid[cpu])
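The csky hunk above replaces a hard-coded bit-31 mask with `PAGE_OFFSET`/`PHYS_OFFSET` arithmetic, so the pgd virtual-to-physical round-trip works for any linear-map base, not just `0x80000000`. A minimal userspace sketch of that round-trip, using hypothetical offset values (not csky's real ones):

```c
#include <assert.h>

/* Hypothetical layout constants; the real values are platform-defined. */
#define PAGE_OFFSET 0x80000000UL
#define PHYS_OFFSET 0x20000000UL

/* Virtual pgd address -> physical address with valid bit set, as in
 * tlbmiss_handler_setup_pgd() after the fix. */
static unsigned long pgd_virt_to_hw(unsigned long pgd)
{
	pgd -= PAGE_OFFSET;	/* virtual -> offset within the linear map */
	pgd += PHYS_OFFSET;	/* offset -> physical address */
	pgd |= 1;		/* mark the entry valid */
	return pgd;
}

/* Inverse conversion, as in tlb_get_pgd() after the fix. */
static unsigned long pgd_hw_to_virt(unsigned long hw)
{
	return ((hw - PHYS_OFFSET) & ~1UL) + PAGE_OFFSET;
}
```

The old code's `pgd &= ~(1<<31)` only worked when `PAGE_OFFSET` happened to be exactly bit 31; the arithmetic form keeps the two helpers exact inverses for any base.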
+7
arch/parisc/Makefile
··· 71 71 KBUILD_CFLAGS_KERNEL += -mlong-calls 72 72 endif 73 73 74 + # Without this, "ld -r" results in .text sections that are too big (> 0x40000) 75 + # for branches to reach stubs. And multiple .text sections trigger a warning 76 + # when creating the sysfs module information section. 77 + ifndef CONFIG_64BIT 78 + KBUILD_CFLAGS_MODULE += -ffunction-sections 79 + endif 80 + 74 81 # select which processor to optimise for 75 82 cflags-$(CONFIG_PA7000) += -march=1.1 -mschedule=7100 76 83 cflags-$(CONFIG_PA7200) += -march=1.1 -mschedule=7200
+66
arch/powerpc/net/bpf_jit_comp64.c
··· 891 891 return 0; 892 892 } 893 893 894 + /* Fix the branch target addresses for subprog calls */ 895 + static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image, 896 + struct codegen_context *ctx, u32 *addrs) 897 + { 898 + const struct bpf_insn *insn = fp->insnsi; 899 + bool func_addr_fixed; 900 + u64 func_addr; 901 + u32 tmp_idx; 902 + int i, ret; 903 + 904 + for (i = 0; i < fp->len; i++) { 905 + /* 906 + * During the extra pass, only the branch target addresses for 907 + * the subprog calls need to be fixed. All other instructions 908 + * can be left untouched. 909 + * 910 + * The JITed image length does not change because we already 911 + * ensure that the JITed instruction sequences for these calls 912 + * are of fixed length by padding them with NOPs. 913 + */ 914 + if (insn[i].code == (BPF_JMP | BPF_CALL) && 915 + insn[i].src_reg == BPF_PSEUDO_CALL) { 916 + ret = bpf_jit_get_func_addr(fp, &insn[i], true, 917 + &func_addr, 918 + &func_addr_fixed); 919 + if (ret < 0) 920 + return ret; 921 + 922 + /* 923 + * Save ctx->idx as this would currently point to the 924 + * end of the JITed image and set it to the offset of 925 + * the instruction sequence corresponding to the 926 + * subprog call temporarily. 927 + */ 928 + tmp_idx = ctx->idx; 929 + ctx->idx = addrs[i] / 4; 930 + bpf_jit_emit_func_call_rel(image, ctx, func_addr); 931 + 932 + /* 933 + * Restore ctx->idx here. This is safe as the length 934 + * of the JITed sequence remains unchanged. 935 + */ 936 + ctx->idx = tmp_idx; 937 + } 938 + } 939 + 940 + return 0; 941 + } 942 + 894 943 struct powerpc64_jit_data { 895 944 struct bpf_binary_header *header; 896 945 u32 *addrs; ··· 1038 989 skip_init_ctx: 1039 990 code_base = (u32 *)(image + FUNCTION_DESCR_SIZE); 1040 991 992 + if (extra_pass) { 993 + /* 994 + * Do not touch the prologue and epilogue as they will remain 995 + * unchanged. Only fix the branch target address for subprog 996 + * calls in the body. 
997 + * 998 + * This does not change the offsets and lengths of the subprog 999 + * call instruction sequences and hence, the size of the JITed 1000 + * image as well. 1001 + */ 1002 + bpf_jit_fixup_subprog_calls(fp, code_base, &cgctx, addrs); 1003 + 1004 + /* There is no need to perform the usual passes. */ 1005 + goto skip_codegen_passes; 1006 + } 1007 + 1041 1008 /* Code generation passes 1-2 */ 1042 1009 for (pass = 1; pass < 3; pass++) { 1043 1010 /* Now build the prologue, body code & epilogue for real. */ ··· 1067 1002 proglen - (cgctx.idx * 4), cgctx.seen); 1068 1003 } 1069 1004 1005 + skip_codegen_passes: 1070 1006 if (bpf_jit_enable > 1) 1071 1007 /* 1072 1008 * Note that we output the base address of the code_base
+1 -2
arch/sparc/kernel/iommu.c
··· 108 108 /* Allocate and initialize the free area map. */ 109 109 sz = num_tsb_entries / 8; 110 110 sz = (sz + 7UL) & ~7UL; 111 - iommu->tbl.map = kmalloc_node(sz, GFP_KERNEL, numa_node); 111 + iommu->tbl.map = kzalloc_node(sz, GFP_KERNEL, numa_node); 112 112 if (!iommu->tbl.map) 113 113 return -ENOMEM; 114 - memset(iommu->tbl.map, 0, sz); 115 114 116 115 iommu_tbl_pool_init(&iommu->tbl, num_tsb_entries, IO_PAGE_SHIFT, 117 116 (tlb_type != hypervisor ? iommu_flushall : NULL),
+1
arch/sparc/kernel/signal32.c
··· 683 683 regs->tpc -= 4; 684 684 regs->tnpc -= 4; 685 685 pt_regs_clear_syscall(regs); 686 + /* fall through */ 686 687 case ERESTART_RESTARTBLOCK: 687 688 regs->u_regs[UREG_G1] = __NR_restart_syscall; 688 689 regs->tpc -= 4;
+1
arch/sparc/kernel/signal_32.c
··· 508 508 regs->pc -= 4; 509 509 regs->npc -= 4; 510 510 pt_regs_clear_syscall(regs); 511 + /* fall through */ 511 512 case ERESTART_RESTARTBLOCK: 512 513 regs->u_regs[UREG_G1] = __NR_restart_syscall; 513 514 regs->pc -= 4;
+1
arch/sparc/kernel/signal_64.c
··· 533 533 regs->tpc -= 4; 534 534 regs->tnpc -= 4; 535 535 pt_regs_clear_syscall(regs); 536 + /* fall through */ 536 537 case ERESTART_RESTARTBLOCK: 537 538 regs->u_regs[UREG_G1] = __NR_restart_syscall; 538 539 regs->tpc -= 4;
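The three sparc signal hunks all add a `/* fall through */` comment so GCC's `-Wimplicit-fallthrough` recognizes the missing `break` as deliberate: the `ERESTART*` case rewinds the PC and then shares the "reload syscall number" step with `ERESTART_RESTARTBLOCK`. A minimal illustration of the pattern (the function and step counts are illustrative, not kernel code):

```c
/* Deliberate fall-through: the rewind case performs its own step and
 * then continues into the shared step of the next case. */
int restart_steps(int do_rewind)
{
	int steps = 0;

	switch (do_rewind) {
	case 1:
		steps++;	/* rewind pc/npc */
		/* fall through */
	case 0:
		steps++;	/* set %g1 = __NR_restart_syscall */
		break;
	}
	return steps;
}
```

With the comment (or, in later kernels, the `fallthrough;` pseudo-keyword) in place, the compiler stays silent here while still flagging genuinely forgotten `break`s elsewhere.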
+7 -3
arch/x86/Makefile
··· 220 220 221 221 # Avoid indirect branches in kernel to deal with Spectre 222 222 ifdef CONFIG_RETPOLINE 223 - ifeq ($(RETPOLINE_CFLAGS),) 224 - $(error You are building kernel with non-retpoline compiler, please update your compiler.) 225 - endif 226 223 KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) 227 224 endif 228 225 ··· 303 306 ifndef CC_HAVE_ASM_GOTO 304 307 @echo Compiler lacks asm-goto support. 305 308 @exit 1 309 + endif 310 + ifdef CONFIG_RETPOLINE 311 + ifeq ($(RETPOLINE_CFLAGS),) 312 + @echo "You are building kernel with non-retpoline compiler." >&2 313 + @echo "Please update your compiler." >&2 314 + @false 315 + endif 306 316 endif 307 317 308 318 archclean:
+41 -24
arch/x86/boot/compressed/eboot.c
··· 1 + 1 2 /* ----------------------------------------------------------------------- 2 3 * 3 4 * Copyright 2011 Intel Corporation; author Matt Fleming ··· 635 634 return status; 636 635 } 637 636 637 + static efi_status_t allocate_e820(struct boot_params *params, 638 + struct setup_data **e820ext, 639 + u32 *e820ext_size) 640 + { 641 + unsigned long map_size, desc_size, buff_size; 642 + struct efi_boot_memmap boot_map; 643 + efi_memory_desc_t *map; 644 + efi_status_t status; 645 + __u32 nr_desc; 646 + 647 + boot_map.map = &map; 648 + boot_map.map_size = &map_size; 649 + boot_map.desc_size = &desc_size; 650 + boot_map.desc_ver = NULL; 651 + boot_map.key_ptr = NULL; 652 + boot_map.buff_size = &buff_size; 653 + 654 + status = efi_get_memory_map(sys_table, &boot_map); 655 + if (status != EFI_SUCCESS) 656 + return status; 657 + 658 + nr_desc = buff_size / desc_size; 659 + 660 + if (nr_desc > ARRAY_SIZE(params->e820_table)) { 661 + u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table); 662 + 663 + status = alloc_e820ext(nr_e820ext, e820ext, e820ext_size); 664 + if (status != EFI_SUCCESS) 665 + return status; 666 + } 667 + 668 + return EFI_SUCCESS; 669 + } 670 + 638 671 struct exit_boot_struct { 639 672 struct boot_params *boot_params; 640 673 struct efi_info *efi; 641 - struct setup_data *e820ext; 642 - __u32 e820ext_size; 643 674 }; 644 675 645 676 static efi_status_t exit_boot_func(efi_system_table_t *sys_table_arg, 646 677 struct efi_boot_memmap *map, 647 678 void *priv) 648 679 { 649 - static bool first = true; 650 680 const char *signature; 651 681 __u32 nr_desc; 652 682 efi_status_t status; 653 683 struct exit_boot_struct *p = priv; 654 - 655 - if (first) { 656 - nr_desc = *map->buff_size / *map->desc_size; 657 - if (nr_desc > ARRAY_SIZE(p->boot_params->e820_table)) { 658 - u32 nr_e820ext = nr_desc - 659 - ARRAY_SIZE(p->boot_params->e820_table); 660 - 661 - status = alloc_e820ext(nr_e820ext, &p->e820ext, 662 - &p->e820ext_size); 663 - if (status != 
EFI_SUCCESS) 664 - return status; 665 - } 666 - first = false; 667 - } 668 684 669 685 signature = efi_is_64bit() ? EFI64_LOADER_SIGNATURE 670 686 : EFI32_LOADER_SIGNATURE; ··· 705 687 { 706 688 unsigned long map_sz, key, desc_size, buff_size; 707 689 efi_memory_desc_t *mem_map; 708 - struct setup_data *e820ext; 709 - __u32 e820ext_size; 690 + struct setup_data *e820ext = NULL; 691 + __u32 e820ext_size = 0; 710 692 efi_status_t status; 711 693 __u32 desc_version; 712 694 struct efi_boot_memmap map; ··· 720 702 map.buff_size = &buff_size; 721 703 priv.boot_params = boot_params; 722 704 priv.efi = &boot_params->efi_info; 723 - priv.e820ext = NULL; 724 - priv.e820ext_size = 0; 705 + 706 + status = allocate_e820(boot_params, &e820ext, &e820ext_size); 707 + if (status != EFI_SUCCESS) 708 + return status; 725 709 726 710 /* Might as well exit boot services now */ 727 711 status = efi_exit_boot_services(sys_table, handle, &map, &priv, 728 712 exit_boot_func); 729 713 if (status != EFI_SUCCESS) 730 714 return status; 731 - 732 - e820ext = priv.e820ext; 733 - e820ext_size = priv.e820ext_size; 734 715 735 716 /* Historic? */ 736 717 boot_params->alt_mem_k = 32 * 1024;
+4
arch/x86/entry/entry_64.S
··· 566 566 567 567 ret 568 568 END(interrupt_entry) 569 + _ASM_NOKPROBE(interrupt_entry) 569 570 570 571 571 572 /* Interrupt entry/exit. */ ··· 767 766 jmp native_irq_return_iret 768 767 #endif 769 768 END(common_interrupt) 769 + _ASM_NOKPROBE(common_interrupt) 770 770 771 771 /* 772 772 * APIC interrupts. ··· 782 780 call \do_sym /* rdi points to pt_regs */ 783 781 jmp ret_from_intr 784 782 END(\sym) 783 + _ASM_NOKPROBE(\sym) 785 784 .endm 786 785 787 786 /* Make sure APIC interrupt handlers end up in the irqentry section: */ ··· 963 960 964 961 jmp error_exit 965 962 .endif 963 + _ASM_NOKPROBE(\sym) 966 964 END(\sym) 967 965 .endm 968 966
+2 -2
arch/x86/entry/vdso/Makefile
··· 47 47 CPPFLAGS_vdso.lds += -P -C 48 48 49 49 VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 --no-undefined \ 50 - -z max-page-size=4096 -z common-page-size=4096 50 + -z max-page-size=4096 51 51 52 52 $(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE 53 53 $(call if_changed,vdso) ··· 98 98 99 99 CPPFLAGS_vdsox32.lds = $(CPPFLAGS_vdso.lds) 100 100 VDSO_LDFLAGS_vdsox32.lds = -m elf32_x86_64 -soname linux-vdso.so.1 \ 101 - -z max-page-size=4096 -z common-page-size=4096 101 + -z max-page-size=4096 102 102 103 103 # x32-rebranded versions 104 104 vobjx32s-y := $(vobjs-y:.o=-x32.o)
+1
arch/x86/include/asm/bootparam_utils.h
··· 36 36 */ 37 37 if (boot_params->sentinel) { 38 38 /* fields in boot_params are left uninitialized, clear them */ 39 + boot_params->acpi_rsdp_addr = 0; 39 40 memset(&boot_params->ext_ramdisk_image, 0, 40 41 (char *)&boot_params->efi_info - 41 42 (char *)&boot_params->ext_ramdisk_image);
+1 -1
arch/x86/kernel/kprobes/opt.c
··· 189 189 int len = 0, ret; 190 190 191 191 while (len < RELATIVEJUMP_SIZE) { 192 - ret = __copy_instruction(dest + len, src + len, real, &insn); 192 + ret = __copy_instruction(dest + len, src + len, real + len, &insn); 193 193 if (!ret || !can_boost(&insn, src + len)) 194 194 return -EINVAL; 195 195 len += ret;
+1 -1
arch/x86/platform/efi/early_printk.c
··· 183 183 num--; 184 184 } 185 185 186 - if (efi_x >= si->lfb_width) { 186 + if (efi_x + font->width > si->lfb_width) { 187 187 efi_x = 0; 188 188 efi_y += font->height; 189 189 }
+54 -22
block/bfq-iosched.c
··· 638 638 bfqd->queue_weights_tree.rb_node->rb_right) 639 639 #ifdef CONFIG_BFQ_GROUP_IOSCHED 640 640 ) || 641 - (bfqd->num_active_groups > 0 641 + (bfqd->num_groups_with_pending_reqs > 0 642 642 #endif 643 643 ); 644 644 } ··· 802 802 */ 803 803 break; 804 804 } 805 - bfqd->num_active_groups--; 805 + 806 + /* 807 + * The decrement of num_groups_with_pending_reqs is 808 + * not performed immediately upon the deactivation of 809 + * entity, but it is delayed to when it also happens 810 + * that the first leaf descendant bfqq of entity gets 811 + * all its pending requests completed. The following 812 + * instructions perform this delayed decrement, if 813 + * needed. See the comments on 814 + * num_groups_with_pending_reqs for details. 815 + */ 816 + if (entity->in_groups_with_pending_reqs) { 817 + entity->in_groups_with_pending_reqs = false; 818 + bfqd->num_groups_with_pending_reqs--; 819 + } 806 820 } 807 821 } 808 822 ··· 3543 3529 * fact, if there are active groups, then, for condition (i) 3544 3530 * to become false, it is enough that an active group contains 3545 3531 * more active processes or sub-groups than some other active 3546 - * group. We address this issue with the following bi-modal 3547 - * behavior, implemented in the function 3532 + * group. More precisely, for condition (i) to hold because of 3533 + * such a group, it is not even necessary that the group is 3534 + * (still) active: it is sufficient that, even if the group 3535 + * has become inactive, some of its descendant processes still 3536 + * have some request already dispatched but still waiting for 3537 + * completion. In fact, requests have still to be guaranteed 3538 + * their share of the throughput even after being 3539 + * dispatched. 
In this respect, it is easy to show that, if a 3540 + * group frequently becomes inactive while still having 3541 + * in-flight requests, and if, when this happens, the group is 3542 + * not considered in the calculation of whether the scenario 3543 + * is asymmetric, then the group may fail to be guaranteed its 3544 + * fair share of the throughput (basically because idling may 3545 + * not be performed for the descendant processes of the group, 3546 + * but it had to be). We address this issue with the 3547 + * following bi-modal behavior, implemented in the function 3548 3548 * bfq_symmetric_scenario(). 3549 3549 * 3550 - * If there are active groups, then the scenario is tagged as 3550 + * If there are groups with requests waiting for completion 3551 + * (as commented above, some of these groups may even be 3552 + * already inactive), then the scenario is tagged as 3551 3553 * asymmetric, conservatively, without checking any of the 3552 3554 * conditions (i) and (ii). So the device is idled for bfqq. 3553 3555 * This behavior matches also the fact that groups are created 3554 - * exactly if controlling I/O (to preserve bandwidth and 3555 - * latency guarantees) is a primary concern. 3556 + * exactly if controlling I/O is a primary concern (to 3557 + * preserve bandwidth and latency guarantees). 3556 3558 * 3557 - * On the opposite end, if there are no active groups, then 3558 - * only condition (i) is actually controlled, i.e., provided 3559 - * that condition (i) holds, idling is not performed, 3560 - * regardless of whether condition (ii) holds. In other words, 3561 - * only if condition (i) does not hold, then idling is 3562 - * allowed, and the device tends to be prevented from queueing 3563 - * many requests, possibly of several processes. Since there 3564 - * are no active groups, then, to control condition (i) it is 3565 - * enough to check whether all active queues have the same 3566 - * weight. 
3559 + * On the opposite end, if there are no groups with requests 3560 + * waiting for completion, then only condition (i) is actually 3561 + * controlled, i.e., provided that condition (i) holds, idling 3562 + * is not performed, regardless of whether condition (ii) 3563 + * holds. In other words, only if condition (i) does not hold, 3564 + * then idling is allowed, and the device tends to be 3565 + * prevented from queueing many requests, possibly of several 3566 + * processes. Since there are no groups with requests waiting 3567 + * for completion, then, to control condition (i) it is enough 3568 + * to check just whether all the queues with requests waiting 3569 + * for completion also have the same weight. 3567 3570 * 3568 3571 * Not checking condition (ii) evidently exposes bfqq to the 3569 3572 * risk of getting less throughput than its fair share. ··· 3638 3607 * bfqq is weight-raised is checked explicitly here. More 3639 3608 * precisely, the compound condition below takes into account 3640 3609 * also the fact that, even if bfqq is being weight-raised, 3641 - * the scenario is still symmetric if all active queues happen 3642 - * to be weight-raised. Actually, we should be even more 3643 - * precise here, and differentiate between interactive weight 3644 - * raising and soft real-time weight raising. 3610 + * the scenario is still symmetric if all queues with requests 3611 + * waiting for completion happen to be 3612 + * weight-raised. Actually, we should be even more precise 3613 + * here, and differentiate between interactive weight raising 3614 + * and soft real-time weight raising. 
3645 3615 * 3646 3616 * As a side note, it is worth considering that the above 3647 3617 * device-idling countermeasures may however fail in the ··· 5449 5417 bfqd->idle_slice_timer.function = bfq_idle_slice_timer; 5450 5418 5451 5419 bfqd->queue_weights_tree = RB_ROOT; 5452 - bfqd->num_active_groups = 0; 5420 + bfqd->num_groups_with_pending_reqs = 0; 5453 5421 5454 5422 INIT_LIST_HEAD(&bfqd->active_list); 5455 5423 INIT_LIST_HEAD(&bfqd->idle_list);
+49 -2
block/bfq-iosched.h
··· 196 196 197 197 /* flag, set to request a weight, ioprio or ioprio_class change */ 198 198 int prio_changed; 199 + 200 + /* flag, set if the entity is counted in groups_with_pending_reqs */ 201 + bool in_groups_with_pending_reqs; 199 202 }; 200 203 201 204 struct bfq_group; ··· 451 448 * bfq_weights_tree_[add|remove] for further details). 452 449 */ 453 450 struct rb_root queue_weights_tree; 451 + 454 452 /* 455 - * number of groups with requests still waiting for completion 453 + * Number of groups with at least one descendant process that 454 + * has at least one request waiting for completion. Note that 455 + * this accounts for also requests already dispatched, but not 456 + * yet completed. Therefore this number of groups may differ 457 + * (be larger) than the number of active groups, as a group is 458 + * considered active only if its corresponding entity has 459 + * descendant queues with at least one request queued. This 460 + * number is used to decide whether a scenario is symmetric. 461 + * For a detailed explanation see comments on the computation 462 + * of the variable asymmetric_scenario in the function 463 + * bfq_better_to_idle(). 464 + * 465 + * However, it is hard to compute this number exactly, for 466 + * groups with multiple descendant processes. Consider a group 467 + * that is inactive, i.e., that has no descendant process with 468 + * pending I/O inside BFQ queues. Then suppose that 469 + * num_groups_with_pending_reqs is still accounting for this 470 + * group, because the group has descendant processes with some 471 + * I/O request still in flight. num_groups_with_pending_reqs 472 + * should be decremented when the in-flight request of the 473 + * last descendant process is finally completed (assuming that 474 + * nothing else has changed for the group in the meantime, in 475 + * terms of composition of the group and active/inactive state of child 476 + * groups and processes). 
To accomplish this, an additional 477 + * pending-request counter must be added to entities, and must 478 + * be updated correctly. To avoid this additional field and operations, 479 + * we resort to the following tradeoff between simplicity and 480 + * accuracy: for an inactive group that is still counted in 481 + * num_groups_with_pending_reqs, we decrement 482 + * num_groups_with_pending_reqs when the first descendant 483 + * process of the group remains with no request waiting for 484 + * completion. 485 + * 486 + * Even this simpler decrement strategy requires a little 487 + * carefulness: to avoid multiple decrements, we flag a group, 488 + * more precisely an entity representing a group, as still 489 + * counted in num_groups_with_pending_reqs when it becomes 490 + * inactive. Then, when the first descendant queue of the 491 + * entity remains with no request waiting for completion, 492 + * num_groups_with_pending_reqs is decremented, and this flag 493 + * is reset. After this flag is reset for the entity, 494 + * num_groups_with_pending_reqs won't be decremented any 495 + * longer in case a new descendant queue of the entity remains 496 + * with no request waiting for completion. 456 497 */ 457 - unsigned int num_active_groups; 498 + unsigned int num_groups_with_pending_reqs; 458 499 459 500 /* 460 501 * Number of bfq_queues containing requests (including the
+4 -1
block/bfq-wf2q.c
··· 1012 1012 container_of(entity, struct bfq_group, entity); 1013 1013 struct bfq_data *bfqd = bfqg->bfqd; 1014 1014 1015 - bfqd->num_active_groups++; 1015 + if (!entity->in_groups_with_pending_reqs) { 1016 + entity->in_groups_with_pending_reqs = true; 1017 + bfqd->num_groups_with_pending_reqs++; 1018 + } 1016 1019 } 1017 1020 #endif 1018 1021
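The bfq fix pairs the `num_groups_with_pending_reqs` counter with a per-entity flag so the counter moves only on a false-to-true or true-to-false transition; repeated activations or deactivations of the same entity can no longer double-increment or double-decrement it. A stripped-down sketch of that idempotent counter pattern (toy types, not the bfq structures):

```c
#include <stdbool.h>

struct entity {
	bool in_groups_with_pending_reqs;	/* "already counted" flag */
};

unsigned int num_groups_with_pending_reqs;

/* Count the entity at most once, as in bfq_activate_requeue_entity(). */
void group_mark_pending(struct entity *e)
{
	if (!e->in_groups_with_pending_reqs) {
		e->in_groups_with_pending_reqs = true;
		num_groups_with_pending_reqs++;
	}
}

/* Uncount it at most once, as in the delayed decrement above. */
void group_clear_pending(struct entity *e)
{
	if (e->in_groups_with_pending_reqs) {
		e->in_groups_with_pending_reqs = false;
		num_groups_with_pending_reqs--;
	}
}
```

The flag is what lets bfq delay the decrement until the last in-flight request completes without ever decrementing twice for the same group.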
+4 -3
block/blk-mq.c
··· 1764 1764 if (bypass_insert) 1765 1765 return BLK_STS_RESOURCE; 1766 1766 1767 - blk_mq_sched_insert_request(rq, false, run_queue, false); 1767 + blk_mq_request_bypass_insert(rq, run_queue); 1768 1768 return BLK_STS_OK; 1769 1769 } 1770 1770 ··· 1780 1780 1781 1781 ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false); 1782 1782 if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) 1783 - blk_mq_sched_insert_request(rq, false, true, false); 1783 + blk_mq_request_bypass_insert(rq, true); 1784 1784 else if (ret != BLK_STS_OK) 1785 1785 blk_mq_end_request(rq, ret); 1786 1786 ··· 1815 1815 if (ret != BLK_STS_OK) { 1816 1816 if (ret == BLK_STS_RESOURCE || 1817 1817 ret == BLK_STS_DEV_RESOURCE) { 1818 - list_add(&rq->queuelist, list); 1818 + blk_mq_request_bypass_insert(rq, 1819 + list_empty(list)); 1819 1820 break; 1820 1821 } 1821 1822 blk_mq_end_request(rq, ret);
+1 -1
crypto/Kconfig
··· 1812 1812 cipher algorithms. 1813 1813 1814 1814 config CRYPTO_STATS 1815 - bool "Crypto usage statistics for User-space" 1815 + bool 1816 1816 help 1817 1817 This option enables the gathering of crypto stats. 1818 1818 This will collect:
+4 -2
crypto/cbc.c
··· 140 140 spawn = skcipher_instance_ctx(inst); 141 141 err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst), 142 142 CRYPTO_ALG_TYPE_MASK); 143 - crypto_mod_put(alg); 144 143 if (err) 145 - goto err_free_inst; 144 + goto err_put_alg; 146 145 147 146 err = crypto_inst_setname(skcipher_crypto_instance(inst), "cbc", alg); 148 147 if (err) ··· 173 174 err = skcipher_register_instance(tmpl, inst); 174 175 if (err) 175 176 goto err_drop_spawn; 177 + crypto_mod_put(alg); 176 178 177 179 out: 178 180 return err; 179 181 180 182 err_drop_spawn: 181 183 crypto_drop_spawn(spawn); 184 + err_put_alg: 185 + crypto_mod_put(alg); 182 186 err_free_inst: 183 187 kfree(inst); 184 188 goto out;
+4 -2
crypto/cfb.c
··· 286 286 spawn = skcipher_instance_ctx(inst); 287 287 err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst), 288 288 CRYPTO_ALG_TYPE_MASK); 289 - crypto_mod_put(alg); 290 289 if (err) 291 - goto err_free_inst; 290 + goto err_put_alg; 292 291 293 292 err = crypto_inst_setname(skcipher_crypto_instance(inst), "cfb", alg); 294 293 if (err) ··· 316 317 err = skcipher_register_instance(tmpl, inst); 317 318 if (err) 318 319 goto err_drop_spawn; 320 + crypto_mod_put(alg); 319 321 320 322 out: 321 323 return err; 322 324 323 325 err_drop_spawn: 324 326 crypto_drop_spawn(spawn); 327 + err_put_alg: 328 + crypto_mod_put(alg); 325 329 err_free_inst: 326 330 kfree(inst); 327 331 goto out;
+4 -2
crypto/pcbc.c
··· 244 244 spawn = skcipher_instance_ctx(inst); 245 245 err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst), 246 246 CRYPTO_ALG_TYPE_MASK); 247 - crypto_mod_put(alg); 248 247 if (err) 249 - goto err_free_inst; 248 + goto err_put_alg; 250 249 251 250 err = crypto_inst_setname(skcipher_crypto_instance(inst), "pcbc", alg); 252 251 if (err) ··· 274 275 err = skcipher_register_instance(tmpl, inst); 275 276 if (err) 276 277 goto err_drop_spawn; 278 + crypto_mod_put(alg); 277 279 278 280 out: 279 281 return err; 280 282 281 283 err_drop_spawn: 282 284 crypto_drop_spawn(spawn); 285 + err_put_alg: 286 + crypto_mod_put(alg); 283 287 err_free_inst: 284 288 kfree(inst); 285 289 goto out;
+1 -1
drivers/acpi/nfit/core.c
··· 1308 1308 if (nd_desc) { 1309 1309 struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc); 1310 1310 1311 - rc = acpi_nfit_ars_rescan(acpi_desc, 0); 1311 + rc = acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG); 1312 1312 } 1313 1313 device_unlock(dev); 1314 1314 if (rc)
+1
drivers/ata/libata-core.c
··· 4602 4602 { "SSD*INTEL*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4603 4603 { "Samsung*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4604 4604 { "SAMSUNG*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4605 + { "SAMSUNG*MZ7KM*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4605 4606 { "ST[1248][0248]0[FH]*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4606 4607 4607 4608 /*
+1 -1
drivers/clk/mmp/clk.c
··· 183 183 pr_err("CLK %d has invalid pointer %p\n", id, clk); 184 184 return; 185 185 } 186 - if (id > unit->nr_clks) { 186 + if (id >= unit->nr_clks) { 187 187 pr_err("CLK %d is invalid\n", id); 188 188 return; 189 189 }
+2 -2
drivers/clk/mvebu/cp110-system-controller.c
··· 200 200 unsigned int idx = clkspec->args[1]; 201 201 202 202 if (type == CP110_CLK_TYPE_CORE) { 203 - if (idx > CP110_MAX_CORE_CLOCKS) 203 + if (idx >= CP110_MAX_CORE_CLOCKS) 204 204 return ERR_PTR(-EINVAL); 205 205 return clk_data->hws[idx]; 206 206 } else if (type == CP110_CLK_TYPE_GATABLE) { 207 - if (idx > CP110_MAX_GATABLE_CLOCKS) 207 + if (idx >= CP110_MAX_GATABLE_CLOCKS) 208 208 return ERR_PTR(-EINVAL); 209 209 return clk_data->hws[CP110_MAX_CORE_CLOCKS + idx]; 210 210 }
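These clock-provider fixes (and the mmp and zynqmp ones nearby) are all the same off-by-one: for a table of N entries the valid indices are 0..N-1, so the rejection test must be `idx >= N`; the buggy `idx > N` let `idx == N` read one element past the end. A minimal sketch with a hypothetical table:

```c
#include <stddef.h>

#define MAX_CLOCKS 3

static const char *const clk_names[MAX_CLOCKS] = { "core", "nand", "sdio" };

/* Reject any index outside 0..MAX_CLOCKS-1. The buggy form used
 * 'idx > MAX_CLOCKS', which accepted idx == MAX_CLOCKS and read
 * one entry past the end of the table. */
const char *clk_lookup(size_t idx)
{
	if (idx >= MAX_CLOCKS)
		return NULL;
	return clk_names[idx];
}
```

A quick way to spot this class of bug in review: a `>` comparison against an array *length* (rather than a last-valid-index constant) is almost always wrong.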
+18
drivers/clk/qcom/common.c
··· 191 191 } 192 192 EXPORT_SYMBOL_GPL(qcom_cc_register_sleep_clk); 193 193 194 + /* Drop 'protected-clocks' from the list of clocks to register */ 195 + static void qcom_cc_drop_protected(struct device *dev, struct qcom_cc *cc) 196 + { 197 + struct device_node *np = dev->of_node; 198 + struct property *prop; 199 + const __be32 *p; 200 + u32 i; 201 + 202 + of_property_for_each_u32(np, "protected-clocks", prop, p, i) { 203 + if (i >= cc->num_rclks) 204 + continue; 205 + 206 + cc->rclks[i] = NULL; 207 + } 208 + } 209 + 194 210 static struct clk_hw *qcom_cc_clk_hw_get(struct of_phandle_args *clkspec, 195 211 void *data) 196 212 { ··· 266 250 267 251 cc->rclks = rclks; 268 252 cc->num_rclks = num_clks; 253 + 254 + qcom_cc_drop_protected(dev, cc); 269 255 270 256 for (i = 0; i < num_clks; i++) { 271 257 if (!rclks[i])
+4 -1
drivers/clk/zynqmp/clkc.c
··· 128 128 */ 129 129 static inline int zynqmp_is_valid_clock(u32 clk_id) 130 130 { 131 - if (clk_id > clock_max_idx) 131 + if (clk_id >= clock_max_idx) 132 132 return -ENODEV; 133 133 134 134 return clock[clk_id].valid; ··· 279 279 qdata.arg1 = clk_id; 280 280 281 281 ret = eemi_ops->query_data(qdata, ret_payload); 282 + if (ret) 283 + return ERR_PTR(ret); 284 + 282 285 mult = ret_payload[1]; 283 286 div = ret_payload[2]; 284 287
+3 -3
drivers/dma/dw/core.c
··· 1059 1059 /* 1060 1060 * Program FIFO size of channels. 1061 1061 * 1062 - * By default full FIFO (1024 bytes) is assigned to channel 0. Here we 1062 + * By default full FIFO (512 bytes) is assigned to channel 0. Here we 1063 1063 * slice FIFO on equal parts between channels. 1064 1064 */ 1065 1065 static void idma32_fifo_partition(struct dw_dma *dw) 1066 1066 { 1067 - u64 value = IDMA32C_FP_PSIZE_CH0(128) | IDMA32C_FP_PSIZE_CH1(128) | 1067 + u64 value = IDMA32C_FP_PSIZE_CH0(64) | IDMA32C_FP_PSIZE_CH1(64) | 1068 1068 IDMA32C_FP_UPDATE; 1069 1069 u64 fifo_partition = 0; 1070 1070 ··· 1077 1077 /* Fill FIFO_PARTITION high bits (Channels 2..3, 6..7) */ 1078 1078 fifo_partition |= value << 32; 1079 1079 1080 - /* Program FIFO Partition registers - 128 bytes for each channel */ 1080 + /* Program FIFO Partition registers - 64 bytes per channel */ 1081 1081 idma32_writeq(dw, FIFO_PARTITION1, fifo_partition); 1082 1082 idma32_writeq(dw, FIFO_PARTITION0, fifo_partition); 1083 1083 }
+44 -25
drivers/dma/imx-sdma.c
··· 24 24 #include <linux/spinlock.h> 25 25 #include <linux/device.h> 26 26 #include <linux/dma-mapping.h> 27 - #include <linux/dmapool.h> 28 27 #include <linux/firmware.h> 29 28 #include <linux/slab.h> 30 29 #include <linux/platform_device.h> ··· 32 33 #include <linux/of_address.h> 33 34 #include <linux/of_device.h> 34 35 #include <linux/of_dma.h> 36 + #include <linux/workqueue.h> 35 37 36 38 #include <asm/irq.h> 37 39 #include <linux/platform_data/dma-imx-sdma.h> ··· 376 376 u32 shp_addr, per_addr; 377 377 enum dma_status status; 378 378 struct imx_dma_data data; 379 - struct dma_pool *bd_pool; 379 + struct work_struct terminate_worker; 380 380 }; 381 381 382 382 #define IMX_DMA_SG_LOOP BIT(0) ··· 1027 1027 1028 1028 return 0; 1029 1029 } 1030 - 1031 - static int sdma_disable_channel_with_delay(struct dma_chan *chan) 1030 + static void sdma_channel_terminate_work(struct work_struct *work) 1032 1031 { 1033 - struct sdma_channel *sdmac = to_sdma_chan(chan); 1032 + struct sdma_channel *sdmac = container_of(work, struct sdma_channel, 1033 + terminate_worker); 1034 1034 unsigned long flags; 1035 1035 LIST_HEAD(head); 1036 - 1037 - sdma_disable_channel(chan); 1038 - spin_lock_irqsave(&sdmac->vc.lock, flags); 1039 - vchan_get_all_descriptors(&sdmac->vc, &head); 1040 - sdmac->desc = NULL; 1041 - spin_unlock_irqrestore(&sdmac->vc.lock, flags); 1042 - vchan_dma_desc_free_list(&sdmac->vc, &head); 1043 1036 1044 1037 /* 1045 1038 * According to NXP R&D team a delay of one BD SDMA cost time ··· 1040 1047 * bit, to ensure SDMA core has really been stopped after SDMA 1041 1048 * clients call .device_terminate_all. 
1042 1049 */ 1043 - mdelay(1); 1050 + usleep_range(1000, 2000); 1051 + 1052 + spin_lock_irqsave(&sdmac->vc.lock, flags); 1053 + vchan_get_all_descriptors(&sdmac->vc, &head); 1054 + sdmac->desc = NULL; 1055 + spin_unlock_irqrestore(&sdmac->vc.lock, flags); 1056 + vchan_dma_desc_free_list(&sdmac->vc, &head); 1057 + } 1058 + 1059 + static int sdma_disable_channel_async(struct dma_chan *chan) 1060 + { 1061 + struct sdma_channel *sdmac = to_sdma_chan(chan); 1062 + 1063 + sdma_disable_channel(chan); 1064 + 1065 + if (sdmac->desc) 1066 + schedule_work(&sdmac->terminate_worker); 1044 1067 1045 1068 return 0; 1069 + } 1070 + 1071 + static void sdma_channel_synchronize(struct dma_chan *chan) 1072 + { 1073 + struct sdma_channel *sdmac = to_sdma_chan(chan); 1074 + 1075 + vchan_synchronize(&sdmac->vc); 1076 + 1077 + flush_work(&sdmac->terminate_worker); 1046 1078 } 1047 1079 1048 1080 static void sdma_set_watermarklevel_for_p2p(struct sdma_channel *sdmac) ··· 1210 1192 1211 1193 static int sdma_alloc_bd(struct sdma_desc *desc) 1212 1194 { 1195 + u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); 1213 1196 int ret = 0; 1214 1197 1215 - desc->bd = dma_pool_alloc(desc->sdmac->bd_pool, GFP_NOWAIT, 1216 - &desc->bd_phys); 1198 + desc->bd = dma_zalloc_coherent(NULL, bd_size, &desc->bd_phys, 1199 + GFP_NOWAIT); 1217 1200 if (!desc->bd) { 1218 1201 ret = -ENOMEM; 1219 1202 goto out; ··· 1225 1206 1226 1207 static void sdma_free_bd(struct sdma_desc *desc) 1227 1208 { 1228 - dma_pool_free(desc->sdmac->bd_pool, desc->bd, desc->bd_phys); 1209 + u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); 1210 + 1211 + dma_free_coherent(NULL, bd_size, desc->bd, desc->bd_phys); 1229 1212 } 1230 1213 1231 1214 static void sdma_desc_free(struct virt_dma_desc *vd) ··· 1293 1272 if (ret) 1294 1273 goto disable_clk_ahb; 1295 1274 1296 - sdmac->bd_pool = dma_pool_create("bd_pool", chan->device->dev, 1297 - sizeof(struct sdma_buffer_descriptor), 1298 - 32, 0); 1299 - 1300 
1275 return 0; 1301 1276 1302 1277 disable_clk_ahb: ··· 1307 1290 struct sdma_channel *sdmac = to_sdma_chan(chan); 1308 1291 struct sdma_engine *sdma = sdmac->sdma; 1309 1292 1310 - sdma_disable_channel_with_delay(chan); 1293 + sdma_disable_channel_async(chan); 1294 + 1295 + sdma_channel_synchronize(chan); 1311 1296 1312 1297 if (sdmac->event_id0) 1313 1298 sdma_event_disable(sdmac, sdmac->event_id0); ··· 1323 1304 1324 1305 clk_disable(sdma->clk_ipg); 1325 1306 clk_disable(sdma->clk_ahb); 1326 - 1327 - dma_pool_destroy(sdmac->bd_pool); 1328 - sdmac->bd_pool = NULL; 1329 1307 } 1330 1308 1331 1309 static struct sdma_desc *sdma_transfer_init(struct sdma_channel *sdmac, ··· 2015 1999 2016 2000 sdmac->channel = i; 2017 2001 sdmac->vc.desc_free = sdma_desc_free; 2002 + INIT_WORK(&sdmac->terminate_worker, 2003 + sdma_channel_terminate_work); 2018 2004 /* 2019 2005 * Add the channel to the DMAC list. Do not add channel 0 though 2020 2006 * because we need it internally in the SDMA driver. This also means ··· 2068 2050 sdma->dma_device.device_prep_slave_sg = sdma_prep_slave_sg; 2069 2051 sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic; 2070 2052 sdma->dma_device.device_config = sdma_config; 2071 - sdma->dma_device.device_terminate_all = sdma_disable_channel_with_delay; 2053 + sdma->dma_device.device_terminate_all = sdma_disable_channel_async; 2054 + sdma->dma_device.device_synchronize = sdma_channel_synchronize; 2072 2055 sdma->dma_device.src_addr_widths = SDMA_DMA_BUSWIDTHS; 2073 2056 sdma->dma_device.dst_addr_widths = SDMA_DMA_BUSWIDTHS; 2074 2057 sdma->dma_device.directions = SDMA_DMA_DIRECTIONS;
+15 -1
drivers/dma/ti/cppi41.c
··· 723 723 724 724 desc_phys = lower_32_bits(c->desc_phys); 725 725 desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc); 726 - if (!cdd->chan_busy[desc_num]) 726 + if (!cdd->chan_busy[desc_num]) { 727 + struct cppi41_channel *cc, *_ct; 728 + 729 + /* 730 + * channels might still be in the pending list if 731 + * cppi41_dma_issue_pending() is called after 732 + * cppi41_runtime_suspend() is called 733 + */ 734 + list_for_each_entry_safe(cc, _ct, &cdd->pending, node) { 735 + if (cc != c) 736 + continue; 737 + list_del(&cc->node); 738 + break; 739 + } 727 740 return 0; 741 + } 728 742 729 743 ret = cppi41_tear_down_chan(c); 730 744 if (ret)
+3 -3
drivers/gnss/sirf.c
··· 168 168 else 169 169 timeout = SIRF_HIBERNATE_TIMEOUT; 170 170 171 - while (retries-- > 0) { 171 + do { 172 172 sirf_pulse_on_off(data); 173 173 ret = sirf_wait_for_power_state(data, active, timeout); 174 174 if (ret < 0) { ··· 179 179 } 180 180 181 181 break; 182 - } 182 + } while (retries--); 183 183 184 - if (retries == 0) 184 + if (retries < 0) 185 185 return -ETIMEDOUT; 186 186 187 187 return 0;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 233 233 234 234 #define MAX_KIQ_REG_WAIT 5000 /* in usecs, 5ms */ 235 235 #define MAX_KIQ_REG_BAILOUT_INTERVAL 5 /* in msecs, 5ms */ 236 - #define MAX_KIQ_REG_TRY 20 236 + #define MAX_KIQ_REG_TRY 80 /* 20 -> 80 */ 237 237 238 238 int amdgpu_device_ip_set_clockgating_state(void *dev, 239 239 enum amd_ip_block_type block_type,
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
··· 39 39 [AMDGPU_HW_IP_UVD_ENC] = 1, 40 40 [AMDGPU_HW_IP_VCN_DEC] = 1, 41 41 [AMDGPU_HW_IP_VCN_ENC] = 1, 42 + [AMDGPU_HW_IP_VCN_JPEG] = 1, 42 43 }; 43 44 44 45 static int amdgput_ctx_total_num_entities(void)
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 467 467 if (!info->return_size || !info->return_pointer) 468 468 return -EINVAL; 469 469 470 - /* Ensure IB tests are run on ring */ 471 - flush_delayed_work(&adev->late_init_work); 472 - 473 470 switch (info->query) { 474 471 case AMDGPU_INFO_ACCEL_WORKING: 475 472 ui32 = adev->accel_working; ··· 946 949 struct amdgpu_device *adev = dev->dev_private; 947 950 struct amdgpu_fpriv *fpriv; 948 951 int r, pasid; 952 + 953 + /* Ensure IB tests are run on ring */ 954 + flush_delayed_work(&adev->late_init_work); 949 955 950 956 file_priv->driver_priv = NULL; 951 957
+33 -11
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
··· 56 56 MODULE_FIRMWARE("amdgpu/polaris11_mc.bin"); 57 57 MODULE_FIRMWARE("amdgpu/polaris10_mc.bin"); 58 58 MODULE_FIRMWARE("amdgpu/polaris12_mc.bin"); 59 + MODULE_FIRMWARE("amdgpu/polaris11_k_mc.bin"); 60 + MODULE_FIRMWARE("amdgpu/polaris10_k_mc.bin"); 61 + MODULE_FIRMWARE("amdgpu/polaris12_k_mc.bin"); 59 62 60 63 static const u32 golden_settings_tonga_a11[] = 61 64 { ··· 227 224 chip_name = "tonga"; 228 225 break; 229 226 case CHIP_POLARIS11: 230 - chip_name = "polaris11"; 227 + if (((adev->pdev->device == 0x67ef) && 228 + ((adev->pdev->revision == 0xe0) || 229 + (adev->pdev->revision == 0xe5))) || 230 + ((adev->pdev->device == 0x67ff) && 231 + ((adev->pdev->revision == 0xcf) || 232 + (adev->pdev->revision == 0xef) || 233 + (adev->pdev->revision == 0xff)))) 234 + chip_name = "polaris11_k"; 235 + else if ((adev->pdev->device == 0x67ef) && 236 + (adev->pdev->revision == 0xe2)) 237 + chip_name = "polaris11_k"; 238 + else 239 + chip_name = "polaris11"; 231 240 break; 232 241 case CHIP_POLARIS10: 233 - chip_name = "polaris10"; 242 + if ((adev->pdev->device == 0x67df) && 243 + ((adev->pdev->revision == 0xe1) || 244 + (adev->pdev->revision == 0xf7))) 245 + chip_name = "polaris10_k"; 246 + else 247 + chip_name = "polaris10"; 234 248 break; 235 249 case CHIP_POLARIS12: 236 - chip_name = "polaris12"; 250 + if (((adev->pdev->device == 0x6987) && 251 + ((adev->pdev->revision == 0xc0) || 252 + (adev->pdev->revision == 0xc3))) || 253 + ((adev->pdev->device == 0x6981) && 254 + ((adev->pdev->revision == 0x00) || 255 + (adev->pdev->revision == 0x01) || 256 + (adev->pdev->revision == 0x10)))) 257 + chip_name = "polaris12_k"; 258 + else 259 + chip_name = "polaris12"; 237 260 break; 238 261 case CHIP_FIJI: 239 262 case CHIP_CARRIZO: ··· 366 337 const struct mc_firmware_header_v1_0 *hdr; 367 338 const __le32 *fw_data = NULL; 368 339 const __le32 *io_mc_regs = NULL; 369 - u32 data, vbios_version; 340 + u32 data; 370 341 int i, ucode_size, regs_size; 371 342 372 343 /* Skip MC ucode 
loading on SR-IOV capable boards. ··· 375 346 * for this adaptor. 376 347 */ 377 348 if (amdgpu_sriov_bios(adev)) 378 - return 0; 379 - 380 - WREG32(mmMC_SEQ_IO_DEBUG_INDEX, 0x9F); 381 - data = RREG32(mmMC_SEQ_IO_DEBUG_DATA); 382 - vbios_version = data & 0xf; 383 - 384 - if (vbios_version == 0) 385 349 return 0; 386 350 387 351 if (!adev->gmc.fw)
+2 -1
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
··· 48 48 static void vcn_v1_0_set_jpeg_ring_funcs(struct amdgpu_device *adev); 49 49 static void vcn_v1_0_set_irq_funcs(struct amdgpu_device *adev); 50 50 static void vcn_v1_0_jpeg_ring_set_patch_ring(struct amdgpu_ring *ring, uint32_t ptr); 51 + static int vcn_v1_0_set_powergating_state(void *handle, enum amd_powergating_state state); 51 52 52 53 /** 53 54 * vcn_v1_0_early_init - set function pointers ··· 223 222 struct amdgpu_ring *ring = &adev->vcn.ring_dec; 224 223 225 224 if (RREG32_SOC15(VCN, 0, mmUVD_STATUS)) 226 - vcn_v1_0_stop(adev); 225 + vcn_v1_0_set_powergating_state(adev, AMD_PG_STATE_GATE); 227 226 228 227 ring->ready = false; 229 228
+5 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2554 2554 2555 2555 cea_revision = drm_connector->display_info.cea_rev; 2556 2556 2557 - strncpy(audio_info->display_name, 2557 + strscpy(audio_info->display_name, 2558 2558 edid_caps->display_name, 2559 - AUDIO_INFO_DISPLAY_NAME_SIZE_IN_CHARS - 1); 2559 + AUDIO_INFO_DISPLAY_NAME_SIZE_IN_CHARS); 2560 2560 2561 2561 if (cea_revision >= 3) { 2562 2562 audio_info->mode_count = edid_caps->audio_mode_count; ··· 3042 3042 state->underscan_enable = false; 3043 3043 state->underscan_hborder = 0; 3044 3044 state->underscan_vborder = 0; 3045 + state->max_bpc = 8; 3045 3046 3046 3047 __drm_atomic_helper_connector_reset(connector, &state->base); 3047 3048 } ··· 3064 3063 3065 3064 new_state->freesync_capable = state->freesync_capable; 3066 3065 new_state->freesync_enable = state->freesync_enable; 3066 + new_state->max_bpc = state->max_bpc; 3067 3067 3068 3068 return &new_state->base; 3069 3069 } ··· 3652 3650 mode->hdisplay = hdisplay; 3653 3651 mode->vdisplay = vdisplay; 3654 3652 mode->type &= ~DRM_MODE_TYPE_PREFERRED; 3655 - strncpy(mode->name, name, DRM_DISPLAY_MODE_LEN); 3653 + strscpy(mode->name, name, DRM_DISPLAY_MODE_LEN); 3656 3654 3657 3655 return mode; 3658 3656
+2
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 2512 2512 dc, 2513 2513 context->bw.dce.sclk_khz); 2514 2514 2515 + pp_display_cfg->min_dcfclock_khz = pp_display_cfg->min_engine_clock_khz; 2516 + 2515 2517 pp_display_cfg->min_engine_clock_deep_sleep_khz 2516 2518 = context->bw.dce.sclk_deep_sleep_khz; 2517 2519
+3 -1
drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
··· 80 80 PHM_FUNC_CHECK(hwmgr); 81 81 adev = hwmgr->adev; 82 82 83 - if (smum_is_dpm_running(hwmgr) && !amdgpu_passthrough(adev)) { 83 + /* Skip for suspend/resume case */ 84 + if (smum_is_dpm_running(hwmgr) && !amdgpu_passthrough(adev) 85 + && adev->in_suspend) { 84 86 pr_info("dpm has been enabled\n"); 85 87 return 0; 86 88 }
+3
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
··· 352 352 353 353 switch (task_id) { 354 354 case AMD_PP_TASK_DISPLAY_CONFIG_CHANGE: 355 + ret = phm_pre_display_configuration_changed(hwmgr); 356 + if (ret) 357 + return ret; 355 358 ret = phm_set_cpu_power_state(hwmgr); 356 359 if (ret) 357 360 return ret;
-2
drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
··· 265 265 if (skip) 266 266 return 0; 267 267 268 - phm_pre_display_configuration_changed(hwmgr); 269 - 270 268 phm_display_configuration_changed(hwmgr); 271 269 272 270 if (hwmgr->ps)
+8 -4
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 3589 3589 } 3590 3590 3591 3591 if (i >= sclk_table->count) { 3592 - data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_SCLK; 3593 - sclk_table->dpm_levels[i-1].value = sclk; 3592 + if (sclk > sclk_table->dpm_levels[i-1].value) { 3593 + data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_SCLK; 3594 + sclk_table->dpm_levels[i-1].value = sclk; 3595 + } 3594 3596 } else { 3595 3597 /* TODO: Check SCLK in DAL's minimum clocks 3596 3598 * in case DeepSleep divider update is required. ··· 3609 3607 } 3610 3608 3611 3609 if (i >= mclk_table->count) { 3612 - data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_MCLK; 3613 - mclk_table->dpm_levels[i-1].value = mclk; 3610 + if (mclk > mclk_table->dpm_levels[i-1].value) { 3611 + data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_MCLK; 3612 + mclk_table->dpm_levels[i-1].value = mclk; 3613 + } 3614 3614 } 3615 3615 3616 3616 if (data->display_timing.num_existing_displays != hwmgr->display_config->num_display)
+8 -4
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
··· 3266 3266 } 3267 3267 3268 3268 if (i >= sclk_table->count) { 3269 - data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_SCLK; 3270 - sclk_table->dpm_levels[i-1].value = sclk; 3269 + if (sclk > sclk_table->dpm_levels[i-1].value) { 3270 + data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_SCLK; 3271 + sclk_table->dpm_levels[i-1].value = sclk; 3272 + } 3271 3273 } 3272 3274 3273 3275 for (i = 0; i < mclk_table->count; i++) { ··· 3278 3276 } 3279 3277 3280 3278 if (i >= mclk_table->count) { 3281 - data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_MCLK; 3282 - mclk_table->dpm_levels[i-1].value = mclk; 3279 + if (mclk > mclk_table->dpm_levels[i-1].value) { 3280 + data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_MCLK; 3281 + mclk_table->dpm_levels[i-1].value = mclk; 3282 + } 3283 3283 } 3284 3284 3285 3285 if (data->display_timing.num_existing_displays != hwmgr->display_config->num_display)
+32 -22
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
··· 1660 1660 return i; 1661 1661 } 1662 1662 1663 - static int vega20_upload_dpm_min_level(struct pp_hwmgr *hwmgr) 1663 + static int vega20_upload_dpm_min_level(struct pp_hwmgr *hwmgr, uint32_t feature_mask) 1664 1664 { 1665 1665 struct vega20_hwmgr *data = 1666 1666 (struct vega20_hwmgr *)(hwmgr->backend); 1667 1667 uint32_t min_freq; 1668 1668 int ret = 0; 1669 1669 1670 - if (data->smu_features[GNLD_DPM_GFXCLK].enabled) { 1670 + if (data->smu_features[GNLD_DPM_GFXCLK].enabled && 1671 + (feature_mask & FEATURE_DPM_GFXCLK_MASK)) { 1671 1672 min_freq = data->dpm_table.gfx_table.dpm_state.soft_min_level; 1672 1673 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( 1673 1674 hwmgr, PPSMC_MSG_SetSoftMinByFreq, ··· 1677 1676 return ret); 1678 1677 } 1679 1678 1680 - if (data->smu_features[GNLD_DPM_UCLK].enabled) { 1679 + if (data->smu_features[GNLD_DPM_UCLK].enabled && 1680 + (feature_mask & FEATURE_DPM_UCLK_MASK)) { 1681 1681 min_freq = data->dpm_table.mem_table.dpm_state.soft_min_level; 1682 1682 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( 1683 1683 hwmgr, PPSMC_MSG_SetSoftMinByFreq, ··· 1694 1692 return ret); 1695 1693 } 1696 1694 1697 - if (data->smu_features[GNLD_DPM_UVD].enabled) { 1695 + if (data->smu_features[GNLD_DPM_UVD].enabled && 1696 + (feature_mask & FEATURE_DPM_UVD_MASK)) { 1698 1697 min_freq = data->dpm_table.vclk_table.dpm_state.soft_min_level; 1699 1698 1700 1699 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1713 1710 return ret); 1714 1711 } 1715 1712 1716 - if (data->smu_features[GNLD_DPM_VCE].enabled) { 1713 + if (data->smu_features[GNLD_DPM_VCE].enabled && 1714 + (feature_mask & FEATURE_DPM_VCE_MASK)) { 1717 1715 min_freq = data->dpm_table.eclk_table.dpm_state.soft_min_level; 1718 1716 1719 1717 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1724 1720 return ret); 1725 1721 } 1726 1722 1727 - if (data->smu_features[GNLD_DPM_SOCCLK].enabled) { 1723 + if 
(data->smu_features[GNLD_DPM_SOCCLK].enabled && 1724 + (feature_mask & FEATURE_DPM_SOCCLK_MASK)) { 1728 1725 min_freq = data->dpm_table.soc_table.dpm_state.soft_min_level; 1729 1726 1730 1727 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1738 1733 return ret; 1739 1734 } 1740 1735 1741 - static int vega20_upload_dpm_max_level(struct pp_hwmgr *hwmgr) 1736 + static int vega20_upload_dpm_max_level(struct pp_hwmgr *hwmgr, uint32_t feature_mask) 1742 1737 { 1743 1738 struct vega20_hwmgr *data = 1744 1739 (struct vega20_hwmgr *)(hwmgr->backend); 1745 1740 uint32_t max_freq; 1746 1741 int ret = 0; 1747 1742 1748 - if (data->smu_features[GNLD_DPM_GFXCLK].enabled) { 1743 + if (data->smu_features[GNLD_DPM_GFXCLK].enabled && 1744 + (feature_mask & FEATURE_DPM_GFXCLK_MASK)) { 1749 1745 max_freq = data->dpm_table.gfx_table.dpm_state.soft_max_level; 1750 1746 1751 1747 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1756 1750 return ret); 1757 1751 } 1758 1752 1759 - if (data->smu_features[GNLD_DPM_UCLK].enabled) { 1753 + if (data->smu_features[GNLD_DPM_UCLK].enabled && 1754 + (feature_mask & FEATURE_DPM_UCLK_MASK)) { 1760 1755 max_freq = data->dpm_table.mem_table.dpm_state.soft_max_level; 1761 1756 1762 1757 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1767 1760 return ret); 1768 1761 } 1769 1762 1770 - if (data->smu_features[GNLD_DPM_UVD].enabled) { 1763 + if (data->smu_features[GNLD_DPM_UVD].enabled && 1764 + (feature_mask & FEATURE_DPM_UVD_MASK)) { 1771 1765 max_freq = data->dpm_table.vclk_table.dpm_state.soft_max_level; 1772 1766 1773 1767 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1785 1777 return ret); 1786 1778 } 1787 1779 1788 - if (data->smu_features[GNLD_DPM_VCE].enabled) { 1780 + if (data->smu_features[GNLD_DPM_VCE].enabled && 1781 + (feature_mask & FEATURE_DPM_VCE_MASK)) { 1789 1782 max_freq = data->dpm_table.eclk_table.dpm_state.soft_max_level; 1790 1783 1791 1784 
PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 1796 1787 return ret); 1797 1788 } 1798 1789 1799 - if (data->smu_features[GNLD_DPM_SOCCLK].enabled) { 1790 + if (data->smu_features[GNLD_DPM_SOCCLK].enabled && 1791 + (feature_mask & FEATURE_DPM_SOCCLK_MASK)) { 1800 1792 max_freq = data->dpm_table.soc_table.dpm_state.soft_max_level; 1801 1793 1802 1794 PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( ··· 2136 2126 data->dpm_table.mem_table.dpm_state.soft_max_level = 2137 2127 data->dpm_table.mem_table.dpm_levels[soft_level].value; 2138 2128 2139 - ret = vega20_upload_dpm_min_level(hwmgr); 2129 + ret = vega20_upload_dpm_min_level(hwmgr, 0xFFFFFFFF); 2140 2130 PP_ASSERT_WITH_CODE(!ret, 2141 2131 "Failed to upload boot level to highest!", 2142 2132 return ret); 2143 2133 2144 - ret = vega20_upload_dpm_max_level(hwmgr); 2134 + ret = vega20_upload_dpm_max_level(hwmgr, 0xFFFFFFFF); 2145 2135 PP_ASSERT_WITH_CODE(!ret, 2146 2136 "Failed to upload dpm max level to highest!", 2147 2137 return ret); ··· 2168 2158 data->dpm_table.mem_table.dpm_state.soft_max_level = 2169 2159 data->dpm_table.mem_table.dpm_levels[soft_level].value; 2170 2160 2171 - ret = vega20_upload_dpm_min_level(hwmgr); 2161 + ret = vega20_upload_dpm_min_level(hwmgr, 0xFFFFFFFF); 2172 2162 PP_ASSERT_WITH_CODE(!ret, 2173 2163 "Failed to upload boot level to highest!", 2174 2164 return ret); 2175 2165 2176 - ret = vega20_upload_dpm_max_level(hwmgr); 2166 + ret = vega20_upload_dpm_max_level(hwmgr, 0xFFFFFFFF); 2177 2167 PP_ASSERT_WITH_CODE(!ret, 2178 2168 "Failed to upload dpm max level to highest!", 2179 2169 return ret); ··· 2186 2176 { 2187 2177 int ret = 0; 2188 2178 2189 - ret = vega20_upload_dpm_min_level(hwmgr); 2179 + ret = vega20_upload_dpm_min_level(hwmgr, 0xFFFFFFFF); 2190 2180 PP_ASSERT_WITH_CODE(!ret, 2191 2181 "Failed to upload DPM Bootup Levels!", 2192 2182 return ret); 2193 2183 2194 - ret = vega20_upload_dpm_max_level(hwmgr); 2184 + ret = 
vega20_upload_dpm_max_level(hwmgr, 0xFFFFFFFF); 2195 2185 PP_ASSERT_WITH_CODE(!ret, 2196 2186 "Failed to upload DPM Max Levels!", 2197 2187 return ret); ··· 2249 2239 data->dpm_table.gfx_table.dpm_state.soft_max_level = 2250 2240 data->dpm_table.gfx_table.dpm_levels[soft_max_level].value; 2251 2241 2252 - ret = vega20_upload_dpm_min_level(hwmgr); 2242 + ret = vega20_upload_dpm_min_level(hwmgr, FEATURE_DPM_GFXCLK_MASK); 2253 2243 PP_ASSERT_WITH_CODE(!ret, 2254 2244 "Failed to upload boot level to lowest!", 2255 2245 return ret); 2256 2246 2257 - ret = vega20_upload_dpm_max_level(hwmgr); 2247 + ret = vega20_upload_dpm_max_level(hwmgr, FEATURE_DPM_GFXCLK_MASK); 2258 2248 PP_ASSERT_WITH_CODE(!ret, 2259 2249 "Failed to upload dpm max level to highest!", 2260 2250 return ret); ··· 2269 2259 data->dpm_table.mem_table.dpm_state.soft_max_level = 2270 2260 data->dpm_table.mem_table.dpm_levels[soft_max_level].value; 2271 2261 2272 - ret = vega20_upload_dpm_min_level(hwmgr); 2262 + ret = vega20_upload_dpm_min_level(hwmgr, FEATURE_DPM_UCLK_MASK); 2273 2263 PP_ASSERT_WITH_CODE(!ret, 2274 2264 "Failed to upload boot level to lowest!", 2275 2265 return ret); 2276 2266 2277 - ret = vega20_upload_dpm_max_level(hwmgr); 2267 + ret = vega20_upload_dpm_max_level(hwmgr, FEATURE_DPM_UCLK_MASK); 2278 2268 PP_ASSERT_WITH_CODE(!ret, 2279 2269 "Failed to upload dpm max level to highest!", 2280 2270 return ret);
+1
drivers/gpu/drm/ast/ast_fb.c
··· 263 263 { 264 264 struct ast_framebuffer *afb = &afbdev->afb; 265 265 266 + drm_crtc_force_disable_all(dev); 266 267 drm_fb_helper_unregister_fbi(&afbdev->helper); 267 268 268 269 if (afb->obj) {
+1 -1
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 54 54 #define SN_AUX_ADDR_7_0_REG 0x76 55 55 #define SN_AUX_LENGTH_REG 0x77 56 56 #define SN_AUX_CMD_REG 0x78 57 - #define AUX_CMD_SEND BIT(1) 57 + #define AUX_CMD_SEND BIT(0) 58 58 #define AUX_CMD_REQ(x) ((x) << 4) 59 59 #define SN_AUX_RDATA_REG(x) (0x79 + (x)) 60 60 #define SN_SSC_CONFIG_REG 0x93
+1 -1
drivers/gpu/drm/drm_fb_helper.c
··· 71 71 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) 72 72 static bool drm_leak_fbdev_smem = false; 73 73 module_param_unsafe(drm_leak_fbdev_smem, bool, 0600); 74 - MODULE_PARM_DESC(fbdev_emulation, 74 + MODULE_PARM_DESC(drm_leak_fbdev_smem, 75 75 "Allow unsafe leaking fbdev physical smem address [default=false]"); 76 76 #endif 77 77
+2
drivers/gpu/drm/drm_internal.h
··· 104 104 int drm_sysfs_connector_add(struct drm_connector *connector); 105 105 void drm_sysfs_connector_remove(struct drm_connector *connector); 106 106 107 + void drm_sysfs_lease_event(struct drm_device *dev); 108 + 107 109 /* drm_gem.c */ 108 110 int drm_gem_init(struct drm_device *dev); 109 111 void drm_gem_destroy(struct drm_device *dev);
+1 -1
drivers/gpu/drm/drm_lease.c
··· 296 296 297 297 if (master->lessor) { 298 298 /* Tell the master to check the lessee list */ 299 - drm_sysfs_hotplug_event(dev); 299 + drm_sysfs_lease_event(dev); 300 300 drm_master_put(&master->lessor); 301 301 } 302 302
+10
drivers/gpu/drm/drm_sysfs.c
··· 301 301 connector->kdev = NULL; 302 302 } 303 303 304 + void drm_sysfs_lease_event(struct drm_device *dev) 305 + { 306 + char *event_string = "LEASE=1"; 307 + char *envp[] = { event_string, NULL }; 308 + 309 + DRM_DEBUG("generating lease event\n"); 310 + 311 + kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp); 312 + } 313 + 304 314 /** 305 315 * drm_sysfs_hotplug_event - generate a DRM uevent 306 316 * @dev: DRM device
-1
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 1594 1594 NULL); 1595 1595 1596 1596 drm_crtc_helper_add(crtc, &dpu_crtc_helper_funcs); 1597 - plane->crtc = crtc; 1598 1597 1599 1598 /* save user friendly CRTC name for later */ 1600 1599 snprintf(dpu_crtc->name, DPU_CRTC_NAME_SIZE, "crtc%u", crtc->base.id);
-2
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 488 488 489 489 drm_encoder_cleanup(drm_enc); 490 490 mutex_destroy(&dpu_enc->enc_lock); 491 - 492 - kfree(dpu_enc); 493 491 } 494 492 495 493 void dpu_encoder_helper_split_config(
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
··· 216 216 INTERLEAVED_RGB_FMT(XBGR8888, 217 217 COLOR_8BIT, COLOR_8BIT, COLOR_8BIT, COLOR_8BIT, 218 218 C2_R_Cr, C0_G_Y, C1_B_Cb, C3_ALPHA, 4, 219 - true, 4, 0, 219 + false, 4, 0, 220 220 DPU_FETCH_LINEAR, 1), 221 221 222 222 INTERLEAVED_RGB_FMT(RGBA8888,
+3 -1
drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
··· 39 39 #define DSI_PIXEL_PLL_CLK 1 40 40 #define NUM_PROVIDED_CLKS 2 41 41 42 + #define VCO_REF_CLK_RATE 19200000 43 + 42 44 struct dsi_pll_regs { 43 45 u32 pll_prop_gain_rate; 44 46 u32 pll_lockdet_rate; ··· 318 316 parent_rate); 319 317 320 318 pll_10nm->vco_current_rate = rate; 321 - pll_10nm->vco_ref_clk_rate = parent_rate; 319 + pll_10nm->vco_ref_clk_rate = VCO_REF_CLK_RATE; 322 320 323 321 dsi_pll_setup_config(pll_10nm); 324 322
+7 -1
drivers/gpu/drm/msm/hdmi/hdmi.c
··· 332 332 goto fail; 333 333 } 334 334 335 + ret = msm_hdmi_hpd_enable(hdmi->connector); 336 + if (ret < 0) { 337 + DRM_DEV_ERROR(&hdmi->pdev->dev, "failed to enable HPD: %d\n", ret); 338 + goto fail; 339 + } 340 + 335 341 encoder->bridge = hdmi->bridge; 336 342 337 343 priv->bridges[priv->num_bridges++] = hdmi->bridge; ··· 577 571 { 578 572 struct drm_device *drm = dev_get_drvdata(master); 579 573 struct msm_drm_private *priv = drm->dev_private; 580 - static struct hdmi_platform_config *hdmi_cfg; 574 + struct hdmi_platform_config *hdmi_cfg; 581 575 struct hdmi *hdmi; 582 576 struct device_node *of_node = dev->of_node; 583 577 int i, err;
+1
drivers/gpu/drm/msm/hdmi/hdmi.h
··· 245 245 246 246 void msm_hdmi_connector_irq(struct drm_connector *connector); 247 247 struct drm_connector *msm_hdmi_connector_init(struct hdmi *hdmi); 248 + int msm_hdmi_hpd_enable(struct drm_connector *connector); 248 249 249 250 /* 250 251 * i2c adapter for ddc:
+2 -8
drivers/gpu/drm/msm/hdmi/hdmi_connector.c
··· 167 167 } 168 168 } 169 169 170 - static int hpd_enable(struct hdmi_connector *hdmi_connector) 170 + int msm_hdmi_hpd_enable(struct drm_connector *connector) 171 171 { 172 + struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector); 172 173 struct hdmi *hdmi = hdmi_connector->hdmi; 173 174 const struct hdmi_platform_config *config = hdmi->config; 174 175 struct device *dev = &hdmi->pdev->dev; ··· 451 450 { 452 451 struct drm_connector *connector = NULL; 453 452 struct hdmi_connector *hdmi_connector; 454 - int ret; 455 453 456 454 hdmi_connector = kzalloc(sizeof(*hdmi_connector), GFP_KERNEL); 457 455 if (!hdmi_connector) ··· 470 470 471 471 connector->interlace_allowed = 0; 472 472 connector->doublescan_allowed = 0; 473 - 474 - ret = hpd_enable(hdmi_connector); 475 - if (ret) { 476 - dev_err(&hdmi->pdev->dev, "failed to enable HPD: %d\n", ret); 477 - return ERR_PTR(ret); 478 - } 479 473 480 474 drm_connector_attach_encoder(connector, hdmi->encoder); 481 475
+5
drivers/gpu/drm/msm/msm_atomic.c
··· 34 34 if (!new_crtc_state->active) 35 35 continue; 36 36 37 + if (drm_crtc_vblank_get(crtc)) 38 + continue; 39 + 37 40 kms->funcs->wait_for_crtc_commit_done(kms, crtc); 41 + 42 + drm_crtc_vblank_put(crtc); 38 43 } 39 44 } 40 45
+11 -4
drivers/gpu/drm/msm/msm_debugfs.c
··· 84 84 85 85 ret = mutex_lock_interruptible(&dev->struct_mutex); 86 86 if (ret) 87 - return ret; 87 + goto free_priv; 88 88 89 89 pm_runtime_get_sync(&gpu->pdev->dev); 90 90 show_priv->state = gpu->funcs->gpu_state_get(gpu); ··· 94 94 95 95 if (IS_ERR(show_priv->state)) { 96 96 ret = PTR_ERR(show_priv->state); 97 - kfree(show_priv); 98 - return ret; 97 + goto free_priv; 99 98 } 100 99 101 100 show_priv->dev = dev; 102 101 103 - return single_open(file, msm_gpu_show, show_priv); 102 + ret = single_open(file, msm_gpu_show, show_priv); 103 + if (ret) 104 + goto free_priv; 105 + 106 + return 0; 107 + 108 + free_priv: 109 + kfree(show_priv); 110 + return ret; 104 111 } 105 112 106 113 static const struct file_operations msm_gpu_fops = {
+16 -33
drivers/gpu/drm/msm/msm_drv.c
··· 553 553 kthread_run(kthread_worker_fn, 554 554 &priv->disp_thread[i].worker, 555 555 "crtc_commit:%d", priv->disp_thread[i].crtc_id); 556 - ret = sched_setscheduler(priv->disp_thread[i].thread, 557 - SCHED_FIFO, &param); 558 - if (ret) 559 - pr_warn("display thread priority update failed: %d\n", 560 - ret); 561 - 562 556 if (IS_ERR(priv->disp_thread[i].thread)) { 563 557 dev_err(dev, "failed to create crtc_commit kthread\n"); 564 558 priv->disp_thread[i].thread = NULL; 559 + goto err_msm_uninit; 565 560 } 561 + 562 + ret = sched_setscheduler(priv->disp_thread[i].thread, 563 + SCHED_FIFO, &param); 564 + if (ret) 565 + dev_warn(dev, "disp_thread set priority failed: %d\n", 566 + ret); 566 567 567 568 /* initialize event thread */ 568 569 priv->event_thread[i].crtc_id = priv->crtcs[i]->base.id; ··· 573 572 kthread_run(kthread_worker_fn, 574 573 &priv->event_thread[i].worker, 575 574 "crtc_event:%d", priv->event_thread[i].crtc_id); 575 + if (IS_ERR(priv->event_thread[i].thread)) { 576 + dev_err(dev, "failed to create crtc_event kthread\n"); 577 + priv->event_thread[i].thread = NULL; 578 + goto err_msm_uninit; 579 + } 580 + 576 581 /** 577 582 * event thread should also run at same priority as disp_thread 578 583 * because it is handling frame_done events. A lower priority ··· 587 580 * failure at crtc commit level. 
588 581 */ 589 582 ret = sched_setscheduler(priv->event_thread[i].thread, 590 - SCHED_FIFO, &param); 583 + SCHED_FIFO, &param); 591 584 if (ret) 592 - pr_warn("display event thread priority update failed: %d\n", 593 - ret); 594 - 595 - if (IS_ERR(priv->event_thread[i].thread)) { 596 - dev_err(dev, "failed to create crtc_event kthread\n"); 597 - priv->event_thread[i].thread = NULL; 598 - } 599 - 600 - if ((!priv->disp_thread[i].thread) || 601 - !priv->event_thread[i].thread) { 602 - /* clean up previously created threads if any */ 603 - for ( ; i >= 0; i--) { 604 - if (priv->disp_thread[i].thread) { 605 - kthread_stop( 606 - priv->disp_thread[i].thread); 607 - priv->disp_thread[i].thread = NULL; 608 - } 609 - 610 - if (priv->event_thread[i].thread) { 611 - kthread_stop( 612 - priv->event_thread[i].thread); 613 - priv->event_thread[i].thread = NULL; 614 - } 615 - } 616 - goto err_msm_uninit; 617 - } 585 + dev_warn(dev, "event_thread set priority failed:%d\n", 586 + ret); 618 587 } 619 588 620 589 ret = drm_vblank_init(ddev, priv->num_crtcs);
+11 -7
drivers/gpu/drm/msm/msm_gem_submit.c
··· 317 317 uint32_t *ptr; 318 318 int ret = 0; 319 319 320 + if (!nr_relocs) 321 + return 0; 322 + 320 323 if (offset % 4) { 321 324 DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset); 322 325 return -EINVAL; ··· 413 410 struct msm_file_private *ctx = file->driver_priv; 414 411 struct msm_gem_submit *submit; 415 412 struct msm_gpu *gpu = priv->gpu; 416 - struct dma_fence *in_fence = NULL; 417 413 struct sync_file *sync_file = NULL; 418 414 struct msm_gpu_submitqueue *queue; 419 415 struct msm_ringbuffer *ring; ··· 445 443 ring = gpu->rb[queue->prio]; 446 444 447 445 if (args->flags & MSM_SUBMIT_FENCE_FD_IN) { 446 + struct dma_fence *in_fence; 447 + 448 448 in_fence = sync_file_get_fence(args->fence_fd); 449 449 450 450 if (!in_fence) ··· 456 452 * Wait if the fence is from a foreign context, or if the fence 457 453 * array contains any fence from a foreign context. 458 454 */ 459 - if (!dma_fence_match_context(in_fence, ring->fctx->context)) { 455 + ret = 0; 456 + if (!dma_fence_match_context(in_fence, ring->fctx->context)) 460 457 ret = dma_fence_wait(in_fence, true); 461 - if (ret) 462 - return ret; 463 - } 458 + 459 + dma_fence_put(in_fence); 460 + if (ret) 461 + return ret; 464 462 } 465 463 466 464 ret = mutex_lock_interruptible(&dev->struct_mutex); ··· 588 582 } 589 583 590 584 out: 591 - if (in_fence) 592 - dma_fence_put(in_fence); 593 585 submit_cleanup(submit); 594 586 if (ret) 595 587 msm_gem_submit_free(submit);
+8 -5
drivers/gpu/drm/msm/msm_gpu.c
··· 345 345 { 346 346 struct msm_gpu_state *state; 347 347 348 + /* Check if the target supports capturing crash state */ 349 + if (!gpu->funcs->gpu_state_get) 350 + return; 351 + 348 352 /* Only save one crash state at a time */ 349 353 if (gpu->crashstate) 350 354 return; ··· 438 434 if (submit) { 439 435 struct task_struct *task; 440 436 441 - rcu_read_lock(); 442 - task = pid_task(submit->pid, PIDTYPE_PID); 437 + task = get_pid_task(submit->pid, PIDTYPE_PID); 443 438 if (task) { 444 - comm = kstrdup(task->comm, GFP_ATOMIC); 439 + comm = kstrdup(task->comm, GFP_KERNEL); 445 440 446 441 /* 447 442 * So slightly annoying, in other paths like ··· 453 450 * about the submit going away. 454 451 */ 455 452 mutex_unlock(&dev->struct_mutex); 456 - cmd = kstrdup_quotable_cmdline(task, GFP_ATOMIC); 453 + cmd = kstrdup_quotable_cmdline(task, GFP_KERNEL); 454 + put_task_struct(task); 457 455 mutex_lock(&dev->struct_mutex); 458 456 } 459 - rcu_read_unlock(); 460 457 461 458 if (comm && cmd) { 462 459 dev_err(dev->dev, "%s: offending task: %s (%s)\n",
+1 -1
drivers/gpu/drm/msm/msm_iommu.c
··· 66 66 // pm_runtime_get_sync(mmu->dev); 67 67 ret = iommu_map_sg(iommu->domain, iova, sgt->sgl, sgt->nents, prot); 68 68 // pm_runtime_put_sync(mmu->dev); 69 - WARN_ON(ret < 0); 69 + WARN_ON(!ret); 70 70 71 71 return (ret == len) ? 0 : -EINVAL; 72 72 }
+4 -1
drivers/gpu/drm/msm/msm_rd.c
··· 316 316 uint64_t iova, uint32_t size) 317 317 { 318 318 struct msm_gem_object *obj = submit->bos[idx].obj; 319 + unsigned offset = 0; 319 320 const char *buf; 320 321 321 322 if (iova) { 322 - buf += iova - submit->bos[idx].iova; 323 + offset = iova - submit->bos[idx].iova; 323 324 } else { 324 325 iova = submit->bos[idx].iova; 325 326 size = obj->base.size; ··· 340 339 buf = msm_gem_get_vaddr_active(&obj->base); 341 340 if (IS_ERR(buf)) 342 341 return; 342 + 343 + buf += offset; 343 344 344 345 rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size); 345 346
+1
drivers/gpu/drm/omapdrm/displays/panel-dpi.c
··· 177 177 dssdev->type = OMAP_DISPLAY_TYPE_DPI; 178 178 dssdev->owner = THIS_MODULE; 179 179 dssdev->of_ports = BIT(0); 180 + drm_bus_flags_from_videomode(&ddata->vm, &dssdev->bus_flags); 180 181 181 182 omapdss_display_init(dssdev); 182 183 omapdss_device_register(dssdev);
+10 -10
drivers/gpu/drm/omapdrm/dss/dsi.c
··· 5418 5418 dsi->num_lanes_supported = 3; 5419 5419 } 5420 5420 5421 + r = of_platform_populate(dev->of_node, NULL, NULL, dev); 5422 + if (r) { 5423 + DSSERR("Failed to populate DSI child devices: %d\n", r); 5424 + goto err_pm_disable; 5425 + } 5426 + 5421 5427 r = dsi_init_output(dsi); 5422 5428 if (r) 5423 - goto err_pm_disable; 5429 + goto err_of_depopulate; 5424 5430 5425 5431 r = dsi_probe_of(dsi); 5426 5432 if (r) { ··· 5434 5428 goto err_uninit_output; 5435 5429 } 5436 5430 5437 - r = of_platform_populate(dev->of_node, NULL, NULL, dev); 5438 - if (r) { 5439 - DSSERR("Failed to populate DSI child devices: %d\n", r); 5440 - goto err_uninit_output; 5441 - } 5442 - 5443 5431 r = component_add(&pdev->dev, &dsi_component_ops); 5444 5432 if (r) 5445 - goto err_of_depopulate; 5433 + goto err_uninit_output; 5446 5434 5447 5435 return 0; 5448 5436 5449 - err_of_depopulate: 5450 - of_platform_depopulate(dev); 5451 5437 err_uninit_output: 5452 5438 dsi_uninit_output(dsi); 5439 + err_of_depopulate: 5440 + of_platform_depopulate(dev); 5453 5441 err_pm_disable: 5454 5442 pm_runtime_disable(dev); 5455 5443 return r;
+1 -1
drivers/gpu/drm/omapdrm/dss/omapdss.h
··· 432 432 const struct omap_dss_driver *driver; 433 433 const struct omap_dss_device_ops *ops; 434 434 unsigned long ops_flags; 435 - unsigned long bus_flags; 435 + u32 bus_flags; 436 436 437 437 /* helper variable for driver suspend/resume */ 438 438 bool activate_after_resume;
+33 -25
drivers/gpu/drm/omapdrm/omap_encoder.c
··· 52 52 .destroy = omap_encoder_destroy, 53 53 }; 54 54 55 + static void omap_encoder_hdmi_mode_set(struct drm_encoder *encoder, 56 + struct drm_display_mode *adjusted_mode) 57 + { 58 + struct drm_device *dev = encoder->dev; 59 + struct omap_encoder *omap_encoder = to_omap_encoder(encoder); 60 + struct omap_dss_device *dssdev = omap_encoder->output; 61 + struct drm_connector *connector; 62 + bool hdmi_mode; 63 + 64 + hdmi_mode = false; 65 + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 66 + if (connector->encoder == encoder) { 67 + hdmi_mode = omap_connector_get_hdmi_mode(connector); 68 + break; 69 + } 70 + } 71 + 72 + if (dssdev->ops->hdmi.set_hdmi_mode) 73 + dssdev->ops->hdmi.set_hdmi_mode(dssdev, hdmi_mode); 74 + 75 + if (hdmi_mode && dssdev->ops->hdmi.set_infoframe) { 76 + struct hdmi_avi_infoframe avi; 77 + int r; 78 + 79 + r = drm_hdmi_avi_infoframe_from_display_mode(&avi, adjusted_mode, 80 + false); 81 + if (r == 0) 82 + dssdev->ops->hdmi.set_infoframe(dssdev, &avi); 83 + } 84 + } 85 + 55 86 static void omap_encoder_mode_set(struct drm_encoder *encoder, 56 87 struct drm_display_mode *mode, 57 88 struct drm_display_mode *adjusted_mode) 58 89 { 59 - struct drm_device *dev = encoder->dev; 60 90 struct omap_encoder *omap_encoder = to_omap_encoder(encoder); 61 - struct drm_connector *connector; 62 91 struct omap_dss_device *dssdev; 63 92 struct videomode vm = { 0 }; 64 - bool hdmi_mode; 65 - int r; 66 93 67 94 drm_display_mode_to_videomode(adjusted_mode, &vm); 68 95 ··· 139 112 } 140 113 141 114 /* Set the HDMI mode and HDMI infoframe if applicable. 
*/ 142 - hdmi_mode = false; 143 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 144 - if (connector->encoder == encoder) { 145 - hdmi_mode = omap_connector_get_hdmi_mode(connector); 146 - break; 147 - } 148 - } 149 - 150 - dssdev = omap_encoder->output; 151 - 152 - if (dssdev->ops->hdmi.set_hdmi_mode) 153 - dssdev->ops->hdmi.set_hdmi_mode(dssdev, hdmi_mode); 154 - 155 - if (hdmi_mode && dssdev->ops->hdmi.set_infoframe) { 156 - struct hdmi_avi_infoframe avi; 157 - 158 - r = drm_hdmi_avi_infoframe_from_display_mode(&avi, adjusted_mode, 159 - false); 160 - if (r == 0) 161 - dssdev->ops->hdmi.set_infoframe(dssdev, &avi); 162 - } 115 + if (omap_encoder->output->output_type == OMAP_DISPLAY_TYPE_HDMI) 116 + omap_encoder_hdmi_mode_set(encoder, adjusted_mode); 163 117 } 164 118 165 119 static void omap_encoder_disable(struct drm_encoder *encoder)
+3 -1
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 492 492 if (!fbo) 493 493 return -ENOMEM; 494 494 495 - ttm_bo_get(bo); 496 495 fbo->base = *bo; 496 + fbo->base.mem.placement |= TTM_PL_FLAG_NO_EVICT; 497 + 498 + ttm_bo_get(bo); 497 499 fbo->bo = bo; 498 500 499 501 /**
+1 -1
drivers/hid/hid-hyperv.c
··· 309 309 hid_input_report(input_dev->hid_device, HID_INPUT_REPORT, 310 310 input_dev->input_buf, len, 1); 311 311 312 - pm_wakeup_event(&input_dev->device->device, 0); 312 + pm_wakeup_hard_event(&input_dev->device->device); 313 313 314 314 break; 315 315 default:
+126 -63
drivers/hv/channel_mgmt.c
··· 435 435 } 436 436 } 437 437 438 - /* 439 - * vmbus_process_offer - Process the offer by creating a channel/device 440 - * associated with this offer 441 - */ 442 - static void vmbus_process_offer(struct vmbus_channel *newchannel) 438 + /* Note: the function can run concurrently for primary/sub channels. */ 439 + static void vmbus_add_channel_work(struct work_struct *work) 443 440 { 444 - struct vmbus_channel *channel; 445 - bool fnew = true; 441 + struct vmbus_channel *newchannel = 442 + container_of(work, struct vmbus_channel, add_channel_work); 443 + struct vmbus_channel *primary_channel = newchannel->primary_channel; 446 444 unsigned long flags; 447 445 u16 dev_type; 448 446 int ret; 449 - 450 - /* Make sure this is a new offer */ 451 - mutex_lock(&vmbus_connection.channel_mutex); 452 - 453 - /* 454 - * Now that we have acquired the channel_mutex, 455 - * we can release the potentially racing rescind thread. 456 - */ 457 - atomic_dec(&vmbus_connection.offer_in_progress); 458 - 459 - list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) { 460 - if (!uuid_le_cmp(channel->offermsg.offer.if_type, 461 - newchannel->offermsg.offer.if_type) && 462 - !uuid_le_cmp(channel->offermsg.offer.if_instance, 463 - newchannel->offermsg.offer.if_instance)) { 464 - fnew = false; 465 - break; 466 - } 467 - } 468 - 469 - if (fnew) 470 - list_add_tail(&newchannel->listentry, 471 - &vmbus_connection.chn_list); 472 - 473 - mutex_unlock(&vmbus_connection.channel_mutex); 474 - 475 - if (!fnew) { 476 - /* 477 - * Check to see if this is a sub-channel. 478 - */ 479 - if (newchannel->offermsg.offer.sub_channel_index != 0) { 480 - /* 481 - * Process the sub-channel. 
482 - */ 483 - newchannel->primary_channel = channel; 484 - spin_lock_irqsave(&channel->lock, flags); 485 - list_add_tail(&newchannel->sc_list, &channel->sc_list); 486 - channel->num_sc++; 487 - spin_unlock_irqrestore(&channel->lock, flags); 488 - } else { 489 - goto err_free_chan; 490 - } 491 - } 492 447 493 448 dev_type = hv_get_dev_type(newchannel); 494 449 ··· 462 507 /* 463 508 * This state is used to indicate a successful open 464 509 * so that when we do close the channel normally, we 465 - * can cleanup properly 510 + * can cleanup properly. 466 511 */ 467 512 newchannel->state = CHANNEL_OPEN_STATE; 468 513 469 - if (!fnew) { 470 - struct hv_device *dev 471 - = newchannel->primary_channel->device_obj; 514 + if (primary_channel != NULL) { 515 + /* newchannel is a sub-channel. */ 516 + struct hv_device *dev = primary_channel->device_obj; 472 517 473 518 if (vmbus_add_channel_kobj(dev, newchannel)) 474 - goto err_free_chan; 519 + goto err_deq_chan; 475 520 476 - if (channel->sc_creation_callback != NULL) 477 - channel->sc_creation_callback(newchannel); 521 + if (primary_channel->sc_creation_callback != NULL) 522 + primary_channel->sc_creation_callback(newchannel); 523 + 478 524 newchannel->probe_done = true; 479 525 return; 480 526 } 481 527 482 528 /* 483 - * Start the process of binding this offer to the driver 484 - * We need to set the DeviceObject field before calling 485 - * vmbus_child_dev_add() 529 + * Start the process of binding the primary channel to the driver 486 530 */ 487 531 newchannel->device_obj = vmbus_device_create( 488 532 &newchannel->offermsg.offer.if_type, ··· 510 556 511 557 err_deq_chan: 512 558 mutex_lock(&vmbus_connection.channel_mutex); 513 - list_del(&newchannel->listentry); 559 + 560 + /* 561 + * We need to set the flag, otherwise 562 + * vmbus_onoffer_rescind() can be blocked. 
563 + */ 564 + newchannel->probe_done = true; 565 + 566 + if (primary_channel == NULL) { 567 + list_del(&newchannel->listentry); 568 + } else { 569 + spin_lock_irqsave(&primary_channel->lock, flags); 570 + list_del(&newchannel->sc_list); 571 + spin_unlock_irqrestore(&primary_channel->lock, flags); 572 + } 573 + 514 574 mutex_unlock(&vmbus_connection.channel_mutex); 515 575 516 576 if (newchannel->target_cpu != get_cpu()) { 517 577 put_cpu(); 518 578 smp_call_function_single(newchannel->target_cpu, 519 - percpu_channel_deq, newchannel, true); 579 + percpu_channel_deq, 580 + newchannel, true); 520 581 } else { 521 582 percpu_channel_deq(newchannel); 522 583 put_cpu(); ··· 539 570 540 571 vmbus_release_relid(newchannel->offermsg.child_relid); 541 572 542 - err_free_chan: 543 573 free_channel(newchannel); 574 + } 575 + 576 + /* 577 + * vmbus_process_offer - Process the offer by creating a channel/device 578 + * associated with this offer 579 + */ 580 + static void vmbus_process_offer(struct vmbus_channel *newchannel) 581 + { 582 + struct vmbus_channel *channel; 583 + struct workqueue_struct *wq; 584 + unsigned long flags; 585 + bool fnew = true; 586 + 587 + mutex_lock(&vmbus_connection.channel_mutex); 588 + 589 + /* 590 + * Now that we have acquired the channel_mutex, 591 + * we can release the potentially racing rescind thread. 592 + */ 593 + atomic_dec(&vmbus_connection.offer_in_progress); 594 + 595 + list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) { 596 + if (!uuid_le_cmp(channel->offermsg.offer.if_type, 597 + newchannel->offermsg.offer.if_type) && 598 + !uuid_le_cmp(channel->offermsg.offer.if_instance, 599 + newchannel->offermsg.offer.if_instance)) { 600 + fnew = false; 601 + break; 602 + } 603 + } 604 + 605 + if (fnew) 606 + list_add_tail(&newchannel->listentry, 607 + &vmbus_connection.chn_list); 608 + else { 609 + /* 610 + * Check to see if this is a valid sub-channel. 
611 + */ 612 + if (newchannel->offermsg.offer.sub_channel_index == 0) { 613 + mutex_unlock(&vmbus_connection.channel_mutex); 614 + /* 615 + * Don't call free_channel(), because newchannel->kobj 616 + * is not initialized yet. 617 + */ 618 + kfree(newchannel); 619 + WARN_ON_ONCE(1); 620 + return; 621 + } 622 + /* 623 + * Process the sub-channel. 624 + */ 625 + newchannel->primary_channel = channel; 626 + spin_lock_irqsave(&channel->lock, flags); 627 + list_add_tail(&newchannel->sc_list, &channel->sc_list); 628 + spin_unlock_irqrestore(&channel->lock, flags); 629 + } 630 + 631 + mutex_unlock(&vmbus_connection.channel_mutex); 632 + 633 + /* 634 + * vmbus_process_offer() mustn't call channel->sc_creation_callback() 635 + * directly for sub-channels, because sc_creation_callback() -> 636 + * vmbus_open() may never get the host's response to the 637 + * OPEN_CHANNEL message (the host may rescind a channel at any time, 638 + * e.g. in the case of hot removing a NIC), and vmbus_onoffer_rescind() 639 + * may not wake up the vmbus_open() as it's blocked due to a non-zero 640 + * vmbus_connection.offer_in_progress, and finally we have a deadlock. 641 + * 642 + * The above is also true for primary channels, if the related device 643 + * drivers use sync probing mode by default. 644 + * 645 + * And, usually the handling of primary channels and sub-channels can 646 + * depend on each other, so we should offload them to different 647 + * workqueues to avoid possible deadlock, e.g. in sync-probing mode, 648 + * NIC1's netvsc_subchan_work() can race with NIC2's netvsc_probe() -> 649 + * rtnl_lock(), and causes deadlock: the former gets the rtnl_lock 650 + * and waits for all the sub-channels to appear, but the latter 651 + * can't get the rtnl_lock and this blocks the handling of 652 + * sub-channels. 653 + */ 654 + INIT_WORK(&newchannel->add_channel_work, vmbus_add_channel_work); 655 + wq = fnew ? 
vmbus_connection.handle_primary_chan_wq : 656 + vmbus_connection.handle_sub_chan_wq; 657 + queue_work(wq, &newchannel->add_channel_work); 544 658 } 545 659 546 660 /* 547 661 * We use this state to statically distribute the channel interrupt load. 548 662 */ 549 663 static int next_numa_node_id; 664 + /* 665 + * init_vp_index() accesses global variables like next_numa_node_id, and 666 + * it can run concurrently for primary channels and sub-channels: see 667 + * vmbus_process_offer(), so we need the lock to protect the global 668 + * variables. 669 + */ 670 + static DEFINE_SPINLOCK(bind_channel_to_cpu_lock); 550 671 551 672 /* 552 673 * Starting with Win8, we can statically distribute the incoming ··· 671 612 channel->target_vp = hv_cpu_number_to_vp_number(0); 672 613 return; 673 614 } 615 + 616 + spin_lock(&bind_channel_to_cpu_lock); 674 617 675 618 /* 676 619 * Based on the channel affinity policy, we will assign the NUMA ··· 755 694 756 695 channel->target_cpu = cur_cpu; 757 696 channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu); 697 + 698 + spin_unlock(&bind_channel_to_cpu_lock); 758 699 759 700 free_cpumask_var(available_mask); 760 701 }
+21 -3
drivers/hv/connection.c
··· 190 190 goto cleanup; 191 191 } 192 192 193 + vmbus_connection.handle_primary_chan_wq = 194 + create_workqueue("hv_pri_chan"); 195 + if (!vmbus_connection.handle_primary_chan_wq) { 196 + ret = -ENOMEM; 197 + goto cleanup; 198 + } 199 + 200 + vmbus_connection.handle_sub_chan_wq = 201 + create_workqueue("hv_sub_chan"); 202 + if (!vmbus_connection.handle_sub_chan_wq) { 203 + ret = -ENOMEM; 204 + goto cleanup; 205 + } 206 + 193 207 INIT_LIST_HEAD(&vmbus_connection.chn_msg_list); 194 208 spin_lock_init(&vmbus_connection.channelmsg_lock); 195 209 ··· 294 280 */ 295 281 vmbus_initiate_unload(false); 296 282 297 - if (vmbus_connection.work_queue) { 298 - drain_workqueue(vmbus_connection.work_queue); 283 + if (vmbus_connection.handle_sub_chan_wq) 284 + destroy_workqueue(vmbus_connection.handle_sub_chan_wq); 285 + 286 + if (vmbus_connection.handle_primary_chan_wq) 287 + destroy_workqueue(vmbus_connection.handle_primary_chan_wq); 288 + 289 + if (vmbus_connection.work_queue) 299 290 destroy_workqueue(vmbus_connection.work_queue); 300 - } 301 291 302 292 if (vmbus_connection.int_page) { 303 293 free_pages((unsigned long)vmbus_connection.int_page, 0);
+7
drivers/hv/hyperv_vmbus.h
··· 335 335 struct list_head chn_list; 336 336 struct mutex channel_mutex; 337 337 338 + /* 339 + * An offer message is handled first on the work_queue, and then 340 + * is further handled on handle_primary_chan_wq or 341 + * handle_sub_chan_wq. 342 + */ 338 343 struct workqueue_struct *work_queue; 344 + struct workqueue_struct *handle_primary_chan_wq; 345 + struct workqueue_struct *handle_sub_chan_wq; 339 346 }; 340 347 341 348
+29 -11
drivers/i2c/busses/i2c-axxia.c
··· 74 74 MST_STATUS_ND) 75 75 #define MST_STATUS_ERR (MST_STATUS_NAK | \ 76 76 MST_STATUS_AL | \ 77 - MST_STATUS_IP | \ 78 - MST_STATUS_TSS) 77 + MST_STATUS_IP) 79 78 #define MST_TX_BYTES_XFRD 0x50 80 79 #define MST_RX_BYTES_XFRD 0x54 81 80 #define SCL_HIGH_PERIOD 0x80 ··· 240 241 */ 241 242 if (c <= 0 || c > I2C_SMBUS_BLOCK_MAX) { 242 243 idev->msg_err = -EPROTO; 243 - i2c_int_disable(idev, ~0); 244 + i2c_int_disable(idev, ~MST_STATUS_TSS); 244 245 complete(&idev->msg_complete); 245 246 break; 246 247 } ··· 298 299 299 300 if (status & MST_STATUS_SCC) { 300 301 /* Stop completed */ 301 - i2c_int_disable(idev, ~0); 302 + i2c_int_disable(idev, ~MST_STATUS_TSS); 302 303 complete(&idev->msg_complete); 303 304 } else if (status & MST_STATUS_SNS) { 304 305 /* Transfer done */ 305 - i2c_int_disable(idev, ~0); 306 + i2c_int_disable(idev, ~MST_STATUS_TSS); 306 307 if (i2c_m_rd(idev->msg) && idev->msg_xfrd < idev->msg->len) 307 308 axxia_i2c_empty_rx_fifo(idev); 309 + complete(&idev->msg_complete); 310 + } else if (status & MST_STATUS_TSS) { 311 + /* Transfer timeout */ 312 + idev->msg_err = -ETIMEDOUT; 313 + i2c_int_disable(idev, ~MST_STATUS_TSS); 308 314 complete(&idev->msg_complete); 309 315 } else if (unlikely(status & MST_STATUS_ERR)) { 310 316 /* Transfer error */ ··· 343 339 u32 rx_xfer, tx_xfer; 344 340 u32 addr_1, addr_2; 345 341 unsigned long time_left; 342 + unsigned int wt_value; 346 343 347 344 idev->msg = msg; 348 345 idev->msg_xfrd = 0; 349 - idev->msg_err = 0; 350 346 reinit_completion(&idev->msg_complete); 351 347 352 348 if (i2c_m_ten(msg)) { ··· 387 383 else if (axxia_i2c_fill_tx_fifo(idev) != 0) 388 384 int_mask |= MST_STATUS_TFL; 389 385 386 + wt_value = WT_VALUE(readl(idev->base + WAIT_TIMER_CONTROL)); 387 + /* Disable wait timer temporarly */ 388 + writel(wt_value, idev->base + WAIT_TIMER_CONTROL); 389 + /* Check if timeout error happened */ 390 + if (idev->msg_err) 391 + goto out; 392 + 390 393 /* Start manual mode */ 391 394 writel(CMD_MANUAL, 
idev->base + MST_COMMAND); 395 + 396 + writel(WT_EN | wt_value, idev->base + WAIT_TIMER_CONTROL); 392 397 393 398 i2c_int_enable(idev, int_mask); 394 399 ··· 409 396 if (readl(idev->base + MST_COMMAND) & CMD_BUSY) 410 397 dev_warn(idev->dev, "busy after xfer\n"); 411 398 412 - if (time_left == 0) 399 + if (time_left == 0) { 413 400 idev->msg_err = -ETIMEDOUT; 414 - 415 - if (idev->msg_err == -ETIMEDOUT) 416 401 i2c_recover_bus(&idev->adapter); 402 + axxia_i2c_init(idev); 403 + } 417 404 418 - if (unlikely(idev->msg_err) && idev->msg_err != -ENXIO) 405 + out: 406 + if (unlikely(idev->msg_err) && idev->msg_err != -ENXIO && 407 + idev->msg_err != -ETIMEDOUT) 419 408 axxia_i2c_init(idev); 420 409 421 410 return idev->msg_err; ··· 425 410 426 411 static int axxia_i2c_stop(struct axxia_i2c_dev *idev) 427 412 { 428 - u32 int_mask = MST_STATUS_ERR | MST_STATUS_SCC; 413 + u32 int_mask = MST_STATUS_ERR | MST_STATUS_SCC | MST_STATUS_TSS; 429 414 unsigned long time_left; 430 415 431 416 reinit_completion(&idev->msg_complete); ··· 451 436 struct axxia_i2c_dev *idev = i2c_get_adapdata(adap); 452 437 int i; 453 438 int ret = 0; 439 + 440 + idev->msg_err = 0; 441 + i2c_int_enable(idev, MST_STATUS_TSS); 454 442 455 443 for (i = 0; ret == 0 && i < num; ++i) 456 444 ret = axxia_i2c_xfer_msg(idev, &msgs[i]);
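The recurring `i2c_int_disable(idev, ~MST_STATUS_TSS)` change in the axxia hunk relies on the argument being a set of bits to *disable*, so complementing TSS disables everything except the wait-timer (timeout) interrupt. A sketch of that masking (the bit positions are made up for illustration; only the pattern matters):

```c
#include <stdint.h>

#define MST_STATUS_SCC (1u << 0)   /* stop complete */
#define MST_STATUS_SNS (1u << 1)   /* transfer done */
#define MST_STATUS_TSS (1u << 7)   /* wait-timer expired (timeout) */

static uint32_t int_enable = MST_STATUS_SCC | MST_STATUS_SNS | MST_STATUS_TSS;

/* Model of i2c_int_disable(): clear the bits named in mask and return
 * the resulting enable set. */
static uint32_t int_disable(uint32_t mask)
{
    int_enable &= ~mask;
    return int_enable;
}
```

The old `i2c_int_disable(idev, ~0)` cleared every bit, so once a message completed the timeout interrupt was disarmed and a subsequent wait-timer expiry could no longer finish the transfer with `-ETIMEDOUT`.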
+4 -3
drivers/i2c/busses/i2c-nvidia-gpu.c
··· 89 89 90 90 if (time_is_before_jiffies(target)) { 91 91 dev_err(i2cd->dev, "i2c timeout error %x\n", val); 92 - return -ETIME; 92 + return -ETIMEDOUT; 93 93 } 94 94 95 95 val = readl(i2cd->regs + I2C_MST_CNTL); ··· 97 97 case I2C_MST_CNTL_STATUS_OKAY: 98 98 return 0; 99 99 case I2C_MST_CNTL_STATUS_NO_ACK: 100 - return -EIO; 100 + return -ENXIO; 101 101 case I2C_MST_CNTL_STATUS_TIMEOUT: 102 - return -ETIME; 102 + return -ETIMEDOUT; 103 103 default: 104 104 return 0; 105 105 } ··· 218 218 219 219 static const struct i2c_adapter_quirks gpu_i2c_quirks = { 220 220 .max_read_len = 4, 221 + .max_comb_2nd_msg_len = 4, 221 222 .flags = I2C_AQ_COMB_WRITE_THEN_READ, 222 223 }; 223 224
+5 -4
drivers/i2c/busses/i2c-rcar.c
··· 779 779 780 780 pm_runtime_get_sync(dev); 781 781 782 + /* Check bus state before init otherwise bus busy info will be lost */ 783 + ret = rcar_i2c_bus_barrier(priv); 784 + if (ret < 0) 785 + goto out; 786 + 782 787 /* Gen3 needs a reset before allowing RXDMA once */ 783 788 if (priv->devtype == I2C_RCAR_GEN3) { 784 789 priv->flags |= ID_P_NO_RXDMA; ··· 795 790 } 796 791 797 792 rcar_i2c_init(priv); 798 - 799 - ret = rcar_i2c_bus_barrier(priv); 800 - if (ret < 0) 801 - goto out; 802 793 803 794 for (i = 0; i < num; i++) 804 795 rcar_i2c_request_dma(priv, msgs + i);
+7 -3
drivers/i2c/busses/i2c-scmi.c
··· 367 367 { 368 368 struct acpi_smbus_cmi *smbus_cmi; 369 369 const struct acpi_device_id *id; 370 + int ret; 370 371 371 372 smbus_cmi = kzalloc(sizeof(struct acpi_smbus_cmi), GFP_KERNEL); 372 373 if (!smbus_cmi) ··· 389 388 acpi_walk_namespace(ACPI_TYPE_METHOD, smbus_cmi->handle, 1, 390 389 acpi_smbus_cmi_query_methods, NULL, smbus_cmi, NULL); 391 390 392 - if (smbus_cmi->cap_info == 0) 391 + if (smbus_cmi->cap_info == 0) { 392 + ret = -ENODEV; 393 393 goto err; 394 + } 394 395 395 396 snprintf(smbus_cmi->adapter.name, sizeof(smbus_cmi->adapter.name), 396 397 "SMBus CMI adapter %s", ··· 403 400 smbus_cmi->adapter.class = I2C_CLASS_HWMON | I2C_CLASS_SPD; 404 401 smbus_cmi->adapter.dev.parent = &device->dev; 405 402 406 - if (i2c_add_adapter(&smbus_cmi->adapter)) { 403 + ret = i2c_add_adapter(&smbus_cmi->adapter); 404 + if (ret) { 407 405 dev_err(&device->dev, "Couldn't register adapter!\n"); 408 406 goto err; 409 407 } ··· 414 410 err: 415 411 kfree(smbus_cmi); 416 412 device->driver_data = NULL; 417 - return -EIO; 413 + return ret; 418 414 } 419 415 420 416 static int acpi_smbus_cmi_remove(struct acpi_device *device)
+41 -8
drivers/i2c/busses/i2c-uniphier-f.c
··· 173 173 "interrupt: enabled_irqs=%04x, irq_status=%04x\n", 174 174 priv->enabled_irqs, irq_status); 175 175 176 - uniphier_fi2c_clear_irqs(priv, irq_status); 177 - 178 176 if (irq_status & UNIPHIER_FI2C_INT_STOP) 179 177 goto complete; 180 178 ··· 212 214 213 215 if (irq_status & (UNIPHIER_FI2C_INT_RF | UNIPHIER_FI2C_INT_RB)) { 214 216 uniphier_fi2c_drain_rxfifo(priv); 215 - if (!priv->len) 217 + /* 218 + * If the number of bytes to read is multiple of the FIFO size 219 + * (msg->len == 8, 16, 24, ...), the INT_RF bit is set a little 220 + * earlier than INT_RB. We wait for INT_RB to confirm the 221 + * completion of the current message. 222 + */ 223 + if (!priv->len && (irq_status & UNIPHIER_FI2C_INT_RB)) 216 224 goto data_done; 217 225 218 226 if (unlikely(priv->flags & UNIPHIER_FI2C_MANUAL_NACK)) { ··· 257 253 } 258 254 259 255 handled: 256 + /* 257 + * This controller makes a pause while any bit of the IRQ status is 258 + * asserted. Clear the asserted bit to kick the controller just before 259 + * exiting the handler. 260 + */ 261 + uniphier_fi2c_clear_irqs(priv, irq_status); 262 + 260 263 spin_unlock(&priv->lock); 261 264 262 265 return IRQ_HANDLED; 263 266 } 264 267 265 - static void uniphier_fi2c_tx_init(struct uniphier_fi2c_priv *priv, u16 addr) 268 + static void uniphier_fi2c_tx_init(struct uniphier_fi2c_priv *priv, u16 addr, 269 + bool repeat) 266 270 { 267 271 priv->enabled_irqs |= UNIPHIER_FI2C_INT_TE; 268 272 uniphier_fi2c_set_irqs(priv); ··· 280 268 /* set slave address */ 281 269 writel(UNIPHIER_FI2C_DTTX_CMD | addr << 1, 282 270 priv->membase + UNIPHIER_FI2C_DTTX); 283 - /* first chunk of data */ 284 - uniphier_fi2c_fill_txfifo(priv, true); 271 + /* 272 + * First chunk of data. For a repeated START condition, do not write 273 + * data to the TX fifo here to avoid the timing issue. 
274 + */ 275 + if (!repeat) 276 + uniphier_fi2c_fill_txfifo(priv, true); 285 277 } 286 278 287 279 static void uniphier_fi2c_rx_init(struct uniphier_fi2c_priv *priv, u16 addr) ··· 366 350 if (is_read) 367 351 uniphier_fi2c_rx_init(priv, msg->addr); 368 352 else 369 - uniphier_fi2c_tx_init(priv, msg->addr); 353 + uniphier_fi2c_tx_init(priv, msg->addr, repeat); 370 354 371 355 dev_dbg(&adap->dev, "start condition\n"); 372 356 /* ··· 518 502 519 503 uniphier_fi2c_reset(priv); 520 504 505 + /* 506 + * Standard-mode: tLOW + tHIGH = 10 us 507 + * Fast-mode: tLOW + tHIGH = 2.5 us 508 + */ 521 509 writel(cyc, priv->membase + UNIPHIER_FI2C_CYC); 522 - writel(cyc / 2, priv->membase + UNIPHIER_FI2C_LCTL); 510 + /* 511 + * Standard-mode: tLOW = 4.7 us, tHIGH = 4.0 us, tBUF = 4.7 us 512 + * Fast-mode: tLOW = 1.3 us, tHIGH = 0.6 us, tBUF = 1.3 us 513 + * "tLow/tHIGH = 5/4" meets both. 514 + */ 515 + writel(cyc * 5 / 9, priv->membase + UNIPHIER_FI2C_LCTL); 516 + /* 517 + * Standard-mode: tHD;STA = 4.0 us, tSU;STA = 4.7 us, tSU;STO = 4.0 us 518 + * Fast-mode: tHD;STA = 0.6 us, tSU;STA = 0.6 us, tSU;STO = 0.6 us 519 + */ 523 520 writel(cyc / 2, priv->membase + UNIPHIER_FI2C_SSUT); 521 + /* 522 + * Standard-mode: tSU;DAT = 250 ns 523 + * Fast-mode: tSU;DAT = 100 ns 524 + */ 524 525 writel(cyc / 16, priv->membase + UNIPHIER_FI2C_DSUT); 525 526 526 527 uniphier_fi2c_prepare_operation(priv);
+7 -1
drivers/i2c/busses/i2c-uniphier.c
··· 320 320 321 321 uniphier_i2c_reset(priv, true); 322 322 323 - writel((cyc / 2 << 16) | cyc, priv->membase + UNIPHIER_I2C_CLK); 323 + /* 324 + * Bit30-16: clock cycles of tLOW. 325 + * Standard-mode: tLOW = 4.7 us, tHIGH = 4.0 us 326 + * Fast-mode: tLOW = 1.3 us, tHIGH = 0.6 us 327 + * "tLow/tHIGH = 5/4" meets both. 328 + */ 329 + writel((cyc * 5 / 9 << 16) | cyc, priv->membase + UNIPHIER_I2C_CLK); 324 330 325 331 uniphier_i2c_reset(priv, false); 326 332 }
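Both UniPhier changes replace the old `cyc / 2` low-period with `cyc * 5 / 9`, i.e. a tLOW/tHIGH ratio of 5/4. A quick check (minimum timings in nanoseconds, as listed in the comments above) shows why that single ratio satisfies both speed modes while the 50/50 split did not:

```c
/* Returns nonzero if splitting the bit period as tLOW = cyc * 5 / 9
 * meets the given minimums (all values in nanoseconds). */
static int lctl_split_ok(long cyc, long tlow_min, long thigh_min)
{
    long tlow = cyc * 5 / 9;

    return tlow >= tlow_min && (cyc - tlow) >= thigh_min;
}
```

For Fast-mode (2500 ns period) the old split gave tLOW = 1250 ns, below the 1300 ns minimum; the 5/9 split gives 1388 ns while still leaving tHIGH well above 600 ns.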
+2 -13
drivers/ide/ide-proc.c
··· 614 614 return 0; 615 615 } 616 616 617 - static int ide_drivers_open(struct inode *inode, struct file *file) 618 - { 619 - return single_open(file, &ide_drivers_show, NULL); 620 - } 621 - 622 - static const struct file_operations ide_drivers_operations = { 623 - .owner = THIS_MODULE, 624 - .open = ide_drivers_open, 625 - .read = seq_read, 626 - .llseek = seq_lseek, 627 - .release = single_release, 628 - }; 617 + DEFINE_SHOW_ATTRIBUTE(ide_drivers); 629 618 630 619 void proc_ide_create(void) 631 620 { ··· 623 634 if (!proc_ide_root) 624 635 return; 625 636 626 - proc_create("drivers", 0, proc_ide_root, &ide_drivers_operations); 637 + proc_create("drivers", 0, proc_ide_root, &ide_drivers_fops); 627 638 } 628 639 629 640 void proc_ide_destroy(void)
+1
drivers/ide/pmac.c
··· 920 920 struct device_node *root = of_find_node_by_path("/"); 921 921 const char *model = of_get_property(root, "model", NULL); 922 922 923 + of_node_put(root); 923 924 /* Get cable type from device-tree. */ 924 925 if (cable && !strncmp(cable, "80-", 3)) { 925 926 /* Some drives fail to detect 80c cable in PowerBook */
+6 -10
drivers/input/joystick/xpad.c
··· 480 480 }; 481 481 482 482 /* 483 - * This packet is required for some of the PDP pads to start 483 + * This packet is required for most (all?) of the PDP pads to start 484 484 * sending input reports. These pads include: (0x0e6f:0x02ab), 485 - * (0x0e6f:0x02a4). 485 + * (0x0e6f:0x02a4), (0x0e6f:0x02a6). 486 486 */ 487 487 static const u8 xboxone_pdp_init1[] = { 488 488 0x0a, 0x20, 0x00, 0x03, 0x00, 0x01, 0x14 489 489 }; 490 490 491 491 /* 492 - * This packet is required for some of the PDP pads to start 492 + * This packet is required for most (all?) of the PDP pads to start 493 493 * sending input reports. These pads include: (0x0e6f:0x02ab), 494 - * (0x0e6f:0x02a4). 494 + * (0x0e6f:0x02a4), (0x0e6f:0x02a6). 495 495 */ 496 496 static const u8 xboxone_pdp_init2[] = { 497 497 0x06, 0x20, 0x00, 0x02, 0x01, 0x00 ··· 527 527 XBOXONE_INIT_PKT(0x0e6f, 0x0165, xboxone_hori_init), 528 528 XBOXONE_INIT_PKT(0x0f0d, 0x0067, xboxone_hori_init), 529 529 XBOXONE_INIT_PKT(0x0000, 0x0000, xboxone_fw2015_init), 530 - XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init1), 531 - XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init2), 532 - XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init1), 533 - XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init2), 534 - XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init1), 535 - XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init2), 530 + XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init1), 531 + XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init2), 536 532 XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init), 537 533 XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init), 538 534 XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
+1 -1
drivers/input/keyboard/atkbd.c
··· 841 841 if (param[0] != 3) { 842 842 param[0] = 2; 843 843 if (ps2_command(ps2dev, param, ATKBD_CMD_SSCANSET)) 844 - return 2; 844 + return 2; 845 845 } 846 846 847 847 ps2_command(ps2dev, param, ATKBD_CMD_SETALL_MBR);
+2 -1
drivers/input/keyboard/cros_ec_keyb.c
··· 493 493 for (i = 0; i < ARRAY_SIZE(cros_ec_keyb_bs); i++) { 494 494 const struct cros_ec_bs_map *map = &cros_ec_keyb_bs[i]; 495 495 496 - if (buttons & BIT(map->bit)) 496 + if ((map->ev_type == EV_KEY && (buttons & BIT(map->bit))) || 497 + (map->ev_type == EV_SW && (switches & BIT(map->bit)))) 497 498 input_set_capability(idev, map->ev_type, map->code); 498 499 } 499 500
+14 -9
drivers/input/keyboard/matrix_keypad.c
··· 407 407 struct matrix_keypad_platform_data *pdata; 408 408 struct device_node *np = dev->of_node; 409 409 unsigned int *gpios; 410 - int i, nrow, ncol; 410 + int ret, i, nrow, ncol; 411 411 412 412 if (!np) { 413 413 dev_err(dev, "device lacks DT data\n"); ··· 452 452 return ERR_PTR(-ENOMEM); 453 453 } 454 454 455 - for (i = 0; i < pdata->num_row_gpios; i++) 456 - gpios[i] = of_get_named_gpio(np, "row-gpios", i); 455 + for (i = 0; i < nrow; i++) { 456 + ret = of_get_named_gpio(np, "row-gpios", i); 457 + if (ret < 0) 458 + return ERR_PTR(ret); 459 + gpios[i] = ret; 460 + } 457 461 458 - for (i = 0; i < pdata->num_col_gpios; i++) 459 - gpios[pdata->num_row_gpios + i] = 460 - of_get_named_gpio(np, "col-gpios", i); 462 + for (i = 0; i < ncol; i++) { 463 + ret = of_get_named_gpio(np, "col-gpios", i); 464 + if (ret < 0) 465 + return ERR_PTR(ret); 466 + gpios[nrow + i] = ret; 467 + } 461 468 462 469 pdata->row_gpios = gpios; 463 470 pdata->col_gpios = &gpios[pdata->num_row_gpios]; ··· 491 484 pdata = dev_get_platdata(&pdev->dev); 492 485 if (!pdata) { 493 486 pdata = matrix_keypad_parse_dt(&pdev->dev); 494 - if (IS_ERR(pdata)) { 495 - dev_err(&pdev->dev, "no platform data defined\n"); 487 + if (IS_ERR(pdata)) 496 488 return PTR_ERR(pdata); 497 - } 498 489 } else if (!pdata->keymap_data) { 499 490 dev_err(&pdev->dev, "no keymap data defined\n"); 500 491 return -EINVAL;
+14 -4
drivers/input/keyboard/omap4-keypad.c
··· 60 60 61 61 /* OMAP4 values */ 62 62 #define OMAP4_VAL_IRQDISABLE 0x0 63 - #define OMAP4_VAL_DEBOUNCINGTIME 0x7 64 - #define OMAP4_VAL_PVT 0x7 63 + 64 + /* 65 + * Errata i689: If a key is released for a time shorter than debounce time, 66 + * the keyboard will idle and never detect the key release. The workaround 67 + * is to use at least a 12ms debounce time. See omap5432 TRM chapter 68 + * "26.4.6.2 Keyboard Controller Timer" for more information. 69 + */ 70 + #define OMAP4_KEYPAD_PTV_DIV_128 0x6 71 + #define OMAP4_KEYPAD_DEBOUNCINGTIME_MS(dbms, ptv) \ 72 + ((((dbms) * 1000) / ((1 << ((ptv) + 1)) * (1000000 / 32768))) - 1) 73 + #define OMAP4_VAL_DEBOUNCINGTIME_16MS \ 74 + OMAP4_KEYPAD_DEBOUNCINGTIME_MS(16, OMAP4_KEYPAD_PTV_DIV_128) 65 75 66 76 enum { 67 77 KBD_REVISION_OMAP4 = 0, ··· 191 181 192 182 kbd_writel(keypad_data, OMAP4_KBD_CTRL, 193 183 OMAP4_DEF_CTRL_NOSOFTMODE | 194 - (OMAP4_VAL_PVT << OMAP4_DEF_CTRL_PTV_SHIFT)); 184 + (OMAP4_KEYPAD_PTV_DIV_128 << OMAP4_DEF_CTRL_PTV_SHIFT)); 195 185 kbd_writel(keypad_data, OMAP4_KBD_DEBOUNCINGTIME, 196 - OMAP4_VAL_DEBOUNCINGTIME); 186 + OMAP4_VAL_DEBOUNCINGTIME_16MS); 197 187 /* clear pending interrupts */ 198 188 kbd_write_irqreg(keypad_data, OMAP4_KBD_IRQSTATUS, 199 189 kbd_read_irqreg(keypad_data, OMAP4_KBD_IRQSTATUS));
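The new debounce macro can be checked by hand: with PTV = 6 the keypad's 32768 Hz functional clock is divided by 2^7 = 128, so one debounce tick is about 3.9 ms, and a 16 ms request becomes register value 3 (the hardware counts value + 1 ticks, giving 15.625 ms of real debounce, above the 12 ms errata minimum). The macros below are copied verbatim from the hunk:

```c
#define OMAP4_KEYPAD_PTV_DIV_128 0x6
#define OMAP4_KEYPAD_DEBOUNCINGTIME_MS(dbms, ptv) \
    ((((dbms) * 1000) / ((1 << ((ptv) + 1)) * (1000000 / 32768))) - 1)
```

Note the macro approximates the 32768 Hz clock period as 1000000 / 32768 = 30 µs (integer division), which slightly overestimates the register value needed, erring on the safe (longer-debounce) side.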
+3
drivers/input/mouse/elan_i2c_core.c
··· 1348 1348 { "ELAN0618", 0 }, 1349 1349 { "ELAN061C", 0 }, 1350 1350 { "ELAN061D", 0 }, 1351 + { "ELAN061E", 0 }, 1352 + { "ELAN0620", 0 }, 1353 + { "ELAN0621", 0 }, 1351 1354 { "ELAN0622", 0 }, 1352 1355 { "ELAN1000", 0 }, 1353 1356 { }
+2
drivers/input/mouse/synaptics.c
··· 170 170 "LEN0048", /* X1 Carbon 3 */ 171 171 "LEN0046", /* X250 */ 172 172 "LEN004a", /* W541 */ 173 + "LEN005b", /* P50 */ 173 174 "LEN0071", /* T480 */ 174 175 "LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */ 175 176 "LEN0073", /* X1 Carbon G5 (Elantech) */ ··· 178 177 "LEN0096", /* X280 */ 179 178 "LEN0097", /* X280 -> ALPS trackpoint */ 180 179 "LEN200f", /* T450s */ 180 + "SYN3221", /* HP 15-ay000 */ 181 181 NULL 182 182 }; 183 183
+1 -1
drivers/input/serio/hyperv-keyboard.c
··· 177 177 * state because the Enter-UP can trigger a wakeup at once. 178 178 */ 179 179 if (!(info & IS_BREAK)) 180 - pm_wakeup_event(&hv_dev->device, 0); 180 + pm_wakeup_hard_event(&hv_dev->device); 181 181 182 182 break; 183 183
+1 -14
drivers/input/touchscreen/migor_ts.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 1 2 /* 2 3 * Touch Screen driver for Renesas MIGO-R Platform 3 4 * 4 5 * Copyright (c) 2008 Magnus Damm 5 6 * Copyright (c) 2007 Ujjwal Pande <ujjwal@kenati.com>, 6 7 * Kenati Technologies Pvt Ltd. 7 - * 8 - * This file is free software; you can redistribute it and/or 9 - * modify it under the terms of the GNU General Public 10 - * License as published by the Free Software Foundation; either 11 - * version 2 of the License, or (at your option) any later version. 12 - * 13 - * This file is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 16 - * General Public License for more details. 17 - * 18 - * You should have received a copy of the GNU General Public 19 - * License along with this library; if not, write to the Free Software 20 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 21 8 */ 22 9 #include <linux/module.h> 23 10 #include <linux/kernel.h>
+2 -10
drivers/input/touchscreen/st1232.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * ST1232 Touchscreen Controller Driver 3 4 * ··· 8 7 * Using code from: 9 8 * - android.git.kernel.org: projects/kernel/common.git: synaptics_i2c_rmi.c 10 9 * Copyright (C) 2007 Google, Inc. 11 - * 12 - * This software is licensed under the terms of the GNU General Public 13 - * License version 2, as published by the Free Software Foundation, and 14 - * may be copied, distributed, and modified under those terms. 15 - * 16 - * This program is distributed in the hope that it will be useful, 17 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 - * GNU General Public License for more details. 20 10 */ 21 11 22 12 #include <linux/delay.h> ··· 287 295 288 296 MODULE_AUTHOR("Tony SIM <chinyeow.sim.xt@renesas.com>"); 289 297 MODULE_DESCRIPTION("SITRONIX ST1232 Touchscreen Controller Driver"); 290 - MODULE_LICENSE("GPL"); 298 + MODULE_LICENSE("GPL v2");
+48 -58
drivers/media/dvb-frontends/dvb-pll.c
··· 80 80 81 81 static const struct dvb_pll_desc dvb_pll_thomson_dtt7579 = { 82 82 .name = "Thomson dtt7579", 83 - .min = 177000000, 84 - .max = 858000000, 83 + .min = 177 * MHz, 84 + .max = 858 * MHz, 85 85 .iffreq= 36166667, 86 86 .sleepdata = (u8[]){ 2, 0xb4, 0x03 }, 87 87 .count = 4, ··· 102 102 103 103 static const struct dvb_pll_desc dvb_pll_thomson_dtt759x = { 104 104 .name = "Thomson dtt759x", 105 - .min = 177000000, 106 - .max = 896000000, 105 + .min = 177 * MHz, 106 + .max = 896 * MHz, 107 107 .set = thomson_dtt759x_bw, 108 108 .iffreq= 36166667, 109 109 .sleepdata = (u8[]){ 2, 0x84, 0x03 }, ··· 126 126 127 127 static const struct dvb_pll_desc dvb_pll_thomson_dtt7520x = { 128 128 .name = "Thomson dtt7520x", 129 - .min = 185000000, 130 - .max = 900000000, 129 + .min = 185 * MHz, 130 + .max = 900 * MHz, 131 131 .set = thomson_dtt7520x_bw, 132 132 .iffreq = 36166667, 133 133 .count = 7, ··· 144 144 145 145 static const struct dvb_pll_desc dvb_pll_lg_z201 = { 146 146 .name = "LG z201", 147 - .min = 174000000, 148 - .max = 862000000, 147 + .min = 174 * MHz, 148 + .max = 862 * MHz, 149 149 .iffreq= 36166667, 150 150 .sleepdata = (u8[]){ 2, 0xbc, 0x03 }, 151 151 .count = 5, ··· 160 160 161 161 static const struct dvb_pll_desc dvb_pll_unknown_1 = { 162 162 .name = "unknown 1", /* used by dntv live dvb-t */ 163 - .min = 174000000, 164 - .max = 862000000, 163 + .min = 174 * MHz, 164 + .max = 862 * MHz, 165 165 .iffreq= 36166667, 166 166 .count = 9, 167 167 .entries = { ··· 182 182 */ 183 183 static const struct dvb_pll_desc dvb_pll_tua6010xs = { 184 184 .name = "Infineon TUA6010XS", 185 - .min = 44250000, 186 - .max = 858000000, 185 + .min = 44250 * kHz, 186 + .max = 858 * MHz, 187 187 .iffreq= 36125000, 188 188 .count = 3, 189 189 .entries = { ··· 196 196 /* Panasonic env57h1xd5 (some Philips PLL ?) 
*/ 197 197 static const struct dvb_pll_desc dvb_pll_env57h1xd5 = { 198 198 .name = "Panasonic ENV57H1XD5", 199 - .min = 44250000, 200 - .max = 858000000, 199 + .min = 44250 * kHz, 200 + .max = 858 * MHz, 201 201 .iffreq= 36125000, 202 202 .count = 4, 203 203 .entries = { ··· 220 220 221 221 static const struct dvb_pll_desc dvb_pll_tda665x = { 222 222 .name = "Philips TDA6650/TDA6651", 223 - .min = 44250000, 224 - .max = 858000000, 223 + .min = 44250 * kHz, 224 + .max = 858 * MHz, 225 225 .set = tda665x_bw, 226 226 .iffreq= 36166667, 227 227 .initdata = (u8[]){ 4, 0x0b, 0xf5, 0x85, 0xab }, ··· 254 254 255 255 static const struct dvb_pll_desc dvb_pll_tua6034 = { 256 256 .name = "Infineon TUA6034", 257 - .min = 44250000, 258 - .max = 858000000, 257 + .min = 44250 * kHz, 258 + .max = 858 * MHz, 259 259 .iffreq= 36166667, 260 260 .count = 3, 261 261 .set = tua6034_bw, ··· 278 278 279 279 static const struct dvb_pll_desc dvb_pll_tded4 = { 280 280 .name = "ALPS TDED4", 281 - .min = 47000000, 282 - .max = 863000000, 281 + .min = 47 * MHz, 282 + .max = 863 * MHz, 283 283 .iffreq= 36166667, 284 284 .set = tded4_bw, 285 285 .count = 4, ··· 296 296 */ 297 297 static const struct dvb_pll_desc dvb_pll_tdhu2 = { 298 298 .name = "ALPS TDHU2", 299 - .min = 54000000, 300 - .max = 864000000, 299 + .min = 54 * MHz, 300 + .max = 864 * MHz, 301 301 .iffreq= 44000000, 302 302 .count = 4, 303 303 .entries = { ··· 313 313 */ 314 314 static const struct dvb_pll_desc dvb_pll_samsung_tbmv = { 315 315 .name = "Samsung TBMV30111IN / TBMV30712IN1", 316 - .min = 54000000, 317 - .max = 860000000, 316 + .min = 54 * MHz, 317 + .max = 860 * MHz, 318 318 .iffreq= 44000000, 319 319 .count = 6, 320 320 .entries = { ··· 332 332 */ 333 333 static const struct dvb_pll_desc dvb_pll_philips_sd1878_tda8261 = { 334 334 .name = "Philips SD1878", 335 - .min = 950000, 336 - .max = 2150000, 335 + .min = 950 * MHz, 336 + .max = 2150 * MHz, 337 337 .iffreq= 249, /* zero-IF, offset 249 is to round up */ 338 338 
.count = 4, 339 339 .entries = { ··· 398 398 399 399 static const struct dvb_pll_desc dvb_pll_opera1 = { 400 400 .name = "Opera Tuner", 401 - .min = 900000, 402 - .max = 2250000, 401 + .min = 900 * MHz, 402 + .max = 2250 * MHz, 403 403 .initdata = (u8[]){ 4, 0x08, 0xe5, 0xe1, 0x00 }, 404 404 .initdata2 = (u8[]){ 4, 0x08, 0xe5, 0xe5, 0x00 }, 405 405 .iffreq= 0, ··· 445 445 /* unknown pll used in Samsung DTOS403IH102A DVB-C tuner */ 446 446 static const struct dvb_pll_desc dvb_pll_samsung_dtos403ih102a = { 447 447 .name = "Samsung DTOS403IH102A", 448 - .min = 44250000, 449 - .max = 858000000, 448 + .min = 44250 * kHz, 449 + .max = 858 * MHz, 450 450 .iffreq = 36125000, 451 451 .count = 8, 452 452 .set = samsung_dtos403ih102a_set, ··· 465 465 /* Samsung TDTC9251DH0 DVB-T NIM, as used on AirStar 2 */ 466 466 static const struct dvb_pll_desc dvb_pll_samsung_tdtc9251dh0 = { 467 467 .name = "Samsung TDTC9251DH0", 468 - .min = 48000000, 469 - .max = 863000000, 468 + .min = 48 * MHz, 469 + .max = 863 * MHz, 470 470 .iffreq = 36166667, 471 471 .count = 3, 472 472 .entries = { ··· 479 479 /* Samsung TBDU18132 DVB-S NIM with TSA5059 PLL, used in SkyStar2 DVB-S 2.3 */ 480 480 static const struct dvb_pll_desc dvb_pll_samsung_tbdu18132 = { 481 481 .name = "Samsung TBDU18132", 482 - .min = 950000, 483 - .max = 2150000, /* guesses */ 482 + .min = 950 * MHz, 483 + .max = 2150 * MHz, /* guesses */ 484 484 .iffreq = 0, 485 485 .count = 2, 486 486 .entries = { ··· 500 500 /* Samsung TBMU24112 DVB-S NIM with SL1935 zero-IF tuner */ 501 501 static const struct dvb_pll_desc dvb_pll_samsung_tbmu24112 = { 502 502 .name = "Samsung TBMU24112", 503 - .min = 950000, 504 - .max = 2150000, /* guesses */ 503 + .min = 950 * MHz, 504 + .max = 2150 * MHz, /* guesses */ 505 505 .iffreq = 0, 506 506 .count = 2, 507 507 .entries = { ··· 521 521 * 822 - 862 1 * 0 0 1 0 0 0 0x88 */ 522 522 static const struct dvb_pll_desc dvb_pll_alps_tdee4 = { 523 523 .name = "ALPS TDEE4", 524 - .min = 47000000, 525 - 
.max = 862000000, 524 + .min = 47 * MHz, 525 + .max = 862 * MHz, 526 526 .iffreq = 36125000, 527 527 .count = 4, 528 528 .entries = { ··· 537 537 /* CP cur. 50uA, AGC takeover: 103dBuV, PORT3 on */ 538 538 static const struct dvb_pll_desc dvb_pll_tua6034_friio = { 539 539 .name = "Infineon TUA6034 ISDB-T (Friio)", 540 - .min = 90000000, 541 - .max = 770000000, 540 + .min = 90 * MHz, 541 + .max = 770 * MHz, 542 542 .iffreq = 57000000, 543 543 .initdata = (u8[]){ 4, 0x9a, 0x50, 0xb2, 0x08 }, 544 544 .sleepdata = (u8[]){ 4, 0x9a, 0x70, 0xb3, 0x0b }, ··· 553 553 /* Philips TDA6651 ISDB-T, used in Earthsoft PT1 */ 554 554 static const struct dvb_pll_desc dvb_pll_tda665x_earth_pt1 = { 555 555 .name = "Philips TDA6651 ISDB-T (EarthSoft PT1)", 556 - .min = 90000000, 557 - .max = 770000000, 556 + .min = 90 * MHz, 557 + .max = 770 * MHz, 558 558 .iffreq = 57000000, 559 559 .initdata = (u8[]){ 5, 0x0e, 0x7f, 0xc1, 0x80, 0x80 }, 560 560 .count = 10, ··· 609 609 const struct dvb_pll_desc *desc = priv->pll_desc; 610 610 u32 div; 611 611 int i; 612 - 613 - if (frequency && (frequency < desc->min || frequency > desc->max)) 614 - return -EINVAL; 615 612 616 613 for (i = 0; i < desc->count; i++) { 617 614 if (frequency > desc->entries[i].limit) ··· 796 799 struct dvb_pll_priv *priv = NULL; 797 800 int ret; 798 801 const struct dvb_pll_desc *desc; 799 - struct dtv_frontend_properties *c = &fe->dtv_property_cache; 800 802 801 803 b1 = kmalloc(1, GFP_KERNEL); 802 804 if (!b1) ··· 841 845 842 846 strncpy(fe->ops.tuner_ops.info.name, desc->name, 843 847 sizeof(fe->ops.tuner_ops.info.name)); 844 - switch (c->delivery_system) { 845 - case SYS_DVBS: 846 - case SYS_DVBS2: 847 - case SYS_TURBO: 848 - case SYS_ISDBS: 849 - fe->ops.tuner_ops.info.frequency_min_hz = desc->min * kHz; 850 - fe->ops.tuner_ops.info.frequency_max_hz = desc->max * kHz; 851 - break; 852 - default: 853 - fe->ops.tuner_ops.info.frequency_min_hz = desc->min; 854 - fe->ops.tuner_ops.info.frequency_max_hz = desc->max; 855 -
} 848 + 849 + fe->ops.tuner_ops.info.frequency_min_hz = desc->min; 850 + fe->ops.tuner_ops.info.frequency_max_hz = desc->max; 851 + 852 + dprintk("%s tuner, frequency range: %u...%u\n", 853 + desc->name, desc->min, desc->max); 856 854 857 855 if (!desc->initdata) 858 856 fe->ops.tuner_ops.init = NULL;
+3
drivers/media/media-request.c
··· 238 238 .owner = THIS_MODULE, 239 239 .poll = media_request_poll, 240 240 .unlocked_ioctl = media_request_ioctl, 241 + #ifdef CONFIG_COMPAT 242 + .compat_ioctl = media_request_ioctl, 243 + #endif /* CONFIG_COMPAT */ 241 244 .release = media_request_close, 242 245 }; 243 246
+2 -1
drivers/media/platform/vicodec/vicodec-core.c
··· 304 304 for (; p < p_out + sz; p++) { 305 305 u32 copy; 306 306 307 - p = memchr(p, magic[ctx->comp_magic_cnt], sz); 307 + p = memchr(p, magic[ctx->comp_magic_cnt], 308 + p_out + sz - p); 308 309 if (!p) { 309 310 ctx->comp_magic_cnt = 0; 310 311 break;
+6 -5
drivers/media/usb/gspca/gspca.c
··· 426 426 427 427 /* append the packet to the frame buffer */ 428 428 if (len > 0) { 429 - if (gspca_dev->image_len + len > gspca_dev->pixfmt.sizeimage) { 429 + if (gspca_dev->image_len + len > PAGE_ALIGN(gspca_dev->pixfmt.sizeimage)) { 430 430 gspca_err(gspca_dev, "frame overflow %d > %d\n", 431 431 gspca_dev->image_len + len, 432 - gspca_dev->pixfmt.sizeimage); 432 + PAGE_ALIGN(gspca_dev->pixfmt.sizeimage)); 433 433 packet_type = DISCARD_PACKET; 434 434 } else { 435 435 /* !! image is NULL only when last pkt is LAST or DISCARD ··· 1297 1297 unsigned int sizes[], struct device *alloc_devs[]) 1298 1298 { 1299 1299 struct gspca_dev *gspca_dev = vb2_get_drv_priv(vq); 1300 + unsigned int size = PAGE_ALIGN(gspca_dev->pixfmt.sizeimage); 1300 1301 1301 1302 if (*nplanes) 1302 - return sizes[0] < gspca_dev->pixfmt.sizeimage ? -EINVAL : 0; 1303 + return sizes[0] < size ? -EINVAL : 0; 1303 1304 *nplanes = 1; 1304 - sizes[0] = gspca_dev->pixfmt.sizeimage; 1305 + sizes[0] = size; 1305 1306 return 0; 1306 1307 } 1307 1308 1308 1309 static int gspca_buffer_prepare(struct vb2_buffer *vb) 1309 1310 { 1310 1311 struct gspca_dev *gspca_dev = vb2_get_drv_priv(vb->vb2_queue); 1311 - unsigned long size = gspca_dev->pixfmt.sizeimage; 1312 + unsigned long size = PAGE_ALIGN(gspca_dev->pixfmt.sizeimage); 1312 1313 1313 1314 if (vb2_plane_size(vb, 0) < size) { 1314 1315 gspca_err(gspca_dev, "buffer too small (%lu < %lu)\n",
+7 -1
drivers/mfd/cros_ec_dev.c
··· 263 263 #endif 264 264 }; 265 265 266 + static void cros_ec_class_release(struct device *dev) 267 + { 268 + kfree(to_cros_ec_dev(dev)); 269 + } 270 + 266 271 static void cros_ec_sensors_register(struct cros_ec_dev *ec) 267 272 { 268 273 /* ··· 400 395 int retval = -ENOMEM; 401 396 struct device *dev = &pdev->dev; 402 397 struct cros_ec_platform *ec_platform = dev_get_platdata(dev); 403 - struct cros_ec_dev *ec = devm_kzalloc(dev, sizeof(*ec), GFP_KERNEL); 398 + struct cros_ec_dev *ec = kzalloc(sizeof(*ec), GFP_KERNEL); 404 399 405 400 if (!ec) 406 401 return retval; ··· 422 417 ec->class_dev.devt = MKDEV(ec_major, pdev->id); 423 418 ec->class_dev.class = &cros_class; 424 419 ec->class_dev.parent = dev; 420 + ec->class_dev.release = cros_ec_class_release; 425 421 426 422 retval = dev_set_name(&ec->class_dev, "%s", ec_platform->ec_name); 427 423 if (retval) {
+3
drivers/net/bonding/bond_3ad.c
··· 2086 2086 aggregator->aggregator_identifier); 2087 2087 2088 2088 /* Tell the partner that this port is not suitable for aggregation */ 2089 + port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION; 2090 + port->actor_oper_port_state &= ~AD_STATE_COLLECTING; 2091 + port->actor_oper_port_state &= ~AD_STATE_DISTRIBUTING; 2089 2092 port->actor_oper_port_state &= ~AD_STATE_AGGREGATION; 2090 2093 __update_lacpdu_from_port(port); 2091 2094 ad_lacpdu_send(port);
+3 -7
drivers/net/dsa/mv88e6060.c
··· 116 116 /* Reset the switch. */ 117 117 REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL, 118 118 GLOBAL_ATU_CONTROL_SWRESET | 119 - GLOBAL_ATU_CONTROL_ATUSIZE_1024 | 120 - GLOBAL_ATU_CONTROL_ATE_AGE_5MIN); 119 + GLOBAL_ATU_CONTROL_LEARNDIS); 121 120 122 121 /* Wait up to one second for reset to complete. */ 123 122 timeout = jiffies + 1 * HZ; ··· 141 142 */ 142 143 REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, GLOBAL_CONTROL_MAX_FRAME_1536); 143 144 144 - /* Enable automatic address learning, set the address 145 - * database size to 1024 entries, and set the default aging 146 - * time to 5 minutes. 145 + /* Disable automatic address learning. 147 146 */ 148 147 REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL, 149 - GLOBAL_ATU_CONTROL_ATUSIZE_1024 | 150 - GLOBAL_ATU_CONTROL_ATE_AGE_5MIN); 148 + GLOBAL_ATU_CONTROL_LEARNDIS); 151 149 152 150 return 0; 153 151 }
+1 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 674 674 675 675 rx_stat = (0x0000003CU & rxd_wb->status) >> 2; 676 676 677 - is_rx_check_sum_enabled = (rxd_wb->type) & (0x3U << 19); 677 + is_rx_check_sum_enabled = (rxd_wb->type >> 19) & 0x3U; 678 678 679 679 pkt_type = 0xFFU & (rxd_wb->type >> 4); 680 680
+48 -12
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 5162 5162 cp = le16_to_cpu(resp->alloc_cmpl_rings); 5163 5163 stats = le16_to_cpu(resp->alloc_stat_ctx); 5164 5164 cp = min_t(u16, cp, stats); 5165 + hw_resc->resv_irqs = cp; 5165 5166 if (bp->flags & BNXT_FLAG_CHIP_P5) { 5166 5167 int rx = hw_resc->resv_rx_rings; 5167 5168 int tx = hw_resc->resv_tx_rings; ··· 5176 5175 hw_resc->resv_rx_rings = rx; 5177 5176 hw_resc->resv_tx_rings = tx; 5178 5177 } 5179 - cp = le16_to_cpu(resp->alloc_msix); 5178 + hw_resc->resv_irqs = le16_to_cpu(resp->alloc_msix); 5180 5179 hw_resc->resv_hw_ring_grps = rx; 5181 5180 } 5182 5181 hw_resc->resv_cp_rings = cp; ··· 5354 5353 return bnxt_hwrm_reserve_vf_rings(bp, tx, rx, grp, cp, vnic); 5355 5354 } 5356 5355 5357 - static int bnxt_cp_rings_in_use(struct bnxt *bp) 5356 + static int bnxt_nq_rings_in_use(struct bnxt *bp) 5358 5357 { 5359 5358 int cp = bp->cp_nr_rings; 5360 5359 int ulp_msix, ulp_base; ··· 5369 5368 return cp; 5370 5369 } 5371 5370 5371 + static int bnxt_cp_rings_in_use(struct bnxt *bp) 5372 + { 5373 + int cp; 5374 + 5375 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 5376 + return bnxt_nq_rings_in_use(bp); 5377 + 5378 + cp = bp->tx_nr_rings + bp->rx_nr_rings; 5379 + return cp; 5380 + } 5381 + 5372 5382 static bool bnxt_need_reserve_rings(struct bnxt *bp) 5373 5383 { 5374 5384 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 5375 5385 int cp = bnxt_cp_rings_in_use(bp); 5386 + int nq = bnxt_nq_rings_in_use(bp); 5376 5387 int rx = bp->rx_nr_rings; 5377 5388 int vnic = 1, grp = rx; 5378 5389 ··· 5400 5387 rx <<= 1; 5401 5388 if (BNXT_NEW_RM(bp) && 5402 5389 (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || 5403 - hw_resc->resv_vnics != vnic || 5390 + hw_resc->resv_irqs < nq || hw_resc->resv_vnics != vnic || 5404 5391 (hw_resc->resv_hw_ring_grps != grp && 5405 5392 !(bp->flags & BNXT_FLAG_CHIP_P5)))) 5406 5393 return true; ··· 5410 5397 static int __bnxt_reserve_rings(struct bnxt *bp) 5411 5398 { 5412 5399 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 5413 - int cp = 
bnxt_cp_rings_in_use(bp); 5400 + int cp = bnxt_nq_rings_in_use(bp); 5414 5401 int tx = bp->tx_nr_rings; 5415 5402 int rx = bp->rx_nr_rings; 5416 5403 int grp, rx_rings, rc; ··· 5435 5422 tx = hw_resc->resv_tx_rings; 5436 5423 if (BNXT_NEW_RM(bp)) { 5437 5424 rx = hw_resc->resv_rx_rings; 5438 - cp = hw_resc->resv_cp_rings; 5425 + cp = hw_resc->resv_irqs; 5439 5426 grp = hw_resc->resv_hw_ring_grps; 5440 5427 vnic = hw_resc->resv_vnics; 5441 5428 } ··· 6305 6292 return rc; 6306 6293 } 6307 6294 6295 + static int bnxt_hwrm_queue_qportcfg(struct bnxt *bp); 6296 + 6308 6297 static int bnxt_hwrm_func_qcaps(struct bnxt *bp) 6309 6298 { 6310 6299 int rc; ··· 6314 6299 rc = __bnxt_hwrm_func_qcaps(bp); 6315 6300 if (rc) 6316 6301 return rc; 6302 + rc = bnxt_hwrm_queue_qportcfg(bp); 6303 + if (rc) { 6304 + netdev_err(bp->dev, "hwrm query qportcfg failure rc: %d\n", rc); 6305 + return rc; 6306 + } 6317 6307 if (bp->hwrm_spec_code >= 0x10803) { 6318 6308 rc = bnxt_alloc_ctx_mem(bp); 6319 6309 if (rc) ··· 7046 7026 7047 7027 unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp) 7048 7028 { 7049 - return bp->hw_resc.max_cp_rings - bnxt_get_ulp_msix_num(bp); 7029 + unsigned int cp = bp->hw_resc.max_cp_rings; 7030 + 7031 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 7032 + cp -= bnxt_get_ulp_msix_num(bp); 7033 + 7034 + return cp; 7050 7035 } 7051 7036 7052 7037 static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) ··· 7073 7048 int total_req = bp->cp_nr_rings + num; 7074 7049 int max_idx, avail_msix; 7075 7050 7076 - max_idx = min_t(int, bp->total_irqs, max_cp); 7051 + max_idx = bp->total_irqs; 7052 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 7053 + max_idx = min_t(int, bp->total_irqs, max_cp); 7077 7054 avail_msix = max_idx - bp->cp_nr_rings; 7078 7055 if (!BNXT_NEW_RM(bp) || avail_msix >= num) 7079 7056 return avail_msix; ··· 7093 7066 if (!BNXT_NEW_RM(bp)) 7094 7067 return bnxt_get_max_func_irqs(bp); 7095 7068 7096 - return bnxt_cp_rings_in_use(bp); 7069 + return 
bnxt_nq_rings_in_use(bp); 7097 7070 } 7098 7071 7099 7072 static int bnxt_init_msix(struct bnxt *bp) ··· 7821 7794 7822 7795 rc = bnxt_hwrm_func_resc_qcaps(bp, true); 7823 7796 hw_resc->resv_cp_rings = 0; 7797 + hw_resc->resv_irqs = 0; 7824 7798 hw_resc->resv_tx_rings = 0; 7825 7799 hw_resc->resv_rx_rings = 0; 7826 7800 hw_resc->resv_hw_ring_grps = 0; ··· 9827 9799 int *max_cp) 9828 9800 { 9829 9801 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 9830 - int max_ring_grps = 0; 9802 + int max_ring_grps = 0, max_irq; 9831 9803 9832 9804 *max_tx = hw_resc->max_tx_rings; 9833 9805 *max_rx = hw_resc->max_rx_rings; 9834 - *max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp), 9835 - hw_resc->max_irqs - bnxt_get_ulp_msix_num(bp)); 9836 - *max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs); 9806 + *max_cp = bnxt_get_max_func_cp_rings_for_en(bp); 9807 + max_irq = min_t(int, bnxt_get_max_func_irqs(bp) - 9808 + bnxt_get_ulp_msix_num(bp), 9809 + bnxt_get_max_func_stat_ctxs(bp)); 9810 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 9811 + *max_cp = min_t(int, *max_cp, max_irq); 9837 9812 max_ring_grps = hw_resc->max_hw_ring_grps; 9838 9813 if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) { 9839 9814 *max_cp -= 1; ··· 9844 9813 } 9845 9814 if (bp->flags & BNXT_FLAG_AGG_RINGS) 9846 9815 *max_rx >>= 1; 9816 + if (bp->flags & BNXT_FLAG_CHIP_P5) { 9817 + bnxt_trim_rings(bp, max_rx, max_tx, *max_cp, false); 9818 + /* On P5 chips, max_cp output param should be available NQs */ 9819 + *max_cp = max_irq; 9820 + } 9847 9821 *max_rx = min_t(int, *max_rx, max_ring_grps); 9848 9822 } 9849 9823
+1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 928 928 u16 min_stat_ctxs; 929 929 u16 max_stat_ctxs; 930 930 u16 max_irqs; 931 + u16 resv_irqs; 931 932 }; 932 933 933 934 #if defined(CONFIG_BNXT_SRIOV)
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 168 168 if (BNXT_NEW_RM(bp)) { 169 169 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 170 170 171 - avail_msix = hw_resc->resv_cp_rings - bp->cp_nr_rings; 171 + avail_msix = hw_resc->resv_irqs - bp->cp_nr_rings; 172 172 edev->ulp_tbl[ulp_id].msix_requested = avail_msix; 173 173 } 174 174 bnxt_fill_msix_vecs(bp, ent);
+1 -1
drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
··· 111 111 "mac_tx_one_collision", 112 112 "mac_tx_multi_collision", 113 113 "mac_tx_max_collision_fail", 114 - "mac_tx_max_deferal_fail", 114 + "mac_tx_max_deferral_fail", 115 115 "mac_tx_fifo_err", 116 116 "mac_tx_runts", 117 117
+3 -1
drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
··· 349 349 struct octeon_soft_command *sc = (struct octeon_soft_command *)buf; 350 350 struct sk_buff *skb = sc->ctxptr; 351 351 struct net_device *ndev = skb->dev; 352 + u32 iq_no; 352 353 353 354 dma_unmap_single(&oct->pci_dev->dev, sc->dmadptr, 354 355 sc->datasize, DMA_TO_DEVICE); 355 356 dev_kfree_skb_any(skb); 357 + iq_no = sc->iq_no; 356 358 octeon_free_soft_command(oct, sc); 357 359 358 - if (octnet_iq_is_full(oct, sc->iq_no)) 360 + if (octnet_iq_is_full(oct, iq_no)) 359 361 return; 360 362 361 363 if (netif_queue_stopped(ndev))
+2 -3
drivers/net/ethernet/freescale/fman/fman.c
··· 2786 2786 if (!muram_node) { 2787 2787 dev_err(&of_dev->dev, "%s: could not find MURAM node\n", 2788 2788 __func__); 2789 - goto fman_node_put; 2789 + goto fman_free; 2790 2790 } 2791 2791 2792 2792 err = of_address_to_resource(muram_node, 0, ··· 2795 2795 of_node_put(muram_node); 2796 2796 dev_err(&of_dev->dev, "%s: of_address_to_resource() = %d\n", 2797 2797 __func__, err); 2798 - goto fman_node_put; 2798 + goto fman_free; 2799 2799 } 2800 2800 2801 2801 of_node_put(muram_node); 2802 - of_node_put(fm_node); 2803 2802 2804 2803 err = devm_request_irq(&of_dev->dev, irq, fman_irq, IRQF_SHARED, 2805 2804 "fman", fman);
+1 -1
drivers/net/ethernet/ibm/emac/emac.h
··· 231 231 #define EMAC_STACR_PHYE 0x00004000 232 232 #define EMAC_STACR_STAC_MASK 0x00003000 233 233 #define EMAC_STACR_STAC_READ 0x00001000 234 - #define EMAC_STACR_STAC_WRITE 0x00000800 234 + #define EMAC_STACR_STAC_WRITE 0x00002000 235 235 #define EMAC_STACR_OPBC_MASK 0x00000C00 236 236 #define EMAC_STACR_OPBC_50 0x00000000 237 237 #define EMAC_STACR_OPBC_66 0x00000400
+1 -1
drivers/net/ethernet/ibm/ibmvnic.c
··· 1859 1859 1860 1860 if (adapter->reset_reason != VNIC_RESET_FAILOVER && 1861 1861 adapter->reset_reason != VNIC_RESET_CHANGE_PARAM) 1862 - netdev_notify_peers(netdev); 1862 + call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev); 1863 1863 1864 1864 netif_carrier_on(netdev); 1865 1865
+33 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 4375 4375 unsigned long *supported, 4376 4376 struct phylink_link_state *state) 4377 4377 { 4378 + struct mvpp2_port *port = netdev_priv(dev); 4378 4379 __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; 4380 + 4381 + /* Invalid combinations */ 4382 + switch (state->interface) { 4383 + case PHY_INTERFACE_MODE_10GKR: 4384 + case PHY_INTERFACE_MODE_XAUI: 4385 + if (port->gop_id != 0) 4386 + goto empty_set; 4387 + break; 4388 + case PHY_INTERFACE_MODE_RGMII: 4389 + case PHY_INTERFACE_MODE_RGMII_ID: 4390 + case PHY_INTERFACE_MODE_RGMII_RXID: 4391 + case PHY_INTERFACE_MODE_RGMII_TXID: 4392 + if (port->gop_id == 0) 4393 + goto empty_set; 4394 + break; 4395 + default: 4396 + break; 4397 + } 4379 4398 4380 4399 phylink_set(mask, Autoneg); 4381 4400 phylink_set_port_modes(mask); ··· 4403 4384 4404 4385 switch (state->interface) { 4405 4386 case PHY_INTERFACE_MODE_10GKR: 4387 + case PHY_INTERFACE_MODE_XAUI: 4388 + case PHY_INTERFACE_MODE_NA: 4406 4389 phylink_set(mask, 10000baseCR_Full); 4407 4390 phylink_set(mask, 10000baseSR_Full); 4408 4391 phylink_set(mask, 10000baseLR_Full); ··· 4412 4391 phylink_set(mask, 10000baseER_Full); 4413 4392 phylink_set(mask, 10000baseKR_Full); 4414 4393 /* Fall-through */ 4415 - default: 4394 + case PHY_INTERFACE_MODE_RGMII: 4395 + case PHY_INTERFACE_MODE_RGMII_ID: 4396 + case PHY_INTERFACE_MODE_RGMII_RXID: 4397 + case PHY_INTERFACE_MODE_RGMII_TXID: 4398 + case PHY_INTERFACE_MODE_SGMII: 4416 4399 phylink_set(mask, 10baseT_Half); 4417 4400 phylink_set(mask, 10baseT_Full); 4418 4401 phylink_set(mask, 100baseT_Half); ··· 4428 4403 phylink_set(mask, 1000baseT_Full); 4429 4404 phylink_set(mask, 1000baseX_Full); 4430 4405 phylink_set(mask, 2500baseX_Full); 4406 + break; 4407 + default: 4408 + goto empty_set; 4431 4409 } 4432 4410 4433 4411 bitmap_and(supported, supported, mask, __ETHTOOL_LINK_MODE_MASK_NBITS); 4434 4412 bitmap_and(state->advertising, state->advertising, mask, 4435 4413 __ETHTOOL_LINK_MODE_MASK_NBITS); 4414 + return; 4415 + 4416 +
empty_set: 4417 + bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS); 4436 4418 } 4437 4419 4438 4420 static void mvpp22_xlg_link_state(struct mvpp2_port *port,
+1 -1
drivers/net/ethernet/mellanox/mlx4/Kconfig
··· 5 5 config MLX4_EN 6 6 tristate "Mellanox Technologies 1/10/40Gbit Ethernet support" 7 7 depends on MAY_USE_DEVLINK 8 - depends on PCI 8 + depends on PCI && NETDEVICES && ETHERNET && INET 9 9 select MLX4_CORE 10 10 imply PTP_1588_CLOCK 11 11 ---help---
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 1084 1084 1085 1085 tx_pause = !!(pause->tx_pause); 1086 1086 rx_pause = !!(pause->rx_pause); 1087 - rx_ppp = priv->prof->rx_ppp && !(tx_pause || rx_pause); 1088 - tx_ppp = priv->prof->tx_ppp && !(tx_pause || rx_pause); 1087 + rx_ppp = (tx_pause || rx_pause) ? 0 : priv->prof->rx_ppp; 1088 + tx_ppp = (tx_pause || rx_pause) ? 0 : priv->prof->tx_ppp; 1089 1089 1090 1090 err = mlx4_SET_PORT_general(mdev->dev, priv->port, 1091 1091 priv->rx_skb_size + ETH_FCS_LEN,
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 3493 3493 dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM; 3494 3494 } 3495 3495 3496 - /* MTU range: 46 - hw-specific max */ 3497 - dev->min_mtu = MLX4_EN_MIN_MTU; 3496 + /* MTU range: 68 - hw-specific max */ 3497 + dev->min_mtu = ETH_MIN_MTU; 3498 3498 dev->max_mtu = priv->max_mtu; 3499 3499 3500 3500 mdev->pndev[port] = dev;
-1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 161 161 #define MLX4_SELFTEST_LB_MIN_MTU (MLX4_LOOPBACK_TEST_PAYLOAD + NET_IP_ALIGN + \ 162 162 ETH_HLEN + PREAMBLE_LEN) 163 163 164 - #define MLX4_EN_MIN_MTU 46 165 164 /* VLAN_HLEN is added twice,to support skb vlan tagged with multiple 166 165 * headers. (For example: ETH_P_8021Q and ETH_P_8021AD). 167 166 */
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 724 724 return __get_unaligned_cpu32(fcs_bytes); 725 725 } 726 726 727 - static u8 get_ip_proto(struct sk_buff *skb, __be16 proto) 727 + static u8 get_ip_proto(struct sk_buff *skb, int network_depth, __be16 proto) 728 728 { 729 - void *ip_p = skb->data + sizeof(struct ethhdr); 729 + void *ip_p = skb->data + network_depth; 730 730 731 731 return (proto == htons(ETH_P_IP)) ? ((struct iphdr *)ip_p)->protocol : 732 732 ((struct ipv6hdr *)ip_p)->nexthdr; ··· 755 755 goto csum_unnecessary; 756 756 757 757 if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) { 758 - if (unlikely(get_ip_proto(skb, proto) == IPPROTO_SCTP)) 758 + if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP)) 759 759 goto csum_unnecessary; 760 760 761 761 skb->ip_summed = CHECKSUM_COMPLETE;
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
··· 560 560 561 561 mc_record = mlxsw_sp_nve_mc_record_find(mc_list, proto, addr, 562 562 &mc_entry); 563 - if (WARN_ON(!mc_record)) 563 + if (!mc_record) 564 564 return; 565 565 566 566 mlxsw_sp_nve_mc_record_entry_del(mc_record, mc_entry); ··· 647 647 648 648 key.fid_index = mlxsw_sp_fid_index(fid); 649 649 mc_list = mlxsw_sp_nve_mc_list_find(mlxsw_sp, &key); 650 - if (WARN_ON(!mc_list)) 650 + if (!mc_list) 651 651 return; 652 652 653 653 mlxsw_sp_nve_fid_flood_index_clear(fid, mc_list);
+1 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 1275 1275 { 1276 1276 u32 ul_tb_id = l3mdev_fib_table(ul_dev) ? : RT_TABLE_MAIN; 1277 1277 enum mlxsw_sp_ipip_type ipipt = ipip_entry->ipipt; 1278 - struct net_device *ipip_ul_dev; 1279 1278 1280 1279 if (mlxsw_sp->router->ipip_ops_arr[ipipt]->ul_proto != ul_proto) 1281 1280 return false; 1282 1281 1283 - ipip_ul_dev = __mlxsw_sp_ipip_netdev_ul_dev_get(ipip_entry->ol_dev); 1284 1282 return mlxsw_sp_ipip_entry_saddr_matches(mlxsw_sp, ul_proto, ul_dip, 1285 - ul_tb_id, ipip_entry) && 1286 - (!ipip_ul_dev || ipip_ul_dev == ul_dev); 1283 + ul_tb_id, ipip_entry); 1287 1284 } 1288 1285 1289 1286 /* Given decap parameters, find the corresponding IPIP entry. */
+13 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 296 296 mlxsw_sp_bridge_port_should_destroy(const struct mlxsw_sp_bridge_port * 297 297 bridge_port) 298 298 { 299 - struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_port->dev); 299 + struct net_device *dev = bridge_port->dev; 300 + struct mlxsw_sp *mlxsw_sp; 301 + 302 + if (is_vlan_dev(dev)) 303 + mlxsw_sp = mlxsw_sp_lower_get(vlan_dev_real_dev(dev)); 304 + else 305 + mlxsw_sp = mlxsw_sp_lower_get(dev); 300 306 301 307 /* In case ports were pulled from out of a bridged LAG, then 302 308 * it's possible the reference count isn't zero, yet the bridge ··· 2115 2109 2116 2110 vid = is_vlan_dev(dev) ? vlan_dev_vlan_id(dev) : 1; 2117 2111 mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); 2118 - if (WARN_ON(!mlxsw_sp_port_vlan)) 2112 + if (!mlxsw_sp_port_vlan) 2119 2113 return; 2120 2114 2121 2115 mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); ··· 2140 2134 if (!fid) 2141 2135 return -EINVAL; 2142 2136 2143 - if (mlxsw_sp_fid_vni_is_set(fid)) 2144 - return -EINVAL; 2137 + if (mlxsw_sp_fid_vni_is_set(fid)) { 2138 + err = -EINVAL; 2139 + goto err_vni_exists; 2140 + } 2145 2141 2146 2142 err = mlxsw_sp_nve_fid_enable(mlxsw_sp, fid, &params, extack); 2147 2143 if (err) ··· 2157 2149 return 0; 2158 2150 2159 2151 err_nve_fid_enable: 2152 + err_vni_exists: 2160 2153 mlxsw_sp_fid_put(fid); 2161 2154 return err; 2162 2155 }
+12 -6
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 476 476 if (err) 477 477 goto err_destroy_flow; 478 478 479 - err = nfp_flower_xmit_flow(netdev, flow_pay, 480 - NFP_FLOWER_CMSG_TYPE_FLOW_ADD); 481 - if (err) 482 - goto err_destroy_flow; 483 - 484 479 flow_pay->tc_flower_cookie = flow->cookie; 485 480 err = rhashtable_insert_fast(&priv->flow_table, &flow_pay->fl_node, 486 481 nfp_flower_table_params); 487 482 if (err) 488 - goto err_destroy_flow; 483 + goto err_release_metadata; 484 + 485 + err = nfp_flower_xmit_flow(netdev, flow_pay, 486 + NFP_FLOWER_CMSG_TYPE_FLOW_ADD); 487 + if (err) 488 + goto err_remove_rhash; 489 489 490 490 port->tc_offload_cnt++; 491 491 ··· 494 494 495 495 return 0; 496 496 497 + err_remove_rhash: 498 + WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table, 499 + &flow_pay->fl_node, 500 + nfp_flower_table_params)); 501 + err_release_metadata: 502 + nfp_modify_flow_metadata(app, flow_pay); 497 503 err_destroy_flow: 498 504 kfree(flow_pay->action_data); 499 505 kfree(flow_pay->mask_data);
+5
drivers/net/ethernet/realtek/8139cp.c
··· 571 571 struct cp_private *cp; 572 572 int handled = 0; 573 573 u16 status; 574 + u16 mask; 574 575 575 576 if (unlikely(dev == NULL)) 576 577 return IRQ_NONE; 577 578 cp = netdev_priv(dev); 578 579 579 580 spin_lock(&cp->lock); 581 + 582 + mask = cpr16(IntrMask); 583 + if (!mask) 584 + goto out_unlock; 580 585 581 586 status = cpr16(IntrStatus); 582 587 if (!status || (status == 0xFFFF))
+14 -10
drivers/net/ethernet/socionext/sni_ave.c
··· 185 185 NETIF_MSG_TX_ERR) 186 186 187 187 /* Parameter for descriptor */ 188 - #define AVE_NR_TXDESC 32 /* Tx descriptor */ 189 - #define AVE_NR_RXDESC 64 /* Rx descriptor */ 188 + #define AVE_NR_TXDESC 64 /* Tx descriptor */ 189 + #define AVE_NR_RXDESC 256 /* Rx descriptor */ 190 190 191 191 #define AVE_DESC_OFS_CMDSTS 0 192 192 #define AVE_DESC_OFS_ADDRL 4 ··· 194 194 195 195 /* Parameter for ethernet frame */ 196 196 #define AVE_MAX_ETHFRAME 1518 197 + #define AVE_FRAME_HEADROOM 2 197 198 198 199 /* Parameter for interrupt */ 199 200 #define AVE_INTM_COUNT 20 ··· 577 576 578 577 skb = priv->rx.desc[entry].skbs; 579 578 if (!skb) { 580 - skb = netdev_alloc_skb_ip_align(ndev, 581 - AVE_MAX_ETHFRAME); 579 + skb = netdev_alloc_skb(ndev, AVE_MAX_ETHFRAME); 582 580 if (!skb) { 583 581 netdev_err(ndev, "can't allocate skb for Rx\n"); 584 582 return -ENOMEM; 585 583 } 584 + skb->data += AVE_FRAME_HEADROOM; 585 + skb->tail += AVE_FRAME_HEADROOM; 586 586 } 587 587 588 588 /* set disable to cmdsts */ ··· 596 594 * - Rx buffer begins with 2 byte headroom, and data will be put from 597 595 * (buffer + 2). 598 596 * To satisfy this, specify the address to put back the buffer 599 - * pointer advanced by NET_IP_ALIGN by netdev_alloc_skb_ip_align(), 600 - * and expand the map size by NET_IP_ALIGN. 597 + * pointer advanced by AVE_FRAME_HEADROOM, and expand the map size 598 + * by AVE_FRAME_HEADROOM.
601 599 */ 602 600 ret = ave_dma_map(ndev, &priv->rx.desc[entry], 603 - skb->data - NET_IP_ALIGN, 604 - AVE_MAX_ETHFRAME + NET_IP_ALIGN, 601 + skb->data - AVE_FRAME_HEADROOM, 602 + AVE_MAX_ETHFRAME + AVE_FRAME_HEADROOM, 605 603 DMA_FROM_DEVICE, &paddr); 606 604 if (ret) { 607 605 netdev_err(ndev, "can't map skb for Rx\n"); ··· 1691 1689 pdev->name, pdev->id); 1692 1690 1693 1691 /* Register as a NAPI supported driver */ 1694 - netif_napi_add(ndev, &priv->napi_rx, ave_napi_poll_rx, priv->rx.ndesc); 1692 + netif_napi_add(ndev, &priv->napi_rx, ave_napi_poll_rx, 1693 + NAPI_POLL_WEIGHT); 1695 1694 netif_tx_napi_add(ndev, &priv->napi_tx, ave_napi_poll_tx, 1696 - priv->tx.ndesc); 1695 + NAPI_POLL_WEIGHT); 1697 1696 1698 1697 platform_set_drvdata(pdev, ndev); 1699 1698 ··· 1916 1913 }; 1917 1914 module_platform_driver(ave_driver); 1918 1915 1916 + MODULE_AUTHOR("Kunihiko Hayashi <hayashi.kunihiko@socionext.com>"); 1919 1917 MODULE_DESCRIPTION("Socionext UniPhier AVE ethernet driver"); 1920 1918 MODULE_LICENSE("GPL v2");
+13 -10
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2550 2550 			netdev_warn(priv->dev, "PTP init failed\n");
2551 2551 	}
2552 2552
2553      - #ifdef CONFIG_DEBUG_FS
2554      - 	ret = stmmac_init_fs(dev);
2555      - 	if (ret < 0)
2556      - 		netdev_warn(priv->dev, "%s: failed debugFS registration\n",
2557      - 			    __func__);
2558      - #endif
2559 2553 	priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS;
2560 2554
2561 2555 	if (priv->use_riwt) {
··· 2749 2755 	stmmac_mac_set(priv, priv->ioaddr, false);
2750 2756
2751 2757 	netif_carrier_off(dev);
2752      -
2753      - #ifdef CONFIG_DEBUG_FS
2754      - 	stmmac_exit_fs(dev);
2755      - #endif
2756 2758
2757 2759 	stmmac_release_ptp(priv);
2758 2760
··· 3889 3899 	u32 tx_count = priv->plat->tx_queues_to_use;
3890 3900 	u32 queue;
3891 3901
     3902 + 	if ((dev->flags & IFF_UP) == 0)
     3903 + 		return 0;
     3904 +
3892 3905 	for (queue = 0; queue < rx_count; queue++) {
3893 3906 		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
3894 3907
··· 4390 4397 		goto error_netdev_register;
4391 4398 	}
4392 4399
     4400 + #ifdef CONFIG_DEBUG_FS
     4401 + 	ret = stmmac_init_fs(ndev);
     4402 + 	if (ret < 0)
     4403 + 		netdev_warn(priv->dev, "%s: failed debugFS registration\n",
     4404 + 			    __func__);
     4405 + #endif
     4406 +
4393 4407 	return ret;
4394 4408
4395 4409 error_netdev_register:
··· 4432 4432
4433 4433 	netdev_info(priv->dev, "%s: removing driver", __func__);
4434 4434
     4435 + #ifdef CONFIG_DEBUG_FS
     4436 + 	stmmac_exit_fs(ndev);
     4437 + #endif
4435 4438 	stmmac_stop_all_dma(priv);
4436 4439
4437 4440 	stmmac_mac_set(priv, priv->ioaddr, false);
+5 -2
drivers/net/macvlan.c
··· 608 608 		goto hash_add;
609 609 	}
610 610
611     - 	err = -EBUSY;
    611 + 	err = -EADDRINUSE;
612 612 	if (macvlan_addr_busy(vlan->port, dev->dev_addr))
613 613 		goto out;
614 614
··· 706 706 	} else {
707 707 		/* Rehash and update the device filters */
708 708 		if (macvlan_addr_busy(vlan->port, addr))
709     - 			return -EBUSY;
    709 + 			return -EADDRINUSE;
710 710
711 711 		if (!macvlan_passthru(port)) {
712 712 			err = dev_uc_add(lowerdev, addr);
··· 746 746 		macvlan_set_addr_change(vlan->port);
747 747 		return dev_set_mac_address(vlan->lowerdev, addr);
748 748 	}
    749 +
    750 + 	if (macvlan_addr_busy(vlan->port, addr->sa_data))
    751 + 		return -EADDRINUSE;
749 752
750 753 	return macvlan_sync_address(dev, addr->sa_data);
751 754 }
+8 -11
drivers/net/phy/phy_device.c
··· 1880 1880
1881 1881 static int __set_phy_supported(struct phy_device *phydev, u32 max_speed)
1882 1882 {
1883      - 	phydev->supported &= ~(PHY_1000BT_FEATURES | PHY_100BT_FEATURES |
1884      - 			       PHY_10BT_FEATURES);
1885      -
1886 1883 	switch (max_speed) {
1887      - 	default:
1888      - 		return -ENOTSUPP;
1889      - 	case SPEED_1000:
1890      - 		phydev->supported |= PHY_1000BT_FEATURES;
     1884 + 	case SPEED_10:
     1885 + 		phydev->supported &= ~PHY_100BT_FEATURES;
1891 1886 		/* fall through */
1892 1887 	case SPEED_100:
1893      - 		phydev->supported |= PHY_100BT_FEATURES;
1894      - 		/* fall through */
1895      - 	case SPEED_10:
1896      - 		phydev->supported |= PHY_10BT_FEATURES;
     1888 + 		phydev->supported &= ~PHY_1000BT_FEATURES;
     1889 + 		break;
     1890 + 	case SPEED_1000:
     1891 + 		break;
     1892 + 	default:
     1893 + 		return -ENOTSUPP;
1897 1894 	}
1898 1895
1899 1896 	return 0;
+1 -1
drivers/net/phy/sfp-bus.c
··· 162 162 	/* 1000Base-PX or 1000Base-BX10 */
163 163 	if ((id->base.e_base_px || id->base.e_base_bx10) &&
164 164 	    br_min <= 1300 && br_max >= 1200)
165     - 		phylink_set(support, 1000baseX_Full);
    165 + 		phylink_set(modes, 1000baseX_Full);
166 166
167 167 	/* For active or passive cables, select the link modes
168 168 	 * based on the bit rates and the cable compliance bytes.
+5 -4
drivers/net/tun.c
··· 2293 2293 static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
2294 2294 			struct netlink_ext_ack *extack)
2295 2295 {
2296      - 	if (!data)
2297      - 		return 0;
2298      - 	return -EINVAL;
     2296 + 	NL_SET_ERR_MSG(extack,
     2297 + 		       "tun/tap creation via rtnetlink is not supported.");
     2298 + 	return -EOPNOTSUPP;
2299 2299 }
2300 2300
2301 2301 static size_t tun_get_size(const struct net_device *dev)
··· 2385 2385 			  struct tun_file *tfile,
2386 2386 			  struct xdp_buff *xdp, int *flush)
2387 2387 {
     2388 + 	unsigned int datasize = xdp->data_end - xdp->data;
2388 2389 	struct tun_xdp_hdr *hdr = xdp->data_hard_start;
2389 2390 	struct virtio_net_hdr *gso = &hdr->gso;
2390 2391 	struct tun_pcpu_stats *stats;
··· 2462 2461 	stats = get_cpu_ptr(tun->pcpu_stats);
2463 2462 	u64_stats_update_begin(&stats->syncp);
2464 2463 	stats->rx_packets++;
2465      - 	stats->rx_bytes += skb->len;
     2464 + 	stats->rx_bytes += datasize;
2466 2465 	u64_stats_update_end(&stats->syncp);
2467 2466 	put_cpu_ptr(stats);
2468 2467
+9 -5
drivers/net/virtio_net.c
··· 365 365 static struct sk_buff *page_to_skb(struct virtnet_info *vi,
366 366 				   struct receive_queue *rq,
367 367 				   struct page *page, unsigned int offset,
368     - 				   unsigned int len, unsigned int truesize)
    368 + 				   unsigned int len, unsigned int truesize,
    369 + 				   bool hdr_valid)
369 370 {
370 371 	struct sk_buff *skb;
371 372 	struct virtio_net_hdr_mrg_rxbuf *hdr;
··· 388 387 	else
389 388 		hdr_padded_len = sizeof(struct padded_vnet_hdr);
390 389
391     - 	memcpy(hdr, p, hdr_len);
    390 + 	if (hdr_valid)
    391 + 		memcpy(hdr, p, hdr_len);
392 392
393 393 	len -= hdr_len;
394 394 	offset += hdr_padded_len;
··· 741 739 				   struct virtnet_rq_stats *stats)
742 740 {
743 741 	struct page *page = buf;
744     - 	struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
    742 + 	struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len,
    743 + 					  PAGE_SIZE, true);
745 744
746 745 	stats->bytes += len - vi->hdr_len;
747 746 	if (unlikely(!skb))
··· 845 842 				rcu_read_unlock();
846 843 				put_page(page);
847 844 				head_skb = page_to_skb(vi, rq, xdp_page,
848     - 						       offset, len, PAGE_SIZE);
    845 + 						       offset, len,
    846 + 						       PAGE_SIZE, false);
849 847 				return head_skb;
850 848 			}
851 849 			break;
··· 902 898 		goto err_skb;
903 899 	}
904 900
905     - 	head_skb = page_to_skb(vi, rq, page, offset, len, truesize);
    901 + 	head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog);
906 902 	curr_skb = head_skb;
907 903
908 904 	if (unlikely(!curr_skb))
+11 -9
drivers/net/wireless/mac80211_hwsim.c
··· 2884 2884
2885 2885 	wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST);
2886 2886
     2887 + 	tasklet_hrtimer_init(&data->beacon_timer,
     2888 + 			     mac80211_hwsim_beacon,
     2889 + 			     CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
     2890 +
2887 2891 	err = ieee80211_register_hw(hw);
2888 2892 	if (err < 0) {
2889 2893 		pr_debug("mac80211_hwsim: ieee80211_register_hw failed (%d)\n",
··· 2911 2907 		debugfs_create_file("dfs_simulate_radar", 0222,
2912 2908 				    data->debugfs,
2913 2909 				    data, &hwsim_simulate_radar);
2914      -
2915      - 	tasklet_hrtimer_init(&data->beacon_timer,
2916      - 			     mac80211_hwsim_beacon,
2917      - 			     CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
2918 2910
2919 2911 	spin_lock_bh(&hwsim_radio_lock);
2920 2912 	err = rhashtable_insert_fast(&hwsim_radios_rht, &data->rht,
··· 3703 3703 	if (err)
3704 3704 		goto out_unregister_pernet;
3705 3705
     3706 + 	err = hwsim_init_netlink();
     3707 + 	if (err)
     3708 + 		goto out_unregister_driver;
     3709 +
3706 3710 	hwsim_class = class_create(THIS_MODULE, "mac80211_hwsim");
3707 3711 	if (IS_ERR(hwsim_class)) {
3708 3712 		err = PTR_ERR(hwsim_class);
3709      - 		goto out_unregister_driver;
     3713 + 		goto out_exit_netlink;
3710 3714 	}
3711      -
3712      - 	err = hwsim_init_netlink();
3713      - 	if (err < 0)
3714      - 		goto out_unregister_driver;
3715 3715
3716 3716 	for (i = 0; i < radios; i++) {
3717 3717 		struct hwsim_new_radio_params param = { 0 };
··· 3818 3818 	free_netdev(hwsim_mon);
3819 3819 out_free_radios:
3820 3820 	mac80211_hwsim_free();
     3821 + out_exit_netlink:
     3822 + 	hwsim_exit_netlink();
3821 3823 out_unregister_driver:
3822 3824 	platform_driver_unregister(&mac80211_hwsim_driver);
3823 3825 out_unregister_pernet:
+2
drivers/nvdimm/nd-core.h
··· 111 111 		struct nd_mapping *nd_mapping, resource_size_t *overlap);
112 112 resource_size_t nd_blk_available_dpa(struct nd_region *nd_region);
113 113 resource_size_t nd_region_available_dpa(struct nd_region *nd_region);
    114 + int nd_region_conflict(struct nd_region *nd_region, resource_size_t start,
    115 + 		resource_size_t size);
114 116 resource_size_t nvdimm_allocated_dpa(struct nvdimm_drvdata *ndd,
115 117 		struct nd_label_id *label_id);
116 118 int alias_dpa_busy(struct device *dev, void *data);
+37 -27
drivers/nvdimm/pfn_devs.c
··· 649 649 			ALIGN_DOWN(phys, nd_pfn->align));
650 650 }
651 651
    652 + /*
    653 +  * Check if pmem collides with 'System RAM', or other regions when
    654 +  * section aligned.  Trim it accordingly.
    655 +  */
    656 + static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trunc)
    657 + {
    658 + 	struct nd_namespace_common *ndns = nd_pfn->ndns;
    659 + 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
    660 + 	struct nd_region *nd_region = to_nd_region(nd_pfn->dev.parent);
    661 + 	const resource_size_t start = nsio->res.start;
    662 + 	const resource_size_t end = start + resource_size(&nsio->res);
    663 + 	resource_size_t adjust, size;
    664 +
    665 + 	*start_pad = 0;
    666 + 	*end_trunc = 0;
    667 +
    668 + 	adjust = start - PHYS_SECTION_ALIGN_DOWN(start);
    669 + 	size = resource_size(&nsio->res) + adjust;
    670 + 	if (region_intersects(start - adjust, size, IORESOURCE_SYSTEM_RAM,
    671 + 				IORES_DESC_NONE) == REGION_MIXED
    672 + 			|| nd_region_conflict(nd_region, start - adjust, size))
    673 + 		*start_pad = PHYS_SECTION_ALIGN_UP(start) - start;
    674 +
    675 + 	/* Now check that end of the range does not collide. */
    676 + 	adjust = PHYS_SECTION_ALIGN_UP(end) - end;
    677 + 	size = resource_size(&nsio->res) + adjust;
    678 + 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
    679 + 				IORES_DESC_NONE) == REGION_MIXED
    680 + 			|| !IS_ALIGNED(end, nd_pfn->align)
    681 + 			|| nd_region_conflict(nd_region, start, size + adjust))
    682 + 		*end_trunc = end - phys_pmem_align_down(nd_pfn, end);
    683 + }
    684 +
652 685 static int nd_pfn_init(struct nd_pfn *nd_pfn)
653 686 {
654 687 	u32 dax_label_reserve = is_nd_dax(&nd_pfn->dev) ? SZ_128K : 0;
655 688 	struct nd_namespace_common *ndns = nd_pfn->ndns;
656     - 	u32 start_pad = 0, end_trunc = 0;
    689 + 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
657 690 	resource_size_t start, size;
658     - 	struct nd_namespace_io *nsio;
659 691 	struct nd_region *nd_region;
    692 + 	u32 start_pad, end_trunc;
660 693 	struct nd_pfn_sb *pfn_sb;
661 694 	unsigned long npfns;
662 695 	phys_addr_t offset;
··· 721 688
722 689 	memset(pfn_sb, 0, sizeof(*pfn_sb));
723 690
724     - 	/*
725     - 	 * Check if pmem collides with 'System RAM' when section aligned and
726     - 	 * trim it accordingly
727     - 	 */
728     - 	nsio = to_nd_namespace_io(&ndns->dev);
729     - 	start = PHYS_SECTION_ALIGN_DOWN(nsio->res.start);
730     - 	size = resource_size(&nsio->res);
731     - 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
732     - 				IORES_DESC_NONE) == REGION_MIXED) {
733     - 		start = nsio->res.start;
734     - 		start_pad = PHYS_SECTION_ALIGN_UP(start) - start;
735     - 	}
736     -
737     - 	start = nsio->res.start;
738     - 	size = PHYS_SECTION_ALIGN_UP(start + size) - start;
739     - 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
740     - 				IORES_DESC_NONE) == REGION_MIXED
741     - 			|| !IS_ALIGNED(start + resource_size(&nsio->res),
742     - 				nd_pfn->align)) {
743     - 		size = resource_size(&nsio->res);
744     - 		end_trunc = start + size - phys_pmem_align_down(nd_pfn,
745     - 				start + size);
746     - 	}
747     -
    691 + 	trim_pfn_device(nd_pfn, &start_pad, &end_trunc);
748 692 	if (start_pad + end_trunc)
749 693 		dev_info(&nd_pfn->dev, "%s alignment collision, truncate %d bytes\n",
750 694 			 dev_name(&ndns->dev), start_pad + end_trunc);
··· 732 722 	 * implementation will limit the pfns advertised through
733 723 	 * ->direct_access() to those that are included in the memmap.
734 724 	 */
735     - 	start += start_pad;
    725 + 	start = nsio->res.start + start_pad;
736 726 	size = resource_size(&nsio->res);
737 727 	npfns = PFN_SECTION_ALIGN_UP((size - start_pad - end_trunc - SZ_8K)
738 728 			/ PAGE_SIZE);
+41
drivers/nvdimm/region_devs.c
··· 1184 1184 } 1185 1185 EXPORT_SYMBOL_GPL(nvdimm_has_cache); 1186 1186 1187 + struct conflict_context { 1188 + struct nd_region *nd_region; 1189 + resource_size_t start, size; 1190 + }; 1191 + 1192 + static int region_conflict(struct device *dev, void *data) 1193 + { 1194 + struct nd_region *nd_region; 1195 + struct conflict_context *ctx = data; 1196 + resource_size_t res_end, region_end, region_start; 1197 + 1198 + if (!is_memory(dev)) 1199 + return 0; 1200 + 1201 + nd_region = to_nd_region(dev); 1202 + if (nd_region == ctx->nd_region) 1203 + return 0; 1204 + 1205 + res_end = ctx->start + ctx->size; 1206 + region_start = nd_region->ndr_start; 1207 + region_end = region_start + nd_region->ndr_size; 1208 + if (ctx->start >= region_start && ctx->start < region_end) 1209 + return -EBUSY; 1210 + if (res_end > region_start && res_end <= region_end) 1211 + return -EBUSY; 1212 + return 0; 1213 + } 1214 + 1215 + int nd_region_conflict(struct nd_region *nd_region, resource_size_t start, 1216 + resource_size_t size) 1217 + { 1218 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(&nd_region->dev); 1219 + struct conflict_context ctx = { 1220 + .nd_region = nd_region, 1221 + .start = start, 1222 + .size = size, 1223 + }; 1224 + 1225 + return device_for_each_child(&nvdimm_bus->dev, &ctx, region_conflict); 1226 + } 1227 + 1187 1228 void __exit nd_region_devs_exit(void) 1188 1229 { 1189 1230 ida_destroy(&region_ida);
+9 -1
drivers/nvme/host/core.c
··· 831 831 static void nvme_keep_alive_end_io(struct request *rq, blk_status_t status) 832 832 { 833 833 struct nvme_ctrl *ctrl = rq->end_io_data; 834 + unsigned long flags; 835 + bool startka = false; 834 836 835 837 blk_mq_free_request(rq); 836 838 ··· 843 841 return; 844 842 } 845 843 846 - schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ); 844 + spin_lock_irqsave(&ctrl->lock, flags); 845 + if (ctrl->state == NVME_CTRL_LIVE || 846 + ctrl->state == NVME_CTRL_CONNECTING) 847 + startka = true; 848 + spin_unlock_irqrestore(&ctrl->lock, flags); 849 + if (startka) 850 + schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ); 847 851 } 848 852 849 853 static int nvme_keep_alive(struct nvme_ctrl *ctrl)
+2 -1
drivers/nvme/target/rdma.c
··· 529 529 { 530 530 struct nvmet_rdma_rsp *rsp = 531 531 container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe); 532 + struct nvmet_rdma_queue *queue = cq->cq_context; 532 533 533 534 nvmet_rdma_release_rsp(rsp); 534 535 ··· 537 536 wc->status != IB_WC_WR_FLUSH_ERR)) { 538 537 pr_err("SEND for CQE 0x%p failed with status %s (%d).\n", 539 538 wc->wr_cqe, ib_wc_status_msg(wc->status), wc->status); 540 - nvmet_rdma_error_comp(rsp->queue); 539 + nvmet_rdma_error_comp(queue); 541 540 } 542 541 } 543 542
+1 -1
drivers/pci/pcie/aspm.c
··· 895 895 struct pcie_link_state *link; 896 896 int blacklist = !!pcie_aspm_sanity_check(pdev); 897 897 898 - if (!aspm_support_enabled || aspm_disabled) 898 + if (!aspm_support_enabled) 899 899 return; 900 900 901 901 if (pdev->link_state)
+14 -3
drivers/s390/virtio/virtio_ccw.c
··· 56 56 	unsigned int revision; /* Transport revision */
57 57 	wait_queue_head_t wait_q;
58 58 	spinlock_t lock;
    59 + 	struct mutex io_lock; /* Serializes I/O requests */
59 60 	struct list_head virtqueues;
60 61 	unsigned long indicators;
61 62 	unsigned long indicators2;
··· 297 296 	unsigned long flags;
298 297 	int flag = intparm & VIRTIO_CCW_INTPARM_MASK;
299 298
    299 + 	mutex_lock(&vcdev->io_lock);
300 300 	do {
301 301 		spin_lock_irqsave(get_ccwdev_lock(vcdev->cdev), flags);
302 302 		ret = ccw_device_start(vcdev->cdev, ccw, intparm, 0, 0);
··· 310 308 		cpu_relax();
311 309 	} while (ret == -EBUSY);
312 310 	wait_event(vcdev->wait_q, doing_io(vcdev, flag) == 0);
313     - 	return ret ? ret : vcdev->err;
    311 + 	ret = ret ? ret : vcdev->err;
    312 + 	mutex_unlock(&vcdev->io_lock);
    313 + 	return ret;
314 314 }
315 315
316 316 static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
··· 832 828 	int ret;
833 829 	struct ccw1 *ccw;
834 830 	void *config_area;
    831 + 	unsigned long flags;
835 832
836 833 	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
837 834 	if (!ccw)
··· 851 846 	if (ret)
852 847 		goto out_free;
853 848
    849 + 	spin_lock_irqsave(&vcdev->lock, flags);
854 850 	memcpy(vcdev->config, config_area, offset + len);
855     - 	if (buf)
856     - 		memcpy(buf, &vcdev->config[offset], len);
857 851 	if (vcdev->config_ready < offset + len)
858 852 		vcdev->config_ready = offset + len;
    853 + 	spin_unlock_irqrestore(&vcdev->lock, flags);
    854 + 	if (buf)
    855 + 		memcpy(buf, config_area + offset, len);
859 856
860 857 out_free:
861 858 	kfree(config_area);
··· 871 864 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
872 865 	struct ccw1 *ccw;
873 866 	void *config_area;
    867 + 	unsigned long flags;
874 868
875 869 	ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL);
876 870 	if (!ccw)
··· 884 876 	/* Make sure we don't overwrite fields. */
885 877 	if (vcdev->config_ready < offset)
886 878 		virtio_ccw_get_config(vdev, 0, NULL, offset);
    879 + 	spin_lock_irqsave(&vcdev->lock, flags);
887 880 	memcpy(&vcdev->config[offset], buf, len);
888 881 	/* Write the config area to the host. */
889 882 	memcpy(config_area, vcdev->config, sizeof(vcdev->config));
    883 + 	spin_unlock_irqrestore(&vcdev->lock, flags);
890 884 	ccw->cmd_code = CCW_CMD_WRITE_CONF;
891 885 	ccw->flags = 0;
892 886 	ccw->count = offset + len;
··· 1257 1247 	init_waitqueue_head(&vcdev->wait_q);
1258 1248 	INIT_LIST_HEAD(&vcdev->virtqueues);
1259 1249 	spin_lock_init(&vcdev->lock);
     1250 + 	mutex_init(&vcdev->io_lock);
1260 1251
1261 1252 	spin_lock_irqsave(get_ccwdev_lock(cdev), flags);
1262 1253 	dev_set_drvdata(&cdev->dev, vcdev);
+1
drivers/sbus/char/display7seg.c
··· 220 220 	dev_set_drvdata(&op->dev, p);
221 221 	d7s_device = p;
222 222 	err = 0;
    223 + 	of_node_put(opts);
223 224
224 225 out:
225 226 	return err;
+2
drivers/sbus/char/envctrl.c
··· 910 910 			for (len = 0; len < PCF8584_MAX_CHANNELS; ++len) {
911 911 				pchild->mon_type[len] = ENVCTRL_NOMON;
912 912 			}
    913 + 			of_node_put(root_node);
913 914 			return;
914 915 		}
    916 + 		of_node_put(root_node);
915 917 	}
916 918
917 919 	/* Get the monitor channels. */
+2 -2
drivers/scsi/libiscsi.c
··· 2416 2416 failed:
2417 2417 	ISCSI_DBG_EH(session,
2418 2418 		     "failing session reset: Could not log back into "
2419      - 		     "%s, %s [age %d]\n", session->targetname,
2420      - 		     conn->persistent_address, session->age);
     2419 + 		     "%s [age %d]\n", session->targetname,
     2420 + 		     session->age);
2421 2421 	spin_unlock_bh(&session->frwd_lock);
2422 2422 	mutex_unlock(&session->eh_mutex);
2423 2423 	return FAILED;
+5 -1
drivers/scsi/lpfc/lpfc_init.c
··· 167 167 		       sizeof(phba->wwpn));
168 168 	}
169 169
170     - 	phba->sli3_options = 0x0;
    170 + 	/*
    171 + 	 * Clear all option bits except LPFC_SLI3_BG_ENABLED,
    172 + 	 * which was already set in lpfc_get_cfgparam()
    173 + 	 */
    174 + 	phba->sli3_options &= (uint32_t)LPFC_SLI3_BG_ENABLED;
171 175
172 176 	/* Setup and issue mailbox READ REV command */
173 177 	lpfc_read_rev(phba, pmb);
-1
drivers/scsi/lpfc/lpfc_sli.c
··· 4965 4965 	phba->sli3_options &= ~(LPFC_SLI3_NPIV_ENABLED |
4966 4966 				LPFC_SLI3_HBQ_ENABLED |
4967 4967 				LPFC_SLI3_CRP_ENABLED |
4968      - 				LPFC_SLI3_BG_ENABLED |
4969 4968 				LPFC_SLI3_DSS_ENABLED);
4970 4969 	if (rc != MBX_SUCCESS) {
4971 4970 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+30 -31
drivers/scsi/storvsc_drv.c
··· 446 446
447 447 	bool	destroy;
448 448 	bool	drain_notify;
449     - 	bool	open_sub_channel;
450 449 	atomic_t num_outstanding_req;
451 450 	struct Scsi_Host *host;
452 451
··· 635 636 static void handle_sc_creation(struct vmbus_channel *new_sc)
636 637 {
637 638 	struct hv_device *device = new_sc->primary_channel->device_obj;
    639 + 	struct device *dev = &device->device;
638 640 	struct storvsc_device *stor_device;
639 641 	struct vmstorage_channel_properties props;
    642 + 	int ret;
640 643
641 644 	stor_device = get_out_stor_device(device);
642 645 	if (!stor_device)
643 646 		return;
644 647
645     - 	if (stor_device->open_sub_channel == false)
646     - 		return;
647     -
648 648 	memset(&props, 0, sizeof(struct vmstorage_channel_properties));
649 649
650     - 	vmbus_open(new_sc,
651     - 		   storvsc_ringbuffer_size,
652     - 		   storvsc_ringbuffer_size,
653     - 		   (void *)&props,
654     - 		   sizeof(struct vmstorage_channel_properties),
655     - 		   storvsc_on_channel_callback, new_sc);
    650 + 	ret = vmbus_open(new_sc,
    651 + 			 storvsc_ringbuffer_size,
    652 + 			 storvsc_ringbuffer_size,
    653 + 			 (void *)&props,
    654 + 			 sizeof(struct vmstorage_channel_properties),
    655 + 			 storvsc_on_channel_callback, new_sc);
656 656
657     - 	if (new_sc->state == CHANNEL_OPENED_STATE) {
658     - 		stor_device->stor_chns[new_sc->target_cpu] = new_sc;
659     - 		cpumask_set_cpu(new_sc->target_cpu, &stor_device->alloced_cpus);
    657 + 	/* In case vmbus_open() fails, we don't use the sub-channel. */
    658 + 	if (ret != 0) {
    659 + 		dev_err(dev, "Failed to open sub-channel: err=%d\n", ret);
    660 + 		return;
660 661 	}
    662 +
    663 + 	/* Add the sub-channel to the array of available channels. */
    664 + 	stor_device->stor_chns[new_sc->target_cpu] = new_sc;
    665 + 	cpumask_set_cpu(new_sc->target_cpu, &stor_device->alloced_cpus);
661 666 }
662 667
663 668 static void handle_multichannel_storage(struct hv_device *device, int max_chns)
664 669 {
    670 + 	struct device *dev = &device->device;
665 671 	struct storvsc_device *stor_device;
666 672 	int num_cpus = num_online_cpus();
667 673 	int num_sc;
··· 683 679 	request = &stor_device->init_request;
684 680 	vstor_packet = &request->vstor_packet;
685 681
686     - 	stor_device->open_sub_channel = true;
687 682 	/*
688 683 	 * Establish a handler for dealing with subchannels.
689 684 	 */
690 685 	vmbus_set_sc_create_callback(device->channel, handle_sc_creation);
691 686
692     - 	/*
693     - 	 * Check to see if sub-channels have already been created. This
694     - 	 * can happen when this driver is re-loaded after unloading.
695     - 	 */
696     -
697     - 	if (vmbus_are_subchannels_present(device->channel))
698     - 		return;
699     -
700     - 	stor_device->open_sub_channel = false;
701 687 	/*
702 688 	 * Request the host to create sub-channels.
703 689 	 */
··· 704 710 			       VM_PKT_DATA_INBAND,
705 711 			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
706 712
707     - 	if (ret != 0)
    713 + 	if (ret != 0) {
    714 + 		dev_err(dev, "Failed to create sub-channel: err=%d\n", ret);
708 715 		return;
    716 + 	}
709 717
710 718 	t = wait_for_completion_timeout(&request->wait_event, 10*HZ);
711     - 	if (t == 0)
    719 + 	if (t == 0) {
    720 + 		dev_err(dev, "Failed to create sub-channel: timed out\n");
712 721 		return;
    722 + 	}
713 723
714 724 	if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO ||
715     - 	    vstor_packet->status != 0)
    725 + 	    vstor_packet->status != 0) {
    726 + 		dev_err(dev, "Failed to create sub-channel: op=%d, sts=%d\n",
    727 + 			vstor_packet->operation, vstor_packet->status);
716 728 		return;
    729 + 	}
717 730
718 731 	/*
719     - 	 * Now that we created the sub-channels, invoke the check; this
720     - 	 * may trigger the callback.
    732 + 	 * We need to do nothing here, because vmbus_process_offer()
    733 + 	 * invokes channel->sc_creation_callback, which will open and use
    734 + 	 * the sub-channel(s).
721 735 	 */
722     - 	stor_device->open_sub_channel = true;
723     - 	vmbus_are_subchannels_present(device->channel);
724 736 }
725 737
726 738 static void cache_wwn(struct storvsc_device *stor_device,
··· 1794 1794 	}
1795 1795
1796 1796 	stor_device->destroy = false;
1797      - 	stor_device->open_sub_channel = false;
1798 1797 	init_waitqueue_head(&stor_device->waiting_to_drain);
1799 1798 	stor_device->device = device;
1800 1799 	stor_device->host = host;
+2 -2
drivers/scsi/vmw_pvscsi.c
··· 1202 1202
1203 1203 static void pvscsi_release_resources(struct pvscsi_adapter *adapter)
1204 1204 {
1205      - 	pvscsi_shutdown_intr(adapter);
1206      -
1207 1205 	if (adapter->workqueue)
1208 1206 		destroy_workqueue(adapter->workqueue);
1209 1207
··· 1532 1534 out_reset_adapter:
1533 1535 	ll_adapter_reset(adapter);
1534 1536 out_release_resources:
     1537 + 	pvscsi_shutdown_intr(adapter);
1535 1538 	pvscsi_release_resources(adapter);
1536 1539 	scsi_host_put(host);
1537 1540 out_disable_device:
··· 1541 1542 	return error;
1542 1543
1543 1544 out_release_resources_and_disable:
     1545 + 	pvscsi_shutdown_intr(adapter);
1544 1546 	pvscsi_release_resources(adapter);
1545 1547 	goto out_disable_device;
1546 1548 }
+5
drivers/staging/media/sunxi/cedrus/TODO
··· 5 5 * Userspace support for the Request API needs to be reviewed;
6 6 * Another stateless decoder driver should be submitted;
7 7 * At least one stateless encoder driver should be submitted.
  8 + * When queueing a request containing references to I frames, the
  9 +   refcount of the memory for those I frames needs to be incremented
 10 +   and decremented when the request is completed. This will likely
 11 +   require some help from vb2. The driver should fail the request
 12 +   if the memory/buffer is gone.
+1 -1
drivers/staging/rtl8712/mlme_linux.c
··· 146 146 		p = buff;
147 147 		p += sprintf(p, "ASSOCINFO(ReqIEs=");
148 148 		len = sec_ie[1] + 2;
149     - 		len = (len < IW_CUSTOM_MAX) ? len : IW_CUSTOM_MAX - 1;
    149 + 		len = (len < IW_CUSTOM_MAX) ? len : IW_CUSTOM_MAX;
150 150 		for (i = 0; i < len; i++)
151 151 			p += sprintf(p, "%02x", sec_ie[i]);
152 152 		p += sprintf(p, ")");
+1 -1
drivers/staging/rtl8712/rtl871x_mlme.c
··· 1346 1346 			  u8 *out_ie, uint in_len)
1347 1347 {
1348 1348 	u8 authmode = 0, match;
1349      - 	u8 sec_ie[255], uncst_oui[4], bkup_ie[255];
     1349 + 	u8 sec_ie[IW_CUSTOM_MAX], uncst_oui[4], bkup_ie[255];
1350 1350 	u8 wpa_oui[4] = {0x0, 0x50, 0xf2, 0x01};
1351 1351 	uint ielength, cnt, remove_cnt;
1352 1352 	int iEntry;
+1 -1
drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
··· 1565 1565 	if (pstat->aid > 0) {
1566 1566 		DBG_871X("  old AID %d\n", pstat->aid);
1567 1567 	} else {
1568      - 		for (pstat->aid = 1; pstat->aid < NUM_STA; pstat->aid++)
     1568 + 		for (pstat->aid = 1; pstat->aid <= NUM_STA; pstat->aid++)
1569 1569 			if (pstapriv->sta_aid[pstat->aid - 1] == NULL)
1570 1570 				break;
1571 1571
+13 -15
drivers/thermal/armada_thermal.c
··· 357 357 	int ret;
358 358
359 359 	/* Valid check */
360     - 	if (armada_is_valid(priv)) {
    360 + 	if (!armada_is_valid(priv)) {
361 361 		dev_err(priv->dev,
362 362 			"Temperature sensor reading not valid\n");
363 363 		return -EIO;
··· 395 395 	return ret;
396 396 }
397 397
398     - static struct thermal_zone_of_device_ops of_ops = {
    398 + static const struct thermal_zone_of_device_ops of_ops = {
399 399 	.get_temp = armada_get_temp,
400 400 };
401 401
··· 526 526
527 527 	/* First memory region points towards the status register */
528 528 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
529     - 	if (!res)
530     - 		return -EIO;
531     -
532     - 	/*
533     - 	 * Edit the resource start address and length to map over all the
534     - 	 * registers, instead of pointing at them one by one.
535     - 	 */
536     - 	res->start -= data->syscon_status_off;
537     - 	res->end = res->start + max(data->syscon_status_off,
538     - 				    max(data->syscon_control0_off,
539     - 					data->syscon_control1_off)) +
540     - 		   sizeof(unsigned int) - 1;
541     -
542 529 	base = devm_ioremap_resource(&pdev->dev, res);
543 530 	if (IS_ERR(base))
544 531 		return PTR_ERR(base);
    532 +
    533 + 	/*
    534 + 	 * Fix up from the old individual DT register specification to
    535 + 	 * cover all the registers.  We do this by adjusting the ioremap()
    536 + 	 * result, which should be fine as ioremap() deals with pages.
    537 + 	 * However, validate that we do not cross a page boundary while
    538 + 	 * making this adjustment.
    539 + 	 */
    540 + 	if (((unsigned long)base & ~PAGE_MASK) < data->syscon_status_off)
    541 + 		return -EINVAL;
    542 + 	base -= data->syscon_status_off;
545 543
546 544 	priv->syscon = devm_regmap_init_mmio(&pdev->dev, base,
547 545 					     &armada_thermal_regmap_config);
+1 -10
drivers/thermal/broadcom/bcm2835_thermal.c
···   1 + // SPDX-License-Identifier: GPL-2.0+
1 2 /*
2 3  * Driver for Broadcom BCM2835 SoC temperature sensor
3 4  *
4 5  * Copyright (C) 2016 Martin Sperl
5   - *
6   - * This program is free software; you can redistribute it and/or modify
7   - * it under the terms of the GNU General Public License as published by
8   - * the Free Software Foundation; either version 2 of the License, or
9   - * (at your option) any later version.
10  - *
11  - * This program is distributed in the hope that it will be useful,
12  - * but WITHOUT ANY WARRANTY; without even the implied warranty of
13  - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
14  - * GNU General Public License for more details.
15 6  */
16 7
17 8 #include <linux/clk.h>
+1 -1
drivers/thermal/broadcom/brcmstb_thermal.c
··· 299 299 	return 0;
300 300 }
301 301
302     - static struct thermal_zone_of_device_ops of_ops = {
    302 + static const struct thermal_zone_of_device_ops of_ops = {
303 303 	.get_temp	= brcmstb_get_temp,
304 304 	.set_trips	= brcmstb_set_trips,
305 305 };
+7 -9
drivers/tty/serial/8250/8250_mtk.c
··· 213 213
214 214 	platform_set_drvdata(pdev, data);
215 215
216     - 	pm_runtime_enable(&pdev->dev);
217     - 	if (!pm_runtime_enabled(&pdev->dev)) {
218     - 		err = mtk8250_runtime_resume(&pdev->dev);
219     - 		if (err)
220     - 			return err;
221     - 	}
    216 + 	err = mtk8250_runtime_resume(&pdev->dev);
    217 + 	if (err)
    218 + 		return err;
222 219
223 220 	data->line = serial8250_register_8250_port(&uart);
224 221 	if (data->line < 0)
225 222 		return data->line;
    223 +
    224 + 	pm_runtime_set_active(&pdev->dev);
    225 + 	pm_runtime_enable(&pdev->dev);
226 226
227 227 	return 0;
228 228 }
··· 234 234 	pm_runtime_get_sync(&pdev->dev);
235 235
236 236 	serial8250_unregister_port(data->line);
    237 + 	mtk8250_runtime_suspend(&pdev->dev);
237 238
238 239 	pm_runtime_disable(&pdev->dev);
239 240 	pm_runtime_put_noidle(&pdev->dev);
240     -
241     - 	if (!pm_runtime_status_suspended(&pdev->dev))
242     - 		mtk8250_runtime_suspend(&pdev->dev);
243 241
244 242 	return 0;
245 243 }
+2 -2
drivers/tty/serial/kgdboc.c
··· 233 233 static int param_set_kgdboc_var(const char *kmessage,
234 234 				const struct kernel_param *kp)
235 235 {
236     - 	int len = strlen(kmessage);
    236 + 	size_t len = strlen(kmessage);
237 237
238 238 	if (len >= MAX_CONFIG_LEN) {
239 239 		pr_err("config string too long\n");
··· 254 254
255 255 	strcpy(config, kmessage);
256 256 	/* Chop out \n char as a result of echo */
257     - 	if (config[len - 1] == '\n')
    257 + 	if (len && config[len - 1] == '\n')
258 258 		config[len - 1] = '\0';
259 259
260 260 	if (configured == 1)
+1
drivers/tty/serial/suncore.c
··· 112 112 		mode = of_get_property(dp, mode_prop, NULL);
113 113 		if (!mode)
114 114 			mode = "9600,8,n,1,-";
    115 + 		of_node_put(dp);
115 116 	}
116 117
117 118 	cflag = CREAD | HUPCL | CLOCAL;
+9 -2
drivers/tty/tty_io.c
··· 1373 1373 	return ERR_PTR(retval);
1374 1374 }
1375 1375
1376      - static void tty_free_termios(struct tty_struct *tty)
     1376 + /**
     1377 +  * tty_save_termios() - save tty termios data in driver table
     1378 +  * @tty: tty whose termios data to save
     1379 +  *
     1380 +  * Locking: Caller guarantees serialisation with tty_init_termios().
     1381 +  */
     1382 + void tty_save_termios(struct tty_struct *tty)
1377 1383 {
1378 1384 	struct ktermios *tp;
1379 1385 	int idx = tty->index;
··· 1398 1392 	}
1399 1393 	*tp = tty->termios;
1400 1394 }
     1395 + EXPORT_SYMBOL_GPL(tty_save_termios);
1401 1396
1402 1397 /**
1403 1398  * tty_flush_works	-	flush all works of a tty/pty pair
··· 1498 1491 	WARN_ON(!mutex_is_locked(&tty_mutex));
1499 1492 	if (tty->ops->shutdown)
1500 1493 		tty->ops->shutdown(tty);
1501      - 	tty_free_termios(tty);
     1494 + 	tty_save_termios(tty);
1502 1495 	tty_driver_remove_tty(tty->driver, tty);
1503 1496 	tty->port->itty = NULL;
1504 1497 	if (tty->link)
+2 -1
drivers/tty/tty_port.c
··· 633 633 	if (tty_port_close_start(port, tty, filp) == 0)
634 634 		return;
635 635 	tty_port_shutdown(port, tty);
636     - 	set_bit(TTY_IO_ERROR, &tty->flags);
    636 + 	if (!port->console)
    637 + 		set_bit(TTY_IO_ERROR, &tty->flags);
637 638 	tty_port_close_end(port, tty);
638 639 	tty_port_tty_set(port, NULL);
639 640 }
+3 -2
drivers/usb/core/hub.c
··· 2251 2251 		/* descriptor may appear anywhere in config */
2252 2252 		err = __usb_get_extra_descriptor(udev->rawdescriptors[0],
2253 2253 				le16_to_cpu(udev->config[0].desc.wTotalLength),
2254      - 				USB_DT_OTG, (void **) &desc);
     2254 + 				USB_DT_OTG, (void **) &desc, sizeof(*desc));
2255 2255 		if (err || !(desc->bmAttributes & USB_OTG_HNP))
2256 2256 			return 0;
2257 2257
··· 5163 5163 /* Handle notifying userspace about hub over-current events */
5164 5164 static void port_over_current_notify(struct usb_port *port_dev)
5165 5165 {
5166      - 	static char *envp[] = { NULL, NULL, NULL };
     5166 + 	char *envp[3];
5167 5167 	struct device *hub_dev;
5168 5168 	char *port_dev_path;
5169 5169
··· 5187 5187 	if (!envp[1])
5188 5188 		goto exit;
5189 5189
     5190 + 	envp[2] = NULL;
5190 5191 	kobject_uevent_env(&hub_dev->kobj, KOBJ_CHANGE, envp);
5191 5192
5192 5193 	kfree(envp[1]);
+4
drivers/usb/core/quirks.c
···
333 333 /* Midiman M-Audio Keystation 88es */
334 334 { USB_DEVICE(0x0763, 0x0192), .driver_info = USB_QUIRK_RESET_RESUME },
335 335 
336 + /* SanDisk Ultra Fit and Ultra Flair */
337 + { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM },
338 + { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },
339 + 
336 340 /* M-Systems Flash Disk Pioneers */
337 341 { USB_DEVICE(0x08ec, 0x1000), .driver_info = USB_QUIRK_RESET_RESUME },
338 342 
+3 -3
drivers/usb/core/usb.c
···
832 832 */
833 833 
834 834 int __usb_get_extra_descriptor(char *buffer, unsigned size,
835 - unsigned char type, void **ptr)
835 + unsigned char type, void **ptr, size_t minsize)
836 836 {
837 837 struct usb_descriptor_header *header;
838 838 
839 839 while (size >= sizeof(struct usb_descriptor_header)) {
840 840 header = (struct usb_descriptor_header *)buffer;
841 841 
842 - if (header->bLength < 2) {
842 + if (header->bLength < 2 || header->bLength > size) {
843 843 printk(KERN_ERR
844 844 "%s: bogus descriptor, type %d length %d\n",
845 845 usbcore_name,
···
848 848 return -1;
849 849 }
850 850 
851 - if (header->bDescriptorType == type) {
851 + if (header->bDescriptorType == type && header->bLength >= minsize) {
852 852 *ptr = header;
853 853 return 0;
854 854 }
+1 -1
drivers/usb/host/hwa-hc.c
···
640 640 top = itr + itr_size;
641 641 result = __usb_get_extra_descriptor(usb_dev->rawdescriptors[index],
642 642 le16_to_cpu(usb_dev->actconfig->desc.wTotalLength),
643 - USB_DT_SECURITY, (void **) &secd);
643 + USB_DT_SECURITY, (void **) &secd, sizeof(*secd));
644 644 if (result == -1) {
645 645 dev_warn(dev, "BUG? WUSB host has no security descriptors\n");
646 646 return 0;
+4
drivers/usb/host/xhci-pci.c
···
139 139 pdev->device == 0x43bb))
140 140 xhci->quirks |= XHCI_SUSPEND_DELAY;
141 141 
142 + if (pdev->vendor == PCI_VENDOR_ID_AMD &&
143 + (pdev->device == 0x15e0 || pdev->device == 0x15e1))
144 + xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND;
145 + 
142 146 if (pdev->vendor == PCI_VENDOR_ID_AMD)
143 147 xhci->quirks |= XHCI_TRUST_TX_LENGTH;
144 148 
+38 -4
drivers/usb/host/xhci.c
···
968 968 unsigned int delay = XHCI_MAX_HALT_USEC;
969 969 struct usb_hcd *hcd = xhci_to_hcd(xhci);
970 970 u32 command;
971 + u32 res;
971 972 
972 973 if (!hcd->state)
973 974 return 0;
···
1022 1021 command = readl(&xhci->op_regs->command);
1023 1022 command |= CMD_CSS;
1024 1023 writel(command, &xhci->op_regs->command);
1024 + xhci->broken_suspend = 0;
1025 1025 if (xhci_handshake(&xhci->op_regs->status,
1026 1026 STS_SAVE, 0, 10 * 1000)) {
1027 - xhci_warn(xhci, "WARN: xHC save state timeout\n");
1028 - spin_unlock_irq(&xhci->lock);
1029 - return -ETIMEDOUT;
1027 + /*
1028 + * AMD SNPS xHC 3.0 occasionally does not clear the
1029 + * SSS bit of USBSTS and when the driver tries to poll
1030 + * to see if the xHC clears BIT(8), that never happens
1031 + * and the driver assumes the controller is not responding
1032 + * and times out. To work around this, it's good to check
1033 + * if SRE and HCE bits are not set (as per xhci
1034 + * Section 5.4.2) and bypass the timeout.
1035 + */
1036 + res = readl(&xhci->op_regs->status);
1037 + if ((xhci->quirks & XHCI_SNPS_BROKEN_SUSPEND) &&
1038 + (((res & STS_SRE) == 0) &&
1039 + ((res & STS_HCE) == 0))) {
1040 + xhci->broken_suspend = 1;
1041 + } else {
1042 + xhci_warn(xhci, "WARN: xHC save state timeout\n");
1043 + spin_unlock_irq(&xhci->lock);
1044 + return -ETIMEDOUT;
1045 + }
1030 1046 }
1031 1047 spin_unlock_irq(&xhci->lock);
1032 1048 
···
1096 1078 set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
1097 1079 
1098 1080 spin_lock_irq(&xhci->lock);
1099 - if (xhci->quirks & XHCI_RESET_ON_RESUME)
1081 + if ((xhci->quirks & XHCI_RESET_ON_RESUME) || xhci->broken_suspend)
1100 1082 hibernated = true;
1101 1083 
1102 1084 if (!hibernated) {
···
4514 4496 {
4515 4497 unsigned long long timeout_ns;
4516 4498 
4499 + /* Prevent U1 if service interval is shorter than U1 exit latency */
4500 + if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
4501 + if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) {
4502 + dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n");
4503 + return USB3_LPM_DISABLED;
4504 + }
4505 + }
4506 + 
4517 4507 if (xhci->quirks & XHCI_INTEL_HOST)
4518 4508 timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
4519 4509 else
···
4577 4551 struct usb_endpoint_descriptor *desc)
4578 4552 {
4579 4553 unsigned long long timeout_ns;
4554 + 
4555 + /* Prevent U2 if service interval is shorter than U2 exit latency */
4556 + if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
4557 + if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) {
4558 + dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n");
4559 + return USB3_LPM_DISABLED;
4560 + }
4561 + }
4580 4562 
4581 4563 if (xhci->quirks & XHCI_INTEL_HOST)
4582 4564 timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
+3
drivers/usb/host/xhci.h
···
1850 1850 #define XHCI_ZERO_64B_REGS BIT_ULL(32)
1851 1851 #define XHCI_DEFAULT_PM_RUNTIME_ALLOW BIT_ULL(33)
1852 1852 #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)
1853 + #define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35)
1853 1854 
1854 1855 unsigned int num_active_eps;
1855 1856 unsigned int limit_active_eps;
···
1880 1879 void *dbc;
1881 1880 /* platform-specific data -- must come last */
1882 1881 unsigned long priv[0] __aligned(sizeof(s64));
1882 + /* Broken Suspend flag for SNPS Suspend resume issue */
1883 + u8 broken_suspend;
1883 1884 };
1884 1885 
1885 1886 /* Platform specific overrides to generic XHCI hc_driver ops */
+1
drivers/usb/misc/appledisplay.c
···
51 51 { APPLEDISPLAY_DEVICE(0x921c) },
52 52 { APPLEDISPLAY_DEVICE(0x921d) },
53 53 { APPLEDISPLAY_DEVICE(0x9222) },
54 + { APPLEDISPLAY_DEVICE(0x9226) },
54 55 { APPLEDISPLAY_DEVICE(0x9236) },
55 56 
56 57 /* Terminating entry */
+1 -1
drivers/usb/serial/console.c
···
101 101 cflag |= PARENB;
102 102 break;
103 103 }
104 - co->cflag = cflag;
105 104 
106 105 /*
107 106 * no need to check the index here: if the index is wrong, console
···
163 164 serial->type->set_termios(tty, port, &dummy);
164 165 
165 166 tty_port_tty_set(&port->port, NULL);
167 + tty_save_termios(tty);
166 168 tty_kref_put(tty);
167 169 }
168 170 tty_port_set_initialized(&port->port, 1);
-3
drivers/vhost/vhost.c
···
944 944 if (msg->iova <= vq_msg->iova &&
945 945 msg->iova + msg->size - 1 >= vq_msg->iova &&
946 946 vq_msg->type == VHOST_IOTLB_MISS) {
947 - mutex_lock(&node->vq->mutex);
948 947 vhost_poll_queue(&node->vq->poll);
949 - mutex_unlock(&node->vq->mutex);
950 - 
951 948 list_del(&node->node);
952 949 kfree(node);
953 950 }
+48 -31
drivers/vhost/vsock.c
···
15 15 #include <net/sock.h>
16 16 #include <linux/virtio_vsock.h>
17 17 #include <linux/vhost.h>
18 + #include <linux/hashtable.h>
18 19 
19 20 #include <net/af_vsock.h>
20 21 #include "vhost.h"
···
28 27 
29 28 /* Used to track all the vhost_vsock instances on the system. */
30 29 static DEFINE_SPINLOCK(vhost_vsock_lock);
31 - static LIST_HEAD(vhost_vsock_list);
30 + static DEFINE_READ_MOSTLY_HASHTABLE(vhost_vsock_hash, 8);
32 31 
33 32 struct vhost_vsock {
34 33 struct vhost_dev dev;
35 34 struct vhost_virtqueue vqs[2];
36 35 
37 - /* Link to global vhost_vsock_list, protected by vhost_vsock_lock */
38 - struct list_head list;
36 + /* Link to global vhost_vsock_hash, writes use vhost_vsock_lock */
37 + struct hlist_node hash;
39 38 
40 39 struct vhost_work send_pkt_work;
41 40 spinlock_t send_pkt_list_lock;
···
51 50 return VHOST_VSOCK_DEFAULT_HOST_CID;
52 51 }
53 52 
54 - static struct vhost_vsock *__vhost_vsock_get(u32 guest_cid)
53 + /* Callers that dereference the return value must hold vhost_vsock_lock or the
54 + * RCU read lock.
55 + */
56 + static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
55 57 {
56 58 struct vhost_vsock *vsock;
57 59 
58 - list_for_each_entry(vsock, &vhost_vsock_list, list) {
60 + hash_for_each_possible_rcu(vhost_vsock_hash, vsock, hash, guest_cid) {
59 61 u32 other_cid = vsock->guest_cid;
60 62 
61 63 /* Skip instances that have no CID yet */
···
71 67 }
72 68 
73 69 return NULL;
74 - }
75 - 
76 - static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
77 - {
78 - struct vhost_vsock *vsock;
79 - 
80 - spin_lock_bh(&vhost_vsock_lock);
81 - vsock = __vhost_vsock_get(guest_cid);
82 - spin_unlock_bh(&vhost_vsock_lock);
83 - 
84 - return vsock;
85 70 }
86 71 
87 72 static void
···
203 210 struct vhost_vsock *vsock;
204 211 int len = pkt->len;
205 212 
213 + rcu_read_lock();
214 + 
206 215 /* Find the vhost_vsock according to guest context id */
207 216 vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid));
208 217 if (!vsock) {
218 + rcu_read_unlock();
209 219 virtio_transport_free_pkt(pkt);
210 220 return -ENODEV;
211 221 }
···
221 225 spin_unlock_bh(&vsock->send_pkt_list_lock);
222 226 
223 227 vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
228 + 
229 + rcu_read_unlock();
224 230 return len;
225 231 }
226 232 
···
232 234 struct vhost_vsock *vsock;
233 235 struct virtio_vsock_pkt *pkt, *n;
234 236 int cnt = 0;
237 + int ret = -ENODEV;
235 238 LIST_HEAD(freeme);
239 + 
240 + rcu_read_lock();
236 241 
237 242 /* Find the vhost_vsock according to guest context id */
238 243 vsock = vhost_vsock_get(vsk->remote_addr.svm_cid);
239 244 if (!vsock)
240 - return -ENODEV;
245 + goto out;
241 246 
242 247 spin_lock_bh(&vsock->send_pkt_list_lock);
243 248 list_for_each_entry_safe(pkt, n, &vsock->send_pkt_list, list) {
···
266 265 vhost_poll_queue(&tx_vq->poll);
267 266 }
268 267 
269 - return 0;
268 + ret = 0;
269 + out:
270 + rcu_read_unlock();
271 + return ret;
270 272 }
271 273 
272 274 static struct virtio_vsock_pkt *
···
537 533 spin_lock_init(&vsock->send_pkt_list_lock);
538 534 INIT_LIST_HEAD(&vsock->send_pkt_list);
539 535 vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work);
540 - 
541 - spin_lock_bh(&vhost_vsock_lock);
542 - list_add_tail(&vsock->list, &vhost_vsock_list);
543 - spin_unlock_bh(&vhost_vsock_lock);
544 536 return 0;
545 537 
546 538 out:
···
563 563 * executing.
564 564 */
565 565 
566 - if (!vhost_vsock_get(vsk->remote_addr.svm_cid)) {
567 - sock_set_flag(sk, SOCK_DONE);
568 - vsk->peer_shutdown = SHUTDOWN_MASK;
569 - sk->sk_state = SS_UNCONNECTED;
570 - sk->sk_err = ECONNRESET;
571 - sk->sk_error_report(sk);
572 - }
566 + /* If the peer is still valid, no need to reset connection */
567 + if (vhost_vsock_get(vsk->remote_addr.svm_cid))
568 + return;
569 + 
570 + /* If the close timeout is pending, let it expire. This avoids races
571 + * with the timeout callback.
572 + */
573 + if (vsk->close_work_scheduled)
574 + return;
575 + 
576 + sock_set_flag(sk, SOCK_DONE);
577 + vsk->peer_shutdown = SHUTDOWN_MASK;
578 + sk->sk_state = SS_UNCONNECTED;
579 + sk->sk_err = ECONNRESET;
580 + sk->sk_error_report(sk);
573 581 }
574 582 
575 583 static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
···
585 577 struct vhost_vsock *vsock = file->private_data;
586 578 
587 579 spin_lock_bh(&vhost_vsock_lock);
588 - list_del(&vsock->list);
580 + if (vsock->guest_cid)
581 + hash_del_rcu(&vsock->hash);
589 582 spin_unlock_bh(&vhost_vsock_lock);
583 + 
584 + /* Wait for other CPUs to finish using vsock */
585 + synchronize_rcu();
590 586 
591 587 /* Iterating over all connections for all CIDs to find orphans is
592 588 * inefficient. Room for improvement here. */
···
632 620 
633 621 /* Refuse if CID is already in use */
634 622 spin_lock_bh(&vhost_vsock_lock);
635 - other = __vhost_vsock_get(guest_cid);
623 + other = vhost_vsock_get(guest_cid);
636 624 if (other && other != vsock) {
637 625 spin_unlock_bh(&vhost_vsock_lock);
638 626 return -EADDRINUSE;
639 627 }
628 + 
629 + if (vsock->guest_cid)
630 + hash_del_rcu(&vsock->hash);
631 + 
640 632 vsock->guest_cid = guest_cid;
633 + hash_add_rcu(vhost_vsock_hash, &vsock->hash, guest_cid);
641 634 spin_unlock_bh(&vhost_vsock_lock);
642 635 
643 636 return 0;
+3 -5
fs/btrfs/tree-checker.c
···
389 389 
390 390 /*
391 391 * Here we don't really care about alignment since extent allocator can
392 - * handle it. We care more about the size, as if one block group is
393 - * larger than maximum size, it's must be some obvious corruption.
392 + * handle it. We care more about the size.
394 393 */
395 - if (key->offset > BTRFS_MAX_DATA_CHUNK_SIZE || key->offset == 0) {
394 + if (key->offset == 0) {
396 395 block_group_err(fs_info, leaf, slot,
397 - "invalid block group size, have %llu expect (0, %llu]",
398 - key->offset, BTRFS_MAX_DATA_CHUNK_SIZE);
396 + "invalid block group size 0");
399 397 return -EUCLEAN;
400 398 }
401 399 
+1 -1
fs/cifs/Kconfig
···
133 133 
134 134 config CIFS_POSIX
135 135 bool "CIFS POSIX Extensions"
136 - depends on CIFS_XATTR
136 + depends on CIFS && CIFS_ALLOW_INSECURE_LEGACY && CIFS_XATTR
137 137 help
138 138 Enabling this option will cause the cifs client to attempt to
139 139 negotiate a newer dialect with servers, such as Samba 3.0.5
+1 -1
fs/cifs/dir.c
···
174 174 
175 175 cifs_dbg(FYI, "using cifs_sb prepath <%s>\n", cifs_sb->prepath);
176 176 memcpy(full_path+dfsplen+1, cifs_sb->prepath, pplen-1);
177 - full_path[dfsplen] = '\\';
177 + full_path[dfsplen] = dirsep;
178 178 for (i = 0; i < pplen-1; i++)
179 179 if (full_path[dfsplen+1+i] == '/')
180 180 full_path[dfsplen+1+i] = CIFS_DIR_SEP(cifs_sb);
+6 -25
fs/cifs/file.c
···
2541 2541 cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list,
2542 2542 struct cifs_aio_ctx *ctx)
2543 2543 {
2544 - int wait_retry = 0;
2545 2544 unsigned int wsize, credits;
2546 2545 int rc;
2547 2546 struct TCP_Server_Info *server =
2548 2547 tlink_tcon(wdata->cfile->tlink)->ses->server;
2549 2548 
2550 2549 /*
2551 - * Try to resend this wdata, waiting for credits up to 3 seconds.
2550 + * Wait for credits to resend this wdata.
2552 2551 * Note: we are attempting to resend the whole wdata not in segments
2553 2552 */
2554 2553 do {
···
2555 2556 server, wdata->bytes, &wsize, &credits);
2556 2557 
2557 2558 if (rc)
2558 - break;
2559 + goto out;
2559 2560 
2560 2561 if (wsize < wdata->bytes) {
2561 2562 add_credits_and_wake_if(server, credits, 0);
2562 2563 msleep(1000);
2563 - wait_retry++;
2564 2564 }
2565 - } while (wsize < wdata->bytes && wait_retry < 3);
2566 - 
2567 - if (wsize < wdata->bytes) {
2568 - rc = -EBUSY;
2569 - goto out;
2570 - }
2565 + } while (wsize < wdata->bytes);
2571 2566 
2572 2567 rc = -EAGAIN;
2573 2568 while (rc == -EAGAIN) {
···
3227 3234 struct list_head *rdata_list,
3228 3235 struct cifs_aio_ctx *ctx)
3229 3236 {
3230 - int wait_retry = 0;
3231 3237 unsigned int rsize, credits;
3232 3238 int rc;
3233 3239 struct TCP_Server_Info *server =
3234 3240 tlink_tcon(rdata->cfile->tlink)->ses->server;
3235 3241 
3236 3242 /*
3237 - * Try to resend this rdata, waiting for credits up to 3 seconds.
3243 + * Wait for credits to resend this rdata.
3238 3244 * Note: we are attempting to resend the whole rdata not in segments
3239 3245 */
3240 3246 do {
···
3241 3249 &rsize, &credits);
3242 3250 
3243 3251 if (rc)
3244 - break;
3252 + goto out;
3245 3253 
3246 3254 if (rsize < rdata->bytes) {
3247 3255 add_credits_and_wake_if(server, credits, 0);
3248 3256 msleep(1000);
3249 - wait_retry++;
3250 3257 }
3251 - } while (rsize < rdata->bytes && wait_retry < 3);
3252 - 
3253 - /*
3254 - * If we can't find enough credits to send this rdata
3255 - * release the rdata and return failure, this will pass
3256 - * whatever I/O amount we have finished to VFS.
3257 - */
3258 - if (rsize < rdata->bytes) {
3259 - rc = -EBUSY;
3260 - goto out;
3261 - }
3258 + } while (rsize < rdata->bytes);
3262 3259 
3263 3260 rc = -EAGAIN;
3264 3261 while (rc == -EAGAIN) {
+38 -17
fs/dax.c
···
232 232 }
233 233 }
234 234 
235 + /*
236 + * The only thing keeping the address space around is the i_pages lock
237 + * (it's cycled in clear_inode() after removing the entries from i_pages)
238 + * After we call xas_unlock_irq(), we cannot touch xas->xa.
239 + */
240 + static void wait_entry_unlocked(struct xa_state *xas, void *entry)
241 + {
242 + struct wait_exceptional_entry_queue ewait;
243 + wait_queue_head_t *wq;
244 + 
245 + init_wait(&ewait.wait);
246 + ewait.wait.func = wake_exceptional_entry_func;
247 + 
248 + wq = dax_entry_waitqueue(xas, entry, &ewait.key);
249 + prepare_to_wait_exclusive(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);
250 + xas_unlock_irq(xas);
251 + schedule();
252 + finish_wait(wq, &ewait.wait);
253 + 
254 + /*
255 + * Entry lock waits are exclusive. Wake up the next waiter since
256 + * we aren't sure we will acquire the entry lock and thus wake
257 + * the next waiter up on unlock.
258 + */
259 + if (waitqueue_active(wq))
260 + __wake_up(wq, TASK_NORMAL, 1, &ewait.key);
261 + }
262 + 
235 263 static void put_unlocked_entry(struct xa_state *xas, void *entry)
236 264 {
237 265 /* If we were the only waiter woken, wake the next one */
···
379 351 * @page: The page whose entry we want to lock
380 352 *
381 353 * Context: Process context.
382 - * Return: %true if the entry was locked or does not need to be locked.
354 + * Return: A cookie to pass to dax_unlock_page() or 0 if the entry could
355 + * not be locked.
383 356 */
384 - bool dax_lock_mapping_entry(struct page *page)
357 + dax_entry_t dax_lock_page(struct page *page)
385 358 {
386 359 XA_STATE(xas, NULL, 0);
387 360 void *entry;
388 - bool locked;
389 361 
390 362 /* Ensure page->mapping isn't freed while we look at it */
391 363 rcu_read_lock();
392 364 for (;;) {
393 365 struct address_space *mapping = READ_ONCE(page->mapping);
394 366 
395 - locked = false;
396 - if (!dax_mapping(mapping))
367 + entry = NULL;
368 + if (!mapping || !dax_mapping(mapping))
397 369 break;
398 370 
399 371 /*
···
403 375 * otherwise we would not have a valid pfn_to_page()
404 376 * translation.
405 377 */
406 - locked = true;
378 + entry = (void *)~0UL;
407 379 if (S_ISCHR(mapping->host->i_mode))
408 380 break;
409 381 
···
417 389 entry = xas_load(&xas);
418 390 if (dax_is_locked(entry)) {
419 391 rcu_read_unlock();
420 - entry = get_unlocked_entry(&xas);
421 - xas_unlock_irq(&xas);
422 - put_unlocked_entry(&xas, entry);
392 + wait_entry_unlocked(&xas, entry);
423 393 rcu_read_lock();
424 394 continue;
425 395 }
···
426 400 break;
427 401 }
428 402 rcu_read_unlock();
429 - return locked;
403 + return (dax_entry_t)entry;
430 404 }
431 405 
432 - void dax_unlock_mapping_entry(struct page *page)
406 + void dax_unlock_page(struct page *page, dax_entry_t cookie)
433 407 {
434 408 struct address_space *mapping = page->mapping;
435 409 XA_STATE(xas, &mapping->i_pages, page->index);
436 - void *entry;
437 410 
438 411 if (S_ISCHR(mapping->host->i_mode))
439 412 return;
440 413 
441 - rcu_read_lock();
442 - entry = xas_load(&xas);
443 - rcu_read_unlock();
444 - entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry));
445 - dax_unlock_entry(&xas, entry);
414 + dax_unlock_entry(&xas, (void *)cookie);
446 415 }
447 416 
448 417 /*
+2 -3
fs/exec.c
···
62 62 #include <linux/oom.h>
63 63 #include <linux/compat.h>
64 64 #include <linux/vmalloc.h>
65 - #include <linux/freezer.h>
66 65 
67 66 #include <linux/uaccess.h>
68 67 #include <asm/mmu_context.h>
···
1083 1084 while (sig->notify_count) {
1084 1085 __set_current_state(TASK_KILLABLE);
1085 1086 spin_unlock_irq(lock);
1086 - freezable_schedule();
1087 + schedule();
1087 1088 if (unlikely(__fatal_signal_pending(tsk)))
1088 1089 goto killed;
1089 1090 spin_lock_irq(lock);
···
1111 1112 __set_current_state(TASK_KILLABLE);
1112 1113 write_unlock_irq(&tasklist_lock);
1113 1114 cgroup_threadgroup_change_end(tsk);
1114 - freezable_schedule();
1115 + schedule();
1115 1116 if (unlikely(__fatal_signal_pending(tsk)))
1116 1117 goto killed;
1117 1118 }
-9
fs/iomap.c
···
1877 1877 dio->wait_for_completion = true;
1878 1878 ret = 0;
1879 1879 }
1880 - 
1881 - /*
1882 - * Splicing to pipes can fail on a full pipe. We have to
1883 - * swallow this to make it look like a short IO
1884 - * otherwise the higher splice layers will completely
1885 - * mishandle the error and stop moving data.
1886 - */
1887 - if (ret == -EFAULT)
1888 - ret = 0;
1889 1880 break;
1890 1881 }
1891 1882 pos += ret;
+8 -1
fs/nfs/direct.c
···
98 98 struct pnfs_ds_commit_info ds_cinfo; /* Storage for cinfo */
99 99 struct work_struct work;
100 100 int flags;
101 + /* for write */
101 102 #define NFS_ODIRECT_DO_COMMIT (1) /* an unstable reply was received */
102 103 #define NFS_ODIRECT_RESCHED_WRITES (2) /* write verification failed */
104 + /* for read */
105 + #define NFS_ODIRECT_SHOULD_DIRTY (3) /* dirty user-space page after read */
103 106 struct nfs_writeverf verf; /* unstable write verifier */
104 107 };
105 108 
···
415 412 struct nfs_page *req = nfs_list_entry(hdr->pages.next);
416 413 struct page *page = req->wb_page;
417 414 
418 - if (!PageCompound(page) && bytes < hdr->good_bytes)
415 + if (!PageCompound(page) && bytes < hdr->good_bytes &&
416 + (dreq->flags == NFS_ODIRECT_SHOULD_DIRTY))
419 417 set_page_dirty(page);
420 418 bytes += req->wb_bytes;
421 419 nfs_list_remove_request(req);
···
590 586 dreq->l_ctx = l_ctx;
591 587 if (!is_sync_kiocb(iocb))
592 588 dreq->iocb = iocb;
589 + 
590 + if (iter_is_iovec(iter))
591 + dreq->flags = NFS_ODIRECT_SHOULD_DIRTY;
593 592 
594 593 nfs_start_io_direct(inode);
595 594 
+4 -2
fs/nfs/flexfilelayout/flexfilelayout.c
···
1733 1733 if (fh)
1734 1734 hdr->args.fh = fh;
1735 1735 
1736 - if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
1736 + if (vers == 4 &&
1737 + !nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
1737 1738 goto out_failed;
1738 1739 
1739 1740 /*
···
1799 1798 if (fh)
1800 1799 hdr->args.fh = fh;
1801 1800 
1802 - if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
1801 + if (vers == 4 &&
1802 + !nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
1803 1803 goto out_failed;
1804 1804 
1805 1805 /*
+1 -1
fs/read_write.c
···
1956 1956 struct inode *inode_out = file_inode(file_out);
1957 1957 loff_t ret;
1958 1958 
1959 - WARN_ON_ONCE(remap_flags);
1959 + WARN_ON_ONCE(remap_flags & REMAP_FILE_DEDUP);
1960 1960 
1961 1961 if (S_ISDIR(inode_in->i_mode) || S_ISDIR(inode_out->i_mode))
1962 1962 return -EISDIR;
+6 -1
fs/splice.c
···
945 945 sd->flags &= ~SPLICE_F_NONBLOCK;
946 946 more = sd->flags & SPLICE_F_MORE;
947 947 
948 + WARN_ON_ONCE(pipe->nrbufs != 0);
949 + 
948 950 while (len) {
949 951 size_t read_len;
950 952 loff_t pos = sd->pos, prev_pos = pos;
951 953 
952 - ret = do_splice_to(in, &pos, pipe, len, flags);
954 + /* Don't try to read more than the pipe has space for. */
955 + read_len = min_t(size_t, len,
956 + (pipe->buffers - pipe->nrbufs) << PAGE_SHIFT);
957 + ret = do_splice_to(in, &pos, pipe, read_len, flags);
953 958 if (unlikely(ret <= 0))
954 959 goto out_release;
955 960 
+1 -1
fs/xfs/libxfs/xfs_btree.c
···
330 330 
331 331 if (xfs_sb_version_hascrc(&mp->m_sb)) {
332 332 if (!xfs_log_check_lsn(mp, be64_to_cpu(block->bb_u.s.bb_lsn)))
333 - return __this_address;
333 + return false;
334 334 return xfs_buf_verify_cksum(bp, XFS_BTREE_SBLOCK_CRC_OFF);
335 335 }
336 336 
+2 -2
fs/xfs/xfs_bmap_util.c
···
1126 1126 * page could be mmap'd and iomap_zero_range doesn't do that for us.
1127 1127 * Writeback of the eof page will do this, albeit clumsily.
1128 1128 */
1129 - if (offset + len >= XFS_ISIZE(ip) && ((offset + len) & PAGE_MASK)) {
1129 + if (offset + len >= XFS_ISIZE(ip) && offset_in_page(offset + len) > 0) {
1130 1130 error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping,
1131 - (offset + len) & ~PAGE_MASK, LLONG_MAX);
1131 + round_down(offset + len, PAGE_SIZE), LLONG_MAX);
1132 1132 }
1133 1133 
1134 1134 return error;
+1 -1
fs/xfs/xfs_qm_bhv.c
···
40 40 statp->f_files = limit;
41 41 statp->f_ffree =
42 42 (statp->f_files > dqp->q_res_icount) ?
43 - (statp->f_ffree - dqp->q_res_icount) : 0;
43 + (statp->f_files - dqp->q_res_icount) : 0;
44 44 }
45 45 }
46 46 
+8 -6
include/linux/dax.h
···
7 7 #include <linux/radix-tree.h>
8 8 #include <asm/pgtable.h>
9 9 
10 + typedef unsigned long dax_entry_t;
11 + 
10 12 struct iomap_ops;
11 13 struct dax_device;
12 14 struct dax_operations {
···
90 88 struct block_device *bdev, struct writeback_control *wbc);
91 89 
92 90 struct page *dax_layout_busy_page(struct address_space *mapping);
93 - bool dax_lock_mapping_entry(struct page *page);
94 - void dax_unlock_mapping_entry(struct page *page);
91 + dax_entry_t dax_lock_page(struct page *page);
92 + void dax_unlock_page(struct page *page, dax_entry_t cookie);
95 93 #else
96 94 static inline bool bdev_dax_supported(struct block_device *bdev,
97 95 int blocksize)
···
124 122 return -EOPNOTSUPP;
125 123 }
126 124 
127 - static inline bool dax_lock_mapping_entry(struct page *page)
125 + static inline dax_entry_t dax_lock_page(struct page *page)
128 126 {
129 127 if (IS_DAX(page->mapping->host))
130 - return true;
131 - return false;
128 + return ~0UL;
129 + return 0;
132 130 }
133 131 
134 - static inline void dax_unlock_mapping_entry(struct page *page)
132 + static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
135 133 {
136 134 }
137 135 #endif
+7
include/linux/filter.h
···
449 449 offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1
450 450 #define bpf_ctx_range_till(TYPE, MEMBER1, MEMBER2) \
451 451 offsetof(TYPE, MEMBER1) ... offsetofend(TYPE, MEMBER2) - 1
452 + #if BITS_PER_LONG == 64
453 + # define bpf_ctx_range_ptr(TYPE, MEMBER) \
454 + offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1
455 + #else
456 + # define bpf_ctx_range_ptr(TYPE, MEMBER) \
457 + offsetof(TYPE, MEMBER) ... offsetof(TYPE, MEMBER) + 8 - 1
458 + #endif /* BITS_PER_LONG == 64 */
452 459 
453 460 #define bpf_target_off(TYPE, MEMBER, SIZE, PTR_SIZE) \
454 461 ({ \
+8 -4
include/linux/gfp.h
···
510 510 }
511 511 extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
512 512 struct vm_area_struct *vma, unsigned long addr,
513 - int node);
513 + int node, bool hugepage);
514 + #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
515 + alloc_pages_vma(gfp_mask, order, vma, addr, numa_node_id(), true)
514 516 #else
515 517 #define alloc_pages(gfp_mask, order) \
516 518 alloc_pages_node(numa_node_id(), gfp_mask, order)
517 - #define alloc_pages_vma(gfp_mask, order, vma, addr, node)\
519 + #define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
520 + alloc_pages(gfp_mask, order)
521 + #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
518 522 alloc_pages(gfp_mask, order)
519 523 #endif
520 524 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
521 525 #define alloc_page_vma(gfp_mask, vma, addr) \
522 526 alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false)
523 527 #define alloc_page_vma_node(gfp_mask, vma, addr, node) \
524 528 alloc_pages_vma(gfp_mask, 0, vma, addr, node, false)
525 529 
526 530 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
527 531 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
+7
include/linux/hyperv.h
···
905 905 
906 906 bool probe_done;
907 907 
908 + /*
909 + * We must offload the handling of the primary/sub channels
910 + * from the single-threaded vmbus_connection.work_queue to
911 + * two different workqueues, otherwise we can block
912 + * vmbus_connection.work_queue and hang: see vmbus_process_offer().
913 + */
914 + struct work_struct add_channel_work;
908 915 };
909 916 
910 917 static inline bool is_hvsock_channel(const struct vmbus_channel *c)
-2
include/linux/mempolicy.h
···
139 139 struct mempolicy *get_task_policy(struct task_struct *p);
140 140 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
141 141 unsigned long addr);
142 - struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
143 - unsigned long addr);
144 142 bool vma_policy_mof(struct vm_area_struct *vma);
145 143 
146 144 extern void numa_default_policy(void);
+1 -1
include/linux/sfp.h
···
224 224 *
225 225 * See the SFF-8472 specification and related documents for the definition
226 226 * of these structure members. This can be obtained from
227 - * ftp://ftp.seagate.com/sff
227 + * https://www.snia.org/technology-communities/sff/specifications
228 228 */
229 229 struct sfp_eeprom_id {
230 230 struct sfp_eeprom_base base;
-1
include/linux/sunrpc/xdr.h
···
72 72 buf->head[0].iov_base = start;
73 73 buf->head[0].iov_len = len;
74 74 buf->tail[0].iov_len = 0;
75 - buf->bvec = NULL;
76 75 buf->pages = NULL;
77 76 buf->page_len = 0;
78 77 buf->flags = 0;
+1
include/linux/tty.h
···
556 556 extern void tty_release_struct(struct tty_struct *tty, int idx);
557 557 extern int tty_release(struct inode *inode, struct file *filp);
558 558 extern void tty_init_termios(struct tty_struct *tty);
559 + extern void tty_save_termios(struct tty_struct *tty);
559 560 extern int tty_standard_install(struct tty_driver *driver,
560 561 struct tty_struct *tty);
561 562 
+2 -2
include/linux/usb.h
···
407 407 };
408 408 
409 409 int __usb_get_extra_descriptor(char *buffer, unsigned size,
410 - unsigned char type, void **ptr);
410 + unsigned char type, void **ptr, size_t min);
411 411 #define usb_get_extra_descriptor(ifpoint, type, ptr) \
412 412 __usb_get_extra_descriptor((ifpoint)->extra, \
413 413 (ifpoint)->extralen, \
414 - type, (void **)ptr)
414 + type, (void **)ptr, sizeof(**(ptr)))
415 415 
416 416 /* ----------------------------------------------------------------------- */
417 417 
+1 -1
include/media/media-request.h
···
68 68 unsigned int access_count;
69 69 struct list_head objects;
70 70 unsigned int num_incomplete_objects;
71 - struct wait_queue_head poll_wait;
71 + wait_queue_head_t poll_wait;
72 72 spinlock_t lock;
73 73 };
74 74 
+24 -6
include/net/neighbour.h
···
454 454 
455 455 static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
456 456 {
457 + unsigned int hh_alen = 0;
457 458 unsigned int seq;
458 459 unsigned int hh_len;
459 460 
···
462 461 seq = read_seqbegin(&hh->hh_lock);
463 462 hh_len = hh->hh_len;
464 463 if (likely(hh_len <= HH_DATA_MOD)) {
465 - /* this is inlined by gcc */
466 - memcpy(skb->data - HH_DATA_MOD, hh->hh_data, HH_DATA_MOD);
467 - } else {
468 - unsigned int hh_alen = HH_DATA_ALIGN(hh_len);
464 + hh_alen = HH_DATA_MOD;
469 465 
470 - memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
466 + /* skb_push() would proceed silently if we have room for
467 + * the unaligned size but not for the aligned size:
468 + * check headroom explicitly.
469 + */
470 + if (likely(skb_headroom(skb) >= HH_DATA_MOD)) {
471 + /* this is inlined by gcc */
472 + memcpy(skb->data - HH_DATA_MOD, hh->hh_data,
473 + HH_DATA_MOD);
474 + }
475 + } else {
476 + hh_alen = HH_DATA_ALIGN(hh_len);
477 + 
478 + if (likely(skb_headroom(skb) >= hh_alen)) {
479 + memcpy(skb->data - hh_alen, hh->hh_data,
480 + hh_alen);
481 + }
471 482 }
472 483 } while (read_seqretry(&hh->hh_lock, seq));
473 484 
474 - skb_push(skb, hh_len);
485 + if (WARN_ON_ONCE(skb_headroom(skb) < hh_alen)) {
486 + kfree_skb(skb);
487 + return NET_XMIT_DROP;
488 + }
489 + 
490 + __skb_push(skb, hh_len);
475 491 return dev_queue_xmit(skb);
476 492 }
477 493 
+5
include/net/sctp/sctp.h
···
620 620 return false;
621 621 }
622 622 
623 + static inline __u32 sctp_min_frag_point(struct sctp_sock *sp, __u16 datasize)
624 + {
625 + return sctp_mtu_payload(sp, SCTP_DEFAULT_MINSEGMENT, datasize);
626 + }
627 + 
623 628 #endif /* __net_sctp_h__ */
+2
include/net/sctp/structs.h
···
2075 2075 
2076 2076 __u64 abandoned_unsent[SCTP_PR_INDEX(MAX) + 1];
2077 2077 __u64 abandoned_sent[SCTP_PR_INDEX(MAX) + 1];
2078 + 
2079 + struct rcu_head rcu;
2078 2080 };
2079 2081 
2080 2082 
+3 -1
include/sound/pcm_params.h
···
254 254 static inline int snd_interval_single(const struct snd_interval *i)
255 255 {
256 256 return (i->min == i->max ||
257 - (i->min + 1 == i->max && i->openmax));
257 + (i->min + 1 == i->max && (i->openmin || i->openmax)));
258 258 }
259 259 
260 260 static inline int snd_interval_value(const struct snd_interval *i)
261 261 {
262 + if (i->openmin && !i->openmax)
263 + return i->max;
262 264 return i->min;
263 265 }
264 266 
+4
include/uapi/asm-generic/unistd.h
··· 760 760 #define __NR_ftruncate __NR3264_ftruncate 761 761 #define __NR_lseek __NR3264_lseek 762 762 #define __NR_sendfile __NR3264_sendfile 763 + #if defined(__ARCH_WANT_NEW_STAT) || defined(__ARCH_WANT_STAT64) 763 764 #define __NR_newfstatat __NR3264_fstatat 764 765 #define __NR_fstat __NR3264_fstat 766 + #endif 765 767 #define __NR_mmap __NR3264_mmap 766 768 #define __NR_fadvise64 __NR3264_fadvise64 767 769 #ifdef __NR3264_stat ··· 778 776 #define __NR_ftruncate64 __NR3264_ftruncate 779 777 #define __NR_llseek __NR3264_lseek 780 778 #define __NR_sendfile64 __NR3264_sendfile 779 + #if defined(__ARCH_WANT_NEW_STAT) || defined(__ARCH_WANT_STAT64) 781 780 #define __NR_fstatat64 __NR3264_fstatat 782 781 #define __NR_fstat64 __NR3264_fstat 782 + #endif 783 783 #define __NR_mmap2 __NR3264_mmap 784 784 #define __NR_fadvise64_64 __NR3264_fadvise64 785 785 #ifdef __NR3264_stat
+37 -19
include/uapi/linux/bpf.h
··· 2170 2170 * Return 2171 2171 * 0 on success, or a negative error in case of failure. 2172 2172 * 2173 - * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2173 + * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2174 2174 * Description 2175 2175 * Look for TCP socket matching *tuple*, optionally in a child 2176 2176 * network namespace *netns*. The return value must be checked, ··· 2187 2187 * **sizeof**\ (*tuple*\ **->ipv6**) 2188 2188 * Look for an IPv6 socket. 2189 2189 * 2190 - * If the *netns* is zero, then the socket lookup table in the 2191 - * netns associated with the *ctx* will be used. For the TC hooks, 2192 - * this in the netns of the device in the skb. For socket hooks, 2193 - * this in the netns of the socket. If *netns* is non-zero, then 2194 - * it specifies the ID of the netns relative to the netns 2195 - * associated with the *ctx*. 2190 + * If the *netns* is a negative signed 32-bit integer, then the 2191 + * socket lookup table in the netns associated with the *ctx* will 2192 + * be used. For the TC hooks, this is the netns of the device 2193 + * in the skb. For socket hooks, this is the netns of the socket. 2194 + * If *netns* is any other signed 32-bit value greater than or 2195 + * equal to zero then it specifies the ID of the netns relative to 2196 + * the netns associated with the *ctx*. *netns* values beyond the 2197 + * range of 32-bit integers are reserved for future use. 2196 2198 * 2197 2199 * All values for *flags* are reserved for future usage, and must 2198 2200 * be left at zero. ··· 2203 2201 * **CONFIG_NET** configuration option. 2204 2202 * Return 2205 2203 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2204 + * For sockets with reuseport option, the *struct bpf_sock* 2205 + * result is from reuse->socks[] using the hash of the tuple. 
2206 2206 * 2207 - * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2207 + * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2208 2208 * Description 2209 2209 * Look for UDP socket matching *tuple*, optionally in a child 2210 2210 * network namespace *netns*. The return value must be checked, ··· 2223 2219 * **sizeof**\ (*tuple*\ **->ipv6**) 2224 2220 * Look for an IPv6 socket. 2225 2221 * 2226 - * If the *netns* is zero, then the socket lookup table in the 2227 - * netns associated with the *ctx* will be used. For the TC hooks, 2228 - * this in the netns of the device in the skb. For socket hooks, 2229 - * this in the netns of the socket. If *netns* is non-zero, then 2230 - * it specifies the ID of the netns relative to the netns 2231 - * associated with the *ctx*. 2222 + * If the *netns* is a negative signed 32-bit integer, then the 2223 + * socket lookup table in the netns associated with the *ctx* will 2224 + * be used. For the TC hooks, this is the netns of the device 2225 + * in the skb. For socket hooks, this is the netns of the socket. 2226 + * If *netns* is any other signed 32-bit value greater than or 2227 + * equal to zero then it specifies the ID of the netns relative to 2228 + * the netns associated with the *ctx*. *netns* values beyond the 2229 + * range of 32-bit integers are reserved for future use. 2232 2230 * 2233 2231 * All values for *flags* are reserved for future usage, and must 2234 2232 * be left at zero. ··· 2239 2233 * **CONFIG_NET** configuration option. 2240 2234 * Return 2241 2235 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2236 + * For sockets with reuseport option, the *struct bpf_sock* 2237 + * result is from reuse->socks[] using the hash of the tuple. 
2242 2238 * 2243 2239 * int bpf_sk_release(struct bpf_sock *sk) 2244 2240 * Description ··· 2413 2405 /* BPF_FUNC_perf_event_output for sk_buff input context. */ 2414 2406 #define BPF_F_CTXLEN_MASK (0xfffffULL << 32) 2415 2407 2408 + /* Current network namespace */ 2409 + #define BPF_F_CURRENT_NETNS (-1L) 2410 + 2416 2411 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 2417 2412 enum bpf_adj_room_mode { 2418 2413 BPF_ADJ_ROOM_NET, ··· 2432 2421 BPF_LWT_ENCAP_SEG6, 2433 2422 BPF_LWT_ENCAP_SEG6_INLINE 2434 2423 }; 2424 + 2425 + #define __bpf_md_ptr(type, name) \ 2426 + union { \ 2427 + type name; \ 2428 + __u64 :64; \ 2429 + } __attribute__((aligned(8))) 2435 2430 2436 2431 /* user accessible mirror of in-kernel sk_buff. 2437 2432 * new fields can only be added to the end of this structure ··· 2473 2456 /* ... here. */ 2474 2457 2475 2458 __u32 data_meta; 2476 - struct bpf_flow_keys *flow_keys; 2459 + __bpf_md_ptr(struct bpf_flow_keys *, flow_keys); 2477 2460 }; 2478 2461 2479 2462 struct bpf_tunnel_key { ··· 2589 2572 * be added to the end of this structure 2590 2573 */ 2591 2574 struct sk_msg_md { 2592 - void *data; 2593 - void *data_end; 2575 + __bpf_md_ptr(void *, data); 2576 + __bpf_md_ptr(void *, data_end); 2594 2577 2595 2578 __u32 family; 2596 2579 __u32 remote_ip4; /* Stored in network byte order */ ··· 2606 2589 * Start of directly accessible data. It begins from 2607 2590 * the tcp/udp header. 2608 2591 */ 2609 - void *data; 2610 - void *data_end; /* End of directly accessible data */ 2592 + __bpf_md_ptr(void *, data); 2593 + /* End of directly accessible data */ 2594 + __bpf_md_ptr(void *, data_end); 2611 2595 /* 2612 2596 * Total length of packet (starting from the tcp/udp header). 2613 2597 * Note that the directly accessible bytes (data_end - data)
+82
kernel/bpf/btf.c
··· 5 5 #include <uapi/linux/types.h> 6 6 #include <linux/seq_file.h> 7 7 #include <linux/compiler.h> 8 + #include <linux/ctype.h> 8 9 #include <linux/errno.h> 9 10 #include <linux/slab.h> 10 11 #include <linux/anon_inodes.h> ··· 425 424 { 426 425 return BTF_STR_OFFSET_VALID(offset) && 427 426 offset < btf->hdr.str_len; 427 + } 428 + 429 + /* Only C-style identifier is permitted. This can be relaxed if 430 + * necessary. 431 + */ 432 + static bool btf_name_valid_identifier(const struct btf *btf, u32 offset) 433 + { 434 + /* offset must be valid */ 435 + const char *src = &btf->strings[offset]; 436 + const char *src_limit; 437 + 438 + if (!isalpha(*src) && *src != '_') 439 + return false; 440 + 441 + /* set a limit on identifier length */ 442 + src_limit = src + KSYM_NAME_LEN; 443 + src++; 444 + while (*src && src < src_limit) { 445 + if (!isalnum(*src) && *src != '_') 446 + return false; 447 + src++; 448 + } 449 + 450 + return !*src; 428 451 } 429 452 430 453 static const char *btf_name_by_offset(const struct btf *btf, u32 offset) ··· 1168 1143 return -EINVAL; 1169 1144 } 1170 1145 1146 + /* typedef type must have a valid name, and other ref types, 1147 + * volatile, const, restrict, should have a null name. 
1148 + */ 1149 + if (BTF_INFO_KIND(t->info) == BTF_KIND_TYPEDEF) { 1150 + if (!t->name_off || 1151 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1152 + btf_verifier_log_type(env, t, "Invalid name"); 1153 + return -EINVAL; 1154 + } 1155 + } else { 1156 + if (t->name_off) { 1157 + btf_verifier_log_type(env, t, "Invalid name"); 1158 + return -EINVAL; 1159 + } 1160 + } 1161 + 1171 1162 btf_verifier_log_type(env, t, NULL); 1172 1163 1173 1164 return 0; ··· 1341 1300 return -EINVAL; 1342 1301 } 1343 1302 1303 + /* fwd type must have a valid name */ 1304 + if (!t->name_off || 1305 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1306 + btf_verifier_log_type(env, t, "Invalid name"); 1307 + return -EINVAL; 1308 + } 1309 + 1344 1310 btf_verifier_log_type(env, t, NULL); 1345 1311 1346 1312 return 0; ··· 1401 1353 btf_verifier_log_basic(env, t, 1402 1354 "meta_left:%u meta_needed:%u", 1403 1355 meta_left, meta_needed); 1356 + return -EINVAL; 1357 + } 1358 + 1359 + /* array type should not have a name */ 1360 + if (t->name_off) { 1361 + btf_verifier_log_type(env, t, "Invalid name"); 1404 1362 return -EINVAL; 1405 1363 } 1406 1364 ··· 1586 1532 return -EINVAL; 1587 1533 } 1588 1534 1535 + /* struct type either no name or a valid one */ 1536 + if (t->name_off && 1537 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1538 + btf_verifier_log_type(env, t, "Invalid name"); 1539 + return -EINVAL; 1540 + } 1541 + 1589 1542 btf_verifier_log_type(env, t, NULL); 1590 1543 1591 1544 last_offset = 0; ··· 1604 1543 return -EINVAL; 1605 1544 } 1606 1545 1546 + /* struct member either no name or a valid one */ 1547 + if (member->name_off && 1548 + !btf_name_valid_identifier(btf, member->name_off)) { 1549 + btf_verifier_log_member(env, t, member, "Invalid name"); 1550 + return -EINVAL; 1551 + } 1607 1552 /* A member cannot be in type void */ 1608 1553 if (!member->type || !BTF_TYPE_ID_VALID(member->type)) { 1609 1554 btf_verifier_log_member(env, t, member, ··· 1797 1730 
return -EINVAL; 1798 1731 } 1799 1732 1733 + /* enum type either no name or a valid one */ 1734 + if (t->name_off && 1735 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1736 + btf_verifier_log_type(env, t, "Invalid name"); 1737 + return -EINVAL; 1738 + } 1739 + 1800 1740 btf_verifier_log_type(env, t, NULL); 1801 1741 1802 1742 for (i = 0; i < nr_enums; i++) { ··· 1812 1738 enums[i].name_off); 1813 1739 return -EINVAL; 1814 1740 } 1741 + 1742 + /* enum member must have a valid name */ 1743 + if (!enums[i].name_off || 1744 + !btf_name_valid_identifier(btf, enums[i].name_off)) { 1745 + btf_verifier_log_type(env, t, "Invalid name"); 1746 + return -EINVAL; 1747 + } 1748 + 1815 1749 1816 1750 btf_verifier_log(env, "\t%s val=%d\n", 1817 1751 btf_name_by_offset(btf, enums[i].name_off),
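The new `btf_name_valid_identifier()` above enforces C-identifier syntax on BTF type and member names before the kernel trusts them. The same check, lifted out of btf.c into a standalone function (with `KSYM_NAME_LEN` hard-coded here for the length cap):

```c
#include <ctype.h>
#include <stdbool.h>

#define KSYM_NAME_LEN 128

/* Accept [A-Za-z_][A-Za-z0-9_]*, at most KSYM_NAME_LEN bytes. */
bool name_valid_identifier(const char *src)
{
    const char *src_limit;

    if (!isalpha((unsigned char)*src) && *src != '_')
        return false;

    /* set a limit on identifier length */
    src_limit = src + KSYM_NAME_LEN;
    src++;
    while (*src && src < src_limit) {
        if (!isalnum((unsigned char)*src) && *src != '_')
            return false;
        src++;
    }

    return !*src;   /* false if we stopped at the length cap */
}
```

Note that the empty string is rejected by the very first test, which is why the kernel callers can pair this with a plain `!t->name_off` check.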
+89 -14
kernel/bpf/verifier.c
··· 175 175 176 176 #define BPF_COMPLEXITY_LIMIT_INSNS 131072 177 177 #define BPF_COMPLEXITY_LIMIT_STACK 1024 178 + #define BPF_COMPLEXITY_LIMIT_STATES 64 178 179 179 180 #define BPF_MAP_PTR_UNPRIV 1UL 180 181 #define BPF_MAP_PTR_POISON ((void *)((0xeB9FUL << 1) + \ ··· 3752 3751 } 3753 3752 } 3754 3753 3754 + /* compute branch direction of the expression "if (reg opcode val) goto target;" 3755 + * and return: 3756 + * 1 - branch will be taken and "goto target" will be executed 3757 + * 0 - branch will not be taken and fall-through to next insn 3758 + * -1 - unknown. Example: "if (reg < 5)" is unknown when register value range [0,10] 3759 + */ 3760 + static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode) 3761 + { 3762 + if (__is_pointer_value(false, reg)) 3763 + return -1; 3764 + 3765 + switch (opcode) { 3766 + case BPF_JEQ: 3767 + if (tnum_is_const(reg->var_off)) 3768 + return !!tnum_equals_const(reg->var_off, val); 3769 + break; 3770 + case BPF_JNE: 3771 + if (tnum_is_const(reg->var_off)) 3772 + return !tnum_equals_const(reg->var_off, val); 3773 + break; 3774 + case BPF_JGT: 3775 + if (reg->umin_value > val) 3776 + return 1; 3777 + else if (reg->umax_value <= val) 3778 + return 0; 3779 + break; 3780 + case BPF_JSGT: 3781 + if (reg->smin_value > (s64)val) 3782 + return 1; 3783 + else if (reg->smax_value < (s64)val) 3784 + return 0; 3785 + break; 3786 + case BPF_JLT: 3787 + if (reg->umax_value < val) 3788 + return 1; 3789 + else if (reg->umin_value >= val) 3790 + return 0; 3791 + break; 3792 + case BPF_JSLT: 3793 + if (reg->smax_value < (s64)val) 3794 + return 1; 3795 + else if (reg->smin_value >= (s64)val) 3796 + return 0; 3797 + break; 3798 + case BPF_JGE: 3799 + if (reg->umin_value >= val) 3800 + return 1; 3801 + else if (reg->umax_value < val) 3802 + return 0; 3803 + break; 3804 + case BPF_JSGE: 3805 + if (reg->smin_value >= (s64)val) 3806 + return 1; 3807 + else if (reg->smax_value < (s64)val) 3808 + return 0; 3809 + break; 3810 + case 
BPF_JLE: 3811 + if (reg->umax_value <= val) 3812 + return 1; 3813 + else if (reg->umin_value > val) 3814 + return 0; 3815 + break; 3816 + case BPF_JSLE: 3817 + if (reg->smax_value <= (s64)val) 3818 + return 1; 3819 + else if (reg->smin_value > (s64)val) 3820 + return 0; 3821 + break; 3822 + } 3823 + 3824 + return -1; 3825 + } 3826 + 3755 3827 /* Adjusts the register min/max values in the case that the dst_reg is the 3756 3828 * variable register that we are working on, and src_reg is a constant or we're 3757 3829 * simply doing a BPF_K check. ··· 4226 4152 4227 4153 dst_reg = &regs[insn->dst_reg]; 4228 4154 4229 - /* detect if R == 0 where R was initialized to zero earlier */ 4230 - if (BPF_SRC(insn->code) == BPF_K && 4231 - (opcode == BPF_JEQ || opcode == BPF_JNE) && 4232 - dst_reg->type == SCALAR_VALUE && 4233 - tnum_is_const(dst_reg->var_off)) { 4234 - if ((opcode == BPF_JEQ && dst_reg->var_off.value == insn->imm) || 4235 - (opcode == BPF_JNE && dst_reg->var_off.value != insn->imm)) { 4236 - /* if (imm == imm) goto pc+off; 4237 - * only follow the goto, ignore fall-through 4238 - */ 4155 + if (BPF_SRC(insn->code) == BPF_K) { 4156 + int pred = is_branch_taken(dst_reg, insn->imm, opcode); 4157 + 4158 + if (pred == 1) { 4159 + /* only follow the goto, ignore fall-through */ 4239 4160 *insn_idx += insn->off; 4240 4161 return 0; 4241 - } else { 4242 - /* if (imm != imm) goto pc+off; 4243 - * only follow fall-through branch, since 4162 + } else if (pred == 0) { 4163 + /* only follow fall-through branch, since 4244 4164 * that's where the program will go 4245 4165 */ 4246 4166 return 0; ··· 5048 4980 struct bpf_verifier_state_list *new_sl; 5049 4981 struct bpf_verifier_state_list *sl; 5050 4982 struct bpf_verifier_state *cur = env->cur_state, *new; 5051 - int i, j, err; 4983 + int i, j, err, states_cnt = 0; 5052 4984 5053 4985 sl = env->explored_states[insn_idx]; 5054 4986 if (!sl) ··· 5075 5007 return 1; 5076 5008 } 5077 5009 sl = sl->next; 5010 + states_cnt++; 5078 
5011 } 5012 + 5013 + if (!env->allow_ptr_leaks && states_cnt > BPF_COMPLEXITY_LIMIT_STATES) 5014 + return 0; 5079 5015 5080 5016 /* there were no equivalent states, remember current one. 5081 5017 * technically the current state is not proven to be safe yet, ··· 5219 5147 } 5220 5148 goto process_bpf_exit; 5221 5149 } 5150 + 5151 + if (signal_pending(current)) 5152 + return -EAGAIN; 5222 5153 5223 5154 if (need_resched()) 5224 5155 cond_resched();
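`is_branch_taken()` above lets the verifier prune dead branches using the tracked value range of a register. For the unsigned comparisons the rule is symmetric: the branch is surely taken when every value in the range satisfies the condition, surely not taken when no value does, and unknown otherwise. A standalone model of the `BPF_JGT` case (names are illustrative, modeled on `struct bpf_reg_state`):

```c
#include <stdint.h>

/* Tracked unsigned bounds of a register. */
struct reg_bounds {
    uint64_t umin, umax;
};

/* Branch direction for "if (reg > val)":
 *  1 - always taken, 0 - never taken, -1 - depends on runtime value. */
int branch_taken_jgt(const struct reg_bounds *r, uint64_t val)
{
    if (r->umin > val)
        return 1;   /* even the smallest possible value passes */
    if (r->umax <= val)
        return 0;   /* even the largest possible value fails */
    return -1;
}
```

The other opcodes in the patch follow the same pattern with the bound and comparison flipped (and the signed variants use `smin`/`smax` with an `(s64)` cast of the immediate).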
+2
kernel/events/uprobes.c
··· 572 572 * gets called, we don't get a chance to remove uprobe from 573 573 * delayed_uprobe_list from remove_breakpoint(). Do it here. 574 574 */ 575 + mutex_lock(&delayed_uprobe_lock); 575 576 delayed_uprobe_remove(uprobe, NULL); 577 + mutex_unlock(&delayed_uprobe_lock); 576 578 kfree(uprobe); 577 579 } 578 580 }
+1 -1
kernel/stackleak.c
··· 104 104 } 105 105 NOKPROBE_SYMBOL(stackleak_erase); 106 106 107 - void __used stackleak_track_stack(void) 107 + void __used notrace stackleak_track_stack(void) 108 108 { 109 109 /* 110 110 * N.B. stackleak_erase() fills the kernel stack with the poison value,
+20 -31
mm/huge_memory.c
··· 629 629 * available 630 630 * never: never stall for any thp allocation 631 631 */ 632 - static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma, unsigned long addr) 632 + static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma) 633 633 { 634 634 const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE); 635 - gfp_t this_node = 0; 636 635 637 - #ifdef CONFIG_NUMA 638 - struct mempolicy *pol; 639 - /* 640 - * __GFP_THISNODE is used only when __GFP_DIRECT_RECLAIM is not 641 - * specified, to express a general desire to stay on the current 642 - * node for optimistic allocation attempts. If the defrag mode 643 - * and/or madvise hint requires the direct reclaim then we prefer 644 - * to fallback to other node rather than node reclaim because that 645 - * can lead to excessive reclaim even though there is free memory 646 - * on other nodes. We expect that NUMA preferences are specified 647 - * by memory policies. 648 - */ 649 - pol = get_vma_policy(vma, addr); 650 - if (pol->mode != MPOL_BIND) 651 - this_node = __GFP_THISNODE; 652 - mpol_cond_put(pol); 653 - #endif 654 - 636 + /* Always do synchronous compaction */ 655 637 if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags)) 656 638 return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY); 639 + 640 + /* Kick kcompactd and fail quickly */ 657 641 if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags)) 658 - return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM | this_node; 642 + return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM; 643 + 644 + /* Synchronous compaction if madvised, otherwise kick kcompactd */ 659 645 if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags)) 660 - return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM : 661 - __GFP_KSWAPD_RECLAIM | this_node); 646 + return GFP_TRANSHUGE_LIGHT | 647 + (vma_madvised ? 
__GFP_DIRECT_RECLAIM : 648 + __GFP_KSWAPD_RECLAIM); 649 + 650 + /* Only do synchronous compaction if madvised */ 662 651 if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags)) 663 - return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM : 664 - this_node); 665 - return GFP_TRANSHUGE_LIGHT | this_node; 652 + return GFP_TRANSHUGE_LIGHT | 653 + (vma_madvised ? __GFP_DIRECT_RECLAIM : 0); 654 + 655 + return GFP_TRANSHUGE_LIGHT; 666 656 } 667 657 668 658 /* Caller must hold page table lock. */ ··· 724 734 pte_free(vma->vm_mm, pgtable); 725 735 return ret; 726 736 } 727 - gfp = alloc_hugepage_direct_gfpmask(vma, haddr); 728 - page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, vma, haddr, numa_node_id()); 737 + gfp = alloc_hugepage_direct_gfpmask(vma); 738 + page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER); 729 739 if (unlikely(!page)) { 730 740 count_vm_event(THP_FAULT_FALLBACK); 731 741 return VM_FAULT_FALLBACK; ··· 1295 1305 alloc: 1296 1306 if (transparent_hugepage_enabled(vma) && 1297 1307 !transparent_hugepage_debug_cow()) { 1298 - huge_gfp = alloc_hugepage_direct_gfpmask(vma, haddr); 1299 - new_page = alloc_pages_vma(huge_gfp, HPAGE_PMD_ORDER, vma, 1300 - haddr, numa_node_id()); 1308 + huge_gfp = alloc_hugepage_direct_gfpmask(vma); 1309 + new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER); 1301 1310 } else 1302 1311 new_page = NULL; 1303 1312
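After the rewrite, `alloc_hugepage_direct_gfpmask()` is a pure mapping from the THP defrag mode (plus the per-VMA madvise hint) to reclaim flags, with the NUMA node decision moved out to `alloc_hugepage_vma()`. The decision table can be modeled outside the kernel; the flag values below are illustrative placeholders, not the real `<linux/gfp.h>` bits:

```c
#include <stdbool.h>

/* Illustrative flag bits; real values live in <linux/gfp.h>. */
#define GFP_TRANSHUGE_LIGHT   0x1u
#define __GFP_DIRECT_RECLAIM  0x2u
#define __GFP_KSWAPD_RECLAIM  0x4u
#define __GFP_NORETRY         0x8u
#define GFP_TRANSHUGE         (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)

enum defrag_mode {
    DEFRAG_ALWAYS,          /* always stall for compaction */
    DEFRAG_DEFER,           /* kick kcompactd and fail quickly */
    DEFRAG_DEFER_MADVISE,   /* stall if madvised, else kick kcompactd */
    DEFRAG_MADVISE,         /* stall only if madvised */
    DEFRAG_NEVER,
};

unsigned int hugepage_gfpmask(enum defrag_mode mode, bool vma_madvised)
{
    switch (mode) {
    case DEFRAG_ALWAYS:
        return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
    case DEFRAG_DEFER:
        return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
    case DEFRAG_DEFER_MADVISE:
        return GFP_TRANSHUGE_LIGHT |
               (vma_madvised ? __GFP_DIRECT_RECLAIM : __GFP_KSWAPD_RECLAIM);
    case DEFRAG_MADVISE:
        return GFP_TRANSHUGE_LIGHT |
               (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);
    default:
        return GFP_TRANSHUGE_LIGHT;
    }
}
```

Only the "always" mode and the madvised cases ever include direct reclaim; everything else either wakes kcompactd or fails fast.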
+4 -2
mm/memory-failure.c
··· 1161 1161 LIST_HEAD(tokill); 1162 1162 int rc = -EBUSY; 1163 1163 loff_t start; 1164 + dax_entry_t cookie; 1164 1165 1165 1166 /* 1166 1167 * Prevent the inode from being freed while we are interrogating ··· 1170 1169 * also prevents changes to the mapping of this pfn until 1171 1170 * poison signaling is complete. 1172 1171 */ 1173 - if (!dax_lock_mapping_entry(page)) 1172 + cookie = dax_lock_page(page); 1173 + if (!cookie) 1174 1174 goto out; 1175 1175 1176 1176 if (hwpoison_filter(page)) { ··· 1222 1220 kill_procs(&tokill, flags & MF_MUST_KILL, !unmap_success, pfn, flags); 1223 1221 rc = 0; 1224 1222 unlock: 1225 - dax_unlock_mapping_entry(page); 1223 + dax_unlock_page(page, cookie); 1226 1224 out: 1227 1225 /* drop pgmap ref acquired in caller */ 1228 1226 put_dev_pagemap(pgmap);
+30 -4
mm/mempolicy.c
··· 1116 1116 } else if (PageTransHuge(page)) { 1117 1117 struct page *thp; 1118 1118 1119 - thp = alloc_pages_vma(GFP_TRANSHUGE, HPAGE_PMD_ORDER, vma, 1120 - address, numa_node_id()); 1119 + thp = alloc_hugepage_vma(GFP_TRANSHUGE, vma, address, 1120 + HPAGE_PMD_ORDER); 1121 1121 if (!thp) 1122 1122 return NULL; 1123 1123 prep_transhuge_page(thp); ··· 1662 1662 * freeing by another task. It is the caller's responsibility to free the 1663 1663 * extra reference for shared policies. 1664 1664 */ 1665 - struct mempolicy *get_vma_policy(struct vm_area_struct *vma, 1665 + static struct mempolicy *get_vma_policy(struct vm_area_struct *vma, 1666 1666 unsigned long addr) 1667 1667 { 1668 1668 struct mempolicy *pol = __get_vma_policy(vma, addr); ··· 2011 2011 * @vma: Pointer to VMA or NULL if not available. 2012 2012 * @addr: Virtual Address of the allocation. Must be inside the VMA. 2013 2013 * @node: Which node to prefer for allocation (modulo policy). 2014 + * @hugepage: for hugepages try only the preferred node if possible 2014 2015 * 2015 2016 * This function allocates a page from the kernel page pool and applies 2016 2017 * a NUMA policy associated with the VMA or the current process. 
··· 2022 2021 */ 2023 2022 struct page * 2024 2023 alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, 2025 - unsigned long addr, int node) 2024 + unsigned long addr, int node, bool hugepage) 2026 2025 { 2027 2026 struct mempolicy *pol; 2028 2027 struct page *page; ··· 2038 2037 mpol_cond_put(pol); 2039 2038 page = alloc_page_interleave(gfp, order, nid); 2040 2039 goto out; 2040 + } 2041 + 2042 + if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) { 2043 + int hpage_node = node; 2044 + 2045 + /* 2046 + * For hugepage allocation and non-interleave policy which 2047 + * allows the current node (or other explicitly preferred 2048 + * node) we only try to allocate from the current/preferred 2049 + * node and don't fall back to other nodes, as the cost of 2050 + * remote accesses would likely offset THP benefits. 2051 + * 2052 + * If the policy is interleave, or does not allow the current 2053 + * node in its nodemask, we allocate the standard way. 2054 + */ 2055 + if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL)) 2056 + hpage_node = pol->v.preferred_node; 2057 + 2058 + nmask = policy_nodemask(gfp, pol); 2059 + if (!nmask || node_isset(hpage_node, *nmask)) { 2060 + mpol_cond_put(pol); 2061 + page = __alloc_pages_node(hpage_node, 2062 + gfp | __GFP_THISNODE, order); 2063 + goto out; 2064 + } 2041 2065 } 2042 2066 2043 2067 nmask = policy_nodemask(gfp, pol);
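The new `hugepage` path in `alloc_pages_vma()` only pins the allocation with `__GFP_THISNODE` when the policy actually names a preferred node; interleave and local policies fall through to the normal allocator. The node-selection guard in isolation (a sketch; the struct and `MPOL_*` values here are illustrative stand-ins for the kernel's `struct mempolicy`):

```c
enum mpol_mode { MPOL_DEFAULT, MPOL_PREFERRED, MPOL_BIND, MPOL_INTERLEAVE };
#define MPOL_F_LOCAL 0x1u

struct mempolicy_sketch {
    enum mpol_mode mode;
    unsigned int flags;
    int preferred_node;
};

/* Which node a THP allocation targets: the policy's preferred node when
 * one is explicitly set, otherwise the caller's current node. */
int thp_target_node(const struct mempolicy_sketch *pol, int current_node)
{
    if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
        return pol->preferred_node;
    return current_node;
}
```

The kernel code then double-checks that node against `policy_nodemask()` before committing to the `__GFP_THISNODE` fast path.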
+1 -1
mm/shmem.c
··· 1439 1439 1440 1440 shmem_pseudo_vma_init(&pvma, info, hindex); 1441 1441 page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN, 1442 - HPAGE_PMD_ORDER, &pvma, 0, numa_node_id()); 1442 + HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(), true); 1443 1443 shmem_pseudo_vma_destroy(&pvma); 1444 1444 if (page) 1445 1445 prep_transhuge_page(page);
+15 -6
net/bpf/test_run.c
··· 28 28 return ret; 29 29 } 30 30 31 - static u32 bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *time) 31 + static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *ret, 32 + u32 *time) 32 33 { 33 34 struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { 0 }; 34 35 enum bpf_cgroup_storage_type stype; 35 36 u64 time_start, time_spent = 0; 36 - u32 ret = 0, i; 37 + u32 i; 37 38 38 39 for_each_cgroup_storage_type(stype) { 39 40 storage[stype] = bpf_cgroup_storage_alloc(prog, stype); ··· 50 49 repeat = 1; 51 50 time_start = ktime_get_ns(); 52 51 for (i = 0; i < repeat; i++) { 53 - ret = bpf_test_run_one(prog, ctx, storage); 52 + *ret = bpf_test_run_one(prog, ctx, storage); 54 53 if (need_resched()) { 55 54 if (signal_pending(current)) 56 55 break; ··· 66 65 for_each_cgroup_storage_type(stype) 67 66 bpf_cgroup_storage_free(storage[stype]); 68 67 69 - return ret; 68 + return 0; 70 69 } 71 70 72 71 static int bpf_test_finish(const union bpf_attr *kattr, ··· 166 165 __skb_push(skb, hh_len); 167 166 if (is_direct_pkt_access) 168 167 bpf_compute_data_pointers(skb); 169 - retval = bpf_test_run(prog, skb, repeat, &duration); 168 + ret = bpf_test_run(prog, skb, repeat, &retval, &duration); 169 + if (ret) { 170 + kfree_skb(skb); 171 + kfree(sk); 172 + return ret; 173 + } 170 174 if (!is_l2) { 171 175 if (skb_headroom(skb) < hh_len) { 172 176 int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb)); ··· 218 212 rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0); 219 213 xdp.rxq = &rxqueue->xdp_rxq; 220 214 221 - retval = bpf_test_run(prog, &xdp, repeat, &duration); 215 + ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration); 216 + if (ret) 217 + goto out; 222 218 if (xdp.data != data + XDP_PACKET_HEADROOM + NET_IP_ALIGN || 223 219 xdp.data_end != xdp.data + size) 224 220 size = xdp.data_end - xdp.data; 225 221 ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); 222 + out: 226 223 
kfree(data); 227 224 return ret; 228 225 }
+35 -30
net/core/dev.c
··· 2175 2175 return active; 2176 2176 } 2177 2177 2178 + static void reset_xps_maps(struct net_device *dev, 2179 + struct xps_dev_maps *dev_maps, 2180 + bool is_rxqs_map) 2181 + { 2182 + if (is_rxqs_map) { 2183 + static_key_slow_dec_cpuslocked(&xps_rxqs_needed); 2184 + RCU_INIT_POINTER(dev->xps_rxqs_map, NULL); 2185 + } else { 2186 + RCU_INIT_POINTER(dev->xps_cpus_map, NULL); 2187 + } 2188 + static_key_slow_dec_cpuslocked(&xps_needed); 2189 + kfree_rcu(dev_maps, rcu); 2190 + } 2191 + 2178 2192 static void clean_xps_maps(struct net_device *dev, const unsigned long *mask, 2179 2193 struct xps_dev_maps *dev_maps, unsigned int nr_ids, 2180 2194 u16 offset, u16 count, bool is_rxqs_map) ··· 2200 2186 j < nr_ids;) 2201 2187 active |= remove_xps_queue_cpu(dev, dev_maps, j, offset, 2202 2188 count); 2203 - if (!active) { 2204 - if (is_rxqs_map) { 2205 - RCU_INIT_POINTER(dev->xps_rxqs_map, NULL); 2206 - } else { 2207 - RCU_INIT_POINTER(dev->xps_cpus_map, NULL); 2189 + if (!active) 2190 + reset_xps_maps(dev, dev_maps, is_rxqs_map); 2208 2191 2209 - for (i = offset + (count - 1); count--; i--) 2210 - netdev_queue_numa_node_write( 2211 - netdev_get_tx_queue(dev, i), 2212 - NUMA_NO_NODE); 2192 + if (!is_rxqs_map) { 2193 + for (i = offset + (count - 1); count--; i--) { 2194 + netdev_queue_numa_node_write( 2195 + netdev_get_tx_queue(dev, i), 2196 + NUMA_NO_NODE); 2213 2197 } 2214 - kfree_rcu(dev_maps, rcu); 2215 2198 } 2216 2199 } 2217 2200 ··· 2245 2234 false); 2246 2235 2247 2236 out_no_maps: 2248 - if (static_key_enabled(&xps_rxqs_needed)) 2249 - static_key_slow_dec_cpuslocked(&xps_rxqs_needed); 2250 - 2251 - static_key_slow_dec_cpuslocked(&xps_needed); 2252 2237 mutex_unlock(&xps_map_mutex); 2253 2238 cpus_read_unlock(); 2254 2239 } ··· 2362 2355 if (!new_dev_maps) 2363 2356 goto out_no_new_maps; 2364 2357 2365 - static_key_slow_inc_cpuslocked(&xps_needed); 2366 - if (is_rxqs_map) 2367 - static_key_slow_inc_cpuslocked(&xps_rxqs_needed); 2358 + if (!dev_maps) { 2359 + /* 
Increment static keys at most once per type */ 2360 + static_key_slow_inc_cpuslocked(&xps_needed); 2361 + if (is_rxqs_map) 2362 + static_key_slow_inc_cpuslocked(&xps_rxqs_needed); 2363 + } 2368 2364 2369 2365 for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), 2370 2366 j < nr_ids;) { ··· 2465 2455 } 2466 2456 2467 2457 /* free map if not active */ 2468 - if (!active) { 2469 - if (is_rxqs_map) 2470 - RCU_INIT_POINTER(dev->xps_rxqs_map, NULL); 2471 - else 2472 - RCU_INIT_POINTER(dev->xps_cpus_map, NULL); 2473 - kfree_rcu(dev_maps, rcu); 2474 - } 2458 + if (!active) 2459 + reset_xps_maps(dev, dev_maps, is_rxqs_map); 2475 2460 2476 2461 out_no_maps: 2477 2462 mutex_unlock(&xps_map_mutex); ··· 5014 5009 struct net_device *orig_dev = skb->dev; 5015 5010 struct packet_type *pt_prev = NULL; 5016 5011 5017 - list_del(&skb->list); 5012 + skb_list_del_init(skb); 5018 5013 __netif_receive_skb_core(skb, pfmemalloc, &pt_prev); 5019 5014 if (!pt_prev) 5020 5015 continue; ··· 5170 5165 INIT_LIST_HEAD(&sublist); 5171 5166 list_for_each_entry_safe(skb, next, head, list) { 5172 5167 net_timestamp_check(netdev_tstamp_prequeue, skb); 5173 - list_del(&skb->list); 5168 + skb_list_del_init(skb); 5174 5169 if (!skb_defer_rx_timestamp(skb)) 5175 5170 list_add_tail(&skb->list, &sublist); 5176 5171 } ··· 5181 5176 rcu_read_lock(); 5182 5177 list_for_each_entry_safe(skb, next, head, list) { 5183 5178 xdp_prog = rcu_dereference(skb->dev->xdp_prog); 5184 - list_del(&skb->list); 5179 + skb_list_del_init(skb); 5185 5180 if (do_xdp_generic(xdp_prog, skb) == XDP_PASS) 5186 5181 list_add_tail(&skb->list, &sublist); 5187 5182 } ··· 5200 5195 5201 5196 if (cpu >= 0) { 5202 5197 /* Will be handled, remove from list */ 5203 - list_del(&skb->list); 5198 + skb_list_del_init(skb); 5204 5199 enqueue_to_backlog(skb, cpu, &rflow->last_qtail); 5205 5200 } 5206 5201 } ··· 6209 6204 napi->skb = NULL; 6210 6205 napi->poll = poll; 6211 6206 if (weight > NAPI_POLL_WEIGHT) 6212 - 
pr_err_once("netif_napi_add() called with weight %d on device %s\n", 6213 - weight, dev->name); 6207 + netdev_err_once(dev, "%s() called with weight %d\n", __func__, 6208 + weight); 6214 6209 napi->weight = weight; 6215 6210 list_add(&napi->dev_list, &dev->napi_list); 6216 6211 napi->dev = dev;
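Several `list_del(&skb->list)` calls above become `skb_list_del_init()`: plain `list_del()` poisons the removed entry's pointers, so later code that tests `skb->next` to decide whether the skb is still queued would read garbage, while the `_init` variant points the entry back at itself. A user-space model of the difference:

```c
struct list_head {
    struct list_head *next, *prev;
};

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x200)

static void __list_del_entry(struct list_head *entry)
{
    entry->next->prev = entry->prev;
    entry->prev->next = entry->next;
}

/* Classic list_del(): unlink and poison. */
void list_del(struct list_head *entry)
{
    __list_del_entry(entry);
    entry->next = LIST_POISON1;
    entry->prev = LIST_POISON2;
}

/* list_del_init(): unlink and make the entry an empty list of its own,
 * so "entry->next == entry" is a safe not-on-a-list test afterwards. */
void list_del_init(struct list_head *entry)
{
    __list_del_entry(entry);
    entry->next = entry;
    entry->prev = entry;
}
```

`skb_list_del_init()` is the sk_buff-flavored wrapper around the second form.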
+14 -13
net/core/filter.c
··· 4890 4890 struct net *net; 4891 4891 4892 4892 family = len == sizeof(tuple->ipv4) ? AF_INET : AF_INET6; 4893 - if (unlikely(family == AF_UNSPEC || netns_id > U32_MAX || flags)) 4893 + if (unlikely(family == AF_UNSPEC || flags || 4894 + !((s32)netns_id < 0 || netns_id <= S32_MAX))) 4894 4895 goto out; 4895 4896 4896 4897 if (skb->dev) 4897 4898 caller_net = dev_net(skb->dev); 4898 4899 else 4899 4900 caller_net = sock_net(skb->sk); 4900 - if (netns_id) { 4901 + if ((s32)netns_id < 0) { 4902 + net = caller_net; 4903 + sk = sk_lookup(net, tuple, skb, family, proto); 4904 + } else { 4901 4905 net = get_net_ns_by_id(caller_net, netns_id); 4902 4906 if (unlikely(!net)) 4903 4907 goto out; 4904 4908 sk = sk_lookup(net, tuple, skb, family, proto); 4905 4909 put_net(net); 4906 - } else { 4907 - net = caller_net; 4908 - sk = sk_lookup(net, tuple, skb, family, proto); 4909 4910 } 4910 4911 4911 4912 if (sk) ··· 5436 5435 if (size != size_default) 5437 5436 return false; 5438 5437 break; 5439 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5440 - if (size != sizeof(struct bpf_flow_keys *)) 5438 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5439 + if (size != sizeof(__u64)) 5441 5440 return false; 5442 5441 break; 5443 5442 default: ··· 5465 5464 case bpf_ctx_range(struct __sk_buff, data): 5466 5465 case bpf_ctx_range(struct __sk_buff, data_meta): 5467 5466 case bpf_ctx_range(struct __sk_buff, data_end): 5468 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5467 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5469 5468 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5470 5469 return false; 5471 5470 } ··· 5490 5489 switch (off) { 5491 5490 case bpf_ctx_range(struct __sk_buff, tc_classid): 5492 5491 case bpf_ctx_range(struct __sk_buff, data_meta): 5493 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5492 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5494 5493 return false; 5495 5494 case bpf_ctx_range(struct __sk_buff, data): 
5496 5495 case bpf_ctx_range(struct __sk_buff, data_end): ··· 5531 5530 case bpf_ctx_range(struct __sk_buff, tc_classid): 5532 5531 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5533 5532 case bpf_ctx_range(struct __sk_buff, data_meta): 5534 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5533 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5535 5534 return false; 5536 5535 } 5537 5536 ··· 5757 5756 case bpf_ctx_range(struct __sk_buff, data_end): 5758 5757 info->reg_type = PTR_TO_PACKET_END; 5759 5758 break; 5760 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5759 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5761 5760 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5762 5761 return false; 5763 5762 } ··· 5959 5958 switch (off) { 5960 5959 case bpf_ctx_range(struct __sk_buff, tc_classid): 5961 5960 case bpf_ctx_range(struct __sk_buff, data_meta): 5962 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5961 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5963 5962 return false; 5964 5963 } 5965 5964 ··· 6040 6039 case bpf_ctx_range(struct __sk_buff, data_end): 6041 6040 info->reg_type = PTR_TO_PACKET_END; 6042 6041 break; 6043 - case bpf_ctx_range(struct __sk_buff, flow_keys): 6042 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 6044 6043 info->reg_type = PTR_TO_FLOW_KEYS; 6045 6044 break; 6046 6045 case bpf_ctx_range(struct __sk_buff, tc_classid):
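The reworked guard in `bpf_sk_lookup()` above partitions the 64-bit *netns* argument: any value whose low 32 bits are negative as a signed integer (`BPF_F_CURRENT_NETNS` in practice) selects the caller's own namespace, values `0..S32_MAX` are relative namespace IDs, and everything else is rejected. The two predicates in isolation (truncation to `int32_t` assumes the usual two's-complement conversion, as on gcc/clang):

```c
#include <stdbool.h>
#include <stdint.h>

#define BPF_F_CURRENT_NETNS (-1L)

/* Mirrors: !((s32)netns_id < 0 || netns_id <= S32_MAX) -> reject. */
bool netns_id_valid(uint64_t netns_id)
{
    return (int32_t)netns_id < 0 || netns_id <= (uint64_t)INT32_MAX;
}

/* Mirrors the dispatch: negative means "caller's own namespace". */
bool use_current_netns(uint64_t netns_id)
{
    return (int32_t)netns_id < 0;
}
```

This is why the helper signatures in bpf.h changed from `u32 netns` to `u64 netns` in the same series: the full 64-bit value must reach the kernel so out-of-range IDs can be detected rather than silently truncated.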
+3
net/core/rtnetlink.c
··· 3800 3800 { 3801 3801 int err; 3802 3802 3803 + if (dev->type != ARPHRD_ETHER) 3804 + return -EINVAL; 3805 + 3803 3806 netif_addr_lock_bh(dev); 3804 3807 err = nlmsg_populate_fdb(skb, cb, dev, idx, &dev->uc); 3805 3808 if (err)
+33 -1
net/dsa/master.c
··· 158 158 cpu_dp->orig_ethtool_ops = NULL; 159 159 } 160 160 161 + static ssize_t tagging_show(struct device *d, struct device_attribute *attr, 162 + char *buf) 163 + { 164 + struct net_device *dev = to_net_dev(d); 165 + struct dsa_port *cpu_dp = dev->dsa_ptr; 166 + 167 + return sprintf(buf, "%s\n", 168 + dsa_tag_protocol_to_str(cpu_dp->tag_ops)); 169 + } 170 + static DEVICE_ATTR_RO(tagging); 171 + 172 + static struct attribute *dsa_slave_attrs[] = { 173 + &dev_attr_tagging.attr, 174 + NULL 175 + }; 176 + 177 + static const struct attribute_group dsa_group = { 178 + .name = "dsa", 179 + .attrs = dsa_slave_attrs, 180 + }; 181 + 161 182 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 162 183 { 184 + int ret; 185 + 163 186 /* If we use a tagging format that doesn't have an ethertype 164 187 * field, make sure that all packets from this point on get 165 188 * sent to the tag format's receive function. ··· 191 168 192 169 dev->dsa_ptr = cpu_dp; 193 170 194 - return dsa_master_ethtool_setup(dev); 171 + ret = dsa_master_ethtool_setup(dev); 172 + if (ret) 173 + return ret; 174 + 175 + ret = sysfs_create_group(&dev->dev.kobj, &dsa_group); 176 + if (ret) 177 + dsa_master_ethtool_teardown(dev); 178 + 179 + return ret; 195 180 } 196 181 197 182 void dsa_master_teardown(struct net_device *dev) 198 183 { 184 + sysfs_remove_group(&dev->dev.kobj, &dsa_group); 199 185 dsa_master_ethtool_teardown(dev); 200 186 201 187 dev->dsa_ptr = NULL;
-28
net/dsa/slave.c
··· 1058 1058 .name = "dsa", 1059 1059 }; 1060 1060 1061 - static ssize_t tagging_show(struct device *d, struct device_attribute *attr, 1062 - char *buf) 1063 - { 1064 - struct net_device *dev = to_net_dev(d); 1065 - struct dsa_port *dp = dsa_slave_to_port(dev); 1066 - 1067 - return sprintf(buf, "%s\n", 1068 - dsa_tag_protocol_to_str(dp->cpu_dp->tag_ops)); 1069 - } 1070 - static DEVICE_ATTR_RO(tagging); 1071 - 1072 - static struct attribute *dsa_slave_attrs[] = { 1073 - &dev_attr_tagging.attr, 1074 - NULL 1075 - }; 1076 - 1077 - static const struct attribute_group dsa_group = { 1078 - .name = "dsa", 1079 - .attrs = dsa_slave_attrs, 1080 - }; 1081 - 1082 1061 static void dsa_slave_phylink_validate(struct net_device *dev, 1083 1062 unsigned long *supported, 1084 1063 struct phylink_link_state *state) ··· 1353 1374 goto out_phy; 1354 1375 } 1355 1376 1356 - ret = sysfs_create_group(&slave_dev->dev.kobj, &dsa_group); 1357 - if (ret) 1358 - goto out_unreg; 1359 - 1360 1377 return 0; 1361 1378 1362 - out_unreg: 1363 - unregister_netdev(slave_dev); 1364 1379 out_phy: 1365 1380 rtnl_lock(); 1366 1381 phylink_disconnect_phy(p->dp->pl); ··· 1378 1405 rtnl_unlock(); 1379 1406 1380 1407 dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER); 1381 - sysfs_remove_group(&slave_dev->dev.kobj, &dsa_group); 1382 1408 unregister_netdev(slave_dev); 1383 1409 phylink_destroy(dp->pl); 1384 1410 free_percpu(p->stats64);
+7
net/ipv4/ip_fragment.c
··· 515 515 struct rb_node *rbn; 516 516 int len; 517 517 int ihlen; 518 + int delta; 518 519 int err; 519 520 u8 ecn; 520 521 ··· 557 556 if (len > 65535) 558 557 goto out_oversize; 559 558 559 + delta = - head->truesize; 560 + 560 561 /* Head of list must not be cloned. */ 561 562 if (skb_unclone(head, GFP_ATOMIC)) 562 563 goto out_nomem; 564 + 565 + delta += head->truesize; 566 + if (delta) 567 + add_frag_mem_limit(qp->q.net, delta); 563 568 564 569 /* If the first fragment is fragmented itself, we split 565 570 * it to two chunks: the first with data and paged part
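The delta bookkeeping added above (and mirrored in the IPv6 reassembly hunks below) captures how much skb_unclone() may grow head->truesize so the frag memory accounting stays balanced. A toy model of the pattern, with a stand-in for the unclone step (none of these names are kernel APIs):

```c
#include <assert.h>

/* Record -truesize before an operation that may reallocate, add the
 * new truesize after, and charge only the difference. */
struct toy_skb { int truesize; };

static int charged;

static void add_frag_mem_limit_toy(int delta) { charged += delta; }

/* Stand-in for skb_unclone(): pretend it grows the buffer. */
static void sim_unclone(struct toy_skb *skb) { skb->truesize += 256; }

static void reasm_head(struct toy_skb *head)
{
    int delta = -head->truesize;

    sim_unclone(head);          /* may grow truesize */

    delta += head->truesize;
    if (delta)
        add_frag_mem_limit_toy(delta);
}
```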
+2 -2
net/ipv4/ip_input.c
··· 547 547 list_for_each_entry_safe(skb, next, head, list) { 548 548 struct dst_entry *dst; 549 549 550 - list_del(&skb->list); 550 + skb_list_del_init(skb); 551 551 /* if ingress device is enslaved to an L3 master device pass the 552 552 * skb to its handler for processing 553 553 */ ··· 594 594 struct net_device *dev = skb->dev; 595 595 struct net *net = dev_net(dev); 596 596 597 - list_del(&skb->list); 597 + skb_list_del_init(skb); 598 598 skb = ip_rcv_core(skb, net); 599 599 if (skb == NULL) 600 600 continue;
+32 -13
net/ipv4/tcp_output.c
··· 1904 1904 * This algorithm is from John Heffner. 1905 1905 */ 1906 1906 static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb, 1907 - bool *is_cwnd_limited, u32 max_segs) 1907 + bool *is_cwnd_limited, 1908 + bool *is_rwnd_limited, 1909 + u32 max_segs) 1908 1910 { 1909 1911 const struct inet_connection_sock *icsk = inet_csk(sk); 1910 1912 u32 age, send_win, cong_win, limit, in_flight; 1911 1913 struct tcp_sock *tp = tcp_sk(sk); 1912 1914 struct sk_buff *head; 1913 1915 int win_divisor; 1914 - 1915 - if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) 1916 - goto send_now; 1917 1916 1918 1917 if (icsk->icsk_ca_state >= TCP_CA_Recovery) 1919 1918 goto send_now; ··· 1972 1973 if (age < (tp->srtt_us >> 4)) 1973 1974 goto send_now; 1974 1975 1975 - /* Ok, it looks like it is advisable to defer. */ 1976 + /* Ok, it looks like it is advisable to defer. 1977 + * Three cases are tracked : 1978 + * 1) We are cwnd-limited 1979 + * 2) We are rwnd-limited 1980 + * 3) We are application limited. 1981 + */ 1982 + if (cong_win < send_win) { 1983 + if (cong_win <= skb->len) { 1984 + *is_cwnd_limited = true; 1985 + return true; 1986 + } 1987 + } else { 1988 + if (send_win <= skb->len) { 1989 + *is_rwnd_limited = true; 1990 + return true; 1991 + } 1992 + } 1976 1993 1977 - if (cong_win < send_win && cong_win <= skb->len) 1978 - *is_cwnd_limited = true; 1994 + /* If this packet won't get more data, do not wait. 
*/ 1995 + if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) 1996 + goto send_now; 1979 1997 1980 1998 return true; 1981 1999 ··· 2372 2356 } else { 2373 2357 if (!push_one && 2374 2358 tcp_tso_should_defer(sk, skb, &is_cwnd_limited, 2375 - max_segs)) 2359 + &is_rwnd_limited, max_segs)) 2376 2360 break; 2377 2361 } 2378 2362 ··· 2510 2494 goto rearm_timer; 2511 2495 } 2512 2496 skb = skb_rb_last(&sk->tcp_rtx_queue); 2497 + if (unlikely(!skb)) { 2498 + WARN_ONCE(tp->packets_out, 2499 + "invalid inflight: %u state %u cwnd %u mss %d\n", 2500 + tp->packets_out, sk->sk_state, tp->snd_cwnd, mss); 2501 + inet_csk(sk)->icsk_pending = 0; 2502 + return; 2503 + } 2513 2504 2514 2505 /* At most one outstanding TLP retransmission. */ 2515 2506 if (tp->tlp_high_seq) 2516 - goto rearm_timer; 2517 - 2518 - /* Retransmit last segment. */ 2519 - if (WARN_ON(!skb)) 2520 2507 goto rearm_timer; 2521 2508 2522 2509 if (skb_still_in_host_queue(sk, skb)) ··· 2939 2920 TCP_SKB_CB(skb)->sacked |= TCPCB_EVER_RETRANS; 2940 2921 trace_tcp_retransmit_skb(sk, skb); 2941 2922 } else if (err != -EBUSY) { 2942 - NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); 2923 + NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL, segs); 2943 2924 } 2944 2925 return err; 2945 2926 }
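The three-way tracking introduced in tcp_tso_should_defer() above distinguishes cwnd-limited, rwnd-limited, and application-limited deferral. A sketch of just that classification branch (enum and function names are ours):

```c
#include <assert.h>
#include <stdint.h>

enum tso_limit { APP_LIMITED, CWND_LIMITED, RWND_LIMITED };

/* Mirrors the added branch: whichever window is smaller and cannot
 * hold the skb marks the flow as limited by that window; otherwise
 * the flow is application limited. */
static enum tso_limit classify(uint32_t cong_win, uint32_t send_win,
                               uint32_t skb_len)
{
    if (cong_win < send_win)
        return cong_win <= skb_len ? CWND_LIMITED : APP_LIMITED;
    return send_win <= skb_len ? RWND_LIMITED : APP_LIMITED;
}
```

In the kernel code the limited cases also return true immediately, deferring the send; the sketch keeps only the classification.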
+5 -5
net/ipv4/tcp_timer.c
··· 378 378 return; 379 379 } 380 380 381 - if (icsk->icsk_probes_out > max_probes) { 381 + if (icsk->icsk_probes_out >= max_probes) { 382 382 abort: tcp_write_err(sk); 383 383 } else { 384 384 /* Only send another probe if we didn't close things up. */ ··· 484 484 goto out_reset_timer; 485 485 } 486 486 487 + __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEOUTS); 487 488 if (tcp_write_timeout(sk)) 488 489 goto out; 489 490 490 491 if (icsk->icsk_retransmits == 0) { 491 - int mib_idx; 492 + int mib_idx = 0; 492 493 493 494 if (icsk->icsk_ca_state == TCP_CA_Recovery) { 494 495 if (tcp_is_sack(tp)) ··· 504 503 mib_idx = LINUX_MIB_TCPSACKFAILURES; 505 504 else 506 505 mib_idx = LINUX_MIB_TCPRENOFAILURES; 507 - } else { 508 - mib_idx = LINUX_MIB_TCPTIMEOUTS; 509 506 } 510 - __NET_INC_STATS(sock_net(sk), mib_idx); 507 + if (mib_idx) 508 + __NET_INC_STATS(sock_net(sk), mib_idx); 511 509 } 512 510 513 511 tcp_enter_loss(sk);
+2 -2
net/ipv6/ip6_input.c
··· 95 95 list_for_each_entry_safe(skb, next, head, list) { 96 96 struct dst_entry *dst; 97 97 98 - list_del(&skb->list); 98 + skb_list_del_init(skb); 99 99 /* if ingress device is enslaved to an L3 master device pass the 100 100 * skb to its handler for processing 101 101 */ ··· 296 296 struct net_device *dev = skb->dev; 297 297 struct net *net = dev_net(dev); 298 298 299 - list_del(&skb->list); 299 + skb_list_del_init(skb); 300 300 skb = ip6_rcv_core(skb, dev, net); 301 301 if (skb == NULL) 302 302 continue;
+21 -21
net/ipv6/ip6_output.c
··· 195 195 const struct ipv6_pinfo *np = inet6_sk(sk); 196 196 struct in6_addr *first_hop = &fl6->daddr; 197 197 struct dst_entry *dst = skb_dst(skb); 198 + unsigned int head_room; 198 199 struct ipv6hdr *hdr; 199 200 u8 proto = fl6->flowi6_proto; 200 201 int seg_len = skb->len; 201 202 int hlimit = -1; 202 203 u32 mtu; 203 204 204 - if (opt) { 205 - unsigned int head_room; 205 + head_room = sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev); 206 + if (opt) 207 + head_room += opt->opt_nflen + opt->opt_flen; 206 208 207 - /* First: exthdrs may take lots of space (~8K for now) 208 - MAX_HEADER is not enough. 209 - */ 210 - head_room = opt->opt_nflen + opt->opt_flen; 211 - seg_len += head_room; 212 - head_room += sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev); 213 - 214 - if (skb_headroom(skb) < head_room) { 215 - struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room); 216 - if (!skb2) { 217 - IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), 218 - IPSTATS_MIB_OUTDISCARDS); 219 - kfree_skb(skb); 220 - return -ENOBUFS; 221 - } 222 - if (skb->sk) 223 - skb_set_owner_w(skb2, skb->sk); 224 - consume_skb(skb); 225 - skb = skb2; 209 + if (unlikely(skb_headroom(skb) < head_room)) { 210 + struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room); 211 + if (!skb2) { 212 + IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), 213 + IPSTATS_MIB_OUTDISCARDS); 214 + kfree_skb(skb); 215 + return -ENOBUFS; 226 216 } 217 + if (skb->sk) 218 + skb_set_owner_w(skb2, skb->sk); 219 + consume_skb(skb); 220 + skb = skb2; 221 + } 222 + 223 + if (opt) { 224 + seg_len += opt->opt_nflen + opt->opt_flen; 225 + 227 226 if (opt->opt_flen) 228 227 ipv6_push_frag_opts(skb, opt, &proto); 228 + 229 229 if (opt->opt_nflen) 230 230 ipv6_push_nfrag_opts(skb, opt, &proto, &first_hop, 231 231 &fl6->saddr);
+7 -1
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 341 341 nf_ct_frag6_reasm(struct frag_queue *fq, struct sk_buff *prev, struct net_device *dev) 342 342 { 343 343 struct sk_buff *fp, *head = fq->q.fragments; 344 - int payload_len; 344 + int payload_len, delta; 345 345 u8 ecn; 346 346 347 347 inet_frag_kill(&fq->q); ··· 363 363 return false; 364 364 } 365 365 366 + delta = - head->truesize; 367 + 366 368 /* Head of list must not be cloned. */ 367 369 if (skb_unclone(head, GFP_ATOMIC)) 368 370 return false; 371 + 372 + delta += head->truesize; 373 + if (delta) 374 + add_frag_mem_limit(fq->q.net, delta); 369 375 370 376 /* If the first fragment is fragmented itself, we split 371 377 * it to two chunks: the first with data and paged part
+7 -1
net/ipv6/reassembly.c
··· 281 281 { 282 282 struct net *net = container_of(fq->q.net, struct net, ipv6.frags); 283 283 struct sk_buff *fp, *head = fq->q.fragments; 284 - int payload_len; 284 + int payload_len, delta; 285 285 unsigned int nhoff; 286 286 int sum_truesize; 287 287 u8 ecn; ··· 322 322 if (payload_len > IPV6_MAXPLEN) 323 323 goto out_oversize; 324 324 325 + delta = - head->truesize; 326 + 325 327 /* Head of list must not be cloned. */ 326 328 if (skb_unclone(head, GFP_ATOMIC)) 327 329 goto out_oom; 330 + 331 + delta += head->truesize; 332 + if (delta) 333 + add_frag_mem_limit(fq->q.net, delta); 328 334 329 335 /* If the first fragment is fragmented itself, we split 330 336 * it to two chunks: the first with data and paged part
+1
net/ipv6/seg6_iptunnel.c
··· 347 347 struct ipv6hdr *hdr = ipv6_hdr(skb); 348 348 struct flowi6 fl6; 349 349 350 + memset(&fl6, 0, sizeof(fl6)); 350 351 fl6.daddr = hdr->daddr; 351 352 fl6.saddr = hdr->saddr; 352 353 fl6.flowlabel = ip6_flowinfo(hdr);
+4 -3
net/mac80211/cfg.c
··· 2891 2891 2892 2892 len = beacon->head_len + beacon->tail_len + beacon->beacon_ies_len + 2893 2893 beacon->proberesp_ies_len + beacon->assocresp_ies_len + 2894 - beacon->probe_resp_len; 2894 + beacon->probe_resp_len + beacon->lci_len + beacon->civicloc_len; 2895 2895 2896 2896 new_beacon = kzalloc(sizeof(*new_beacon) + len, GFP_KERNEL); 2897 2897 if (!new_beacon) ··· 2934 2934 memcpy(pos, beacon->probe_resp, beacon->probe_resp_len); 2935 2935 pos += beacon->probe_resp_len; 2936 2936 } 2937 - if (beacon->ftm_responder) 2938 - new_beacon->ftm_responder = beacon->ftm_responder; 2937 + 2938 + /* might copy -1, meaning no changes requested */ 2939 + new_beacon->ftm_responder = beacon->ftm_responder; 2939 2940 if (beacon->lci) { 2940 2941 new_beacon->lci_len = beacon->lci_len; 2941 2942 new_beacon->lci = pos;
+2
net/mac80211/iface.c
··· 1015 1015 if (local->open_count == 0) 1016 1016 ieee80211_clear_tx_pending(local); 1017 1017 1018 + sdata->vif.bss_conf.beacon_int = 0; 1019 + 1018 1020 /* 1019 1021 * If the interface goes down while suspended, presumably because 1020 1022 * the device was unplugged and that happens before our resume,
+8 -4
net/mac80211/mlme.c
··· 2766 2766 { 2767 2767 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 2768 2768 struct sta_info *sta; 2769 + bool result = true; 2769 2770 2770 2771 sdata_info(sdata, "authenticated\n"); 2771 2772 ifmgd->auth_data->done = true; ··· 2779 2778 sta = sta_info_get(sdata, bssid); 2780 2779 if (!sta) { 2781 2780 WARN_ONCE(1, "%s: STA %pM not found", sdata->name, bssid); 2782 - return false; 2781 + result = false; 2782 + goto out; 2783 2783 } 2784 2784 if (sta_info_move_state(sta, IEEE80211_STA_AUTH)) { 2785 2785 sdata_info(sdata, "failed moving %pM to auth\n", bssid); 2786 - return false; 2786 + result = false; 2787 + goto out; 2787 2788 } 2788 - mutex_unlock(&sdata->local->sta_mtx); 2789 2789 2790 - return true; 2790 + out: 2791 + mutex_unlock(&sdata->local->sta_mtx); 2792 + return result; 2791 2793 } 2792 2794 2793 2795 static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+3 -2
net/mac80211/rx.c
··· 1403 1403 return RX_CONTINUE; 1404 1404 1405 1405 if (ieee80211_is_ctl(hdr->frame_control) || 1406 + ieee80211_is_nullfunc(hdr->frame_control) || 1406 1407 ieee80211_is_qos_nullfunc(hdr->frame_control) || 1407 1408 is_multicast_ether_addr(hdr->addr1)) 1408 1409 return RX_CONTINUE; ··· 3064 3063 cfg80211_sta_opmode_change_notify(sdata->dev, 3065 3064 rx->sta->addr, 3066 3065 &sta_opmode, 3067 - GFP_KERNEL); 3066 + GFP_ATOMIC); 3068 3067 goto handled; 3069 3068 } 3070 3069 case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: { ··· 3101 3100 cfg80211_sta_opmode_change_notify(sdata->dev, 3102 3101 rx->sta->addr, 3103 3102 &sta_opmode, 3104 - GFP_KERNEL); 3103 + GFP_ATOMIC); 3105 3104 goto handled; 3106 3105 } 3107 3106 default:
+2
net/mac80211/status.c
··· 964 964 /* Track when last TDLS packet was ACKed */ 965 965 if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) 966 966 sta->status_stats.last_tdls_pkt_time = jiffies; 967 + } else if (test_sta_flag(sta, WLAN_STA_PS_STA)) { 968 + return; 967 969 } else { 968 970 ieee80211_lost_packet(sta, info); 969 971 }
+2 -2
net/mac80211/tx.c
··· 439 439 if (ieee80211_hw_check(&tx->local->hw, QUEUE_CONTROL)) 440 440 info->hw_queue = tx->sdata->vif.cab_queue; 441 441 442 - /* no stations in PS mode */ 443 - if (!atomic_read(&ps->num_sta_ps)) 442 + /* no stations in PS mode and no buffered packets */ 443 + if (!atomic_read(&ps->num_sta_ps) && skb_queue_empty(&ps->bc_buf)) 444 444 return TX_CONTINUE; 445 445 446 446 info->flags |= IEEE80211_TX_CTL_SEND_AFTER_DTIM;
+1 -1
net/openvswitch/conntrack.c
··· 1166 1166 &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); 1167 1167 if (err) { 1168 1168 net_warn_ratelimited("openvswitch: zone: %u " 1169 - "execeeds conntrack limit\n", 1169 + "exceeds conntrack limit\n", 1170 1170 info->zone.id); 1171 1171 return err; 1172 1172 }
+12 -12
net/sched/act_police.c
··· 85 85 int ovr, int bind, bool rtnl_held, 86 86 struct netlink_ext_ack *extack) 87 87 { 88 - int ret = 0, err; 88 + int ret = 0, tcfp_result = TC_ACT_OK, err, size; 89 89 struct nlattr *tb[TCA_POLICE_MAX + 1]; 90 90 struct tc_police *parm; 91 91 struct tcf_police *police; ··· 93 93 struct tc_action_net *tn = net_generic(net, police_net_id); 94 94 struct tcf_police_params *new; 95 95 bool exists = false; 96 - int size; 97 96 98 97 if (nla == NULL) 99 98 return -EINVAL; ··· 159 160 goto failure; 160 161 } 161 162 163 + if (tb[TCA_POLICE_RESULT]) { 164 + tcfp_result = nla_get_u32(tb[TCA_POLICE_RESULT]); 165 + if (TC_ACT_EXT_CMP(tcfp_result, TC_ACT_GOTO_CHAIN)) { 166 + NL_SET_ERR_MSG(extack, 167 + "goto chain not allowed on fallback"); 168 + err = -EINVAL; 169 + goto failure; 170 + } 171 + } 172 + 162 173 new = kzalloc(sizeof(*new), GFP_KERNEL); 163 174 if (unlikely(!new)) { 164 175 err = -ENOMEM; ··· 176 167 } 177 168 178 169 /* No failure allowed after this point */ 170 + new->tcfp_result = tcfp_result; 179 171 new->tcfp_mtu = parm->mtu; 180 172 if (!new->tcfp_mtu) { 181 173 new->tcfp_mtu = ~0; ··· 205 195 206 196 if (tb[TCA_POLICE_AVRATE]) 207 197 new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]); 208 - 209 - if (tb[TCA_POLICE_RESULT]) { 210 - new->tcfp_result = nla_get_u32(tb[TCA_POLICE_RESULT]); 211 - if (TC_ACT_EXT_CMP(new->tcfp_result, TC_ACT_GOTO_CHAIN)) { 212 - NL_SET_ERR_MSG(extack, 213 - "goto chain not allowed on fallback"); 214 - err = -EINVAL; 215 - goto failure; 216 - } 217 - } 218 198 219 199 spin_lock_bh(&police->tcf_lock); 220 200 spin_lock_bh(&police->tcfp_lock);
+10 -13
net/sched/cls_flower.c
··· 1238 1238 if (err) 1239 1239 goto errout_idr; 1240 1240 1241 - if (!tc_skip_sw(fnew->flags)) { 1242 - if (!fold && fl_lookup(fnew->mask, &fnew->mkey)) { 1243 - err = -EEXIST; 1244 - goto errout_mask; 1245 - } 1246 - 1247 - err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node, 1248 - fnew->mask->filter_ht_params); 1249 - if (err) 1250 - goto errout_mask; 1241 + if (!fold && fl_lookup(fnew->mask, &fnew->mkey)) { 1242 + err = -EEXIST; 1243 + goto errout_mask; 1251 1244 } 1245 + 1246 + err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node, 1247 + fnew->mask->filter_ht_params); 1248 + if (err) 1249 + goto errout_mask; 1252 1250 1253 1251 if (!tc_skip_hw(fnew->flags)) { 1254 1252 err = fl_hw_replace_filter(tp, fnew, extack); ··· 1301 1303 struct cls_fl_head *head = rtnl_dereference(tp->root); 1302 1304 struct cls_fl_filter *f = arg; 1303 1305 1304 - if (!tc_skip_sw(f->flags)) 1305 - rhashtable_remove_fast(&f->mask->ht, &f->ht_node, 1306 - f->mask->filter_ht_params); 1306 + rhashtable_remove_fast(&f->mask->ht, &f->ht_node, 1307 + f->mask->filter_ht_params); 1307 1308 __fl_delete(tp, f, extack); 1308 1309 *last = list_empty(&head->masks); 1309 1310 return 0;
+3
net/sched/sch_netem.c
··· 431 431 int count = 1; 432 432 int rc = NET_XMIT_SUCCESS; 433 433 434 + /* Do not fool qdisc_drop_all() */ 435 + skb->prev = NULL; 436 + 434 437 /* Random duplication */ 435 438 if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor)) 436 439 ++count;
+5 -4
net/sctp/associola.c
··· 118 118 asoc->flowlabel = sp->flowlabel; 119 119 asoc->dscp = sp->dscp; 120 120 121 - /* Initialize default path MTU. */ 122 - asoc->pathmtu = sp->pathmtu; 123 - 124 121 /* Set association default SACK delay */ 125 122 asoc->sackdelay = msecs_to_jiffies(sp->sackdelay); 126 123 asoc->sackfreq = sp->sackfreq; ··· 248 251 if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 249 252 0, gfp)) 250 253 goto fail_init; 254 + 255 + /* Initialize default path MTU. */ 256 + asoc->pathmtu = sp->pathmtu; 257 + sctp_assoc_update_frag_point(asoc); 251 258 252 259 /* Assume that peer would support both address types unless we are 253 260 * told otherwise. ··· 435 434 436 435 WARN_ON(atomic_read(&asoc->rmem_alloc)); 437 436 438 - kfree(asoc); 437 + kfree_rcu(asoc, rcu); 439 438 SCTP_DBG_OBJCNT_DEC(assoc); 440 439 } 441 440
+6
net/sctp/chunk.c
··· 191 191 * the packet 192 192 */ 193 193 max_data = asoc->frag_point; 194 + if (unlikely(!max_data)) { 195 + max_data = sctp_min_frag_point(sctp_sk(asoc->base.sk), 196 + sctp_datachk_len(&asoc->stream)); 197 + pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to default minimum (%Zu)", 198 + __func__, asoc, max_data); 199 + } 194 200 195 201 /* If the the peer requested that we authenticate DATA chunks 196 202 * we need to account for bundling of the AUTH chunks along with
+3
net/sctp/sm_make_chunk.c
··· 2462 2462 asoc->c.sinit_max_instreams, gfp)) 2463 2463 goto clean_up; 2464 2464 2465 + /* Update frag_point when stream_interleave may get changed. */ 2466 + sctp_assoc_update_frag_point(asoc); 2467 + 2465 2468 if (!asoc->temp && sctp_assoc_set_id(asoc, gfp)) 2466 2469 goto clean_up; 2467 2470
+1 -2
net/sctp/socket.c
··· 3324 3324 __u16 datasize = asoc ? sctp_datachk_len(&asoc->stream) : 3325 3325 sizeof(struct sctp_data_chunk); 3326 3326 3327 - min_len = sctp_mtu_payload(sp, SCTP_DEFAULT_MINSEGMENT, 3328 - datasize); 3327 + min_len = sctp_min_frag_point(sp, datasize); 3329 3328 max_len = SCTP_MAX_CHUNK_LEN - datasize; 3330 3329 3331 3330 if (val < min_len || val > max_len)
+4
net/sunrpc/auth_gss/auth_gss.c
··· 1791 1791 for (i=0; i < rqstp->rq_enc_pages_num; i++) 1792 1792 __free_page(rqstp->rq_enc_pages[i]); 1793 1793 kfree(rqstp->rq_enc_pages); 1794 + rqstp->rq_release_snd_buf = NULL; 1794 1795 } 1795 1796 1796 1797 static int ··· 1799 1798 { 1800 1799 struct xdr_buf *snd_buf = &rqstp->rq_snd_buf; 1801 1800 int first, last, i; 1801 + 1802 + if (rqstp->rq_release_snd_buf) 1803 + rqstp->rq_release_snd_buf(rqstp); 1802 1804 1803 1805 if (snd_buf->page_len == 0) { 1804 1806 rqstp->rq_enc_pages_num = 0;
+8
net/sunrpc/clnt.c
··· 1915 1915 struct rpc_clnt *clnt = task->tk_client; 1916 1916 int status = task->tk_status; 1917 1917 1918 + /* Check if the task was already transmitted */ 1919 + if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) { 1920 + xprt_end_transmit(task); 1921 + task->tk_action = call_transmit_status; 1922 + return; 1923 + } 1924 + 1918 1925 dprint_status(task); 1919 1926 1920 1927 trace_rpc_connect_status(task); ··· 2309 2302 task->tk_status = 0; 2310 2303 /* Note: rpc_verify_header() may have freed the RPC slot */ 2311 2304 if (task->tk_rqstp == req) { 2305 + xdr_free_bvec(&req->rq_rcv_buf); 2312 2306 req->rq_reply_bytes_recvd = req->rq_rcv_buf.len = 0; 2313 2307 if (task->tk_client->cl_discrtry) 2314 2308 xprt_conditional_disconnect(req->rq_xprt,
+11 -2
net/sunrpc/xprt.c
··· 826 826 return; 827 827 if (xprt_test_and_set_connecting(xprt)) 828 828 return; 829 - xprt->stat.connect_start = jiffies; 830 - xprt->ops->connect(xprt, task); 829 + /* Race breaker */ 830 + if (!xprt_connected(xprt)) { 831 + xprt->stat.connect_start = jiffies; 832 + xprt->ops->connect(xprt, task); 833 + } else { 834 + xprt_clear_connecting(xprt); 835 + task->tk_status = 0; 836 + rpc_wake_up_queued_task(&xprt->pending, task); 837 + } 831 838 } 832 839 xprt_release_write(xprt, task); 833 840 } ··· 1630 1623 req->rq_snd_buf.buflen = 0; 1631 1624 req->rq_rcv_buf.len = 0; 1632 1625 req->rq_rcv_buf.buflen = 0; 1626 + req->rq_snd_buf.bvec = NULL; 1627 + req->rq_rcv_buf.bvec = NULL; 1633 1628 req->rq_release_snd_buf = NULL; 1634 1629 xprt_reset_majortimeo(req); 1635 1630 dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,
+38 -43
net/sunrpc/xprtsock.c
··· 330 330 { 331 331 size_t i,n; 332 332 333 - if (!(buf->flags & XDRBUF_SPARSE_PAGES)) 333 + if (!want || !(buf->flags & XDRBUF_SPARSE_PAGES)) 334 334 return want; 335 - if (want > buf->page_len) 336 - want = buf->page_len; 337 335 n = (buf->page_base + want + PAGE_SIZE - 1) >> PAGE_SHIFT; 338 336 for (i = 0; i < n; i++) { 339 337 if (buf->pages[i]) 340 338 continue; 341 339 buf->bvec[i].bv_page = buf->pages[i] = alloc_page(gfp); 342 340 if (!buf->pages[i]) { 343 - buf->page_len = (i * PAGE_SIZE) - buf->page_base; 344 - return buf->page_len; 341 + i *= PAGE_SIZE; 342 + return i > buf->page_base ? i - buf->page_base : 0; 345 343 } 346 344 } 347 345 return want; ··· 376 378 xs_read_discard(struct socket *sock, struct msghdr *msg, int flags, 377 379 size_t count) 378 380 { 379 - struct kvec kvec = { 0 }; 380 - return xs_read_kvec(sock, msg, flags | MSG_TRUNC, &kvec, count, 0); 381 + iov_iter_discard(&msg->msg_iter, READ, count); 382 + return sock_recvmsg(sock, msg, flags); 381 383 } 382 384 383 385 static ssize_t ··· 396 398 if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC)) 397 399 goto out; 398 400 if (ret != want) 399 - goto eagain; 401 + goto out; 400 402 seek = 0; 401 403 } else { 402 404 seek -= buf->head[0].iov_len; 403 405 offset += buf->head[0].iov_len; 404 406 } 405 - if (seek < buf->page_len) { 406 - want = xs_alloc_sparse_pages(buf, 407 - min_t(size_t, count - offset, buf->page_len), 408 - GFP_NOWAIT); 407 + 408 + want = xs_alloc_sparse_pages(buf, 409 + min_t(size_t, count - offset, buf->page_len), 410 + GFP_NOWAIT); 411 + if (seek < want) { 409 412 ret = xs_read_bvec(sock, msg, flags, buf->bvec, 410 413 xdr_buf_pagecount(buf), 411 414 want + buf->page_base, ··· 417 418 if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC)) 418 419 goto out; 419 420 if (ret != want) 420 - goto eagain; 421 + goto out; 421 422 seek = 0; 422 423 } else { 423 - seek -= buf->page_len; 424 - offset += buf->page_len; 424 + seek -= want; 425 + offset += want; 425 
426 } 427 + 426 428 if (seek < buf->tail[0].iov_len) { 427 429 want = min_t(size_t, count - offset, buf->tail[0].iov_len); 428 430 ret = xs_read_kvec(sock, msg, flags, &buf->tail[0], want, seek); ··· 433 433 if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC)) 434 434 goto out; 435 435 if (ret != want) 436 - goto eagain; 436 + goto out; 437 437 } else 438 438 offset += buf->tail[0].iov_len; 439 439 ret = -EMSGSIZE; 440 - msg->msg_flags |= MSG_TRUNC; 441 440 out: 442 441 *read = offset - seek_init; 443 442 return ret; 444 - eagain: 445 - ret = -EAGAIN; 446 - goto out; 447 443 sock_err: 448 444 offset += seek; 449 445 goto out; ··· 482 486 if (transport->recv.offset == transport->recv.len) { 483 487 if (xs_read_stream_request_done(transport)) 484 488 msg->msg_flags |= MSG_EOR; 485 - return transport->recv.copied; 489 + return read; 486 490 } 487 491 488 492 switch (ret) { 493 + default: 494 + break; 495 + case -EFAULT: 489 496 case -EMSGSIZE: 490 - return transport->recv.copied; 497 + msg->msg_flags |= MSG_TRUNC; 498 + return read; 491 499 case 0: 492 500 return -ESHUTDOWN; 493 - default: 494 - if (ret < 0) 495 - return ret; 496 501 } 497 - return -EAGAIN; 502 + return ret < 0 ? 
ret : read; 498 503 } 499 504 500 505 static size_t ··· 534 537 535 538 ret = xs_read_stream_request(transport, msg, flags, req); 536 539 if (msg->msg_flags & (MSG_EOR|MSG_TRUNC)) 537 - xprt_complete_bc_request(req, ret); 540 + xprt_complete_bc_request(req, transport->recv.copied); 538 541 539 542 return ret; 540 543 } ··· 567 570 568 571 spin_lock(&xprt->queue_lock); 569 572 if (msg->msg_flags & (MSG_EOR|MSG_TRUNC)) 570 - xprt_complete_rqst(req->rq_task, ret); 573 + xprt_complete_rqst(req->rq_task, transport->recv.copied); 571 574 xprt_unpin_rqst(req); 572 575 out: 573 576 spin_unlock(&xprt->queue_lock); ··· 588 591 if (ret <= 0) 589 592 goto out_err; 590 593 transport->recv.offset = ret; 591 - if (ret != want) { 592 - ret = -EAGAIN; 593 - goto out_err; 594 - } 594 + if (transport->recv.offset != want) 595 + return transport->recv.offset; 595 596 transport->recv.len = be32_to_cpu(transport->recv.fraghdr) & 596 597 RPC_FRAGMENT_SIZE_MASK; 597 598 transport->recv.offset -= sizeof(transport->recv.fraghdr); ··· 597 602 } 598 603 599 604 switch (be32_to_cpu(transport->recv.calldir)) { 605 + default: 606 + msg.msg_flags |= MSG_TRUNC; 607 + break; 600 608 case RPC_CALL: 601 609 ret = xs_read_stream_call(transport, &msg, flags); 602 610 break; ··· 614 616 goto out_err; 615 617 read += ret; 616 618 if (transport->recv.offset < transport->recv.len) { 619 + if (!(msg.msg_flags & MSG_TRUNC)) 620 + return read; 621 + msg.msg_flags = 0; 617 622 ret = xs_read_discard(transport->sock, &msg, flags, 618 623 transport->recv.len - transport->recv.offset); 619 624 if (ret <= 0) ··· 624 623 transport->recv.offset += ret; 625 624 read += ret; 626 625 if (transport->recv.offset != transport->recv.len) 627 - return -EAGAIN; 626 + return read; 628 627 } 629 628 if (xs_read_stream_request_done(transport)) { 630 629 trace_xs_stream_read_request(transport); ··· 634 633 transport->recv.len = 0; 635 634 return read; 636 635 out_err: 637 - switch (ret) { 638 - case 0: 639 - case -ESHUTDOWN: 640 
- xprt_force_disconnect(&transport->xprt); 641 - return -ESHUTDOWN; 642 - } 643 - return ret; 636 + return ret != 0 ? ret : -ESHUTDOWN; 644 637 } 645 638 646 639 static void xs_stream_data_receive(struct sock_xprt *transport) ··· 643 648 ssize_t ret = 0; 644 649 645 650 mutex_lock(&transport->recv_mutex); 651 + clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state); 646 652 if (transport->sock == NULL) 647 653 goto out; 648 - clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state); 649 654 for (;;) { 650 655 ret = xs_read_stream(transport, MSG_DONTWAIT); 651 - if (ret <= 0) 656 + if (ret < 0) 652 657 break; 653 658 read += ret; 654 659 cond_resched(); ··· 1340 1345 int err; 1341 1346 1342 1347 mutex_lock(&transport->recv_mutex); 1348 + clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state); 1343 1349 sk = transport->inet; 1344 1350 if (sk == NULL) 1345 1351 goto out; 1346 - clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state); 1347 1352 for (;;) { 1348 1353 skb = skb_recv_udp(sk, 0, 1, &err); 1349 1354 if (skb == NULL)
+2 -2
net/wireless/mlme.c
··· 272 272 273 273 p1 = (u8*)(ht_capa); 274 274 p2 = (u8*)(ht_capa_mask); 275 - for (i = 0; i<sizeof(*ht_capa); i++) 275 + for (i = 0; i < sizeof(*ht_capa); i++) 276 276 p1[i] &= p2[i]; 277 277 } 278 278 279 - /* Do a logical ht_capa &= ht_capa_mask. */ 279 + /* Do a logical vht_capa &= vht_capa_mask. */ 280 280 void cfg80211_oper_and_vht_capa(struct ieee80211_vht_cap *vht_capa, 281 281 const struct ieee80211_vht_cap *vht_capa_mask) 282 282 {
+1
net/wireless/nl80211.c
··· 7870 7870 } 7871 7871 7872 7872 memset(&params, 0, sizeof(params)); 7873 + params.beacon_csa.ftm_responder = -1; 7873 7874 7874 7875 if (!info->attrs[NL80211_ATTR_WIPHY_FREQ] || 7875 7876 !info->attrs[NL80211_ATTR_CH_SWITCH_COUNT])
+7 -1
net/wireless/sme.c
··· 642 642 * All devices must be idle as otherwise if you are actively 643 643 * scanning some new beacon hints could be learned and would 644 644 * count as new regulatory hints. 645 + * Also if there is any other active beaconing interface we 646 + * need not issue a disconnect hint and reset any info such 647 + * as chan dfs state, etc. 645 648 */ 646 649 list_for_each_entry(rdev, &cfg80211_rdev_list, list) { 647 650 list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) { 648 651 wdev_lock(wdev); 649 - if (wdev->conn || wdev->current_bss) 652 + if (wdev->conn || wdev->current_bss || 653 + cfg80211_beaconing_iface_active(wdev)) 650 654 is_all_idle = false; 651 655 wdev_unlock(wdev); 652 656 } ··· 1175 1171 1176 1172 cfg80211_oper_and_ht_capa(&connect->ht_capa_mask, 1177 1173 rdev->wiphy.ht_capa_mod_mask); 1174 + cfg80211_oper_and_vht_capa(&connect->vht_capa_mask, 1175 + rdev->wiphy.vht_capa_mod_mask); 1178 1176 1179 1177 if (connkeys && connkeys->def >= 0) { 1180 1178 int idx;
+2
net/wireless/util.c
··· 1421 1421 ies[pos + ext], 1422 1422 ext == 2)) 1423 1423 pos = skip_ie(ies, ielen, pos); 1424 + else 1425 + break; 1424 1426 } 1425 1427 } else { 1426 1428 pos = skip_ie(ies, ielen, pos);
+11 -7
net/x25/af_x25.c
··· 100 100 } 101 101 102 102 len = *skb->data; 103 - needed = 1 + (len >> 4) + (len & 0x0f); 103 + needed = 1 + ((len >> 4) + (len & 0x0f) + 1) / 2; 104 104 105 105 if (!pskb_may_pull(skb, needed)) { 106 106 /* packet is too short to hold the addresses it claims ··· 288 288 sk_for_each(s, &x25_list) 289 289 if ((!strcmp(addr->x25_addr, 290 290 x25_sk(s)->source_addr.x25_addr) || 291 - !strcmp(addr->x25_addr, 291 + !strcmp(x25_sk(s)->source_addr.x25_addr, 292 292 null_x25_address.x25_addr)) && 293 293 s->sk_state == TCP_LISTEN) { 294 294 /* ··· 688 688 goto out; 689 689 } 690 690 691 - len = strlen(addr->sx25_addr.x25_addr); 692 - for (i = 0; i < len; i++) { 693 - if (!isdigit(addr->sx25_addr.x25_addr[i])) { 694 - rc = -EINVAL; 695 - goto out; 691 + /* check for the null_x25_address */ 692 + if (strcmp(addr->sx25_addr.x25_addr, null_x25_address.x25_addr)) { 693 + 694 + len = strlen(addr->sx25_addr.x25_addr); 695 + for (i = 0; i < len; i++) { 696 + if (!isdigit(addr->sx25_addr.x25_addr[i])) { 697 + rc = -EINVAL; 698 + goto out; 699 + } 696 700 } 697 701 } 698 702
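The parsing fix at the top of the af_x25.c hunk reflects that X.121 addresses are BCD-packed, two digits per byte; the old formula counted one byte per digit. A small sketch of the corrected length calculation:

```c
#include <assert.h>

/* The first byte holds two 4-bit digit counts (called, calling);
 * the digits that follow are packed two per byte, rounded up. */
static int x25_addr_block_len(unsigned char len_byte)
{
    unsigned int called  = len_byte >> 4;
    unsigned int calling = len_byte & 0x0f;

    return 1 + (called + calling + 1) / 2;
}
```

With the old `1 + (len >> 4) + (len & 0x0f)` formula, pskb_may_pull() demanded roughly twice the bytes actually present, rejecting valid packets.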
+9
net/x25/x25_in.c
··· 142 142 sk->sk_state_change(sk); 143 143 break; 144 144 } 145 + case X25_CALL_REQUEST: 146 + /* call collision */ 147 + x25->causediag.cause = 0x01; 148 + x25->causediag.diagnostic = 0x48; 149 + 150 + x25_write_internal(sk, X25_CLEAR_REQUEST); 151 + x25_disconnect(sk, EISCONN, 0x01, 0x48); 152 + break; 153 + 145 154 case X25_CLEAR_REQUEST: 146 155 if (!pskb_may_pull(skb, X25_STD_MIN_LEN + 2)) 147 156 goto out_clear;
+5 -3
scripts/gcc-plugins/stackleak_plugin.c
··· 363 363 PASS_POS_INSERT_BEFORE); 364 364 365 365 /* 366 - * The stackleak_cleanup pass should be executed after the 367 - * "reload" pass, when the stack frame size is final. 366 + * The stackleak_cleanup pass should be executed before the "*free_cfg" 367 + * pass. It's the moment when the stack frame size is already final, 368 + * function prologues and epilogues are generated, and the 369 + * machine-dependent code transformations are not done. 368 370 */ 369 - PASS_INFO(stackleak_cleanup, "reload", 1, PASS_POS_INSERT_AFTER); 371 + PASS_INFO(stackleak_cleanup, "*free_cfg", 1, PASS_POS_INSERT_BEFORE); 370 372 371 373 if (!plugin_default_version_check(version, &gcc_version)) { 372 374 error(G_("incompatible gcc/plugin versions"));
+8 -6
sound/core/pcm_native.c
··· 36 36 #include <sound/timer.h> 37 37 #include <sound/minors.h> 38 38 #include <linux/uio.h> 39 + #include <linux/delay.h> 39 40 40 41 #include "pcm_local.h" 41 42 ··· 92 91 * and this may lead to a deadlock when the code path takes read sem 93 92 * twice (e.g. one in snd_pcm_action_nonatomic() and another in 94 93 * snd_pcm_stream_lock()). As a (suboptimal) workaround, let writer to 95 - * spin until it gets the lock. 94 + * sleep until all the readers are completed without blocking by writer. 96 95 */ 97 - static inline void down_write_nonblock(struct rw_semaphore *lock) 96 + static inline void down_write_nonfifo(struct rw_semaphore *lock) 98 97 { 99 98 while (!down_write_trylock(lock)) 100 - cond_resched(); 99 + msleep(1); 101 100 } 102 101 103 102 #define PCM_LOCK_DEFAULT 0 ··· 1968 1967 res = -ENOMEM; 1969 1968 goto _nolock; 1970 1969 } 1971 - down_write_nonblock(&snd_pcm_link_rwsem); 1970 + down_write_nonfifo(&snd_pcm_link_rwsem); 1972 1971 write_lock_irq(&snd_pcm_link_rwlock); 1973 1972 if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN || 1974 1973 substream->runtime->status->state != substream1->runtime->status->state || ··· 2015 2014 struct snd_pcm_substream *s; 2016 2015 int res = 0; 2017 2016 2018 - down_write_nonblock(&snd_pcm_link_rwsem); 2017 + down_write_nonfifo(&snd_pcm_link_rwsem); 2019 2018 write_lock_irq(&snd_pcm_link_rwlock); 2020 2019 if (!snd_pcm_stream_linked(substream)) { 2021 2020 res = -EALREADY; ··· 2370 2369 2371 2370 static void pcm_release_private(struct snd_pcm_substream *substream) 2372 2371 { 2373 - snd_pcm_unlink(substream); 2372 + if (snd_pcm_stream_linked(substream)) 2373 + snd_pcm_unlink(substream); 2374 2374 } 2375 2375 2376 2376 void snd_pcm_release_substream(struct snd_pcm_substream *substream)
+4
sound/pci/hda/hda_intel.c
··· 2498 2498 /* AMD Hudson */ 2499 2499 { PCI_DEVICE(0x1022, 0x780d), 2500 2500 .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, 2501 + /* AMD Stoney */ 2502 + { PCI_DEVICE(0x1022, 0x157a), 2503 + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB | 2504 + AZX_DCAPS_PM_RUNTIME }, 2501 2505 /* AMD Raven */ 2502 2506 { PCI_DEVICE(0x1022, 0x15e3), 2503 2507 .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |
+27
sound/pci/hda/patch_realtek.c
··· 4988 4988 { 0x19, 0x21a11010 }, /* dock mic */ 4989 4989 { } 4990 4990 }; 4991 + /* Assure the speaker pin to be coupled with DAC NID 0x03; otherwise 4992 + * the speaker output becomes too low by some reason on Thinkpads with 4993 + * ALC298 codec 4994 + */ 4995 + static hda_nid_t preferred_pairs[] = { 4996 + 0x14, 0x03, 0x17, 0x02, 0x21, 0x02, 4997 + 0 4998 + }; 4991 4999 struct alc_spec *spec = codec->spec; 4992 5000 4993 5001 if (action == HDA_FIXUP_ACT_PRE_PROBE) { 5002 + spec->gen.preferred_dacs = preferred_pairs; 4994 5003 spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; 4995 5004 snd_hda_apply_pincfgs(codec, pincfgs); 4996 5005 } else if (action == HDA_FIXUP_ACT_INIT) { ··· 5519 5510 ALC221_FIXUP_HP_HEADSET_MIC, 5520 5511 ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, 5521 5512 ALC295_FIXUP_HP_AUTO_MUTE, 5513 + ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE, 5522 5514 }; 5523 5515 static const struct hda_fixup alc269_fixups[] = { ··· 6397 6387 .type = HDA_FIXUP_FUNC, 6398 6388 .v.func = alc_fixup_auto_mute_via_amp, 6399 6389 }, 6390 + [ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE] = { 6391 + .type = HDA_FIXUP_PINS, 6392 + .v.pins = (const struct hda_pintbl[]) { 6393 + { 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 6394 + { } 6395 + }, 6396 + .chained = true, 6397 + .chain_id = ALC269_FIXUP_HEADSET_MIC 6398 + }, 6400 6399 }; 6402 6401 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 6420 6401 SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 6421 6402 SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 6422 6403 SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS), 6404 + SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 6423 6405 SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK), 6406 + SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 6407 + SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 6408 + SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 6424 6409 SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), 6425 6410 SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS), 6426 6411 SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X), ··· 7088 7065 {0x14, 0x90170110}, 7089 7066 {0x19, 0x04a11040}, 7090 7067 {0x21, 0x04211020}), 7068 + SND_HDA_PIN_QUIRK(0x10ec0286, 0x1025, "Acer", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE, 7069 + {0x12, 0x90a60130}, 7070 + {0x17, 0x90170110}, 7071 + {0x21, 0x02211020}), 7091 7072 SND_HDA_PIN_QUIRK(0x10ec0288, 0x1028, "Dell", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, 7092 7073 {0x12, 0x90a60120}, 7093 7074 {0x14, 0x90170110},
+4 -1
sound/usb/card.c
··· 682 682 683 683 __error: 684 684 if (chip) { 685 + /* chip->active is inside the chip->card object, 686 + * decrement before memory is possibly returned. 687 + */ 688 + atomic_dec(&chip->active); 685 689 if (!chip->num_interfaces) 686 690 snd_card_free(chip->card); 687 - atomic_dec(&chip->active); 688 691 } 689 692 mutex_unlock(&register_mutex); 690 693 return err;
+1
sound/usb/quirks.c
··· 1373 1373 return SNDRV_PCM_FMTBIT_DSD_U32_BE; 1374 1374 break; 1375 1375 1376 + case USB_ID(0x152a, 0x85de): /* SMSL D1 DAC */ 1376 1377 case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */ 1377 1378 case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */ 1378 1379 case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */
+3 -3
tools/bpf/bpftool/btf_dumper.c
··· 32 32 } 33 33 34 34 static int btf_dumper_modifier(const struct btf_dumper *d, __u32 type_id, 35 - const void *data) 35 + __u8 bit_offset, const void *data) 36 36 { 37 37 int actual_type_id; 38 38 ··· 40 40 if (actual_type_id < 0) 41 41 return actual_type_id; 42 42 43 - return btf_dumper_do_type(d, actual_type_id, 0, data); 43 + return btf_dumper_do_type(d, actual_type_id, bit_offset, data); 44 44 } 45 45 46 46 static void btf_dumper_enum(const void *data, json_writer_t *jw) ··· 237 237 case BTF_KIND_VOLATILE: 238 238 case BTF_KIND_CONST: 239 239 case BTF_KIND_RESTRICT: 240 - return btf_dumper_modifier(d, type_id, data); 240 + return btf_dumper_modifier(d, type_id, bit_offset, data); 241 241 default: 242 242 jsonw_printf(d->jw, "(unsupported-kind"); 243 243 return -EINVAL;
+37 -19
tools/include/uapi/linux/bpf.h
··· 2170 2170 * Return 2171 2171 * 0 on success, or a negative error in case of failure. 2172 2172 * 2173 - * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2173 + * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2174 2174 * Description 2175 2175 * Look for TCP socket matching *tuple*, optionally in a child 2176 2176 * network namespace *netns*. The return value must be checked, ··· 2187 2187 * **sizeof**\ (*tuple*\ **->ipv6**) 2188 2188 * Look for an IPv6 socket. 2189 2189 * 2190 - * If the *netns* is zero, then the socket lookup table in the 2191 - * netns associated with the *ctx* will be used. For the TC hooks, 2192 - * this in the netns of the device in the skb. For socket hooks, 2193 - * this in the netns of the socket. If *netns* is non-zero, then 2194 - * it specifies the ID of the netns relative to the netns 2195 - * associated with the *ctx*. 2190 + * If the *netns* is a negative signed 32-bit integer, then the 2191 + * socket lookup table in the netns associated with the *ctx* will 2192 + * be used. For the TC hooks, this is the netns of the device 2193 + * in the skb. For socket hooks, this is the netns of the socket. 2194 + * If *netns* is any other signed 32-bit value greater than or 2195 + * equal to zero then it specifies the ID of the netns relative to 2196 + * the netns associated with the *ctx*. *netns* values beyond the 2197 + * range of 32-bit integers are reserved for future use. 2196 2198 * 2197 2199 * All values for *flags* are reserved for future usage, and must 2198 2200 * be left at zero. ··· 2203 2201 * **CONFIG_NET** configuration option. 2204 2202 * Return 2205 2203 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2204 + * For sockets with reuseport option, the *struct bpf_sock* 2205 + * result is from reuse->socks[] using the hash of the tuple. 2206 2206 * 2207 - * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2207 + * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2208 2208 * Description 2209 2209 * Look for UDP socket matching *tuple*, optionally in a child 2210 2210 * network namespace *netns*. The return value must be checked, ··· 2223 2219 * **sizeof**\ (*tuple*\ **->ipv6**) 2224 2220 * Look for an IPv6 socket. 2225 2221 * 2226 - * If the *netns* is zero, then the socket lookup table in the 2227 - * netns associated with the *ctx* will be used. For the TC hooks, 2228 - * this in the netns of the device in the skb. For socket hooks, 2229 - * this in the netns of the socket. If *netns* is non-zero, then 2230 - * it specifies the ID of the netns relative to the netns 2231 - * associated with the *ctx*. 2222 + * If the *netns* is a negative signed 32-bit integer, then the 2223 + * socket lookup table in the netns associated with the *ctx* will 2224 + * be used. For the TC hooks, this is the netns of the device 2225 + * in the skb. For socket hooks, this is the netns of the socket. 2226 + * If *netns* is any other signed 32-bit value greater than or 2227 + * equal to zero then it specifies the ID of the netns relative to 2228 + * the netns associated with the *ctx*. *netns* values beyond the 2229 + * range of 32-bit integers are reserved for future use. 2232 2230 * 2233 2231 * All values for *flags* are reserved for future usage, and must 2234 2232 * be left at zero. ··· 2239 2233 * **CONFIG_NET** configuration option. 2240 2234 * Return 2241 2235 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2236 + * For sockets with reuseport option, the *struct bpf_sock* 2237 + * result is from reuse->socks[] using the hash of the tuple. 2242 2238 * 2243 2239 * int bpf_sk_release(struct bpf_sock *sk) 2244 2240 * Description ··· 2413 2405 /* BPF_FUNC_perf_event_output for sk_buff input context. */ 2414 2406 #define BPF_F_CTXLEN_MASK (0xfffffULL << 32) 2408 + /* Current network namespace */ 2409 + #define BPF_F_CURRENT_NETNS (-1L) 2410 + 2416 2411 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 2417 2412 enum bpf_adj_room_mode { 2418 2413 BPF_ADJ_ROOM_NET, ··· 2432 2421 BPF_LWT_ENCAP_SEG6, 2433 2422 BPF_LWT_ENCAP_SEG6_INLINE 2434 2423 }; 2424 + 2425 + #define __bpf_md_ptr(type, name) \ 2426 + union { \ 2427 + type name; \ 2428 + __u64 :64; \ 2429 + } __attribute__((aligned(8))) 2435 2430 2436 2431 /* user accessible mirror of in-kernel sk_buff. 2437 2432 * new fields can only be added to the end of this structure ··· 2473 2456 /* ... here. */ 2474 2457 2475 2458 __u32 data_meta; 2476 - struct bpf_flow_keys *flow_keys; 2459 + __bpf_md_ptr(struct bpf_flow_keys *, flow_keys); 2478 2461 }; 2479 2462 struct bpf_tunnel_key { ··· 2589 2572 * be added to the end of this structure 2590 2573 */ 2591 2574 struct sk_msg_md { 2592 - void *data; 2593 - void *data_end; 2575 + __bpf_md_ptr(void *, data); 2576 + __bpf_md_ptr(void *, data_end); 2594 2577 2595 2578 __u32 family; 2596 2579 __u32 remote_ip4; /* Stored in network byte order */ ··· 2606 2589 * Start of directly accessible data. It begins from 2607 2590 * the tcp/udp header. 2608 2591 */ 2609 - void *data; 2610 - void *data_end; /* End of directly accessible data */ 2592 + __bpf_md_ptr(void *, data); 2593 + /* End of directly accessible data */ 2594 + __bpf_md_ptr(void *, data_end); 2611 2595 /* 2612 2596 * Total length of packet (starting from the tcp/udp header). 2613 2597 * Note that the directly accessible bytes (data_end - data)
+33 -2
tools/testing/nvdimm/test/nfit.c
··· 15 15 #include <linux/dma-mapping.h> 16 16 #include <linux/workqueue.h> 17 17 #include <linux/libnvdimm.h> 18 + #include <linux/genalloc.h> 18 19 #include <linux/vmalloc.h> 19 20 #include <linux/device.h> 20 21 #include <linux/module.h> ··· 215 214 }; 216 215 217 216 static struct workqueue_struct *nfit_wq; 217 + 218 + static struct gen_pool *nfit_pool; 218 219 219 220 static struct nfit_test *to_nfit_test(struct device *dev) 220 221 { ··· 1135 1132 list_del(&nfit_res->list); 1136 1133 spin_unlock(&nfit_test_lock); 1137 1134 1135 + if (resource_size(&nfit_res->res) >= DIMM_SIZE) 1136 + gen_pool_free(nfit_pool, nfit_res->res.start, 1137 + resource_size(&nfit_res->res)); 1138 1138 vfree(nfit_res->buf); 1139 1139 kfree(nfit_res); 1140 1140 } ··· 1150 1144 GFP_KERNEL); 1151 1145 int rc; 1152 1146 1153 - if (!buf || !nfit_res) 1147 + if (!buf || !nfit_res || !*dma) 1154 1148 goto err; 1155 1149 rc = devm_add_action(dev, release_nfit_res, nfit_res); 1156 1150 if (rc) ··· 1170 1164 1171 1165 return nfit_res->buf; 1172 1166 err: 1167 + if (*dma && size >= DIMM_SIZE) 1168 + gen_pool_free(nfit_pool, *dma, size); 1173 1169 if (buf) 1174 1170 vfree(buf); 1175 1171 kfree(nfit_res); ··· 1180 1172 1181 1173 static void *test_alloc(struct nfit_test *t, size_t size, dma_addr_t *dma) 1182 1174 { 1175 + struct genpool_data_align data = { 1176 + .align = SZ_128M, 1177 + }; 1183 1178 void *buf = vmalloc(size); 1184 1179 1185 - *dma = (unsigned long) buf; 1180 + if (size >= DIMM_SIZE) 1181 + *dma = gen_pool_alloc_algo(nfit_pool, size, 1182 + gen_pool_first_fit_align, &data); 1183 + else 1184 + *dma = (unsigned long) buf; 1186 1185 return __test_alloc(t, size, dma, buf); 1187 1186 } 1188 1187 ··· 2854 2839 goto err_register; 2855 2840 } 2856 2841 2842 + nfit_pool = gen_pool_create(ilog2(SZ_4M), NUMA_NO_NODE); 2843 + if (!nfit_pool) { 2844 + rc = -ENOMEM; 2845 + goto err_register; 2846 + } 2847 + 2848 + if (gen_pool_add(nfit_pool, SZ_4G, SZ_4G, NUMA_NO_NODE)) { 2849 + rc = -ENOMEM; 2850 + goto err_register; 2851 + } 2852 + 2857 2853 for (i = 0; i < NUM_NFITS; i++) { 2858 2854 struct nfit_test *nfit_test; 2859 2855 struct platform_device *pdev; ··· 2920 2894 return 0; 2921 2895 2922 2896 err_register: 2897 + if (nfit_pool) 2898 + gen_pool_destroy(nfit_pool); 2899 + 2923 2900 destroy_workqueue(nfit_wq); 2924 2901 for (i = 0; i < NUM_NFITS; i++) 2925 2902 if (instances[i]) ··· 2945 2916 platform_device_unregister(&instances[i]->pdev); 2946 2917 platform_driver_unregister(&nfit_test_driver); 2947 2918 nfit_test_teardown(); 2919 + 2920 + gen_pool_destroy(nfit_pool); 2948 2921 2949 2922 for (i = 0; i < NUM_NFITS; i++) 2950 2923 put_device(&instances[i]->pdev.dev);
+2 -2
tools/testing/selftests/bpf/bpf_helpers.h
··· 154 154 (void *) BPF_FUNC_skb_ancestor_cgroup_id; 155 155 static struct bpf_sock *(*bpf_sk_lookup_tcp)(void *ctx, 156 156 struct bpf_sock_tuple *tuple, 157 - int size, unsigned int netns_id, 157 + int size, unsigned long long netns_id, 158 158 unsigned long long flags) = 159 159 (void *) BPF_FUNC_sk_lookup_tcp; 160 160 static struct bpf_sock *(*bpf_sk_lookup_udp)(void *ctx, 161 161 struct bpf_sock_tuple *tuple, 162 - int size, unsigned int netns_id, 162 + int size, unsigned long long netns_id, 163 163 unsigned long long flags) = 164 164 (void *) BPF_FUNC_sk_lookup_udp; 165 165 static int (*bpf_sk_release)(struct bpf_sock *sk) =
+368 -7
tools/testing/selftests/bpf/test_btf.c
··· 432 432 /* const void* */ /* [3] */ 433 433 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), 434 434 /* typedef const void * const_void_ptr */ 435 - BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 3), 436 - /* struct A { */ /* [4] */ 435 + BTF_TYPEDEF_ENC(NAME_TBD, 3), /* [4] */ 436 + /* struct A { */ /* [5] */ 437 437 BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), sizeof(void *)), 438 438 /* const_void_ptr m; */ 439 - BTF_MEMBER_ENC(NAME_TBD, 3, 0), 439 + BTF_MEMBER_ENC(NAME_TBD, 4, 0), 440 440 /* } */ 441 441 BTF_END_RAW, 442 442 }, ··· 494 494 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 0), 495 495 /* const void* */ /* [3] */ 496 496 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), 497 - /* typedef const void * const_void_ptr */ /* [4] */ 498 - BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 3), 499 - /* const_void_ptr[4] */ /* [5] */ 500 - BTF_TYPE_ARRAY_ENC(3, 1, 4), 497 + /* typedef const void * const_void_ptr */ 498 + BTF_TYPEDEF_ENC(NAME_TBD, 3), /* [4] */ 499 + /* const_void_ptr[4] */ 500 + BTF_TYPE_ARRAY_ENC(4, 1, 4), /* [5] */ 501 501 BTF_END_RAW, 502 502 }, 503 503 .str_sec = "\0const_void_ptr", ··· 1292 1292 .err_str = "type != 0", 1293 1293 }, 1294 1294 1295 + { 1296 + .descr = "typedef (invalid name, name_off = 0)", 1297 + .raw_types = { 1298 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1299 + BTF_TYPEDEF_ENC(0, 1), /* [2] */ 1300 + BTF_END_RAW, 1301 + }, 1302 + .str_sec = "\0__int", 1303 + .str_sec_size = sizeof("\0__int"), 1304 + .map_type = BPF_MAP_TYPE_ARRAY, 1305 + .map_name = "typedef_check_btf", 1306 + .key_size = sizeof(int), 1307 + .value_size = sizeof(int), 1308 + .key_type_id = 1, 1309 + .value_type_id = 1, 1310 + .max_entries = 4, 1311 + .btf_load_err = true, 1312 + .err_str = "Invalid name", 1313 + }, 1314 + 1315 + { 1316 + .descr = "typedef (invalid name, invalid identifier)", 1317 + .raw_types = { 1318 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1319 + BTF_TYPEDEF_ENC(NAME_TBD, 1), /* [2] */ 1320 + BTF_END_RAW, 1321 + }, 1322 + .str_sec = "\0__!int", 1323 + .str_sec_size = sizeof("\0__!int"), 1324 + .map_type = BPF_MAP_TYPE_ARRAY, 1325 + .map_name = "typedef_check_btf", 1326 + .key_size = sizeof(int), 1327 + .value_size = sizeof(int), 1328 + .key_type_id = 1, 1329 + .value_type_id = 1, 1330 + .max_entries = 4, 1331 + .btf_load_err = true, 1332 + .err_str = "Invalid name", 1333 + }, 1334 + 1335 + { 1336 + .descr = "ptr type (invalid name, name_off <> 0)", 1337 + .raw_types = { 1338 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1339 + BTF_TYPE_ENC(NAME_TBD, 1340 + BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 1), /* [2] */ 1341 + BTF_END_RAW, 1342 + }, 1343 + .str_sec = "\0__int", 1344 + .str_sec_size = sizeof("\0__int"), 1345 + .map_type = BPF_MAP_TYPE_ARRAY, 1346 + .map_name = "ptr_type_check_btf", 1347 + .key_size = sizeof(int), 1348 + .value_size = sizeof(int), 1349 + .key_type_id = 1, 1350 + .value_type_id = 1, 1351 + .max_entries = 4, 1352 + .btf_load_err = true, 1353 + .err_str = "Invalid name", 1354 + }, 1355 + 1356 + { 1357 + .descr = "volatile type (invalid name, name_off <> 0)", 1358 + .raw_types = { 1359 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1360 + BTF_TYPE_ENC(NAME_TBD, 1361 + BTF_INFO_ENC(BTF_KIND_VOLATILE, 0, 0), 1), /* [2] */ 1362 + BTF_END_RAW, 1363 + }, 1364 + .str_sec = "\0__int", 1365 + .str_sec_size = sizeof("\0__int"), 1366 + .map_type = BPF_MAP_TYPE_ARRAY, 1367 + .map_name = "volatile_type_check_btf", 1368 + .key_size = sizeof(int), 1369 + .value_size = sizeof(int), 1370 + .key_type_id = 1, 1371 + .value_type_id = 1, 1372 + .max_entries = 4, 1373 + .btf_load_err = true, 1374 + .err_str = "Invalid name", 1375 + }, 1376 + 1377 + { 1378 + .descr = "const type (invalid name, name_off <> 0)", 1379 + .raw_types = { 1380 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1381 + BTF_TYPE_ENC(NAME_TBD, 1382 + BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 1), /* [2] */ 1383 + BTF_END_RAW, 1384 + }, 1385 + .str_sec = "\0__int", 1386 + .str_sec_size = sizeof("\0__int"), 1387 + .map_type = BPF_MAP_TYPE_ARRAY, 1388 + .map_name = "const_type_check_btf", 1389 + .key_size = sizeof(int), 1390 + .value_size = sizeof(int), 1391 + .key_type_id = 1, 1392 + .value_type_id = 1, 1393 + .max_entries = 4, 1394 + .btf_load_err = true, 1395 + .err_str = "Invalid name", 1396 + }, 1397 + 1398 + { 1399 + .descr = "restrict type (invalid name, name_off <> 0)", 1400 + .raw_types = { 1401 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1402 + BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 1), /* [2] */ 1403 + BTF_TYPE_ENC(NAME_TBD, 1404 + BTF_INFO_ENC(BTF_KIND_RESTRICT, 0, 0), 2), /* [3] */ 1405 + BTF_END_RAW, 1406 + }, 1407 + .str_sec = "\0__int", 1408 + .str_sec_size = sizeof("\0__int"), 1409 + .map_type = BPF_MAP_TYPE_ARRAY, 1410 + .map_name = "restrict_type_check_btf", 1411 + .key_size = sizeof(int), 1412 + .value_size = sizeof(int), 1413 + .key_type_id = 1, 1414 + .value_type_id = 1, 1415 + .max_entries = 4, 1416 + .btf_load_err = true, 1417 + .err_str = "Invalid name", 1418 + }, 1419 + 1420 + { 1421 + .descr = "fwd type (invalid name, name_off = 0)", 1422 + .raw_types = { 1423 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1424 + BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FWD, 0, 0), 0), /* [2] */ 1425 + BTF_END_RAW, 1426 + }, 1427 + .str_sec = "\0__skb", 1428 + .str_sec_size = sizeof("\0__skb"), 1429 + .map_type = BPF_MAP_TYPE_ARRAY, 1430 + .map_name = "fwd_type_check_btf", 1431 + .key_size = sizeof(int), 1432 + .value_size = sizeof(int), 1433 + .key_type_id = 1, 1434 + .value_type_id = 1, 1435 + .max_entries = 4, 1436 + .btf_load_err = true, 1437 + .err_str = "Invalid name", 1438 + }, 1439 + 1440 + { 1441 + .descr = "fwd type (invalid name, invalid identifier)", 1442 + .raw_types = { 1443 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1444 + BTF_TYPE_ENC(NAME_TBD, 1445 + BTF_INFO_ENC(BTF_KIND_FWD, 0, 0), 0), /* [2] */ 1446 + BTF_END_RAW, 1447 + }, 1448 + .str_sec = "\0__!skb", 1449 + .str_sec_size = sizeof("\0__!skb"), 1450 + .map_type = BPF_MAP_TYPE_ARRAY, 1451 + .map_name = "fwd_type_check_btf", 1452 + .key_size = sizeof(int), 1453 + .value_size = sizeof(int), 1454 + .key_type_id = 1, 1455 + .value_type_id = 1, 1456 + .max_entries = 4, 1457 + .btf_load_err = true, 1458 + .err_str = "Invalid name", 1459 + }, 1460 + 1461 + { 1462 + .descr = "array type (invalid name, name_off <> 0)", 1463 + .raw_types = { 1464 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1465 + BTF_TYPE_ENC(NAME_TBD, 1466 + BTF_INFO_ENC(BTF_KIND_ARRAY, 0, 0), 0), /* [2] */ 1467 + BTF_ARRAY_ENC(1, 1, 4), 1468 + BTF_END_RAW, 1469 + }, 1470 + .str_sec = "\0__skb", 1471 + .str_sec_size = sizeof("\0__skb"), 1472 + .map_type = BPF_MAP_TYPE_ARRAY, 1473 + .map_name = "array_type_check_btf", 1474 + .key_size = sizeof(int), 1475 + .value_size = sizeof(int), 1476 + .key_type_id = 1, 1477 + .value_type_id = 1, 1478 + .max_entries = 4, 1479 + .btf_load_err = true, 1480 + .err_str = "Invalid name", 1481 + }, 1482 + 1483 + { 1484 + .descr = "struct type (name_off = 0)", 1485 + .raw_types = { 1486 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1487 + BTF_TYPE_ENC(0, 1488 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1489 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1490 + BTF_END_RAW, 1491 + }, 1492 + .str_sec = "\0A", 1493 + .str_sec_size = sizeof("\0A"), 1494 + .map_type = BPF_MAP_TYPE_ARRAY, 1495 + .map_name = "struct_type_check_btf", 1496 + .key_size = sizeof(int), 1497 + .value_size = sizeof(int), 1498 + .key_type_id = 1, 1499 + .value_type_id = 1, 1500 + .max_entries = 4, 1501 + }, 1502 + 1503 + { 1504 + .descr = "struct type (invalid name, invalid identifier)", 1505 + .raw_types = { 1506 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1507 + BTF_TYPE_ENC(NAME_TBD, 1508 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1509 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1510 + BTF_END_RAW, 1511 + }, 1512 + .str_sec = "\0A!\0B", 1513 + .str_sec_size = sizeof("\0A!\0B"), 1514 + .map_type = BPF_MAP_TYPE_ARRAY, 1515 + .map_name = "struct_type_check_btf", 1516 + .key_size = sizeof(int), 1517 + .value_size = sizeof(int), 1518 + .key_type_id = 1, 1519 + .value_type_id = 1, 1520 + .max_entries = 4, 1521 + .btf_load_err = true, 1522 + .err_str = "Invalid name", 1523 + }, 1524 + 1525 + { 1526 + .descr = "struct member (name_off = 0)", 1527 + .raw_types = { 1528 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1529 + BTF_TYPE_ENC(0, 1530 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1531 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1532 + BTF_END_RAW, 1533 + }, 1534 + .str_sec = "\0A", 1535 + .str_sec_size = sizeof("\0A"), 1536 + .map_type = BPF_MAP_TYPE_ARRAY, 1537 + .map_name = "struct_type_check_btf", 1538 + .key_size = sizeof(int), 1539 + .value_size = sizeof(int), 1540 + .key_type_id = 1, 1541 + .value_type_id = 1, 1542 + .max_entries = 4, 1543 + }, 1544 + 1545 + { 1546 + .descr = "struct member (invalid name, invalid identifier)", 1547 + .raw_types = { 1548 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1549 + BTF_TYPE_ENC(NAME_TBD, 1550 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1551 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1552 + BTF_END_RAW, 1553 + }, 1554 + .str_sec = "\0A\0B*", 1555 + .str_sec_size = sizeof("\0A\0B*"), 1556 + .map_type = BPF_MAP_TYPE_ARRAY, 1557 + .map_name = "struct_type_check_btf", 1558 + .key_size = sizeof(int), 1559 + .value_size = sizeof(int), 1560 + .key_type_id = 1, 1561 + .value_type_id = 1, 1562 + .max_entries = 4, 1563 + .btf_load_err = true, 1564 + .err_str = "Invalid name", 1565 + }, 1566 + 1567 + { 1568 + .descr = "enum type (name_off = 0)", 1569 + .raw_types = { 1570 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1571 + BTF_TYPE_ENC(0, 1572 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1573 + sizeof(int)), /* [2] */ 1574 + BTF_ENUM_ENC(NAME_TBD, 0), 1575 + BTF_END_RAW, 1576 + }, 1577 + .str_sec = "\0A\0B", 1578 + .str_sec_size = sizeof("\0A\0B"), 1579 + .map_type = BPF_MAP_TYPE_ARRAY, 1580 + .map_name = "enum_type_check_btf", 1581 + .key_size = sizeof(int), 1582 + .value_size = sizeof(int), 1583 + .key_type_id = 1, 1584 + .value_type_id = 1, 1585 + .max_entries = 4, 1586 + }, 1587 + 1588 + { 1589 + .descr = "enum type (invalid name, invalid identifier)", 1590 + .raw_types = { 1591 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1592 + BTF_TYPE_ENC(NAME_TBD, 1593 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1594 + sizeof(int)), /* [2] */ 1595 + BTF_ENUM_ENC(NAME_TBD, 0), 1596 + BTF_END_RAW, 1597 + }, 1598 + .str_sec = "\0A!\0B", 1599 + .str_sec_size = sizeof("\0A!\0B"), 1600 + .map_type = BPF_MAP_TYPE_ARRAY, 1601 + .map_name = "enum_type_check_btf", 1602 + .key_size = sizeof(int), 1603 + .value_size = sizeof(int), 1604 + .key_type_id = 1, 1605 + .value_type_id = 1, 1606 + .max_entries = 4, 1607 + .btf_load_err = true, 1608 + .err_str = "Invalid name", 1609 + }, 1610 + 1611 + { 1612 + .descr = "enum member (invalid name, name_off = 0)", 1613 + .raw_types = { 1614 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1615 + BTF_TYPE_ENC(0, 1616 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1617 + sizeof(int)), /* [2] */ 1618 + BTF_ENUM_ENC(0, 0), 1619 + BTF_END_RAW, 1620 + }, 1621 + .str_sec = "", 1622 + .str_sec_size = sizeof(""), 1623 + .map_type = BPF_MAP_TYPE_ARRAY, 1624 + .map_name = "enum_type_check_btf", 1625 + .key_size = sizeof(int), 1626 + .value_size = sizeof(int), 1627 + .key_type_id = 1, 1628 + .value_type_id = 1, 1629 + .max_entries = 4, 1630 + .btf_load_err = true, 1631 + .err_str = "Invalid name", 1632 + }, 1633 + 1634 + { 1635 + .descr = "enum member (invalid name, invalid identifier)", 1636 + .raw_types = { 1637 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1638 + BTF_TYPE_ENC(0, 1639 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1640 + sizeof(int)), /* [2] */ 1641 + BTF_ENUM_ENC(NAME_TBD, 0), 1642 + BTF_END_RAW, 1643 + }, 1644 + .str_sec = "\0A!", 1645 + .str_sec_size = sizeof("\0A!"), 1646 + .map_type = BPF_MAP_TYPE_ARRAY, 1647 + .map_name = "enum_type_check_btf", 1648 + .key_size = sizeof(int), 1649 + .value_size = sizeof(int), 1650 + .key_type_id = 1, 1651 + .value_type_id = 1, 1652 + .max_entries = 4, 1653 + .btf_load_err = true, 1654 + .err_str = "Invalid name", 1655 + }, 1295 1656 { 1296 1657 .descr = "arraymap invalid btf key (a bit field)", 1297 1658 .raw_types = {
+9 -9
tools/testing/selftests/bpf/test_sk_lookup_kern.c
··· 72 72 return TC_ACT_SHOT; 73 73 74 74 tuple_len = ipv4 ? sizeof(tuple->ipv4) : sizeof(tuple->ipv6); 75 - sk = bpf_sk_lookup_tcp(skb, tuple, tuple_len, 0, 0); 75 + sk = bpf_sk_lookup_tcp(skb, tuple, tuple_len, BPF_F_CURRENT_NETNS, 0); 76 76 if (sk) 77 77 bpf_sk_release(sk); 78 78 return sk ? TC_ACT_OK : TC_ACT_UNSPEC; ··· 84 84 struct bpf_sock_tuple tuple = {}; 85 85 struct bpf_sock *sk; 86 86 87 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 87 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 88 88 if (sk) 89 89 bpf_sk_release(sk); 90 90 return 0; ··· 97 97 struct bpf_sock *sk; 98 98 __u32 family = 0; 99 99 100 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 100 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 101 101 if (sk) { 102 102 bpf_sk_release(sk); 103 103 family = sk->family; ··· 112 112 struct bpf_sock *sk; 113 113 __u32 family; 114 114 115 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 115 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 116 116 if (sk) { 117 117 sk += 1; 118 118 bpf_sk_release(sk); ··· 127 127 struct bpf_sock *sk; 128 128 __u32 family; 129 129 130 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 130 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 131 131 sk += 1; 132 132 if (sk) 133 133 bpf_sk_release(sk); ··· 139 139 { 140 140 struct bpf_sock_tuple tuple = {}; 141 141 142 - bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 142 + bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 143 143 return 0; 144 144 } 145 145 ··· 149 149 struct bpf_sock_tuple tuple = {}; 150 150 struct bpf_sock *sk; 151 151 152 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 152 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 153 153 bpf_sk_release(sk); 154 154 bpf_sk_release(sk); 155 155 return 0; ··· 161 161 struct bpf_sock_tuple tuple = {}; 162 162 struct bpf_sock *sk; 163 163 164 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 164 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 165 165 bpf_sk_release(sk); 166 166 return 0; 167 167 } ··· 169 169 void lookup_no_release(struct __sk_buff *skb) 170 170 { 171 171 struct bpf_sock_tuple tuple = {}; 172 - bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 172 + bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 173 173 } 174 174 175 175 SEC("fail_no_release_subcall")
+3 -3
tools/testing/selftests/bpf/test_verifier.c
··· 8576 8576 BPF_JMP_IMM(BPF_JA, 0, 0, -7), 8577 8577 }, 8578 8578 .fixup_map_hash_8b = { 4 }, 8579 - .errstr = "R0 invalid mem access 'inv'", 8579 + .errstr = "unbounded min value", 8580 8580 .result = REJECT, 8581 8581 }, 8582 8582 { ··· 10547 10547 "check deducing bounds from const, 5", 10548 10548 .insns = { 10549 10549 BPF_MOV64_IMM(BPF_REG_0, 0), 10550 - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1), 10550 + BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1), 10551 10551 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 10552 10552 BPF_EXIT_INSN(), 10553 10553 }, ··· 14230 14230 14231 14231 reject_from_alignment = fd_prog < 0 && 14232 14232 (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS) && 14233 - strstr(bpf_vlog, "Unknown alignment."); 14233 + strstr(bpf_vlog, "misaligned"); 14234 14234 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 14235 14235 if (reject_from_alignment) { 14236 14236 printf("FAIL\nFailed due to alignment despite having efficient unaligned access: '%s'!\n",